Oracle’s wasn’t, but I haven’t used it in a very long time, so that may no longer be true.
The problem, though, was that it had a single shared pool for all queries and could only run a query if its plan was in the pool, which is how our DB machine would max out at 50% CPU and bandwidth. We had made some mistakes in our search code that I told the engineer not to make.
Postgres’s PREPARE is per-connection so it’s pretty limited, and then connection poolers enter the fray and often can’t track SQL-level prepares.
And then the issue is not dissimilar to Postgres’s planner issues.
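To illustrate the per-connection limitation: in Postgres, a statement created with SQL-level PREPARE is visible only to the session that created it, which is exactly what trips up transaction-level connection poolers. A rough sketch (the table and statement names here are made up):

```sql
-- Connection A: the prepared statement lives in this session only
PREPARE get_user (int) AS SELECT * FROM users WHERE id = $1;
EXECUTE get_user(42);

-- Connection B (or the same client routed to a different backend by a
-- transaction-mode pooler such as PgBouncer): this fails with
-- ERROR: prepared statement "get_user" does not exist
EXECUTE get_user(42);
```

With a pooler in transaction mode, successive transactions from one client can land on different backend connections, so the PREPARE and the EXECUTE may not even hit the same session.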
If you cache query plans the way other RDBMSs do, the client no longer has to manage that manually, and it isn’t limited to a single connection.
MS SQL still has prepared statements, but they really haven’t been used in 20 years, since it gained the ability to cache plans based on statement text.