On the data warehousing side, I think the story looks like this:
1) Cloud data warehouses like Redshift, Snowflake, and BigQuery proved to be quite good at handling very large datasets (petabytes) with very fast querying.
2) Customers of these proprietary solutions didn't want to be locked in. So many are drifting toward Iceberg tables on top of Parquet (columnar) data files.
Another "hidden" motive here is that Cloud object stores give you regional (multi-zonal) redundancy without having to pay extra inter-zonal fees. An OLTP database would likely have to pay this cost, as it likely won't be based purely on object stores - it'll need a fast durable medium (disk), if at least for the WAL or the hot pages. So here we see the topology of Cloud object stores being another reason forcing the split between OLTP and OLAP.
But what does this new world of open OLTP/OLAP technologies look like? Pretty complicated.
1) You'd probably run Postgres as your OLTP DB, as it's the default these days and scales quite well.
2) You'd set up an Iceberg/Parquet system for OLAP, probably on Cloud object stores.
3) Now you need to stream the changes from PostGres to Iceberg/Parquet. The canonical OSS way to do this is to set up a Kafka cluster with Kafka Connect. You use the Debezium CDC connector for Postgres to pull deltas, then write to Iceberg/Parquet using the Iceberg sink connector. This incurs extra compute, memory, network, and disk.
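For a sense of what that looks like in practice, here's a rough sketch of registering the two connectors through Kafka Connect's REST API. The hostnames, credentials, table names, and the Iceberg sink's exact class and config keys are illustrative assumptions; check the Debezium and Iceberg sink docs for the properties your versions expect.

    # Hypothetical sketch: register a Debezium Postgres source and an Iceberg sink
    # on a Kafka Connect cluster via its REST API. Hosts, credentials, and the
    # sink's config keys are placeholders.
    import requests

    CONNECT_URL = "http://localhost:8083/connectors"  # Kafka Connect REST endpoint (assumed)

    debezium_source = {
        "name": "pg-source",
        "config": {
            "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
            "plugin.name": "pgoutput",          # logical decoding plugin built into Postgres 10+
            "database.hostname": "pg.example.internal",
            "database.port": "5432",
            "database.user": "cdc_user",
            "database.password": "********",
            "database.dbname": "app",
            "topic.prefix": "app",              # change topics become app.<schema>.<table>
            "table.include.list": "public.orders",
        },
    }

    iceberg_sink = {
        "name": "iceberg-sink",
        "config": {
            # Class/keys for the Apache Iceberg Kafka Connect sink (verify against your version).
            "connector.class": "org.apache.iceberg.connect.IcebergSinkConnector",
            "topics": "app.public.orders",
            "iceberg.tables": "analytics.orders",
            "iceberg.catalog.type": "rest",
            "iceberg.catalog.uri": "http://iceberg-rest:8181",
        },
    }

    for connector in (debezium_source, iceberg_sink):
        requests.post(CONNECT_URL, json=connector, timeout=30).raise_for_status()

And every box involved (brokers, Connect workers, a schema registry if you use one) is something extra to provision, monitor, and upgrade.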
There are so many moving parts here. The ideal is likely a direct Postgres->Iceberg write flow built into Postgres. The pg_mooncake extension this company is offering also adds DuckDB-based querying, but that's likely not necessary if you plan to use Iceberg-compatible query engines anyway.
Ideally, you'd have one plugin that purely streams Postgres writes to Iceberg with some defined lag. That would cut out the third item above.
> There are so many moving parts here.
Yep. At the scope of a single table, append-only history is nice but you're often after a clone of your source table within Iceberg, materialized from insert/update/delete events with bounded latency.
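To make the merge semantics concrete, here's a toy sketch using Debezium-style change events (the op/before/after fields are how Debezium shapes its payloads; the in-memory dict and the "id" key stand in for the Iceberg table and primary key you'd actually maintain):

    # Toy illustration: maintain a keyed clone of a source table from a stream
    # of Debezium-style change events. A real pipeline would express these as
    # deletes plus appends against Iceberg, committed in batches.
    def apply_event(clone: dict, event: dict) -> None:
        op = event["op"]              # "c" = insert, "u" = update, "d" = delete, "r" = snapshot row
        if op in ("c", "u", "r"):
            row = event["after"]
            clone[row["id"]] = row    # upsert by primary key (assumed to be "id")
        elif op == "d":
            clone.pop(event["before"]["id"], None)

    clone: dict = {}
    events = [
        {"op": "c", "before": None, "after": {"id": 1, "total": 10}},
        {"op": "u", "before": {"id": 1, "total": 10}, "after": {"id": 1, "total": 12}},
        {"op": "d", "before": {"id": 1, "total": 12}, "after": None},
    ]
    for e in events:
        apply_event(clone, e)
    assert clone == {}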
There are also nuances like Postgres REPLICA IDENTITY and TOAST columns. Enabling REPLICA IDENTITY FULL amplifies your source DB's WAL volume, but not having it means your CDC updates will clobber your unchanged TOAST values.
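For reference, it's a per-table setting; the connection string and table name below are placeholders:

    # Make Postgres log the full old row on UPDATE/DELETE so downstream CDC
    # also sees unchanged TOASTed columns -- at the cost of more WAL.
    import psycopg2

    with psycopg2.connect("dbname=app user=cdc_user") as conn:
        with conn.cursor() as cur:
            cur.execute("ALTER TABLE public.orders REPLICA IDENTITY FULL;")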
If you're moving multiple tables, ideally your multi-table source transactions map into corresponding Iceberg transactions.
Zooming out, there are orchestration concerns: propagating table schema changes over time, handling tables that come and go at the source DB, adding new data sources, and handling sources without a trivially mapped schema (legacy lakes / NoSQL / SaaS).
As an on-topic plug, my company tackles this problem [0]. Postgres => Iceberg is a common use case.
[0] https://docs.estuary.dev/reference/Connectors/materializatio...
Can you explain this please: "not having it means your CDC updates will clobber your unchanged TOAST values"?
They’re referring to this: https://debezium.io/blog/2019/10/08/handling-unchanged-postg...
Funny timing, just took a fresh look at this topic in this new post earlier this week: https://www.morling.dev/blog/backfilling-postgres-toast-colu....
This may be helpful for you https://clickhouse.com/docs/integrations/clickpipes/postgres...
Totally agreed on 3. You're also missing the challenges of dealing with updates/deletes, and of managing the many small files.
CDC from OLTP to Iceberg is extremely non-trivial.
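Iceberg does ship table-maintenance procedures for the small-files part, but you have to schedule them yourself. A minimal sketch, assuming a Spark session with the Iceberg runtime and SQL extensions configured and a catalog named "demo" (all names are placeholders):

    # Compact the many small Parquet files that frequent CDC commits produce,
    # then clean up old snapshots. Catalog and table names are placeholders.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("iceberg-maintenance").getOrCreate()

    spark.sql("CALL demo.system.rewrite_data_files(table => 'analytics.orders')")
    spark.sql("CALL demo.system.expire_snapshots(table => 'analytics.orders')")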
The small-writes problem that Iceberg has is totally silly. They spend so much effort on a tree of metadata files, but you still need an ACID DB to manage the pointer to the latest tree. At that point, why not just move all that metadata into the DB itself? It’s not sooo massive in scale.
The current Iceberg architecture requires table reads to make many small reads of the files in the metadata tree.
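You can see the layering from a client like pyiceberg; every step below is at least one more small catalog or object-store request before any Parquet is touched (the catalog config and table name are made up):

    from pyiceberg.catalog import load_catalog

    catalog = load_catalog("default", **{"type": "rest", "uri": "http://localhost:8181"})
    table = catalog.load_table("analytics.orders")  # catalog lookup -> current metadata.json
    scan = table.scan()
    # Planning fetches the snapshot's manifest list, then each manifest it
    # references, before it can even name the data files to read:
    data_files = [task.file.file_path for task in scan.plan_files()]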
The brand new DuckLake post makes all this clear.
https://duckdb.org/2025/05/27/ducklake.html
Still, Iceberg will probably do just fine, because every data warehousing vendor is adding support for it. Worse is better.
This is essentially what Crunchy Data does with their Crunchy Data Warehouse product. It’s really cool.
Their product looks promising. It looks like the Postgres schema and writes have to be "Iceberg-aware": special work is needed to get around the fact that a small write results in a new, small Parquet file. That's not the end of the world - but perhaps ideally, you wouldn't be aware of Iceberg much at all when using Postgres. That might be a dream though.
Fully using Postgres without awareness of Iceberg would require full decoupling, and a translation layer in between (Debezium, etc.). That comes with its own problems.
So perhaps some intimacy between the Postgres and Iceberg schemas is a good thing - especially to support transparent schema evolution.
DuckLake and Crunchy Bridge both support SQL queries on the backing Iceberg tables. That's a good option. But a big part of the value of Iceberg comes from being able to read with Spark, Flink, etc.
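For example, assuming a Spark session configured with an Iceberg catalog (the catalog name, URI, table, and "status" column below are placeholders), the table your Postgres pipeline writes is just another Spark table:

    # Read the same Iceberg table from Spark; needs the iceberg-spark-runtime
    # jar on the classpath. All names here are placeholders.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder.appName("iceberg-read")
        .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
        .config("spark.sql.catalog.lake.type", "rest")
        .config("spark.sql.catalog.lake.uri", "http://iceberg-rest:8181")
        .getOrCreate()
    )

    spark.table("lake.analytics.orders").groupBy("status").count().show()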
I'd argue the bigger value is keeping the data in one storage place and bringing the compute to it. Works especially well for Big Corp use cases where entire divisions of the corp go their own way. Throw in M&A activity and it's a good hedge against the unknown (i.e. you might be a Databricks and Azure shop that just bought a Snowflake & AWS company). Keep the data in an open table format, and let everyone query it with their preferred engine to their heart's desire.
There are two problems being discussed in this article and thread:
1) Combining OLTP and OLAP databases into one system
2) Using an open data format to be able to read/write from many systems (OLTP/Postgres, analytics engines/Spark)
> I'd argue the bigger value is keeping the data in one storage place and bringing the compute to it.
Yes, I agree with you. This observation is the idea behind #2, and why Iceberg has so much momentum now.