For analytical or exploratory work, it would be much more sensible to use DynamoDB Streams with a simple Lambda that writes to an Aurora Serverless Postgres/MySQL table.
The tradeoff of doing aggregations and joins in memory just isn’t worth it.
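To make the pipeline concrete, here is a rough sketch of the Lambda half of it. This is just illustrative, not a production implementation: it assumes a hypothetical table whose items flatten cleanly into columns, and it stubs out the actual Aurora write (which you'd do with psycopg2 or the RDS Data API).

```python
def deserialize(attr):
    """Convert a DynamoDB-typed attribute ({'S': ...}, {'N': ...}) to a plain value."""
    (type_tag, value), = attr.items()
    if type_tag == "N":
        # DynamoDB sends all numbers as strings
        return float(value) if "." in value else int(value)
    return value  # 'S', 'BOOL', etc. pass through for this sketch


def record_to_row(record):
    """Flatten one stream record's NewImage into a column -> value dict."""
    image = record["dynamodb"]["NewImage"]
    return {name: deserialize(attr) for name, attr in image.items()}


def handler(event, context):
    # Only inserts and updates carry a NewImage worth replicating
    rows = [record_to_row(r) for r in event["Records"]
            if r["eventName"] in ("INSERT", "MODIFY")]
    # In the real Lambda you'd open a connection here and upsert `rows`
    # into the Aurora table (handling REMOVE events as deletes).
    return rows
```

Nested maps/lists, type coercion, and delete handling are where the real work is; for simple item shapes the whole thing stays about this small.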
I agree for cases that need real-time replication and are querying at high frequency. There is a trade-off, though. You're adding three services to maintain, there's replication lag, and Aurora's running continuously whether you're querying or not. For teams that need occasional analytical access to their DynamoDB data without building and operating a replication pipeline, I still think the in-memory trade-off is worth it. Different teams, different trade-offs.