I started https://www.vantage.sh/ - a cloud cost platform that tracks Infra & AI spend.

The $100k/dev/year figure feels like sticker-shock math more than reality. Yes, AI bills are growing fast - but most teams I see are still spending substantially less than that annually, and that's before applying even basic optimizations like prompt caching, model routing, or splitting work across models.
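For anyone curious what "model routing" means in practice, here's a minimal sketch - model names, per-token prices, and the length-based routing heuristic are all made-up placeholders for illustration, not real vendor pricing or any particular product's logic:

```python
# Illustrative model router: send simple prompts to a cheap model,
# complex ones to an expensive model, and estimate spend per request.
# All names and prices below are hypothetical placeholder values.

PRICES_PER_1K_TOKENS = {
    "small-model": 0.0005,   # hypothetical cheap model
    "large-model": 0.0150,   # hypothetical frontier model
}

def pick_model(prompt: str, complexity_threshold: int = 200) -> str:
    """Naive router: short prompts go to the cheap model."""
    return "small-model" if len(prompt) < complexity_threshold else "large-model"

def estimate_cost(prompt: str, expected_output_tokens: int = 500) -> tuple[str, float]:
    """Pick a model and roughly estimate the request cost in dollars."""
    model = pick_model(prompt)
    # Rough token estimate: ~4 characters per token (a common heuristic).
    input_tokens = len(prompt) / 4
    total_tokens = input_tokens + expected_output_tokens
    cost = total_tokens / 1000 * PRICES_PER_1K_TOKENS[model]
    return model, cost

model, cost = estimate_cost("Summarize this sentence.")
print(model, round(cost, 6))  # routes to the cheap model
```

Real routers key on task type or a classifier rather than prompt length, but even a crude split like this is the kind of measurable, tunable knob that keeps bills from compounding silently.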

The real story is the AWS playbook all over again: vendors keep dropping unit costs, customers keep increasing consumption faster than prices fall, and in the end the bills still grow. If you’re not measuring it daily, the "marginal cost is trending down" narrative is meaningless - you’ll still get blindsided by scale.

I'm biased but the winners will be the ones who treat AI like any other cloud resource: ruthlessly measured, budgeted, and tuned.

Ironically, except for Graviton (and that's also plateauing, plus it requires that your workload can actually run on it), basically no established AWS service - EC2, S3, etc. - has been reduced in cost since 2019.

Look at the early days of AWS vs recent years. The fact that AWS services have been basically flat since 2019 in a high-inflation environment is actually pretty dang good on a relative basis.

Considering AWS's profit margins, I'm fairly sure there are plenty more cost reductions that could have been passed along.

Dude, thank you for this service. I use ec2instances.info and vantage.sh for Azure all of the time.