@dang this is a very interesting and relevant doc. I think it needs another chance at making it to the front page.

This is a fairly easy-to-read doc discussing some of the challenges of using AI tooling in a forward-thinking and disciplined way. Coming from Thoughtworks, it also carries a bit of gravitas and legitimacy.

There's good stuff in here. It would be a shame for the larger HN community to miss out on this conversation.

Ok, let's give it a try. (Btw, @dang doesn't work reliably - for that you need to email hn@ycombinator.com. I only saw this by accident.)

I think the original title is better than the current one, though: "The future of software engineering – [Thoughtworks] retreat findings and strategic insights"

Why do you think it is better?

> Coming from Thoughtworks it also gives a bit of gravitas

Why? I thought the opposite. Consultancies, of which Thoughtworks is one, publish thought leadership as marketing material.

Mainly because Martin Fowler is part of their C-suite.

I agree that it's marketing material, but that doesn't instantly make it garbage. I've been reading their quarterly Thoughtworks Radar for a while now and it's clearly put together by people who understand the industry.

> "Where does the rigor go?"

> Engineering quality doesn't disappear when AI writes code. It migrates to specs, tests, constraints, and risk management.

These are generic "thoughts" you can get from any agency pushing AI SDLC. The pages I read through left me wondering if there was even a real retreat.

You're right that this isn't some groundbreaking revelation. If you're using AI enough to be feeling it, you're already seeing what they're talking about. The purpose of a paper/retreat like this is to get it all together and written down, then disseminate it to the wider world. I think the paper does a good job of collecting information that isn't wrong, with enough substance to help guide folks making decisions.

It's drivel.