Something I'm finding odd is this seemingly perpetually repeating claim that the latest thing that came out actually works, unlike the last thing that obviously didn't quite work.

Then next month, of course, latest thing becomes last thing, and suddenly it's again obvious that actually it didn't quite work.

It's like running on a treadmill toward a dangling carrot: always right there in front of our faces, but never actually in hand.

The tools are good and improving. They work for certain things, some of the time, with varying degrees of manual stewarding by people who really know what they're doing. This is real.

But it remains an epic leap from there to the idea that writing code per se is a skill nobody needs anymore.

More broadly, I don't really understand what that could mean on a practical level, since code is just instructions for what the software should do. You can express those instructions at a higher level, and tooling (AI and otherwise) keeps making that more possible, but what would it mean to abstract away from the instructions entirely? It seems clear that can never yield software that does precisely what you want, rather than a probabilistic approximation that must be continually corrected.

I think the real craft of software, such as there is one, is constructing systems of deterministic logic that make things happen in precisely the way we want. Whatever happens to the tooling, or whatever we end up calling code, that won't change.

that's a good take

> getting software that does what you want

so then we become PMs?