Based on the comments here, it's surprising that anything in society works at all. I didn't realize the bar was "everything perfect every time, perfectly flexible and adaptable". What a joy some of these folks must be to work with, answering every new technology with endless reasons why it's worthless and will never work.

I think perhaps you underestimate how antithetical the current batch of LLMs is to what most programmers strive for every day, and what we want from our tools. It's not about losing our jobs, it's about "correctness" (or, as said below, determinism).

In a lot of jobs, particularly in creative fields like marketing, media, and writing, the definition of a job well done is a fairly grey area. I think AI will be most disruptive in these areas.

But in programming, there is a hard minimum of quality. Given a set of inputs, does the program return the correct answer or not? When you ask it what 2+2 is, do you get 4?
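To make that concrete, here's a minimal sketch (the `add` function is hypothetical, just to illustrate the point): a deterministic program passes this check on every single run, or it fails on every single run.

    # A deterministic function: same inputs, same output, every run.
    def add(a: int, b: int) -> int:
        return a + b

    # This assertion either always passes or always fails.
    assert add(2, 2) == 4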

When you ask an AI anything, it might be right 50% of the time, or 70% of the time, but you can't blindly trust the answer. Chain five of those answers together and, assuming independence, you're down to 0.7^5 ≈ 17%. A lot of us just don't find that very useful.

I am a SWE myself and use LLMs to write ~100% of my code. That does not mean I fire-and-forget multiplexed codex instances; many times I step through and approve every edit. Even if it were nothing but a glorified stenographer, there are substantial time savings in being able to prototype and validate ideas quickly.

> But in programming, there is a hard minimum of quality. Given a set of inputs, does the program return the correct answer or not? When you ask it what 2+2 is, do you get 4?

Whether something works or not matters less than whether someone will pay for it.

Most of the time when using AI, I have a lot more than one shot to ensure everything is correct.
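A toy sketch of why that changes things (`generate_candidate` is a made-up stand-in for whatever model API you use, and 70% is an assumed per-shot accuracy): pair a fallible generator with a deterministic check, and extra shots buy back reliability.

    import random

    def generate_candidate() -> int:
        # Hypothetical stand-in for a model call, right ~70% of the time.
        return 4 if random.random() < 0.7 else 5

    def passes(answer: int) -> bool:
        return answer == 4  # the deterministic bar: 2 + 2 must equal 4

    answer = None
    for attempt in range(10):
        answer = generate_candidate()
        if passes(answer):
            break  # a hard check plus retries recovers correctness

With a 70% per-shot success rate and an independent check, all ten shots fail only 0.3^10 ≈ 0.0006% of the time, which is a very different proposition from blindly trusting one answer.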