> If you are using it to write code, you really care about correctness and can see when it is wrong.

I heavily doubt that. A lot of people only care whether it works. Just push out features and close tickets as fast as possible. The LLM generates a lot of code, so it must be correct, right? Meanwhile only the happy path is verified; all the ways things can go wrong are ignored or buried under complexity that makes the code look impressive but adds nothing in terms of structure, architecture, or understanding of the domain problem. Tests are generated, but they often mock the very parts that need testing. Typing issues are just cast away without any thought about why there might be a type error. It's all short-term gain for long-term pain.

Well, it 'working' is part of it being correct. That is still something of a guardrail against the AI returning complete garbage.

Also, your point is true of non-AI code too. A lot of people write bad code, don't check non-happy-path behavior, don't have good test coverage, and so on.

If you are an expert programmer and learn how to use AI properly, you can get it to generate all of those things correctly. You can guide it towards writing proper tests that check edge cases and not just the happy path.
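As a sketch of what "proper tests, not just the happy path" might look like in practice (the parser and its inputs are hypothetical, chosen only for illustration): the happy-path assertion is the one a lazy generation tends to stop at; the edge cases are what you have to explicitly steer the model toward.

```python
def parse_price(raw: str) -> float:
    """Parse a price string like '$1,299.99' into a float."""
    cleaned = raw.strip().lstrip("$").replace(",", "")
    if not cleaned:
        raise ValueError("empty price string")
    return float(cleaned)

# Happy path -- typically the only thing a generated test covers:
assert parse_price("$1,299.99") == 1299.99

# Edge cases -- whitespace, no currency symbol, invalid input:
assert parse_price("  $0.99 ") == 0.99
assert parse_price("5") == 5.0
try:
    parse_price("")
except ValueError:
    pass
else:
    raise AssertionError("empty input should raise ValueError")
```

The point isn't this particular parser; it's that the model will happily write the first assertion and stop unless you review the tests with the same rigor you'd apply to hand-written ones.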

I think a lot of people are having great success by doing this. I know I am.