> Genuine question - are people comfortable with this?

It's a question of degree, but in general, yeah. I'm totally comfortable being reliant on other entities to solve complex problems for me.

That's how economies work [1]. I neither have nor want to acquire the lifetime of experience I would need to learn how to produce the tea leaves in my tea, or the clean, potable water they're steeped in, or the mug containing them, or the concrete walls surrounding me 50 meters above ground level, and so on and so forth. I can live a better life by outsourcing the need for this specialized knowledge to other people, trading my own increasingly specialized knowledge with them in exchange. Even if I had 100 lifetimes to spend, and not the 1 I actually have, I would probably want to put most of them toward things that, you know, aren't already solved-enough problems.

Everyone doing anything interesting works like this, with vanishingly few exceptions. My dad doesn't need to know how to do algebra to get his taxes done; he just has an accountant. And his accountant doesn't need to know how to rewire his turn-of-the-century New England home. And if you look at the exceptions, like that really cute 'self-sufficient' family who uploads weekly YouTube videos called "Our Homestead Life"... it often turns out that the revenue from that YouTube channel is a nontrivial part of what keeps the whole operation running. In other words, even if they genuinely no longer go to Costco, it's a bit of a swindle.

[1]: https://www.youtube.com/watch?v=67tHtpac5ws

> My dad doesn't need to know how to do algebra to get his taxes done, he just has an accountant.

This is not quite the same thing. The AI is not perfect; it frequently makes mistakes or produces suboptimal code. As a software engineer, you are responsible for finding and fixing those. This means you have to review and fully understand everything that the AI has written.

Quite a different situation than your dad and his accountant.

I see your point. I don't think it's different in kind, just degree. My thought process: First, is my dad's accountant infallible?

If not, then they must themselves make mistakes or do things suboptimally sometimes. Whose responsibility is that - my dad's, or his accountant's?

If it is my dad's, does that then mean he has an obligation to review and fully understand everything the accountant has written?

And do we have to generalize that responsibility to everything and everyone my dad has to hand off work to in order to get something done? Clearly not, that's absurd. So where do we draw the line? Right now you draw it in the same place I do, but I don't see why we should expect that line to be static.

> This means you have to review and fully understand everything that the AI has written.

Yes, and people who care and are knowledgeable already do this. I do, for one.

But there’s no way one’s review is as thorough as it would have been if one had written the code to solve the problem oneself. Writing is understanding. You’re trading thoroughness and integrity for chance.

Writing code should never have been a bottleneck. And since it wasn’t, any massive gains are due to being okay with trusting the AI.