> What I've come to the conclusion is this: the best programs and the best tools are the ones that are lovingly (and perhaps with a bit of hate too) crafted by their developers.

I think this is correlation, not causation. It's obviously true to some extent, in the sense that, all things being equal, more care is good. But I think all you're really saying here is that good products are built well, and products that are built well tend to be built by developers who care enough to make the right design decisions.

I don't think there's any reason AI couldn't make technical decisions as good as (or better than) those of a human developer who is both technically knowledgeable and who cares. I personally care a lot about the products I work on, but I'm far from infallible. I often look back at decisions I've made and realise I could have done better. I can imagine how an AI with knowledge of every product on GitHub, plus a large corpus of technical architecture documentation and blog posts, could make better decisions than me.

I suppose there's also some creativity involved in making the "right" decisions. Sometimes products have unique challenges with no proven solutions that are considered more "correct" than any other. Developers in these cases need to come up with their own creative solutions and rank them against their own metrics. Could an AI do this? Again, I think so. At least LLMs today seem able to come up with solutions to novel problems, even if they're not always great at it right now. Perhaps there are limits to the creativity of current LLMs, though, and for certain problems that require deep creativity humans will always outperform the best models. But even that is probably only true if LLM architectures don't advance, and it assumes creativity is a limitation in the first place, which I'm far from convinced of.