I absolutely love the utility aspect of LLMs, but part of me wonders if moving faster by using AI is going to make these sorts of failures more and more common.

If true then what "utility" is there?

More visibility for the average person into how brittle software is?