it's been grinding my gears lately how people keep trying to compare "blurry jpeg machines" to human intelligence and development.
llms don't learn from being corrected; a deployed model's weights are frozen, and nothing you tell it sticks past the conversation. nor do they operate with any sort of intent toward precision.
we can predict, plan for, and design around the most common human errors. also, humans typically get smarter and learn from their mistakes.
llms will go on making the same ridiculous mistakes, confidently making up bullshit frameworks, methods, and code, and no matter how much correction you try to offer, they will never get any better until the next multi-billion dollar model update. and even then, improvement is more of a crossed-fingers situation than an inevitability.
I hate hate hate hate hate that AI seems to be amplifying the Dunning-Kruger effect in all our lives...