After three months of seeing what agentic engineering produces first-hand, I think there's going to be a pretty big correction.
Not saying that AI doesn't have a place, or that models aren't getting better, but there is a seriously delusional state in this industry right now.
And we haven't even started to see the security ramifications... my money is on the black hats in this race.
We are starting to see them, and the bugs too.
But to your point, I think it's quite likely we'll see at least one or two major AI-related security incidents this year.
I've been predicting a "challenger disaster" moment: https://simonwillison.net/2026/Jan/8/llm-predictions-for-202...
My money is on a lot more than one or two!
Yes, I'm with you. I spent the last two months heavily doing "agentic engineering," and I don't think working like that is optimal as a default.
LLMs are certainly useful and a productivity boost, but generating 99% of your code with them is way overdoing it.