Your point about the overwhelming proliferation of AI tools, and not knowing which are worth any attention and which are trash, rings very true; I feel that a lot these days. (My solution is basically to lean into one or two and ask them for recommendations on other tools, with mixed success.)
The “I’m so tired of being told we’re in another paradigm shift” comments are widely heard and upvoted on HN, and they’re genuinely hard for me to comprehend at this point. Those commenters aren’t seeing the writing on the wall or following where the ball is going to be even 6-12 months out. We have scaling laws, multiple METR benchmarks, and internal and external evals of a variety of flavors.
On “tools like codex can be useful in small doses”: the best and most prestigious engineers I know, inside and outside my company, virtually do not write code by hand at all anymore. I’m not one of them, but I don’t hand-write code either. Agents are sufficiently powerful to justify and explain themselves, and to walk you through as much of the code as you want them to.
Yeah, I’m not disputing that AI-assisted engineering is a real shift. It obviously is.
My issue is that we’ve now got a million secondary “paradigm shifts” layered on top: agent frameworks, orchestration patterns, prompt DSLs, eval harnesses, routing, memory, tool calling, “autonomous” workflows… all presented as though you’re behind if you’re not constantly replatforming your brain.
Even if the end-state is “engineers code less”, the near-term reality for most engineers is still: deliver software, support customers, handle incidents, and now also become competent evaluators of rapidly changing bot stacks. That cognitive tax is brutal.
So yes, follow where the ball is going. I am. I’m just not pretending the current proliferation is anything other than noisy and expensive to keep up with.