> It can code in an autocomplete sense.

I just (right before hopping on HN) finished up a session where an agent rewrote 3000 lines of custom tests. If you know of any "autocomplete" that can do something similar, let me know. Otherwise, I think saying LLMs are "autocomplete" doesn't make a lot of sense.

That’s neat, but it’s important to note that agentic systems aren’t composed of just the LLM. You have to take into account all the tools the system has access to, as well as the agentic harness used to keep the LLM from going off the rails. And even with all this extra architecture, which AI firms have spent billions to perfect, the system is still just… fine. Not even as good as a junior SWE.

That’s impressive. I don’t object to the fact that they make humans phenomenally productive. But “they code and think” makes me cringe. Maybe I’m mistaking lexicon differences for philosophical battles.

Yes, I think it is probably a question of semantics. I imagine you don't really take issue with the "they code" part, so it's the "they think" thing that bothers you? But what would you call it if not "thinking"? "Reasoning"? Maybe there is no verb for it?