Great message, but I wonder if the people who do everything via LLM would even care to read such a message. And at what point is it hard/impossible to judge whether something is entirely LLM or not? I sometimes struggle a lot with this, being an OSS maintainer myself.
"the people who do everything via LLM". That's a bit of a straw man characterization. I don't believe that there are many professional developers "do everything with an LLM'. I don't even know what that statement means.
There may not be many, but these people do exist.
I watched someone ask Claude to replace all occurrences of a string instead of using a deterministic operation like “Find and Replace” available in the very same VSCode window they prompted Claude from.
Agreed. Imo, it is important to distinguish which parts of “everything” carry the weight of the concern. By doing that - we may be able to remove “LLM” from that equation entirely.
The direct problem isn’t that people are using LLMs for everything - it’s that some people can’t be bothered to provide reasonable diligence. Phrasing that concern by blaming LLMs implies that these were perfectly diligent human workers before LLMs came along. Do we really believe that to be the case?
On a widely used open source project I maintain, I've been seeing PRs in the last month that are a little off (they look okayish but are trivial, or try to solve problems in weird ways). When I look at the authors' accounts, they started opening PRs within the last few weeks and have already opened hundreds of PRs spread across hundreds of repositories.
They do exist; if "professional" just means "hired", it has no bearing on quality. It is in no way equivalent to "judicious" or "careful". If the salary rewards "push features", that's going to be the only incentive.
I know a few of them. Anything that can be done by an LLM will be done by an LLM. They practically worship Claude.
If you follow any of the programming subs on Reddit, especially the AI-oriented ones, there are a lot of people who at least claim to not code at all anymore: 100% of their contributions come from prompts.
I wonder if checking for false statements or hallucinations is the first step to detecting entirely LLM-generated contributions.