This was incredibly vague and a waste of time.
What type of code? What types of tools? What sort of configuration? What messaging app? What projects?
It answers none of these questions.
Yeah, I’ve gotten to the point where I’ll just stop reading AI posts after a paragraph or two if there are no specifics. The “it works!” / “no it doesn’t” genre is saturated with generality. Show, don’t tell, or I’ll default to believing you don’t have anything to show at all.
That was very vague, but I kinda get where they're coming from.
I'm now using pi (the thing OpenClaw is built on), and within a few days I built a tmux plugin and a semaphore plugin^1, and it has automated the way _I_ used to use Claude.
The things I disagree with OP on are: the usefulness of persistent memory beyond a single line in AGENTS.md ("If the user says 'next time', update your AGENTS.md"), the use of long-running loops, and the idea that everything can be resolved via chat. That might be true for simple projects, but any original work needs me to design the 'right' approach ~5% of the time.
That's not a lot, but AI lets you create load-bearing tech debt within hours, at which point you're stuck with a lot of shit and you don't know how far it got smeared.
[1]: https://github.com/offline-ant
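For illustration, the single-line memory approach described above might look something like this in practice. The quoted instruction is the commenter's; the surrounding file structure is a hypothetical sketch, since they don't show their actual AGENTS.md:

```markdown
# AGENTS.md

## Memory
If the user says "next time", update your AGENTS.md.
```

The idea being that instead of a persistent memory subsystem, corrections get folded back into the instructions file on request, so the file itself is the memory.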
Would you describe your Claude workflow?
Well, note that the previous post was about how great the Rabbit R1 is…
I am somewhat worried that this is the moment AI psychosis has come for programmers.
Add to that worry the suspicion that half this push is just marketing stunts by AI companies.
(Not necessarily this specific post).
Yeah… I'm using Claude Code almost all day every day, but it still 100% requires my judgment. If another AI like OpenClaw was just giving the thumbs up to whatever CC was doing, it would not end well (for my projects anyway).
Exactly. Posts that say "I got great results" are just advertisements. Tell me what you're doing that's working well for you: what's your workflow, what's your tooling, what kind of projects have you made?
>Over the past year, I’ve been actively using Claude Code for development. Many people believed AI could already assist with programming—seemingly replacing programmers—but I never felt it brought any revolutionary change to the way I work.
Funny, because just last month, HN was drowning in blog posts saying Claude Code is what enables them to step away from the desk, is definitely going to replace programmers, and lets people code "all through chatting on [their] phone" (being able to code from your phone while sitting on the bus seems to be the magic threshold that makes all the datacenters worth it).
There is no code, there are no tools, there is no configuration, and there are no projects.
This is an AI generated post likely created by going to chatgpt.com and typing in "write a blogpost hyping up [thing] as the next technological revolution", like most tech blog content seems to be now. None of those things ever existed, the AI made them up to fulfill the request.
> There is no code, there are no tools, there is no configuration, and there are no projects.
To add to this, OpenClaw is incapable of doing anything meaningful. The context management is horrible, the bot constantly forgets basic instructions, and often misconfigures itself to the point of crashing.
It didn’t seem entirely AI generated to me. There were at least a few sentences that an LLM would never write (too many commas).
There is zero evidence this is the case. You are making up baseless accusation, probably due to partisan motivations.
edit: love the downvotes. I guess HN really is Reddit now. You can make any accusation without evidence and people are supposed to just believe it. If you call it out you get downvoted.
Is there any evidence the opposite is the case?
It doesn’t work like that. The burden is on the person making the claim. If you are going to accuse someone of posting an AI-written article, you need to show evidence.
It's a losing strategy in 2026 to assume by default that any questionable spam blog/comment/etc content is written by an actual human unless proven otherwise.
Besides, if there are enough red flags that make it indistinguishable from actual AI slop, then chances are it's not worth reading anyway and nothing of value was lost by a false positive.
Please don't tell me you read that article and thought it was written by a person. This is clearly AI generated.
Did they even end up launching and maintaining the project? Did things break and were they able to fix it properly? The amount of front-loaded fondness for this technology without any of the practical execution and follow up really bugs me.
It's like we all fell under the spell of a terminal endlessly printing output as some kind of measurement of progress.
It's AI slop itself. It seems inevitable that any AI enthusiast ends up having AI write their advocacy too.
I just give the link to those posts to my AI to read. If it wasn't worth a human writing, it's not worth a human reading.
Does it matter?
It reads like articles that pretended blockchain was revolutionary. Also the article itself seems like AI slop.