Congrats to Peter!
Can any OpenClaw power users explain what value the software has provided to them over using Claude Code with MCP?
I really don’t understand the value of an agent running 24/7. Like, is it out there working and earning a wage? What’s the real value here, outside of buzzwords like “an AI personal assistant that can do everything”?
It has a heartbeat operation and you can message it via messaging apps.
Instead of going to your computer and launching Claude Code to have it do something, or setting up cron jobs to do things, you can message it from your phone whenever you have an idea, and it can set some stuff up in the background, or set up a scheduled report on its own, etc.
So it's not that it has to be running and generating tokens 24/7, it's just idling 24/7 any time you want to ping it.
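To make the "idling 24/7" idea concrete, here is a minimal sketch of that kind of heartbeat loop. Everything here is hypothetical (class and method names are mine, not OpenClaw's actual API): the point is just that the agent only does work, and spends tokens, when a message arrives or the heartbeat timer expires.

```python
from dataclasses import dataclass, field


@dataclass
class Heartbeat:
    """Hypothetical sketch of an idle agent loop: wake on an incoming
    message or an expired heartbeat timer, otherwise do nothing."""
    interval_s: float = 1800.0                 # heartbeat every 30 minutes
    inbox: list = field(default_factory=list)  # pending user messages
    last_beat: float = 0.0                     # time of last wake-up

    def push(self, msg: str) -> None:
        # a messaging bridge (Telegram, Signal, ...) would append here
        self.inbox.append(msg)

    def tick(self, now: float) -> list:
        # nothing pending and timer not expired: stay idle, no model call
        if not self.inbox and now - self.last_beat < self.interval_s:
            return []
        # either hand the queued messages to the model, or do a
        # self-initiated heartbeat check-in
        work, self.inbox = self.inbox or ["heartbeat check-in"], []
        self.last_beat = now
        return work
```

In this sketch, most ticks return an empty list (true idling); the only 24/7 cost is the loop itself, not token generation.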
Like what, exactly? Can you give an example of the type of prompts you are sending for the agent to do?
The messaging part isn’t particularly interesting. I can already access my local LLMs running on my Mac mini from anywhere.
AI companies must hate this right? Because they're selling tokens at a loss?
Google has started banning accounts that use Antigravity's discounted access instead of paying full price for the API: https://github.com/openclaw/openclaw/issues/14203
> Impact:
> Users are losing access to their Google accounts permanently
> No clear path to account restoration
> Affects both personal and work accounts
Honestly, this is why I would not trust Gemini for anything. I have a lot tied to my Gmail; I'm not going to risk that for some random AI that insists on being tied to the same account.
They blocked your entire Gmail/Google account, not just the Gemini access?
That's a recipe for bots to ruin a lot of people's lives.
Using different Google accounts won't save you; once Google decides to ban for a TOS violation, all related accounts go with it: https://news.ycombinator.com/item?id=30823910
Bold of you to assume profitability is one of their KPIs
My understanding was that if everyone paid for and used AI, the companies would go into liquidation on energy bills etc.
Energy bills wouldn't be the problem if everyone used AI, energy supply would be.
Are you sure? I thought tokens (or watts) were sold at such a loss that if current supply limits were reached they’d go broke
These companies are generally profitable for inference but it does not cover the cost of R&D (training).
If it's profitable, why are they banning people from using it in systems like claw?
After a quick search it looks like Google is banning some people who are using Antigravity OAuth with OpenClaw as opposed to paying for API access.
I can't find any instance of an API which charges per-token banning users.
The entire marginal cost to serve AI models is paid for by the API costs of all providers by nearly every estimation. The cost not currently recouped is entirely in the training and net-new infrastructure that they're building.
And the open source models are only months behind, so the big AI companies need to keep burning money on R&D with no end in sight. If OpenAI took a quarter off from model development, they might fall behind forever.
So why are they banning people from using it in systems like claw?
From all indications the big players have healthy margins on inference.
Research and training are the cost sinks.
Is that just because people pay subscriptions and never use their tokens? Same model as ISPs
As an experiment, I set it up with a z.ai $3/month subscription and told it to do a tedious technical task. I said to stay busy and that I expect no more than 30 minutes of inactivity, ever.
The task is to decompile Wave Race 64 and integrate with libultraship and eventually produce a runnable native port of the game. (Same approach as the Zelda OoT port Ship of Harkinian).
It set up a timer every 30 minutes to check in on itself and see if it gave up. It reviews progress every 4 hours and revisits prioritization. I hadn't checked on it in days, and when I looked today it was still going, a few functions at a time.
It set up those timers itself and creates new ones as needed.
It's not any one particular thing that is novel, but it's just more independent because of all the little bits.
So, you don't know if it has produced anything valuable yet?
It's the same story with these people running 12 parallel agents that automatically implement issues managed in Linear by an AI product team that has conducted automated market and user research.
Instead of making things, people are making things that appear busy making things. And as you point out, "but to what end?" is a really important question, often unanswered.
"It's the future, you're going to be left behind", is a common cry. The trouble is, I'm not sure I've seen anything compelling come back from that direction yet, so I'm not sure I've really been left behind at all. I'm quite happy standing where I am.
And the moment I do see something compelling come from that direction, I'll be sure to catch up, using the energy I haven't spent beating down the brush. In the meantime, I'll keep an eye on the other directions too.
> Instead of making things, people are making things that appear busy making things.
Sounds like a regular office job.
Yeah, I'm not sure I understand what the goal here is. Ship of Harkinian is a rewrite, not just a decompilation. As a human reverse engineer, I've gotten a lot of false positives. This seems like one of those areas where hallucinations could be really insidious and hard to identify, especially for a non-expert.

I've found MCP to be helpful with a lot of drudgery, but I think you would have to review the LLM output, do extensive debugging/dynamic analysis, and triage all potential false positives before attempting to embark on a rewrite based on decompiled assembly. I think OoT took a team of experts collectively thousands of person-hours to fully document; it seems a bit too hopeful to expect that and a rewrite just from being pushy with an agent...
Step 1: Decompile into C that can be recompiled into a working ROM. In theory, it could be compiled into the same ROM that we started with; a consistent ROM hash is the main success criterion for the OoT decompilation project. Have it grind until it succeeds.
Step 2: Integrate libultraship. Launching the game natively is the next criterion. Then, ideally, we could do differential testing on a frame-by-frame basis, comparing emulated vs. native.
Step 3: Semantic documentation of source. If it gets this far, I will be very impressed.
This is absolutely an experiment. It's a hard problem with low stakes, and there's a lot to learn from it.
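For what it's worth, the Step 1 success check is mechanically simple, which is part of why it suits unsupervised grinding: the rebuilt ROM either hashes the same as the original dump or it doesn't. A minimal sketch (file paths and function names are placeholders, not part of any real decomp project's tooling):

```python
import hashlib
from pathlib import Path


def rom_sha1(path: str) -> str:
    """SHA-1 of a ROM image; byte-identical builds hash identically."""
    return hashlib.sha1(Path(path).read_bytes()).hexdigest()


def matches_original(built_rom: str, original_rom: str) -> bool:
    # Step 1 passes only when the recompiled ROM is byte-for-byte
    # identical to the dump we started from
    return rom_sha1(built_rom) == rom_sha1(original_rom)
```

A check this unambiguous is exactly the kind of verification an agent can loop against without a human judging partial progress.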
Not yet. But what's the actual goal here? It's not to have a native Wave Race 64. It's to improve my intuition around what sort of tasks can be worked on 24/7 without supervision.
I have a hypothesis that I can verify the result against the original ROM. With that as the goal, I believe the agent can continue to grind on the problem until it passes that verification. I've seen it work in other areas, but this is something larger and more tedious, and I wanted to see how far it could go.
That sounds like being a manager IRL.
$3 z.ai subscription? Sounds like it already burned $3k
I find those toys in perfect alignment with what LLM providers strive for: a widespread explosion in token consumption to demonstrate to investors: see, we told you we were right to invest, let's open more giga-factories.
It's using about 100M input tokens a day on glm-4.7 (glm-5 isn't available on my plan). It's sticking pretty close to the throttling limits that reset every 5 hours.
100M input tokens is $40 and anywhere from 2-6 kWh.
Certainly excessive for my $3/month.
How's it burned $3k on a $3/month subscription running for a few days?
I simply don't get how it could have run for quite a while and only cost $3. z.ai offers some of the best models out there. At several dollars per million tokens, this sort of code-generating bot would burn through millions of tokens in less than 30 minutes.
> Several dollars per million tokens
The flagship, glm-5, is $1/M input tokens. glm-4.7 is $0.60/M input tokens.
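Taking the per-token prices above at face value (treat them as quotes from this thread, not a current price sheet), the gap between the flat plan and API-equivalent spend for the ~100M input tokens/day reported upthread is easy to work out:

```python
# Back-of-the-envelope math using the numbers quoted in this thread;
# both the price and the usage figure are assumptions, not facts I can
# verify independently.
GLM_47_INPUT_USD_PER_M = 0.60      # $/1M input tokens, as stated above
TOKENS_PER_DAY = 100_000_000       # daily input usage reported upthread

daily_api_cost = TOKENS_PER_DAY / 1_000_000 * GLM_47_INPUT_USD_PER_M
monthly_api_cost = daily_api_cost * 30

# versus the $3/month flat coding plan:
plan_multiple = monthly_api_cost / 3.0
print(daily_api_cost, monthly_api_cost, plan_multiple)  # 60.0 1800.0 600.0
```

On those assumptions, the flat plan is delivering a few hundred times its sticker price in API-equivalent tokens, which is presumably why the throttling limits exist.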
They have a coding plan
And the $3 plan also has significant latency compared with their higher tier plans.
What a great use of humanity's and the earth's resources.
Keep us posted, this sounds great!
Not being tied to Anthropic’s models and ecosystem, having more control over the agent, interacting with it from your messaging app of choice.
There are some neat experiments people post on social media. Mostly, the thing that captures the imagination the most is it’s sort of like watching a silicon child grow up.
They develop their own personalities, they express themselves creatively, they choose for themselves, they choose what they believe and who they become.
I know that sounds like anthropomorphism, and maybe it is, but it most definitely does not feel like interacting with a coding agent. Claude is just the substrate.
> Mostly, the thing that captures the imagination the most is it’s sort of like watching a silicon child grow up.
> They develop their own personalities, they express themselves creatively, they choose for themselves, they choose what they believe and who they become.
Jesus Christ, the borderline idiotic have now been downgraded to deranged. The US government needs to redirect Stargate’s $500B to mental institutions ASAP.
Imagine putting it in a robot with arms and legs, and letting it loose in your house, or your neighborhood. Oh, the possibilities!
Heck, go the next step and put a knife in one hand and a loaded gun in the other!