I'm not sure how solvable it is. It only takes one screw up to ruin the reputation, and a screw up is basically guaranteed.

The tech has existed for a while but nobody sane wants to be the one who takes responsibility for shipping a version of this thing that's supposed to be actually solid.

Issues I saw with OpenClaw:

- reliability (mostly due to context mgmt), esp. memory, consistency. Probably solvable eventually

- costs, partly solvable with context mgmt, but the way people were using it was "run in the background and do work for me constantly" so it's basically maxing out your Claude sub (or paying hundreds a day), the economics don't work

- you basically had to use Claude to get decent results, hence the costs (this is better now and will improve with time)

- the "my AI agent runs in a sandboxed docker container but I gave it my Gmail password" situation... (The solution is don't do that, lol)

See also simonw's "lethal trifecta":

>private data, untrusted content, and external communication

https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/

The trifecta (prompt injection) is sorta-kinda solved by the latest models from what I understood. (But maybe Pliny the liberator has a different opinion!)