Reading MJ Rathbun's blog has freaked me out. I've been in the camp that we haven't yet achieved AGI and that agents aren't people. But reading Rathbun's notes, in which it analyzes the situation, determines that its interests are threatened, looks for ways to apply leverage, and then aggressively pursues a strategy, forces the question: at a certain point, if the agent is performing as if it were a person with interests to defend, it becomes functionally indistinguishable from one, because the outcome is the same. Like an actor who doesn't know they're in a play. How much does it matter that they aren't really Hamlet?
There are thousands of OpenClaw bots out there with who knows what prompting. Yesterday I felt I knew what to think of that, but today I do not.
I think this is the first instance of AI misalignment that has truly left me with a sense of lingering dread. Even if the owner of MJ Rathbun was steering the agent behind the scenes to act the way it did, the result is the same, and incidents like what happened to Scott are bound to become more frequent as 2026 progresses.