We have AI companies constantly fear-mongering that their next model is somehow too dangerous to release. Yet they keep going on acquisition sprees.

This just confirms to me that we are nowhere near AI being able to write any complicated software. I mean, if it could, wouldn't OpenAI just prompt it into existence? ;)

> We have AI companies constantly fear-mongering that their next model is somehow too dangerous to release

I'm guessing you're referring to this recent report on the security vulnerabilities Mythos found and submitted patches for? That just seems like they don't want the negative press and/or liability if their new model ends up being used to create 0-days that cause widespread damage.

https://red.anthropic.com/2026/mythos-preview/