Let's see how this will improve daily SOC work. I still don't see what the big difference between Mythos and Opus is, security-wise. I'm confident that this kind of vuln detection is a long-term improvement. But does Mythos specifically make such a big difference compared to "normal" models? I would love to see what the actual difference is.
Quantifying the abilities of an LLM is a hard research problem, so I'm not sure if I can describe it in any great way, but Mythos did seem to be fairly clever about putting together things from different domains to find problems.
For instance, in one of the included bugs (2022034) it figured out that a floating point value being sent over IPC could be modified by an attacker in such a way that it would be interpreted by the JS engine as an arbitrary pointer, due to the way the JS engine uses a clever representation of values called NaN-boxing. This is not beyond the realm of a human researcher to find, but it did nicely combine different domains of security.
As the person responsible for accidentally introducing that security problem (and then fixing it after the Mythos report), while I am aware of NaN-boxing (despite not being a JS engine expert), I was focused more on the other more complex parts of this IPC deserialization code so I hadn't really thought about the potential problems in this context. It is just a floating point value, what could go wrong?
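For readers unfamiliar with NaN-boxing, here is a minimal, self-contained sketch of the failure mode being described. The tag layout and constants below are made up for illustration (loosely modeled on how SpiderMonkey-style engines pack type tags into the unused bits of NaN patterns; real engines differ in detail):

```python
import struct

# Hypothetical tag layout: the top 17 bits of the 64-bit value select a
# type tag, the low 47 bits carry the payload. These constants are
# illustrative, not any real engine's layout.
TAG_SHIFT = 47
TAG_OBJECT = 0x1FFF8  # assumed tag meaning "payload is a pointer"

def double_to_bits(d: float) -> int:
    """Reinterpret a double's IEEE 754 bit pattern as an integer."""
    return struct.unpack("<Q", struct.pack("<d", d))[0]

def bits_to_double(b: int) -> float:
    return struct.unpack("<d", struct.pack("<Q", b))[0]

def is_object(value_bits: int) -> bool:
    return (value_bits >> TAG_SHIFT) == TAG_OBJECT

def unbox_pointer(value_bits: int) -> int:
    return value_bits & ((1 << TAG_SHIFT) - 1)

# An attacker-controlled "double" arriving over IPC: its bit pattern is
# a quiet NaN whose payload encodes an arbitrary address.
evil_bits = (TAG_OBJECT << TAG_SHIFT) | 0x414141414141
evil_double = bits_to_double(evil_bits)  # looks like an ordinary NaN

# The receiving side deserializes it as a plain float and stores it
# without re-checking the tag bits...
boxed = double_to_bits(evil_double)
assert is_object(boxed)  # ...and now treats attacker bits as a pointer
print(hex(unbox_pointer(boxed)))
```

The crux is that every 64-bit pattern is a "valid" double, so a deserializer that accepts any double implicitly accepts any tag/payload combination the engine might later interpret.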
Okay, so far it makes sense to me. But is this JS/floating-point issue, which isn't something super special or super rare, only detected and identified by Mythos, while Opus wouldn't get to this point?
There doesn't have to be a huge qualitative discontinuity between Opus and Mythos. It's just that Mythos has reached a threshold where it's finally smart enough that putting it in a loop and asking it to find bugs is suddenly really effective. Especially at the beginning, Mozilla wasn't doing anything particularly clever with prompts. Mythos is just smart enough that the hit rate on obvious prompts is high enough to matter. (Maybe you can get similar performance out of Opus 4.6 with really smart prompts, but AFAICT nobody had managed it until Mythos.)
Among other things, Mythos seems better at "let me find, weaponize, and stack vulnerabilities until I get end-to-end from untrusted content to root", rather than just finding one thing in a specific identified area.
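To make the "putting it in a loop and asking it to find bugs" idea concrete, here is a hypothetical sketch of such a harness. `query_model` is a stand-in for any LLM API call; it is stubbed with canned answers so the control flow is runnable. This is an illustration of the general pattern, not Mozilla's actual setup:

```python
def query_model(prompt: str) -> str:
    # Stub: a real harness would call an LLM API here.
    canned = {
        "audit": "ipc.cpp:42 possible unchecked NaN-boxed double",
        "verify": "CONFIRMED",
    }
    return canned["verify" if prompt.startswith("Verify") else "audit"]

def audit(files: list[str], passes: int = 3) -> list[str]:
    findings: set[str] = set()
    for _ in range(passes):          # the "loop" part: just keep asking
        for path in files:
            report = query_model(f"Find memory-safety bugs in {path}")
            for line in report.splitlines():
                if line.strip():
                    findings.add(line.strip())
    confirmed = []
    for finding in sorted(findings):  # second pass: self-verification
        if "CONFIRMED" in query_model(f"Verify this finding: {finding}"):
            confirmed.append(finding)
    return confirmed

print(audit(["dom/ipc/serializer.cpp"]))
```

Nothing here is clever; the claim in the thread is precisely that once the model's hit rate on obvious prompts crosses a threshold, this kind of dumb loop suddenly becomes effective.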
Results similar to Mythos's have been replicated by weaker models.
I think it's more a case of Mythos raising widespread awareness that tireless LLMs can be weaponized to dig through code and find that one tiny flaw nobody spotted.
The report I saw kind of seemed to be pointing at a flaw and asking "do you see it?" which is not the same thing. I felt a pretty large difference between Opus 4.6's results and Mythos's, so I would be surprised if even weaker models did anywhere near as well. I'd like to see these results, if they are using a decent methodology.
Of course, even the reports with flawed methodology could be suggesting that a great harness + weak model might achieve a similar level of results as a mediocre harness + strong model. But I'd want to see solid evidence for that.
There is a phase transition where LLMs match or exceed humans' ability to do something, and from that point on, even if the difference from the previous version is small, the tool goes from something people use rarely to something people use all the time.
There was a time when the entire transportation infrastructure in the US was built around horses. Even after cars were invented, they weren't obviously better than horses for most people, especially because there was no infrastructure to support them. But the infrastructure and the cars kept improving until cars were better for some people at some things, then suddenly better at most things. Then people stopped using horses, and we re-organized our entire transportation network around cars.
But there was never a revolutionary technological change. The technology of cars in the 1930s was the same fundamental technology as the cars in the 1890s. Just at some point it became "good enough" and that was it.
I think when people say that AI is a bubble, they are assuming that anything economically useful that LLMs cannot do today is _qualitatively_ different from what LLMs can do right now, and that LLMs cannot do it even in theory without some major technological innovation. But I have a suspicion that there are a large number of valuable things that, once LLMs advance just a little bit more and the harnesses and infra around them improve a little bit more, will just be completely taken over by LLMs.