Everyone talked about the marketing stunt that was Anthropic's gated Mythos model with an 83% result on CyberGym. OpenAI just dropped GPT-5.5, which scores 82% and is open for anybody to use.

I recommend that anybody in offensive or defensive cybersecurity experiment with this. This is the real data point we needed - without the hype!

Never thought I'd say this but OpenAI is the 'open' option again.

The real 'hype' was the oh-snap realization that OpenAI would absolutely release a model competitive with Mythos within weeks of Anthropic announcing theirs, and that Sam would not gate access to it. So the panic was that the cyber world had a projected two weeks to harden against all these new zero-days before Sam inevitably declared open season for blackhats to discover and exploit them.

The GPT-5.5 API endpoint started to block me after I escalated with ever more aggressive use of rizin, radare2, and Ghidra to confirm correct memory management and cleanup in error-code branches while working with a buggy proprietary third-party SDK. After I explained myself more clearly, it let me carry on. Knock on wood.
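For context, this is the kind of pattern I was auditing for. A minimal sketch (not the SDK's actual code; the function and names here are made up): every error branch funnels through a single cleanup label, so each acquired resource is released exactly once on every path, which is also easy to confirm in a disassembler.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical SDK-style function: on any failure, every resource
 * acquired so far must be released. The single-exit "goto cleanup"
 * pattern keeps all release logic in one auditable place. */
int process(const char *input, char **out) {
    int rc = -1;          /* pessimistic default: failure */
    char *buf = NULL;
    FILE *f = NULL;

    buf = malloc(64);
    if (!buf)
        goto cleanup;     /* nothing acquired yet besides buf (NULL-safe free) */

    f = tmpfile();        /* stand-in for an SDK handle */
    if (!f)
        goto cleanup;     /* buf is freed below */

    if (snprintf(buf, 64, "processed:%s", input) < 0)
        goto cleanup;     /* both buf and f are released below */

    *out = malloc(strlen(buf) + 1);
    if (!*out)
        goto cleanup;
    strcpy(*out, buf);
    rc = 0;               /* success */

cleanup:
    /* Runs on every path, success or failure. */
    if (f)
        fclose(f);
    free(buf);            /* free(NULL) is a no-op */
    return rc;
}
```

The point of checking the compiled error branches is that hand-rolled early returns in a buggy SDK often skip one of these releases, which is exactly the kind of leak or double-free you hunt for in Ghidra.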

So there is a safety model watching your behavior for these kinds of things.

So you're saying that blackhats will be required to do a small bit of roleplay if they want the model to assist them? I'm not against public access, BTW, just pointing out how absurd that PR-oriented "safety" feature is. A "we did something, don't blame us" sort of measure.

It isn't even my intent to naysay their approach. They probably have to do something along those lines to avoid being convicted in the court of public opinion. I just think it's an absurd reality.

It's a liability shield, and it helps avoid unsavory headlines in the news.

Does that mean that we're likely to see Mythos released soon?

The prevailing theory is that Anthropic doesn't have sufficient compute capacity to support Mythos at scale, which is the real reason it hasn't been released.

It's almost embarrassing how susceptible we are to these marketing campaigns.

Dunno about you, but I didn’t fall for it. I’m reminded of how they were “afraid” to release GPT-2 because of the “power” it had. Hype train!

Lack of information, lack of knowledge.

The "AI" "technology" is an easy excuse to create an artificial information gap in the era of the interconnected.

[deleted]

> Never thought I'd say this but OpenAI is the 'open' option again.

Compared to Anthropic, they always have been. Anthropic has never released any open models. Never willingly released Claude Code's source (unlike Codex). Never released their tokenizer.

What's "open" about any of these companies?

I'm tired of words being misused. We have hoverboards that do not hover, self-driving cars that do not, actually, self-drive, starships that will never fly to the stars, and "open"… I can't even describe what it's used for, except everybody wants to call themselves "open".

[deleted]

Doesn't OpenAI get mad if you ask cybersecurity questions, forcing you to upload a government ID and otherwise silently routing you to a less capable model?

> Developers and security professionals doing cybersecurity-related work or similar activity that could be mistaken by automated detection systems may have requests rerouted to GPT-5.2 as a fallback.

https://developers.openai.com/codex/concepts/cyber-safety

https://chatgpt.com/cyber

I don't like this trend, but I get why they require it. The alternative seems to be to just ban cybersecurity-related questions.

Anthropic has started to ask for IDs for use of their products, period.

I don't like that trend. I get why they're doing it, but I don't like it.

Are you in the UK? I've not had this happen to me (I'm not in the UK) so I'm wondering if the Online Safety Act has affected this, as it has with other products.

I am from the UK and have not had this happen to me (yet, perhaps).

They flat-out gate any API access to the main models behind Persona ID verification. Entirely.

In my experience, OpenAI has become very sensitive about the use of their tools for security research. I am using MCP servers for tools like IDA Pro and Ghidra (for malware analysis) and recently received a warning:

> OpenAI's terms and policies restrict the use of our services in a number of areas. We have identified activity in your OpenAI account that is not permitted under our policies for: - Cyber Abuse

I raised an appeal, which was denied. To be fair, I think it's close to impossible for someone looking at the chat history to differentiate between legitimate research and malicious intent. I have also applied for the security research program that OpenAI is offering but didn't get any reply on that.

Seems like OpenAI only acts open for theatrical and attention-grabbing purposes though, i.e. when backed into a corner and it's good for their image.

Isn't it the case that cyber questions are being routed to dumber models at OpenAI?

Do you have a source for that?

Neither the release post nor the model card seems to indicate anything like this?

Anything that even vaguely smells like security research, reverse engineering, or similar "dual-use" applications hits the guardrails hard and fast. "Hey Codex, here is our codebase, help us find exploitable issues" gets an "I can't help you with that, but I'm happy to give you a vague lecture on memory safety or craft a valgrind test harness."

it's still somewhat gated behind "trusted access" for cyber, see https://chatgpt.com/cyber

Being "more" open than something totally closed doesn't make you open. The name is still BS.

> Anthropic's gated Mythos model

aka the perfect marketing ploy

Reminds me of Gmail's early invite only mode.

[deleted]

I ignore any hype news.

Anthropic is the embodiment of bullshitting to me.

I read Cialdini many decades ago and I am bored by Anthropic.

OpenAI is very clever. With the advent of Claude, OpenAI disappeared from the headlines. Who or what was this Sam again that everyone was talking about a year ago?

OpenAI has a massive user advantage, so they can simply follow Anthropic's release cycle to ridicule them.

I think it is really brutal for Anthropic how easily they are getting passed by OpenAI, and it is getting worse for Anthropic with every new GPT version.

OpenAI owns them.

Who's Sam again? Oh, that person whose house was molotov'd last week? Or the person who had an exposé written in The New Yorker calling him a sociopath? I forget.