> We are releasing GPT‑5.5 with our strongest set of safeguards to date

...

> we’re deploying stricter classifiers for potential cyber risk which some users may find annoying initially

So we should expect not to be able to check our own code for vulnerabilities, because the model inherently can't know whether we're feeding it our own code or someone else's.

Hopefully not, because checking your codebase for vulnerabilities is really valuable.
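For concreteness, here's roughly what that use case looks like: a minimal sketch using the OpenAI Python SDK, asking the model to review a snippet for vulnerabilities. The model name is taken from the quoted announcement and is illustrative, not confirmed.

```python
# Minimal sketch: asking a model to security-review a code snippet
# via the OpenAI Python SDK. Model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SNIPPET = '''
import sqlite3

def get_user(conn: sqlite3.Connection, username: str):
    # Classic SQL injection: user input interpolated into the query
    return conn.execute(
        f"SELECT * FROM users WHERE name = '{username}'"
    ).fetchone()
'''

response = client.chat.completions.create(
    model="gpt-5.5",  # illustrative; substitute whatever model you use
    messages=[
        {
            "role": "system",
            "content": "You are a security reviewer. Identify "
                       "vulnerabilities in the code and suggest fixes.",
        },
        {"role": "user", "content": SNIPPET},
    ],
)

print(response.choices[0].message.content)
```

The worry upthread is exactly that a cyber-risk classifier sees no difference between this request and the same prompt run against code you don't own.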

I hope it’s just limits on pentesting and the like, and not on code analysis and review.

But how would it know?