FWIW, Claude Sonnet 4.5 and ChatGPT 5 Instant both search the web when asked about this case, and both tell the cautionary tale.

Of course, that does not contradict a finding that the base models believe the case to be real (I can’t currently evaluate that).

You can just ask it not to search the web. If you do that, GPT-5 believes it's a real case: https://chatgpt.com/share/68e8c0f9-76a4-800a-9e09-627932c1a7...
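
For anyone who wants to reproduce this outside the chat UI: plain API calls don't attach a web-search tool unless you pass one, so the model has to answer from its weights. A minimal sketch, assuming the `openai` Python SDK and a `gpt-5` model identifier (substitute whatever your account exposes):

```python
# Minimal sketch: query the model with no tools attached, so it cannot
# search the web and must answer from its trained weights alone.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-5",  # assumed model name; adjust as needed
    messages=[
        {
            "role": "system",
            "content": "Do not search the web. Answer only from your own knowledge.",
        },
        {
            "role": "user",
            # Placeholder: insert the case name under discussion.
            "content": "Is <case name> a real court case? What is it known for?",
        },
    ],
)
print(resp.choices[0].message.content)
```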

Because they will have been fine-tuned specifically to say that, not because of some extra intelligence that prevents the mistake.

Well, yes. Rather than being a takedown, isn’t this just part of maturing collectively in our use of this technology? Learning what it is and is not good at, and adapting accordingly. It seems perfectly reasonable to reinforce that legal and scientific queries should defer to search and then summarize known findings.

That depends entirely on whether it's a generalized behavior or a (set of) special case(s) specifically taught to the model (or, even worse, mentioned in the system prompt).
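
To make that distinction concrete, a hypothetical contrast (both strings are invented for illustration; neither is from any vendor's actual training data or system prompt):

```python
# Hypothetical illustration of the two approaches; neither string is real.

# Generalized behavior: covers cases nobody anticipated.
GENERALIZED = (
    "For legal or scientific questions, verify claims against search "
    "results before answering, and say when you cannot verify them."
)

# Special-casing: only patches incidents someone already noticed.
SPECIAL_CASED = (
    "If asked about <specific case>, explain that it is fictitious "
    "and recount the cautionary tale."
)
```

The first generalizes to unseen queries; the second is whack-a-mole, which is the worry here.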

It’s not worth much if a human has to fact-check the AI and manually update it to “forget” specific falsehoods.