I mean, that’s kinda the whole issue: they used to respect safety work, but now they don’t. For instance:

  The Financial Times reported last week that "OpenAI slash[ed] AI model safety testing time" from months to days.
The direction is clear. This isn’t about sorting people by personal preference for corporate structure; it’s about corporate negligence. Anthropic a) doesn’t have the most advanced models, b) has far less funding, and c) can’t do “doom analysis” (and, ideally, prevention!) on OAI’s closed-source models, especially before they’re officially released.