I might be completely wrong, but to me, “Catastrophic Risk Official” sounds like a completely made up position.

And I don’t even mean made up in order to show responsibility. I mean made up in order to generate hype.

Yes, you are completely wrong.

There was actually an internal team with real technical benchmarks around LLM alignment, deception, and misuse. It's been gutted and most of the key people have left.

This doesn't sound like a job that needs a dramatic title like "Catastrophic Risk Official" to me

So the name being wrong means the department should be gutted?

Overly-serious naming is hardly a reason to throw the baby out with the bathwater.

> So the name being wrong means the department should be gutted?

All I said was that I like pancakes; where are you getting "I hate waffles" from?

> Overly-serious naming

It is actually completely unserious naming imo, which may have contributed to higher-ups at the company wondering what this person even did and how valuable they were.

Dude, you have no idea what you are talking about. Just stop talking and read for a few minutes.

OpenAI was founded around the principle of avoiding catastrophic risk. Zero people at OpenAI are confused by the goal; it is the governance board's (nominal) primary and only mandate. Sidelining that team was an intentional move by a CEO who wants to pivot into a for-profit company.

I just cannot overstate how wrong you are on this. Please have some intellectual humility.

Catastrophic risk was never a serious possibility from anything OpenAI could ever produce, so it's not surprising at all that this is being sidelined as they pivot to a for-profit company

Anyone who joined OpenAI thinking they were seriously going to produce "catastrophic risk" from research into language models is a joke.

If you want me to believe "catastrophic risk" is a plausible outcome of your company's work, I would assume you are developing autonomous weapons, not glorified chatbots.

Please be serious

There are toys for small children that look like real-world objects, like phones or steering wheels, so the children think they are doing the same thing adults do.

[deleted]

I would expect that, being on HN, commenters would read the article first rather than simply posting shallow takes like this one.

His title was `Former Head of Preparedness at OpenAI`. I make no other commentary on the article itself.