Isn't AI safety mostly a marketing thing? Like, we employ these safety people to make sure our chatbot does not turn into Skynet, implying the chatbot could turn into Skynet, i.e. it's powerful and magic, so please give us money.

Maybe the text prediction programs are too familiar to people for the Skynet marketing to bite like it used to.

Or maybe it was not just a marketing thing and the AI bros really did believe we were a few GPUs and some training data away from AGI, but now they no longer believe this.

> we employ these safety people to make sure our chat bot does not turn into Skynet

I think it's mostly about not showing up in some NYT article titled "look what crazy thing I got this AI to say". There were a bunch of those early on and it really hurt the cause. Microsoft had some famous ones, even prior to ChatGPT, where the AI got pretty testy in the chat.

https://en.wikipedia.org/wiki/Tay_(chatbot)