There is some unintentional good marketing here -- the model is so good it's dangerous.
Reminds me of the book 48 Laws of Power -- so good it's banned from prisons.
Unintentional? This sort of marketing has been both Anthropic's and OpenAI's MO for years...
Agree. I think they're intentionally sitting on the fence between "These models are the most useful" and "These models are the most dangerous".
They want the public and, in turn, regulators to fear the potential of AI so that those regulators will write laws limiting AI development. The laws would be crafted with input from the incumbents to enshrine/protect their moat. I believe they're angling for regulatory capture.
On the other hand, the models have to seem amazingly useful so that they're made out to be worth those risks and the fantastic investment they require.
They should pick a lane, because it's not very believable to put these things into defense systems one minute and claim humanity is existentially threatened the next. Either you're lying, ruthless, or stupid.
The new Power Mac® G4 with Velocity Engine®. So powerful, the government classifies it as a supercomputer and a potential weapon.
TIL about AltiVec: https://apple.fandom.com/wiki/AltiVec
Business Negging
https://www.lesswrong.com/posts/WACraar4p3o6oF2wD/sam-altman...