> “We felt that it wouldn't actually help anyone for us to stop training AI models,”
Is the implication here that Anthropic admits they already can't meet their own risk and safety guidelines? Why else would they have to stop training models?