> D.3. Limitations of Outputs; Notice to Users. It is Customer’s responsibility to evaluate whether Outputs are appropriate for Customer’s use case, including where human review is appropriate, before using or sharing Outputs. Customer acknowledges, and must notify its Users, that factual assertions in Outputs should not be relied upon without independently checking their accuracy, as they may be false, incomplete, misleading or not reflective of recent events or information. Customer further acknowledges that Outputs may contain content inconsistent with Anthropic’s views.

Must be nice being able to ruthlessly lie with "this is the future" marketing claims while hiding behind this terms-of-service clause.

Maybe I'm misreading, but that is an absurd ToS in this context. So they're telling us they have a solution to a problem, but don't trust it enough to solve it? I tend to be averse to analogies, but this feels like hiring an engineering team to build a bridge, only to have them tell you they're not liable if the bridge collapses when used to spec.

If you don't actually believe in your product's capabilities, why sell it?

To make a lot of money.

'"Claude for Engineers" coming to build a bridge in a town near you! You heard it here first'.

The short answer is that, presumably, people are willing to pay for it.

So they can get training data, I assume.

It is a fair bit tougher to actually get the clankers to speak accurately. I understand the legal perspective: with OpenAI talking about depression use cases, these companies running software for users have to worry that the software might harm the user (through the user's own actions), and they need to protect themselves against the legal fallout.

It amazes me that we are going to litigate this the way they did cars versus horses, or machines versus human labor. I honestly don't think Claude should be running companies.