AI guardrails continue to improve, so comparing a rapidly evolving technology to a drug strikes me as a broken analogy: one gets safer over time; the other gets more dangerous.
But the risk profile and statistics are also radically different: alcohol is inherently toxic to everyone who consumes it. Chatbots are just another tool; a small percentage of people develop unhealthy relationships with any tool, but that doesn't make the tool a dangerous drug.
The underlying models are improving at the same time as the guardrails, and I'm not convinced the guardrails will keep up, especially given the perverse incentives. At some point the endless investor billions will dry up, and a whole bunch of folks will be desperate to monetize their AI projects any way they can.