This is touched upon in the article:

> Last year, OpenAI released estimates on the number of ChatGPT users who exhibit possible signs of mental health emergencies, including mania, psychosis or suicidal thoughts.

> The company said that around 0.07% of ChatGPT users active in a given week exhibited such signs.

0.07% doesn't sound like much, but ChatGPT has about a billion WAU, which means -seventy million- 700,000 people per week.
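
Quick sanity check of that arithmetic, taking the ~1 billion WAU figure at face value:

```python
# 0.07% of ~1 billion weekly active users (WAU figure is approximate)
weekly_active_users = 1_000_000_000
rate = 0.0007  # 0.07%

print(f"{weekly_active_users * rate:,.0f} people per week")  # 700,000
```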

Is that different from the rate among people who don't use AI, though? If it's 0.01% outside of AI but 0.07% among AI users, then either AI attracts people with those conditions or AI increases the likelihood of developing them. Either way, that's worth studying.

It's also possible that 0.1% of people have them and AI is actually reducing the number of cases...
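
To put numbers on that reasoning (the 0.01% baseline is invented for illustration; pinning down the real base rate is exactly what a study would do):

```python
# Hypothetical comparison; only the 0.07% figure comes from OpenAI.
baseline_rate = 0.0001  # invented baseline: 0.01% of non-users
observed_rate = 0.0007  # OpenAI: 0.07% of weekly active users

print(f"relative risk: {observed_rate / baseline_rate:.0f}x")  # 7x

# The ratio alone can't distinguish selection (affected people drawn to AI),
# causation (AI use raising risk), or masking (a 0.1% true rate that AI lowers).
```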

For the US, an estimated 23% of the population has a mental illness; the WHO puts the global figure at 12-15%, or about 1 in 8 people. About 14% of the global population experiences suicidal ideation at some point in their lives, and that rate rises for adolescents and young adults, up to 22%.
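
Rough headcounts implied by those percentages (population figures are approximate):

```python
us_population = 330_000_000       # approximate
world_population = 8_000_000_000  # approximate

figures = {
    "US, any mental illness (23%)": us_population * 0.23,
    "Global, any mental illness (1 in 8)": world_population / 8,
    "Global, lifetime suicidal ideation (14%)": world_population * 0.14,
}
for label, n in figures.items():
    print(f"{label}: ~{n:,.0f} people")
```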

I'd be interested in such a study, but OTOH, with mental illness present in nearly a quarter of the population, I'm surprised there haven't been more incidents like this (unless there have been and they just haven't been reported by the news).

If the estimate is that 1 in 5 people are mentally ill, the definition needs some readjustment. That is such an inclusive number that it must be counting otherwise fine people: you like to count your Tic Tacs, so you get labelled slightly OCD; you had a bummer of a day, so you're prone to depression?

There was a recent study finding that 99% of people have an "abnormal" shoulder (https://news.ycombinator.com/item?id=47064944). We are all unique in our own way, but labeling everyone as ill does not seem productive.

700,000

Still, a lot

Whoops, yes, thank you. Too much LLM usage has made me start doing math about as well as they do.

That number terrifies me not because it is so high, but because it exists.

What is stopping an entity (corporate, government, or otherwise) from using a prompt to make sweeping decisions about whether people are mentally or otherwise "fit" for something based on AI usage? Clearly not the technology.

I'm not saying mental health problems don't exist, but using AI to screen for them freaks me out.
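
To make that concrete, the whole "screening pipeline" could be this small; `query_llm` here is a hypothetical stand-in for any chat-completion API, not a real one:

```python
def query_llm(prompt: str) -> str:
    # Stand-in for a call to whatever model the entity has access to.
    raise NotImplementedError

def fitness_screen(chat_history: str) -> bool:
    # One prompt, one sweeping decision. Nothing technical prevents this.
    prompt = (
        "Answer YES or NO only: based on the following chat logs, "
        "does this person show signs of being mentally unfit?\n\n"
        + chat_history
    )
    return query_llm(prompt).strip().upper() == "YES"
```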

A rational lender increases interest rates when prospective borrowers are less likely to be around to pay the bill. Confiding in an LLM that is integrated with a consumer tracking apparatus is a great way to ruin your life.
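
The pricing logic is simple expected-value arithmetic: to break even against a default probability p, the rate r must satisfy (1 + r)(1 - p) = 1 + r_f. A sketch with made-up numbers:

```python
def break_even_rate(p_default: float, risk_free: float = 0.04) -> float:
    # Assumes total loss on default; solves (1 + r) * (1 - p) = 1 + r_f.
    return (1 + risk_free) / (1 - p_default) - 1

# Hypothetical: a data broker's "mental health" flag raises the lender's
# estimated default probability from 2% to 6%.
print(f"{break_even_rate(0.02):.2%}")  # ~6.12%
print(f"{break_even_rate(0.06):.2%}")  # ~10.64%
```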

We could already use social media posts to detect mental illness: by admission, since people talk openly about their diagnoses, but also by analyzing the content, tone, and frequency of posts that don't mention mental illness at all.
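
A minimal sketch of how crude that analysis can be (the keyword list and features are invented for illustration; real systems use trained models):

```python
from collections import Counter
from datetime import datetime, timedelta

CRISIS_TERMS = {"hopeless", "worthless", "no point"}  # illustrative only

def crude_signals(posts: list[tuple[datetime, str]]) -> dict:
    # posts: (timestamp, text) pairs, assumed non-empty.
    now = max(t for t, _ in posts)
    recent = [(t, text) for t, text in posts if now - t < timedelta(days=7)]
    hits = Counter(term for _, text in recent
                   for term in CRISIS_TERMS if term in text.lower())
    return {
        "posts_last_week": len(recent),
        "late_night_posts": sum(1 for t, _ in recent if t.hour < 5),
        "keyword_hits": sum(hits.values()),
    }
```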

Data brokers already compile lists of people with mental illness so they can be targeted by advertisers and anyone else willing to pay. Not only are they targeted, but ads, suggestions, and scams can be pushed at them at specific times, such as when they appear to be entering a manic phase or when their meds are likely wearing off. Even before chatbots came into the mix, algorithms were already driving us toward a dystopian future.