Which makes the odd HN AI booster excitement about LLMs as therapists simultaneously hilarious and disturbing. There are no controls on AI companies using divulged information. There's also no regulation around the custodial control of that information.

The big AI companies have not really demonstrated any interest in ethics or morality. Which means anything they can use against someone will eventually be used against that person.

> HN AI booster excitement about LLMs as therapists simultaneously hilarious and disturbing

> The big AI companies have not really demonstrated any interest in ethics or morality.

You're right, but it tracks that the boosters are on board. The previous generation of golden child tech giants weren't interested in ethics or morality either.

One might be misled by the fact that people at those companies did engage with topics of morality, but it was ragebait wedge issues, largely orthogonal to their employers' business. The executive suite couldn't have designed a better distraction to make them overlook the unscrupulous work they were getting paid to do.

> The previous generation of golden child tech giants weren't interested in ethics or morality either.

The CEOs of pets.com or Beanz weren't creating dystopian panopticons. So they may or may not have had moral or ethical failings, but they also weren't gleefully building a torment nexus. The blast radius of their failures was much more limited, and far less damaging to civilized society, than the eventual implosion of the AI bubble will be.