I think the concern for the AI companies is that they're going to get sued into oblivion if they ever get found out. The bigger concern is not the AI companies doing it themselves, but insiders finding it in the chat logs and selling it on the black market.

They're definitely going to monetize personal accounts with targeted advertising though. Nothing is stopping "ChatGPT Suggested" (i.e. sponsored) links in chat replies.

Where the TOS allows them to, I would assume they are (and those aren't trade secrets anymore, since reasonable measures weren't taken to protect them). Where the TOS forbids them from doing so, I'd generally assume the reputational risk is not worth the marginal value that could be extracted from the IP and trade secrets, especially for companies with huge other businesses, like Google. I'd worry more about Anthropic and OpenAI just because they don't have that other multi-billion-dollar business that relies on trust. If I had trade secrets worth billions from which money could easily be extracted (I'm thinking something like a hedge fund's current trading strategy), the trust wouldn't extend that far even for Google.

I also find OpenAI's past business dealings suspect (with regard to effectively stealing value from the public non-profit and transferring it into a privately owned company), which makes me trust them less than Anthropic.

I'd assume the NSA has access to anything they are interested in that you send to a US company.

This comes up in every thread but nobody can advance an argument about how this passes the basic conspiracy theory test:

If they’re vacuuming up all the IP that comes in even when the user sets it to off, it must be going into a giant data store somewhere. There must be a large number of engineers and managers involved in all of the processes that get the data from the client to the cloud to the data store, all of whom are bought into this conspiracy. For this to work, all of them must always keep the secret forever and never leak any incriminating evidence anywhere.

Having a damning and easily provable secret hanging over the company would be an easy way for any former employee to leak it to the press and sink the stock price when they go public, making a huge profit by shorting the stock and wiping out gains before the insider lockup expires.

You really have to believe a lot of people, including ex-employees and any disgruntled employees they fired, are so committed to keeping the secret for some reason and that management is so short-sighted that they’re taking a company destroying risk for some marginal, minuscule improvement in their training set.

Meta did this and got caught. It's one search away. Did you make ChatGPT write all that gibberish?

You’re talking about something else.

The GP comment was suggesting that when you use these tools they’re vacuuming up your IP while you use them. It’s the “Trojan horse” conspiracy theory of LLM coding tools that comes up in every thread.

They just operate under the "good of humanity" cult values. You think they can't keep a secret on the same order of magnitude as Cambridge Analytica at Facebook, for less time?

Their contracts prohibit it for certain classes of users, but if you're really worried about it Anthropic also offers a "bring your own cloud" version where the data supposedly never leaves your infrastructure.

That's why you don't use it with IP and trade secrets, or you have a lawyer look over it.

I work for a very big company, and very big companies trust other very big companies. After all, we are not Google, so we work with Microsoft and others.

Nothing new here tbh

For chats, unless you explicitly disable it, they absolutely are.

Otherwise, these companies do have auditors.