That's my main criticism as well. Even before we get to the ethical implications of AIs communicating on your behalf without a disclaimer, LLM writing is just poor and making me read through it is disrespectful of my time.

I recently had a colleague send me a link to a ChatGPT conversation instead of responding to me. Another colleague organised a quiz where the answers were hallucinated by Grok. In some Facebook groups I'm in where people are meant to help each other, people have started just pasting the questions into ChatGPT and responding with screenshots of the conversation. I use LLMs almost daily, but this is all incredibly depressing. The only time I want to interact with an LLM is when I choose to, not when it's forced on me without my consent or at least a disclaimer.

> I recently had a colleague send me a link to a ChatGPT conversation instead of responding to me.

I find this kind of thing interesting anywhere someone is being paid more than minimum wage: a really good way to make your boss think that they can replace you with ChatGPT is for you to perform it at ChatGPT’s level. I do give them points for not trying to hide it, but it really seems shortsighted not to consider that each time you do that, you’re raising the question of why they shouldn’t cut out the middleman.

> I recently had a colleague send me a link to a ChatGPT conversation instead of responding to me.

I honestly would rather have this than my colleague sending me text that is obviously from ChatGPT without stating it upfront. Or even the "I asked ChatGPT and it said this..." followed by ten pasted paragraphs they didn't even read to confirm could be relevant.

> In some Facebook groups I'm in where people are meant to help each other, people have started just pasting the questions into ChatGPT and responding with screenshots of the conversation

I get the feeling these AI tools will just deepen the alienation of society even more...