<< on the boundaries of how they should be used.
Isn't it up to the user how they want to use the tool? Why are people so hell-bent on telling others how to press their buttons in a word processor (or anywhere else, for that matter)? The only thing it does is raise a new batch of Florida men further detached from reality and consequences.
Users can use tools how they want. However, some of those uses are hazardous. If I try to scare birds away from my house with fireworks and end up burning my neighbors' house down, that's kind of a problem for me. If those fireworks are marketed as a practical bird repellent, that's a problem for me and the manufacturer.
I'm not sure whether it's official marketing, breathless hype men, or an astroturf campaign.
As arguments go, this is not bad, as we tend to have some expectations about "truth in advertising" (however watered-down it may be at this point). Still, I am not sure I ever saw OpenAI, Claude, or other providers claim something akin to:
- it will find you a new mate
- it will improve your sex life
- it will pay your taxes
- it will accurately diagnose you
That is, unless I somehow missed some targeted advertising material. If it helps, I am somewhere in the middle myself. I use LLMs (both at work and privately). Where I might slightly deviate from the norm is that I use both unpaid versions (Gemini) and paid ones (ChatGPT), in addition to my local inference machine. I still think there is more value in letting people touch the hot stove. It is the only way to learn.