The problem is that people call LLMs human or not depending on whether that benefits them.

In the copyright debate, people often call LLMs human ("we did not copy your data, the LLM simply learned from it").

In this case it might be the other way around ("You can trust us, because we are merely letting a machine view and control your browser").

You are right. Often we've already made an emotional decision and then rationalize it logically. I guess I did want to give the LLM access to my browser, so my brain found an argument where one of the claims blocking me might not be true.

Yes, it's fascinating how Meta managed to train Llama on torrented books without massive repercussions: https://techhq.com/news/meta-used-pirated-content-and-seeded...

If LLMs turn out to be a great technology overall, the future will decide that copyright laws just weren't made for LLMs, and we'll retroactively fix it.