But if there is a policy in place to prevent some sort of modification, then most people understand and respect that performing an exploit or workaround to make the modification anyway crosses a line.
That seems to be the difference here: we should really be building AI systems that can be taught, or that learn, to respect things like that.
If people are claiming that AI is as smart as or smarter than the average person, then it shouldn't be hard for it to handle this.
Otherwise it seems people are being too generous in talking about how smart and capable AI systems truly are.
First off, LLMs aren't "smart"; they're algorithmic text generators. That doesn't mean an LLM is less useful than a human who produces the same text, but it isn't getting to that text the same way (it's not 'thinking' about it, or 'reasoning' it out).
This is analogous to math operations in a computer generally. The computer doesn't conceptualize numbers (it doesn't conceptualize anything); it just applies fixed mechanical operations to bits that happen to represent numbers. You can actually recreate computer logic gates with water and mechanical locks, but that doesn't make the water or the locks "smart" or "thinking". Here are Stanford scientists actually miniaturizing this into chip form [1].
[1]: https://prakashlab.stanford.edu/press/project-one-ephnc-he4a...
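To make the "fixed mechanical operations on bits" point concrete, here's a minimal sketch (mine, not from the linked work) of addition built from nothing but boolean gates. Nothing in the mechanism "knows" it is adding; correct sums just fall out of the gate plumbing:

```python
def full_adder(a, b, carry_in):
    """One-bit full adder built purely from XOR/AND/OR gates."""
    s = a ^ b ^ carry_in                         # sum bit: two XOR gates
    carry_out = (a & b) | (carry_in & (a ^ b))   # carry bit: AND/OR gates
    return s, carry_out

def add(x, y, width=8):
    """Ripple-carry adder: chain full adders across the bit positions."""
    carry = 0
    result = 0
    for i in range(width):
        bit_x = (x >> i) & 1
        bit_y = (y >> i) & 1
        s, carry = full_adder(bit_x, bit_y, carry)
        result |= s << i
    return result

print(add(19, 23))  # 42
```

You could implement `full_adder` with water valves or relays instead of Python operators and nothing about the computation would change, which is the point: the "math" lives in the wiring, not in any understanding.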
> But if there is a policy in place to prevent some sort of modification, then performing an exploit or workaround to make the modification anyways is arguably understood and respected by most people.
I'm confused about what you're trying to say. My point is that companies don't actually trust their employees, so it's not unexpected for them not to trust LLMs.