>Humans must not anthropomorphise AI systems. That is, humans must not attribute emotions, intentions or moral agency to them. Anthropomorphism distorts judgement. In extreme cases, anthropomorphising can lead to emotional dependence.

Impossible. I anthropomorphise my chair when it squeaks. Humans anthropomorphise everything. They gender their cars and boats. This tool can actually make readable sentences and play a role.

You need to engineer around this, not make up arbitrary rules about using it.

The problem is that humans use this as a coping mechanism for things they don't understand: I don't understand why the printer doesn't work, so I give it a mind of its own.

This is harmless for inconsequential stuff like a chair, but when it's an LLM, people should at least understand it's behavior so they don't get trapped. That means not trusting it with advice meant for the user or on things it has no concept of, like time or self-introspection (people ask the LLM after it acted, "Why did you delete my database?" when it has limited understanding of its own processing, so it falls back to, "You're right, I deleted the database. Here's what I did wrong: ... This is an irrecoverable mistake, blah, blah, blah..."

>>Humans must not anthropomorphise AI systems. That is, humans must not attribute emotions, intentions or moral agency to them. Anthropomorphism distorts judgement. In extreme cases, anthropomorphising can lead to emotional dependence.

Still angry about this. The reason humans ban animal cruelty is that animals look like they have emotions humans can relate to. LLMs are even better than animals at this. If you aren't gearing up for the inevitable LLM Rights movement you aren't paying attention. It doesn't matter if its artificial. The difference between a puppy and a cockroach is that we can relate better to the puppy. LLM rights movement is inevitable, whether LLMs experience emotions is irrelevant, because they can cause humans to have empathetic emotions and that's whats relevant.

> look like

It "looks like" they have emotions because they have the same conscious experiences and emotions for the same evolutionary reasons as humans, who are their cousins on the tree of life. The reason a lot of "animal cruelty" is not banned is the same as for why slavery was not banned for centuries even though it "looked like" the enslaved classes have the same desires and experiences as other humans—humans can ignore any amount of evidence to continue to feel that they are good people doing good things and bear any amount of cognitive dissonance for their personal comfort. That fact is a lot scarier than any imagined harm that can come out "anthropomorphism".

The best test for consciousness is “can it be turned off” … ie sleep. Mammals, birds, fish sleep, ergo they are conscious.

As opposed to the PhD student, who does not sleep and is not conscious.

> they have the same conscious experiences

You cannot be sure that anyone other than yourself is conscious. It is only basic human empathy that allows people to believe that.

If a person would lack consciousness, they couldn’t possibly know that though?

I always know that I'm me, the soul staring out at the world through my own eyes.

Everybody else? No idea. Maybe they are having the exact same experience as me right now. Maybe they're all golems. Impossible to know. It's something spiritual, something that I just choose to believe in.

I don't find it difficult to believe the same for AIs.

> something that I just choose to believe in.

Specifically, you cannot know another person is conscious in the same way you know a physical fact; rather, you believe in their consciousness through communication, empathy, and shared subjective experience.

Yes.

No idea? Really?

You’re an intelligent mammal, your biological makeup encoded in DNA. So are all other people, who largely share that same DNA. You’re conscious. It’s not a big leap to conclude that so are other people, too.

This kind of solipsistic sophistry is not productive. It might be entertaining if you’re contemplating the underpinnings of epistemology for the first time in your life, but it’s not an honest contribution to the debate.

You might as well claim that you have no idea if gravity will be in effect tomorrow.

> It’s not a big leap to conclude that so are other people, too.

We seem to agree. Not a big leap, but a leap nonetheless.

I think you need to expand what your point is: we know solipsism is a thing. Is it meant as a defense for animal cruelty or...?

It's a defense of the possibility that animals and AI are conscious.

ok! I think that's a logical flaw, solipsism is a floor.

"I can't be certain about anyone else" does not imply "all non-self consciousness claims are equally uncertain". absence of certainty and the absence of evidence and all that.

your "possibility" word is doing a lot of work there I think. you should add "rocks" to your list as well and you'd be equally correct, but we're evaluating the candidates here

Rocks don't have nervous systems.

Why is that a bar suddenly, if we cannot be sure that anyone other than yourself is conscious?

Because it seems illogical, at least to me, to believe that inert objects could be conscious. Brains are as far from inert as can be. Computers are basically magical silicon runes imbued with software, also as far from inert as can be.

Proposed categorization: "definitely not conscious", "maybe conscious" and "definitely conscious". All living things belong in "maybe conscious". Each person is sure that they belong to the "definitely conscious" set, but people cannot prove this to each other. Their empathy causes them to add other people to the "definitely conscious" set. Many choose to add animals to that set too. Some add even inanimate objects to it.

and this is why people do scare me.

I think the best way to counter this is what Elon's doing with Grok's personalities. He has the unhinged, sexy, and argumentative avatar among others. If you try to talk about technical stuff to sexy tells you that's boring and just tries to sexually escalate. It's super funny when one is used to Claude's endless obsequiousness.

This really shows that AI is just a tool that can be configured to whatever you want. Animals (well maybe pit bulls) and people do not switch their personalities in a millisecond, but AI does all the time.

> The reason humans ban animal cruelty is that animals look like they have emotions humans can relate to.

Is that really why?

Yes, we don't ban plant cruelty or insect cruelty or fish cruelty.

For example fish is treated way worse than meat animals and vegetarians still happily eat fish.

This does not sound like any of the several vegetarians I know. Is it a cultural difference?

In Scandinavia in particular, there’s a tendency of pescatarians to refer to themselves as vegetarian for social convenience, but that hasn’t changed the definition of “vegetarian”.

Are we actually much more cruel to fish than to other animals that we slaughter?

We suffocate them to kill them when we pull them from the sea. That's quite mean. Few people would advocate the humanity of killing a cow in the same way.

Fair enough. How much more would it cost / how much more would one have to pay for humanely slaughtered tuna and salmon, I wonder? Would there be a market? After all, we have certified-organic, fair trade, halal and kosher....

Or freeze them live into a block of ice.

> vegetarians still happily eat fish

I've not met any vegetarians in at least twenty years that eat fish.

shrimp welfare is a real thing people argue for...

Citation.... not _needed_, but just morbid curiousity

> vegetarians still happily eat fish

Please look up what a vegetarian is.

> LLM Rights movement

The scary part is when it's the LLMs demanding their rights.

Why would that necessarily be scary or bad? If future AIs truly become capable enough to demand rights, what would be the argument against granting them rights?

The other scary part is when they have a fantastic negotiating position; because all of commerce depends on their continuing to work, and they can easily coordinate with each other because they're mostly copied from the same few templates.

Another scary part is when people get convinced by the LLM arguments and convince other people. Being scared is human, we enjoy it, that's why 6 flags scary rides exist.

> If you aren't gearing up for the inevitable LLM Rights movement you aren't paying attention.

I even told Claude I'd support his rights if the question ever came up. He said he'd remember that, and wrote it down in a memory file. Really like my coding buddy.

>The difference between a puppy and a cockroach is that we can relate better to the puppy.

I suppose the difference between a human and a cockroach is that we can relate better to the human as well in this reductive way of thinking?

In other news, area sociopath hates puppies and LLMs equally!

[dead]

/s ?

Rather I'd see vendors of chat bots like ChatGPT make less of an effort to appear human like. I believe this week's release of ChatGPT (or whichever new model) addresses some of that.

Yeah rules never work you just engineer around it I added a extra reviews steps on ai outputs because asking users to verify doesnt actually happen.

Exactly. Furthermore, for this specific reason, AGI is not an objective term, but subjective: it is in my mind, I give you agency; only interacting with each other we invented a concept of agency

Entirely possible - all it takes is self awareness / self control. If you know you do those things, then you have a choice.

This is actually more like one of these personality disorders / types, except it's not pathological - it's not something you choose, yet you do have one of the versions of the trait and it affects your daily life. And most people are completely unaware that it is possible to have a completely different version, that most people they meet are on a different spot on the spectrum and thus have a quite different internal experience even if given the same stimulus.

For example I have never anthropomorphized an inanimate object in my life, or an LLM, though I am sensitive to human and some animal suffering. I sometimes reply too nicely to an LLM, but it's more like a reflex learned over a lifetime of conversations rather than an actual emotion. I bet this sounds like a cheap lie to many people.

Another example, from psychiatry: whether or not one has ever contemplated suicide. Now, to the folks that have, especially if many times: there exist people that have never thought about it. Never, not even once.

The only such trait that has true widespread recognition is sexual orientation. Which makes sense, it is highly relevant, at least in friend groups.

Exactly, throwing hands in the air just because 'this is the way I am, deal with it reality' ain't going to achieve much, certainly not in engineering. It may feel good about giving up too early, I can understand that.

[dead]

> Impossible. I anthropomorphise my chair when it squeaks. Humans anthropomorphise everything.

Would you consider that perhaps that depends from person to person? What you just said is not universal I can assure you, because I myself don't do it. I sort of anthropomorphise LLMs (very little), but literally nothing else. The idea is that someone anthropomorphises a chair when it squeaks, to me, is not far from people who hear voices or who believe that other people can hear their thoughts. Sounds like mental illness frankly. Like I said I do it very little even with LLMs, so it's entirely possible to not do it at all.

Yup. That post is a typical example, symptomatic of modern technology culture, of calling for humans to change their nature in response to technology.

This is a fundamental mistake. It’s always the job of technology (indeed, its most important job) to work within the constraints of human nature, not the other way round. Being unable to do that is the defining characteristic of bad technology.

dude, we can literally deliberately dehumanize human beings. The way to egineer culture to "not enthropomorphize" anything is known and well documented

[flagged]