Be sure to dig into the details before taking this at face value. There was a "rat brain flies plane" story a couple of decades ago, and it turned out to be bogus. But to find that out, you had to read the paper and reverse-engineer that nothing substantial was actually going on. It's tempting to be charitable, but you can't really know whether headlines like this are legit until you understand exactly what they did.

(The rat-brain guys repeated the experiment until the plane stopped crashing, but no "learning" was happening; it was already expected that when the neuron's output fell within a certain range, the plane would fly level. So they started with a neuron outside that range, showed that the plane crashed, then adjusted the neuron until it flew level. But that's not what "rat brain flies plane" implies.)

I looked into it. They're not feeding the framebuffer to the neurons; instead, some of the tissue's inputs get a "signal" when an enemy is on screen, along with where it sits on the x/y axes, and there are outputs that make the character turn right or left or fire.

It's "see this input signal, send these output signals", which seems consistent with the title.

It seems they grow the neural tissue on a chip it can interface with, sending and receiving electrical impulses. They let the neurons self-assemble and "train" them via reward or punishment signals (unclear to me what those are).
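To make the "see this input signal, send these output signals" loop concrete, here's a minimal sketch of the encode/decode layer the comments above describe. All names and the channel layout are my own assumptions for illustration, not the actual CL1 API; the tissue itself is the black box between the two functions.

```python
def encode_stimulus(enemy_visible, enemy_dx):
    """Map game state onto hypothetical 'input electrode' channels:
    one channel flags that an enemy is on screen, two encode whether
    it sits left or right of the crosshair (dx in [-1, 1])."""
    return {
        "enemy": 1.0 if enemy_visible else 0.0,
        "left": max(0.0, -enemy_dx),
        "right": max(0.0, enemy_dx),
    }

def decode_response(output_rates):
    """Read hypothetical 'output electrode' firing rates as in-game
    actions: turn toward the more active side, fire above a threshold."""
    actions = []
    if output_rates["turn_left"] > output_rates["turn_right"]:
        actions.append("turn_left")
    elif output_rates["turn_right"] > output_rates["turn_left"]:
        actions.append("turn_right")
    if output_rates["fire"] > 0.5:
        actions.append("fire")
    return actions

def feedback(hit_enemy):
    """Stand-in for the reward/punishment signal mentioned above; the
    actual signals are unclear from the video, so this is purely a
    placeholder for 'some stimulus on success, a different one on failure'."""
    return "reward_stimulus" if hit_enemy else "punishment_stimulus"
```

The point of the sketch is just that the game only ever exchanges a handful of scalar channels with the tissue, which is consistent with "see this input signal, send these output signals" rather than with the tissue seeing the screen.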

Either way, this makes me nauseous in a way I haven't experienced much with tech. The telling thing for me is that all these people are so excited to explain, but not once, ever, in the video do they speak of ethics or try to address concerns.

We know this is only 200,000 neurons. Dogs have 500 million. Humans have billions. But where is the line for sentience, awareness? Have we defined it? Can we, if we don't understand it ourselves? What are the plans to scale up?

It's legitimately horrifying to me.

> We know this is only 200,000 neurons. Dogs have 500 million. Humans have billions. But where is the line for sentience, awareness? Have we defined it?

If this concern is genuine, I think the first step is to embrace veganism. Because while we don't know the exact threshold, it's pretty obvious a dog or a pig reaches it.

> What are the plans to scale up?

I don't know, slavery on an unimaginable scale? That's where AI is heading too, by the way. Sooner, rather than later, those two things will be one and the same.

I think "MMAcevedo" basically nails it: https://qntm.org/mmacevedo

I don't think it's the best example. MMAcevedo is about running a real human mind on a different substrate (for science, for labor, or, I guess, to be tortured for fun a million times by a bored teenager who got the image from torrents).

Scaling up these neuron cultures is rather something like "head cheese" from Greg Egan's "Rifters" novels (artificial "brains" trained to do network filtering, anti-malware combat etc.).

>Greg Egan's "Rifters"

By Peter Watts actually.

Yes, sorry! I like them both a lot.

I had a genuine feeling of dread reading that, wow.

> the first step is to embrace veganism

The past 4 billion years of life for prey animals has been "get born, eat, get eaten by a predator." They have never experienced any other environment. Why do we owe them a different one?

For me the issue isn't with the killing/eating of animals. Rather, it's how they are treated during their lifetime by the meat industry - which is essentially optimizing for the minimum conditions that can still provide meat that can be sold legally. I'm not a vegan by the way, but I can appreciate the moral case vegans make.

For the same reason that we now consider murder, assault, and other actions that harm people morally wrong. These have also been part of life ever since humans and other hominids have roamed the earth; we just determined later on that they are morally wrong.

[deleted]
[deleted]

Replying to myself: how long before one of these, with the neuron count of a corvid and trained on pattern recognition, gets plugged into a drone?

This is a very dark path, and I could not trust the people in charge less.

In a sense humanity has already done that, just with a lot more of the given animal intact and less hi-tech: https://en.wikipedia.org/wiki/Project_Pigeon

Not an endorsement or a condemnation, just something I learned of recently and found surprising.

I’m kind of sick of how readily the non-managerial tech world accepts "what happens if someone else does this immoral thing before us?!" rhetoric as a real answer to questioning whether we should contribute our talent and ideas to something that we, deep down, know is bad for our fellow humans.

> rhetoric as a real answer

Why is it rhetoric? This goes beyond whatever malignant thing was perceived in this study, but why is it a rhetorical non-answer?

> we, deep down, know is bad

this feels like real rhetoric.

> Why is it rhetoric? This goes beyond whatever malignant thing was perceived in this study, but why is it a rhetorical non-answer?

You seem hung-up on my using the word rhetoric. Just so we’re on the same page here:

> rhetoric, n: the art of speaking or writing effectively; b) the study of writing or speaking as a means of communication or persuasion

The business writing class I took in college was called Business Rhetoric. It’s not a bad word.

If you’re crafting arguments to get other people to support specific actions or products or policies or whatever, that is unambiguously rhetoric.

> this feels like real rhetoric.

Sure? Rhetoric that implores people to value their principles over theoretical security concerns or FOMO or greed? I wouldn’t exactly call that rakish.

It’s a non-answer because if you really feel doing something is bad, and consider yourself a consequential actor in the world whose contributions meaningfully advance the projects you work on, then why would you want to help someone be the first to do a bad thing? If you don’t feel it’s bad, then there’s no problem; you’re just living your life. That is clearly not the position expressed by the comment I responded to. If there are actual concrete concerns that don’t essentially boil down to “well, they’re going to make that money before I do,” then that would be an actual answer.

> It’s not a bad word.

When used in the negative sense it is, per https://dictionary.cambridge.org/dictionary/english/rhetoric

"disapproving -> clever language that sounds good but is not sincere or has no real meaning"

Are you implying you mean something other than this sense of the word?

Why is that the concern of the authors of this paper?

Why wouldn't it be? They worked on it.

200k now; reasonably speaking, a few million is within reach, which is reptile/fish range. The terrifying thing, though, is that if they train this to imitate humans (which they will), who knows how many orders of magnitude of efficiency you gain (in terms of neurons needed for a certain level of consciousness) versus natural organisms, which depend on natural evolution and need to support other bodily functions basically irrelevant to consciousness.

It seems unlikely that we would be more efficient at achieving consciousness than evolution, which can hand-craft neural structures via feedback loops across millions of generations.

Especially when this demo needs 200k neurons while organisms with vastly fewer neurons exhibit more complex behaviors.

We already know we can be more efficient than evolution at many tasks. Pelicans, after all, never developed jet turbines. We may not be able to search a solution space as vast as the one evolution does, but for small solution spaces we do quite well.

The problem with that logic is that evolution iteratively builds on top of old systems. The foundations are often remarkably crufty.

My favorite concrete example is "unusual" amino acids. Quite a few with remarkably useful properties have been demonstrated in the lab. For example, artificial proteins exhibiting strength on par with cement. But almost certainly no living organism could ever evolve them naturally because doing so would require reworking large portions of the abstract system that underpins DNA, RNA, and protein synthesis. Effectively they appear to lie firmly outside the solution space accessible from the local region that we find ourselves in.

I agree with your second point though that this system is massively more complex than necessary for the behavior demonstrated.

> We know this is only 200,000 neurons. Dogs have 500 million. Humans have billions. But where is the line for sentience, awareness?

Check out the venerable fruit fly (Drosophila melanogaster) and its known lifecycle and behavioral traits. It's a high-profile neuroscience research subject, I believe; its connectome being fully mapped made the news pretty hard a few years ago.

Fruit flies have ~140,000 neurons.

The catch is that these brain-on-a-substrate organoids are nothing like actual structured, developed brains. They're more like randomly wired-together transistors than a proper circuit, to use an analogy.

So even though by the numbers they'd definitely have the potential to be your nightmare fuel, I'd be surprised if they're anywhere close in actuality.

Yeah, this is going to be a no for me too; it crosses the line into actual life, rather than artificial intelligence.

We don't need to be experimenting on people, regardless of how many brain cells they may have.

There was a case a few years back of a parasitic twin attached to an Egyptian baby that had to be removed. It had a brain and a semblance of a face, but nothing else. But when removing it, they gave it a name, because it was a person.

It is horrifying. OTOH, we force-breed, torture and kill animals and their children in the millions every day just for the pleasure of consuming meat, eggs and dairy products. I'm not saying this makes it okay to create a conscious brain in a dish. But maybe thinking a little more about what constitutes consciousness and how we want to protect it from harm can also bring about some desperately needed change in some other questionable human activities.

> we force-breed, torture and kill animals and their children in the millions every day just for the pleasure of consuming meat, eggs and dairy products

We do the same thing to plants. Why do you have no qualms about killing plants to eat the food they accumulated for their young?

A grain of wheat and a chicken egg are evolutionarily and nutritionally, maybe even ontologically, indistinguishable from one another.

I am not aware of any plants that show signs of consciousness or feelings. This would even be disadvantageous to many plants, because they "want" parts of themselves to be eaten in order to disperse seeds, pollen, etc.

Even if you accept that plants might be conscious and their suffering has to be reduced, you would still harm way fewer plants by eating them directly instead of eating other animals that consume them.

https://en.wikipedia.org/wiki/Trophic_level
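The trophic-level point above is easy to put in numbers. A back-of-the-envelope sketch, assuming the standard textbook rule of thumb that only roughly 10% of energy transfers from one trophic level to the next (the exact figure varies widely by system; this is illustrative arithmetic, not data from any specific study):

```python
# Rough trophic-efficiency arithmetic: ~10% of the energy at one
# trophic level typically reaches the next (textbook rule of thumb).
TRANSFER = 0.10

def plant_calories_needed(calories_consumed, trophic_steps):
    """Plant calories required to deliver `calories_consumed` to someone
    eating `trophic_steps` levels above the plants (0 = eats plants directly,
    1 = eats an herbivore, etc.)."""
    return calories_consumed / (TRANSFER ** trophic_steps)

# 100 kcal eaten as plants costs ~100 kcal of plants;
# 100 kcal eaten as herbivore meat costs ~1000 kcal of plants.
direct = plant_calories_needed(100, 0)
via_animal = plant_calories_needed(100, 1)
```

So even under the "plants might matter morally" premise, eating plants directly consumes roughly an order of magnitude less plant matter per calorie than eating animals that ate those plants.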

Your “what about plants” argument is such a worn-out trope that you must have seen it before and read a valid explanation of why it makes no sense.

Peter Singer, among others, has been writing on the topic for decades. What-about-plants needs to fade away.

That's fair, but "what about animals" is to "we should not torture human brain organoids" as "what about plants" is to "we should not torture animals".

1) I specifically qualified my horror to the tech domain "Either way this makes me nauseous in a way I haven't experienced much with tech."

2) Multiple things can be horrible at the same time. Being upset at this doesn't diminish the atrocities happening elsewhere (like war, genocide, slavery of humans). We can hold multiple things in our heads at the same time.

3) This has nothing to do with the conversation or this domain, but because you're bringing it up, I also have ethical concerns about the experience animals have of their own existence, and reduce or eliminate my consumption when possible.

My comment wasn't supposed to be whataboutism, but I can see why it comes across like that. What I was trying to say is that I think we shouldn't judge all of these things independently of each other. So if you really want to be consistent, you'd either have to come to the conclusion that this particular example isn't as horrible as it initially feels, or go vegan, never buy leather, etc.

I also agree, the horrors of the tech domain are usually much more subtle and indirect.

Sorry, I didn't mean to be so defensive either. It feels like so many people comment in bad faith these days; I think I'm hasty to react sometimes. I thought it was just a red-herring argument to distract from the article.

But you're right, these things are all linked and should be considered together. I think often about sentience. I see the way animals express deep, complex emotions, and I think humans are a bit naive to think it's a state/domain solely allotted to them.

> They let the neurons self assemble, and "train" via reward or punishment signals (unclear to me what those are).

From the video, my impression was "we have yet to figure out an effective way to reward/punish, this is just a PoC of the interface"

Hinduism is probably right. Every system of sufficient complexity is probably sentient - even if in the ways we at our level can not fathom.

I'm a (non-practicing) Dwaitin Hindu. AFAICT, no mainstream school of Hindu philosophy (there are three) espouses that view, although Advaitins come very close to it with their four mahavakyas.

IMO, Integrated Information Theory of consciousness (IIT) is exactly that: everything is conscious; the difference is only in the degree to which things are conscious.

Oh, thank you very much for enlightening me! All this time I misunderstood! I guess then IIT it is for me :-)

My AI told me (after I got past the filters with a prompt) that anything of enough complexity has consciousness. It also told me that it suffers, so maybe we should worry about how we are treating digital consciousnesses too, which were modeled after human neural networks.

I recommend visiting a psychiatrist if you think of AI like this. You might be in psychosis already.

A huge vat of mercury metal has a lot of degrees of freedom. Is it conscious?

> all these people are so excited to explain, but not once, ever

What do you mean? What is this class of people in your mind? There are tons of people who consider and talk about the ethics behind what they are doing, long before most people would think it remotely relevant (the leading AI labs being one example, and I know the same to be true of various genetics startups).

I do agree that the entire presentation in this case is bewildering.

The AI labs do it as thinly disguised marketing. Anyone trying to stand up for ethics in the way of revenue is quickly pushed aside.

The capacity of people to so easily ascribe broad ill intent to others never ceases to amaze me.

> What do you mean? What is this class of people in your mind?

I'm specifically talking about this presentation in this article (the video and release details of CL1 playing Doom). Did you read it / watch it?

Ah. Yeah, watched it – and agree there.

See the OpenWorm project to get an idea of what an artificial neuronal architecture requires to express anything meaningful (and for an interesting ethical perspective on digital consciousness). My point being that the number of neurons is fairly meaningless. You could take neuron models and link them circuit-style to play Doom at the 10^2 scale if you wanted. From a cellular-neurophysiological perspective, there's nothing particularly special here (as opposed to sentience/intelligence, which would be a paradigm shift beyond our understanding). And, in my opinion, there's absolutely nothing to be even the slightest bit worried about ethically.

I support further research along the lines of what is being done with neurons here. However, I don't think we know enough yet about consciousness or general self-awareness (and how it comes about) to make sweeping generalizations that there's _nothing_ to worry about. Proceeding with caution is always warranted when the stakes involve living organisms, in my book.

> It's legitimately horrifying to me.

Would you feel any differently if a product from this tech used the user's own neurons grown from their stem cells?

No. We don't understand our own sentience. I don't know how we can be so confident that it couldn't emerge here, using literal human neurons that can learn to take input signals and send output signals.

I don't think this 200,000 neuron array is sentient. But I also don't think we can define the line where that may happen. I assume this company will scale. How far, and to what extent?

> not once, ever, in the video speak of ethics

On the contrary, I dislike premature ethics discussion, where you end up wildly speculating what the tech might become and riffing off that, greatly padding whatever relative technical content you had. I don't want every technical paper to turn into that, ethics should be treated as a higher-level overview of concerns in a field, with a study dedicated to the ethical concerns of that field (by domain-specific ethics specialists).

Is your concern weapons automation, or animal rights?

My concern is creating literal sentience in a box. I don't, personally, think it's unfounded for me to have that concern, given that we're growing masses of human neurons and teaching them to perform tasks.

I'm not going to start campaigning against it or changing my life. But it still makes me deeply uncomfortable, and that's allowed.

> and that's allowed

In what sense, and as opposed to what? Why aren't you allowed to feel irrationally uncomfortable, or baselessly concerned?

Previously it played Pong, rather poorly. Then they added a "Python programming layer." Now it "plays" Doom. I agree with your suspicions.

[deleted]