> One might ask why the cells in your body don't signal via python code and instead use signalling mechanisms

Right. They don't just make their membranes chemically transparent. Same reason: security, i.e. the varied motivations of things outside the cell compared to within it.

I don't think using a certain language is any more secure than writing that same function call in some other language. Security in compute comes from granting privileged access to some agents and blacklisting others. The language doesn't matter for that: it can be a Python command, a TCP packet, or a voltage differential; the actual "language" used is irrelevant.
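To make that concrete, here is a minimal sketch (all names hypothetical, not any particular system's API) of the point: the security decision lives in an access-control check on who is asking, and the same check applies whether the request arrives as an in-process Python call or as bytes off a socket.

```python
# Hypothetical sketch: access control is about who is asking, not how they ask.
ALLOWED_AGENTS = {"kernel", "scheduler"}   # privileged callers
BLOCKED_AGENTS = {"untrusted_plugin"}      # blacklisted callers

def authorize(agent_id: str) -> bool:
    """Grant or deny based on the caller's identity alone."""
    return agent_id in ALLOWED_AGENTS and agent_id not in BLOCKED_AGENTS

def handle_function_call(agent_id: str, payload: dict) -> str:
    # Request arrived as an in-process Python call.
    return "ok" if authorize(agent_id) else "denied"

def handle_tcp_packet(agent_id: str, raw_bytes: bytes) -> str:
    # Same check, different transport; the "language" of the request is irrelevant.
    return "ok" if authorize(agent_id) else "denied"
```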

All I am arguing is that languages and paradigms written to make sense to our English-speaking monkey brains are perhaps not the most efficient way to do things once we remove the constraint of having an English-speaking monkey brain as the software architect.

> Right. They don't just make their membranes chemically transparent. Same reason: security, i.e. the varied motivations of things outside the cell compared to within it.

Cells or organelles within a cell could be described as having motivations, I guess, but evolution itself doesn’t really have motivations as such; it does have outcomes. If we take as an assumption that mitochondria did not evolve to exist within the cell so much as co-evolve with it after becoming part of the cell by some unknown mechanism, and that we have seen examples of horizontal gene transfer in the past, then by the anthropic principle, multicellular life is already chimeric and symbiotic to a wild degree. So any talk of the motivations of an organelle, cell, or organism is of a different degree to the motivations of an individual or of life itself, but not really of a different kind.

And if the motivations of a cell are up for discussion in your context, and in the context of the comment you were replying to, then it’s fair to look at the motivations of life itself. Life seems to find a way, basically. Its motivation is anti-annihilation, and life is not above changing itself and incorporating aspects of other life. Even without motivations at the stage of random mutation or gene transfer, there is still a test for fitness at a given place and time: the duration of a given cell or individual’s existence, and the conservation and preservation of a phenotype/genotype.

Life is, in its own indirect way, preserving optionality as a hedge against failure in the face of uncertain future events. Life exists to beget more life, each after its kind, historically and at human time scales at least; but upon closer examination, life just makes its moves slowly enough that the change is imperceptible to us.

Man’s search for meaning is one of humanity’s motivations, and the need to name things seems almost intrinsic to existence in the form of the self vs. not-self boundary. Societally we are searching for stimuli because we think they will benefit us in some way. But cells didn’t seek out cell membrane test candidates; they worked with the resources they had, throwing spaghetti at the wall over and over until something stuck. And that version worked until a successor outcompeted it.

We’re so far down the chain of causality that it’s hard to reason about the motivations of ancient life and ancient selection pressures, but questions like this make me wonder: what if people are right that there are quantum effects in the brain, etc.? I don’t actually believe this! But as an example of the kinds of changes AI and future genetic engineering could bring, bear with me as a thought exercise. If we find out that humans are, figuratively, philosophical zombies because of the way our brains and causality work compared to some hypothetical future modified humans, would anything change in wider society? What if someone found out that if you change the cell membranes of your brain in some way, you’ll actually become more conscious than you would be otherwise? What would that even mean or feel like? Socially, where would that leave baseline humans?

The concept of security motivations in that context confronts me with the uncomfortable reality of historical genetic purity tests. For the record, I think eugenics is bad. Self-determination is good. I don’t have any interest in policing the genome, but I can see how someone could make a case for making it difficult for nefarious people to make germline changes to individual genomes. It’s probably already happening, though, and likely will continue to happen in the future, so we should decide what concerns are worth worrying about, and what a realistic outcome looks like in such a future if we had our druthers. We can afford to be idealistic before the horse has left the stable, but likely not for much longer.

That’s why I don’t really love the security framing when it comes to the motivations of a cell, as it could take on a Gattaca angle, though I know you were speaking at the level of the cell or smaller. Your comment and the one you replied to inspired my wall of text, so I’m sorry/you’re welcome.

Man is seeking to move closer to the metal of computation. Security boundaries are being erected only for others to cross them. Same as it ever was.