I sometimes wonder if the concept of “intelligence” is going to benefit from a formal model the way “computation” benefited from Turing Machines.
Are there classes of intelligence? Are there things that some classes can and cannot do? Is it a spectrum? Is the set of classes countable? Is it finite? Is there a maximum intelligence? One can dream…
Philosophers have been trying to define what it means to be conscious since forever. I think that is informally what you mean here.
If you just mean what problems it can solve, and how quickly, we already have a well-developed theory of that in terms of complexity classes: https://complexityzoo.net/Complexity_Zoo
I think this is more about levels or classifications of intelligence.
If you've ever interacted with a very smart animal, it's easy to recognize, in an admittedly subjective and vague way, that their reasoning abilities are on par with a human child's. We can also say with extreme confidence that humans have wildly different levels of intelligence and intellectual ability.
The question is, how do we define what we mean by "Alice is smarter than Bob". Or more pertinently, how do we effectively compare the intelligence and ability of an AI to that of another intelligent entity?
Is ChatGPT on par with a human child? A smart dog? Crows? A college professor? PhD level?
Of course we can test specific skills. Riddles, critical thinking, that sort of thing. Problem is that the results from a PhD will be indistinguishable from the results of a child with the answer key. You can't examine the mental state of others, so there's no way to know if they've synthesized the answer themselves or are simply parroting. (This is also a problem philosophers have been thinking about for millennia)
Personally, I doubt we'll answer these questions any time soon. Unless we do actually develop a science of consciousness, we'll probably still be asking these questions in a century or two.
> Is ChatGPT on par with a human child? A smart dog? Crows? A college professor? PhD level?
That presumes a total ordering of intelligence. I think the balance of evidence is that no such total ordering exists.
There are things chatgpt can do that children (or adults) cannot. There are things that children can do that chatgpt cannot.
At best maybe the Turing test can give us a partial ordering.
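To make the "partial ordering" point concrete, here's a minimal sketch in Python, assuming (purely for illustration) that intelligence gets scored along a few independent axes; the axis names and numbers are made up. One entity only counts as "at least as smart" as another if it matches or beats it on every axis, so many pairs are simply incomparable.

```python
# Hypothetical skill axes and scores, purely illustrative.
skills = {
    "child":   {"arithmetic": 2, "language": 6, "embodied_common_sense": 8},
    "chatgpt": {"arithmetic": 7, "language": 9, "embodied_common_sense": 3},
}

def dominates(a, b):
    """Pareto dominance: a scores at least as high as b on every axis."""
    return all(a[k] >= b[k] for k in a)

a, b = skills["child"], skills["chatgpt"]
if not (dominates(a, b) or dominates(b, a)):
    print("incomparable: neither is 'smarter' across the board")
```

Under that reading, "Is ChatGPT on par with a child?" has no single answer; the comparison only yields a partial order.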
I don't think there is much value in viewing "intelligence" as a whole. It's a combination of a multitude of factors that need to be dealt with independently.
Intelligence is often a measure of how quickly we can embrace a new model and how effectively we can use it. Building such a model can be done haphazardly or be guided with skill-transfer methodology. Once that's done, intelligence is how well we can select the correct model, filter out the relevant parameters, and then produce a correct answer.
There's a lot of factors there and more that I haven't specified. But one thing that I believe is essential is the belief that an answer is correct or uncertain.
I don't think either philosophical conceptions of consciousness or theories of computational complexity count as even "efforts to formalize intelligence". They are each focused on something significantly different.
The closest effort I know of, as far as characterizing intelligence as such, is Steven Smale's 18th problem.
https://en.wikipedia.org/wiki/Smale%27s_problems
The wikipedia article is pretty useless here.
The original paper is better, but still seems to be too vague to be useful. Where it isn't vague, it seems to point pretty strongly to computability/complexity theory.
Intelligence means many different things to different people. If we just gesture vaguely at it, we aren't going to get anywhere; everyone will just talk past each other.
Yeah,
Smale is a very smart person, but his stuff indeed seems as much a vague gesture as the other efforts. I feel like neural networks have succeeded primarily because of the failure of theorists/developers/etc. to create any coherent theory of intelligence aside from formal logic (or, following Pearl, formal probability). Nothing captures the ability of thinking to use very rough approximations. Nothing explains or accounts for Moravec's Paradox, etc.
Purely intuitively, it seems like there should be a connection between the two (computation and intelligence). But I have not formally studied any relevant fields, I would be interested to hear thoughts from those who have.
The known laws of physics are computable, so if you believe that human intelligence is purely physical (and the known laws of physics are sufficient to explain it) then that means that human intelligence is in principle no more powerful than a Turing machine, since the Turing machine can emulate human intelligence by simulating physics.
If there's a nonphysical aspect to human intelligence and it's not computable, then that means computers can never match human intelligence even in theory.
I was thinking the same thing, maybe a level above in the Chomsky Hierarchy…
This is one of those ideas that great minds have been chewing on for all of recorded history.
In modern thinking, we do certainly recognize different classes of intelligence. Emotional intelligence, spatial reasoning, math, language, music.
I'd say the classes of intelligence are finite. Primarily because different types of intelligence (generally) rely on distinct parts of our finite brains.
As for maximum intelligence, I like the angle taken by several SciFi authors. Generally, when an AI is elevated to the extremes of intelligence, they detach from reality and devolve into infinite navel-gazing. Building simulated realities or forever lost in pondering some cosmic mystery. Humans brought to this state usually just die.
For physical, finite entities, there's almost certainly an upper limit. It's probably a lot higher than we think, though. Biological entities have hard limits on energy, nutrients, and physical size beyond which it's just impossible to maintain any organism. Infinities are rarely compatible with biology, and we will always be limited.
Even an AI is still beholden to the practicalities of moving energy and resources around. Your processor can only get so large and energy/information can only move so fast. Intelligence in AI is probably also bounded by physical constraints, same as biological entities.
Kinda related, I’ve had a hunch for a while that we’re going to eventually learn that “the singularity” (AIs improving themselves ad infinitum) is impossible for similar reasons the halting problem is impossible. I can’t really articulate why though. It just seems similarly naive to think “if the AI becomes smarter than us, surely it can thus make a better AI than we could” as it is to think “if a computer can compute anything, surely it can compute whether this program will halt.”
My bet is there is some level of “incompleteness” to what an intelligence (ours or a machine’s) can do, and we can’t just assume that making one means it can become a singularity. More likely we’re just going to max out around human levels of intelligence, and we may never find a higher level than that.
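For reference, the halting-problem argument being gestured at can be sketched in a few lines; this is just the standard diagonalization, with a hypothetical `claimed_halting_oracle` standing in for any proposed decider, nothing specific to AI.

```python
def make_contrarian(claimed_halting_oracle):
    """Given any proposed halting decider, build a program it must misjudge."""
    def contrarian():
        if claimed_halting_oracle(contrarian):
            while True:   # oracle predicted "halts", so loop forever
                pass
        # oracle predicted "runs forever", so halt immediately
    return contrarian

# Whatever the oracle answers about 'contrarian', the answer is wrong by
# construction, so no general halting decider can exist.
```

Whether an analogous self-reference obstruction applies to "an AI designing a strictly smarter AI" is, as stated above, just a hunch; the sketch only shows the flavor of the argument.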
>More likely we’re just going to max out around human levels of intelligence, and we may never find a higher level than that.
I've seen people state this before, but I don't think I've seen anyone make a scientific statement on why this could be the case. The human body has a rather tight power and cooling envelope itself. On top of that, we'd have to ask how and why our neural algorithm somehow found the global maximum of intelligence when we can see that other animals reach higher local maxima of sensory processing.
Moreover, machine intelligence has more room to explore the problem space of survival (as in Mickey7), in that the death of any attached sensors or external network isn't the death of the AI itself. How does 'restore from backup' affect the evolution of intelligence?
Granted there are limits somewhere, and maybe those limits are just a few times what a human can do. Traversing the problem space in networks much larger than human sized capabilities might explode in time and memory complexity, or something weird like that.
For that definition, including the phrase "ad infinitum", it's pretty unlikely.
But a lack of infinities won't prevent the basic scenario of tech improving tech until things are advancing too fast for humans to comprehend.
I agree the idea of a formal view of intelligence is appealing. The hurdle that any such view faces is that most intelligence seems to involve rough, approximate reasoning.
I think this approximate nature of intelligence is essentially why neural nets have been more successful than earlier GOFAI/logic-based systems. But I don't think that means formal approaches to intelligence are impossible, just that they face challenges.