Philosophers have been trying to define what it means to be conscious since forever. I think that is informally what you mean here.
If you just mean what problems it can solve, and how quickly, we already have a well-developed theory of that in terms of complexity classes - https://complexityzoo.net/Complexity_Zoo
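If you want the crude empirical version of that question for a particular system, a toy sketch might look like the following (the solver and input sizes here are just placeholders, not any real benchmark): run it on inputs of growing size and watch how the runtime scales.

```python
import random
import time

def solver(xs):
    # stand-in for whatever procedure is being evaluated;
    # here just sorting, an O(n log n) task
    return sorted(xs)

# crude proxy for the "how quickly" half of the question:
# observe how runtime grows with input size
for n in (10_000, 100_000, 1_000_000):
    xs = [random.random() for _ in range(n)]
    start = time.perf_counter()
    solver(xs)
    print(n, round(time.perf_counter() - start, 4))
```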
I think this is more about levels or classifications of intelligence.
If you've ever interacted with a very smart animal, it's easy to recognize, in an admittedly subjective and vague way, that its reasoning abilities are roughly on par with those of a human child. We can also say with extreme confidence that humans themselves have wildly different levels of intelligence and intellectual ability.
The question is, how do we define what we mean by "Alice is smarter than Bob". Or more pertinently, how do we effectively compare the intelligence and ability of an AI to that of another intelligent entity?
Is ChatGPT on par with a human child? A smart dog? Crows? A college professor? PhD level?
Of course we can test specific skills: riddles, critical thinking, that sort of thing. The problem is that the results from a PhD will be indistinguishable from the results of a child with the answer key. You can't examine the mental state of others, so there's no way to know whether they've synthesized the answer themselves or are simply parroting. (This is also a problem philosophers have been thinking about for millennia.)
Personally, I doubt we'll answer these questions any time soon. Unless we do actually develop a science of consciousness, we'll probably still be asking these questions in a century or two.
> Is ChatGPT on par with a human child? A smart dog? Crows? A college professor? PhD level?
That presumes a total ordering of intelligence. I think the balance of evidence is that no such total ordering exists.
There are things ChatGPT can do that children (or adults) cannot. There are things that children can do that ChatGPT cannot.
At best maybe the Turing test can give us a partial ordering.
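To make the "partial, not total" point concrete, here is a toy sketch (the skills and scores are invented for illustration, not measurements): treat each entity as a profile of scores on independent skills, and say one dominates another only if it is at least as good on every skill. Under that rule, some pairs simply aren't comparable.

```python
# Invented capability profiles, purely for illustration.
child = {"arithmetic": 2, "physical_common_sense": 9, "essay_drafting": 1}
chatbot = {"arithmetic": 6, "physical_common_sense": 3, "essay_drafting": 8}

def dominates(a, b):
    # a outranks b only if a is at least as good on *every* skill
    return all(a[k] >= b[k] for k in a)

print(dominates(chatbot, child))  # False
print(dominates(child, chatbot))  # False -> neither dominates: incomparable
```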
I don't think there is much value in viewing "intelligence" as a whole. It's a combination of a multitude of factors that need to be dealt with independently.
Intelligence is often a measure of how quickly we can embrace a new model and how effectively we can use it. Building such a model can be done haphazardly or be guided by skill-transfer methodology. Once that is done, intelligence is how well we can select the correct model, filter out the relevant parameters, and then produce a correct answer.
There are a lot of factors there, and more that I haven't specified. But one thing I believe is essential is a belief about whether an answer is correct or uncertain.
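As a very rough sketch of that "select a model, produce an answer, and hold a belief about whether it's correct" framing (the candidate models and the confidence rule below are invented for illustration):

```python
# Two candidate models of some observed data, invented for illustration.
candidates = {
    "linear": lambda x: 2 * x,
    "quadratic": lambda x: x * x,
}

observations = [(1, 1), (2, 4), (3, 9)]  # (input, output) pairs seen so far

def fit_error(model):
    return sum((model(x) - y) ** 2 for x, y in observations)

# model selection: pick whichever candidate best explains the observations
best_name, best_model = min(candidates.items(), key=lambda kv: fit_error(kv[1]))

answer = best_model(4)
# crude stand-in for a belief about correctness: zero error -> "confident"
status = "confident" if fit_error(best_model) == 0 else "uncertain"
print(best_name, answer, status)
```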
I don't think either philosophical conceptions of consciousness or theories of computational complexity count as even "efforts to formalize intelligence". They are each focused on something significantly different.
The closest effort I know of, as far as characterizing intelligence as such, is Stephen Smale's 18th problem.
https://en.wikipedia.org/wiki/Smale%27s_problems
The Wikipedia article is pretty useless here.
The original paper is better, but still seems to be too vague to be useful. Where it isn't vague, it seems to point pretty strongly to computability/complexity theory.
Intelligence means many different things to different people. If we just gesture vaguely at it, we aren't going to get anywhere; everyone will just talk past each other.
Yeah, Smale is a very smart person, but his stuff indeed seems as much a vague gesture as the other efforts. I feel like neural networks have succeeded primarily because of the failure of theorists/developers/etc. to create any coherent theory of intelligence aside from formal logic (or Pearl's formal probability). Nothing captures the ability of thinking to use very rough approximations. Nothing explains or accounts for Moravec's Paradox, etc.