This is also assuming that AGI is even possible. So far there is no evidence that this is actually doable over anything but billions of years (and even then we have no idea how nature really managed it).
Edit: Meant to say AGI (superintelligence didn't make sense). Superintelligence is undefinable at the moment, so even considering whether it's possible is more of a philosophical/sci-fi thought experiment than anything else.
> So far there is no evidence that this is actually doable over anything but billions of years (and even then we have no idea how nature really managed it).
"The brain is so mysterious and unique that we should abandon all attempts to even try to apply results like the universal approximation theorem to it, and discard all signs that some approximation is happening."
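For what it's worth, the universal approximation result the comment alludes to is constructive in the ReLU case: a single hidden layer can reproduce any piecewise-linear interpolant, and hence approximate any continuous function on an interval. Here is a minimal pure-Python sketch (the helper name `build_relu_net` and the choice of `sin` as target are just illustrative assumptions, not anything from the thread):

```python
import math

def relu(z):
    # Standard rectified linear unit.
    return max(0.0, z)

def build_relu_net(f, a, b, n):
    """Build a one-hidden-layer ReLU network that equals the
    piecewise-linear interpolant of f on n+1 evenly spaced knots in [a, b]."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    ys = [f(x) for x in xs]
    # Slope on each segment, then coefficients as slope *changes* at each knot.
    slopes = [(ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i]) for i in range(n)]
    coeffs = [slopes[0]] + [slopes[i] - slopes[i - 1] for i in range(1, n)]

    def net(x):
        # y_0 + sum_i c_i * relu(x - x_i): exactly the interpolant on [a, b].
        return ys[0] + sum(c * relu(x - xs[i]) for i, c in enumerate(coeffs))

    return net

net = build_relu_net(math.sin, 0.0, math.pi, 50)
err = max(abs(net(x) - math.sin(x))
          for x in (i * math.pi / 1000 for i in range(1001)))
print(err)  # small: interpolation error shrinks like (width/n)^2
```

With 50 segments the worst-case error against `sin` is already well under 1%, and it shrinks quadratically as `n` grows, which is the "some approximation is happening" point in concrete form.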
Why don't we see signs of intelligence in the universe? Because the simplest self-replicator requires the accidental synthesis of a sequence of roughly 200 RNA nucleobases.
BTW, your argument could have been applied word-for-word to powered flight in 1899. In short, argumentum ad ignorantiam.
No. To realize the possibility of powered flight one only needs to look at birds. AGI, on the other hand, is another word for God.
Just define "general" as "as general as allowed by math, physics, and practical limitations." Or use a conventional reading of AGI as a human-level intelligence (which we, naturally, have a working example of).
Yeah, but if you do that, you then have to turn around and look at how the goalposts keep moving. That is what I was (originally) trying to get at, and why I phrased it the way I did. If we truly had actual (artificial) general intelligence (or were close to it), we would already have a solid definition/benchmark (and it... probably wouldn't be what you said, but something a lot more detailed/thorough). Right now both AGI and ASI are just... whatever. "It earns a hundred billion dollars in revenue," "It can do anything a general human can do" (ignoring the sheer amount of ambiguity in that alone), "It can do most tasks a human can do" (again, ambiguous: which human, which tasks, on and on and on).
oh absolutely, no argument there, the case for AGI is pretty weak. I was just saying that I am even more sceptical that any of this is a "first or nothing" scenario - that is one of my biggest pet peeves about the entire tech sector.
Right, but I never said it was a first-or-nothing scenario to begin with. Given that both AGI and ASI are so ambiguous as to be nothingburgers, talking about them is just a performative thought experiment IMO. An interesting one, certainly, but neither are even remotely close to being realized. Until we have some kind of clear definition that can be scientifically proven and reproduced, that will remain the case.
ASI is the acronym you’re looking for. It stands for Artificial Superintelligence.
Arguably it’s already here. ChatGPT knows more than any human who has ever lived. It can carry out millions of conversations at once. And it has better working memory (“context”) than humans. And it can speak and write code much faster than humans.
Humans still have some advantages: specialists are smarter than ChatGPT in most domains. We're better at using imagination. We understand the physical world better. But it seems like we're watching the gap close in real time. A few years ago ChatGPT could barely program. Now you can give it complex prompts and it can write large, complex programs which mostly work. If you extrapolate forward, is there any good reason to think humans will retain a lead?
No, I am not looking for ASI. We have yet to achieve AGI. Unless you can definitively prove that we already have? Because, I mean, if we've already achieved AGI then that obviously means that you can define what intelligence actually is, no?
> It can carry out millions of conversations at once.
You're anthropomorphizing it; this isn't what it's doing. It's being fed a series of text and predicting what comes next. The box has no context about the other "conversations" it's having and doesn't remember them.
ChatGPT can only respond to a prompt, and in the context of that prompt. It has no continuous awareness of anything. That isn't superintelligence. We are easily fooled because we have stupid monkey brains.
We have more like Artificial Superstupidity.
Ultimately, our current models are extremely unlikely to perform better than the sum of current human knowledge. Godlike superintelligence is a pipe dream with the current LLM-based approaches.