ASI is the acronym you’re looking for. It stands for Artificial Superintelligence.

Arguably it’s already here. ChatGPT has broader factual coverage than any human who has ever lived. It can carry out millions of conversations at once. It has a larger working memory (“context”) than humans, and it can write prose and code far faster than we can.

Humans still have some advantages: specialists are smarter than ChatGPT in most domains, we have better imagination, and we understand the physical world better. But it seems like we’re watching the gap close in real time. A few years ago ChatGPT could barely program. Now you can give it complex prompts and it can write large, complex programs that mostly work. If you extrapolate forward, is there any good reason to think humans will retain a lead?

No, I am not looking for ASI. We have yet to achieve AGI. Unless you can definitively prove that we already have? Because, I mean, if we've already achieved AGI then that obviously means that you can define what intelligence actually is, no?

> It can carry out millions of conversations at once.

You're anthropomorphizing it; that isn't what it's doing. It's being fed a sequence of text and predicting what comes next. The box has no context about the other "conversations" it's having and doesn't remember them.

ChatGPT can only respond to a prompt, and in the context of that prompt. It has no continuous awareness of anything. That isn't superintelligence. We are easily fooled because we have stupid monkey brains.
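The statelessness point can be sketched in a few lines of toy Python (everything here is hypothetical, a stand-in for the real model): the "responder" below is a pure function of its prompt, so any continuity across turns has to come from the caller resending the conversation history.

```python
# Toy sketch, not a real model: each call is a pure function of its prompt,
# which is the sense in which an LLM has no continuous awareness.

def respond(prompt: str) -> str:
    """Stateless responder: the output depends only on the prompt text."""
    # Stand-in for "predict what comes next given this text".
    return f"echo({prompt})"

# Two separate "conversations": neither call can see the other.
a = respond("My name is Alice.")
b = respond("What is my name?")  # no memory of the first call

# To get apparent continuity, the caller must resend the whole history:
history = "My name is Alice.\nWhat is my name?"
c = respond(history)
```

The apparent "millions of simultaneous conversations" are just many independent calls like these, each carrying its own context in the prompt.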


What we have is more like Artificial Superstupidity.

Ultimately, current models are extremely unlikely to perform better than the sum of existing human knowledge. Godlike superintelligence is a pipe dream with current LLM-based approaches.