>It makes you wonder how much of the future will even be understandable to us […]
There isn't much of a future left. But in what is left to humans, there is in all probability not enough time to invent any true artificial intelligence. Nothing we talk about here or elsewhere on the internet is anything like intelligence, even if it does produce something novel and interesting.
I will give you an example. Suppose you come up with some clever prompt for ChatGPT or another of the LLMs, one that has it "talk" about a novel concept for which English has no appropriate word. Suppose as well that the LLM has trained on many texts in which humans spoke of novel concepts and invented words for them. Will the output of your LLM ever, even in a million years, coin a new word for its concept? You, I have no doubt, would come up with a word if needed. Sure, most people's new words would be embarrassing one way or another if you asked them to do so on the spot. But everyone could do it. Even the dimwitted kid in school you didn't like much, the one who sat in the corner and played with his own drool, could do it, though his word would be childish and onomatopoeic.
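If you want to try it yourself, here is a minimal sketch of the test using the OpenAI Python client. The model name and the prompt are illustrative, and nothing here is a claim about what any particular model will actually output:

```python
# Minimal sketch of the word-coinage test. The model name and prompt
# are illustrative choices, not claims about any specific model.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

prompt = (
    "Here is a concept English has no word for: the specific relief you "
    "feel when a long-dreaded meeting is cancelled. Coin a single new "
    "word for it, define it, and use it in a sentence."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```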
The LLMs are, at best, what science fiction used to call an oracle: a device that could answer questions seemingly intelligently, without agency or self-awareness or even a hint of consciousness. At best. The true principles of intelligence, of consciousness, are so far beyond what an LLM is that discovering them would, barring some accident, take many centuries. Many centuries, and far more humans than we have even now... we only have eight or so one-in-a-billion geniuses, and we have as many right now as we're ever going to have. Some projections have China's population shrinking to a third of its current size by 2100.
I’m probably too optimistic by default, but I think it might be okay. Agriculture used to require far more people than it does now, thanks to automation, and it certainly seems like many industries can be partially automated with only incremental changes to current technology. If fewer people are needed to keep society running, more will be able to focus on the sciences. So yes, we may have fewer people overall, but it’s quite possible we’ll have a lot more in science.
I don’t think AI needs to be conscious to be useful.
> The true principles of intelligence, of consciousness are so far beyond what an LLM is that it would, barring some accidental discovery, require many centuries.
I've been too harsh on myself for thinking it would take a decade to integrate imaging modalities into LLMs.
I mean, the integrated circuit, the equivalent of the evolution of multicellular life, was 1958. The microprocessor was 1971, and that would be what, animals as a kingdom? Computers the size of a building are now the size of a thumb drive. What level are modern complex computer systems like ChatGPT? The level of chickens? Dogs? Whatever it is, it is light years beyond what it was 50 years ago. We may be reaching physical limits for the size of circuits, but algorithm complexity and efficiency are moving fast and are nowhere near any physical limits.
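For scale, a back-of-envelope in Python. The doubling period is the usual Moore's-law rule of thumb and the time span is roughly the microprocessor era, so the numbers are illustrative orders of magnitude, not measured data:

```python
# Back-of-envelope for the hardware curve: transistor counts doubling
# roughly every two years (Moore's law rule of thumb, not data).
doubling_period_years = 2
span_years = 50  # roughly the microprocessor (1971) to now

doublings = span_years / doubling_period_years
factor = 2 ** doublings
print(f"{doublings:.0f} doublings -> ~{factor:,.0f}x more transistors")
# 25 doublings -> ~33,554,432x more transistors
```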
We haven’t needed many insane breakthroughs to get here. It has mostly been iterating and improving, which opens up new things to develop, iterate on, and improve. IBM’s Watson was a supercomputer that could understand natural language in 2011; my laptop runs LLMs that can do that now. The pace of improvement is incredibly fast, and I would be very hesitant to say with confidence that human-level “intelligence” is definitely centuries away. 1804 was two centuries ago, and that was the year the locomotive was invented.
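The laptop point is easy to make concrete. A minimal sketch with Hugging Face transformers, assuming a small instruction-tuned model that fits in local RAM; the model name is just one example:

```python
# Minimal sketch of running an LLM locally via Hugging Face
# transformers. The model name is illustrative; any small model
# that fits in local RAM works the same way.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # illustrative small model
)

out = generator(
    "In plain English: what year was the first steam locomotive built?",
    max_new_tokens=50,
)
print(out[0]["generated_text"])
```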