> the singularity is happening
[Citation needed]
No LLM is yet being used effectively to improve LLM output in exponential ways. Personally, I'm skeptical that such a thing is possible.
LLMs aren't AGI, and aren't a path to AGI.
The Singularity is the Rapture for techbros.
LLMs aren't AGI and maybe aren't a path to AGI, but step back and look at the way the world is changing. Hard disks were introduced by IBM in 1956, and less than seventy years later there's an estimated million terabytes a year of hard disk capacity made and sold, with total capacity sold having climbed through Mega, Giga, Tera, Peta, Exa, Zetta ... to 1.36 Zettabytes.
In 2000, webcams were barely a thing and audio was often recorded to dictaphone tapes; now you can find a recorded photo or video of roughly anyone and anything on Earth: maybe a tenth of all humans; almost any place, animal, insect, or natural event; almost any machine, mechanism, invention, or painting; a large sampling of "indoors", both public and private; almost any festival, event, or tradition; and a very large sampling of people doing things and people teaching things for all kinds of skills. Plus tons of measurements of locations, temperatures, movements, weather, experiment results, and so on.
The ability of computers to process information jumped with punched card readers, with electronic computers in the 1940s and 50s, again with transistors in the 1960s, microprocessors in the 1970s, commodity PCs in the 1980s, commodity computer clusters (Google) in the 1990s, maybe again with multi-core desktops for everyone in the 2000s, and with general purpose GPUs in the 2010s; plus faster commodity networking from 10Mbit to 100Gbit and beyond, and storage moving through RAID, SATA, SAS, and SSDs.
It's now completely normal to check Google Maps for road traffic and how busy stores are (picked up in near-realtime from the movement of smartphones around the planet), to do face and object recognition and search in photos, to do realtime face editing and enhancement while recording on a smartphone's own chip, to track increasing amounts of exercise and health data from increasing numbers of people, to call people across the planet and have your voice transcribed automatically to text, or to download gigabytes of compressed Wikipedia onto a laptop and play with it in Python over a weekend just for fun.
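A minimal sketch of the kind of weekend hacking I mean, assuming a standard bzip2-compressed pages-articles dump from dumps.wikimedia.org (the filename here is illustrative):

```python
# Stream page titles straight out of a compressed Wikipedia dump,
# without ever unpacking the multi-gigabyte file.
import bz2
import xml.etree.ElementTree as ET

seen = 0
with bz2.open("enwiki-latest-pages-articles.xml.bz2", "rb") as f:
    for _, elem in ET.iterparse(f):
        # Tags carry a MediaWiki export namespace, so match on the suffix.
        if elem.tag.endswith("}title"):
            seen += 1
            if seen <= 10:
                print(elem.text)
        elem.clear()  # free each parsed element so memory stays flat
print(seen, "page titles seen")
```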
"AI" stuff (LLMs, neural networks and other techniques, PyTorch, TensorFlow, cloud GPUs and TPUs), the increase in research money, in companies competing to hire the best researchers, the increase in tutorials and numbers of people around the world wanting to play with it and being able to do that ... do you predict that by 2030, 2035, 2040, 2045, 2050 ... 2100, we'll have manufactured more compute power and storage than has ever been made, several times over, and made it more and more accessible to more people, and nothing will change, nothing interesting or new will have been found deliberately or stumbled upon accidentally, nothing new will have been understood about human brains, biology, or cognition, no new insights or products or modelling or AI techniques developed or become normal, no once-in-a-lifetime geniuses having any flashes of insight?
I mean, what you're describing is technological advancement. It's great! I'm fully in favor of it, and I fully believe in it.
It's not the singularity.
The singularity is a specific belief that we will achieve AGI, and that the AGI will then self-improve at an exponential rate, becoming infinitely more advanced and powerful (much more so than we could ever have made it), and it will then also invent loads of new technologies and usher in a golden age. (Either for itself or us. That part's a bit under contention, from my understanding.)
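A toy model of that claim (my illustration, not part of anyone's formal definition): if capability $C$ grows in proportion to itself you get a plain exponential, but if self-improvement makes the growth rate superlinear in $C$, the solution blows up at a finite time, which is where the "singularity" imagery comes from:

$$\dot{C} = kC \;\Rightarrow\; C(t) = C_0 e^{kt}, \qquad \dot{C} = kC^2 \;\Rightarrow\; C(t) = \frac{C_0}{1 - kC_0 t},$$

with the second solution diverging at $t = 1/(kC_0)$.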
> "The singularity is a specific belief that we will achieve AGI
That is one version of it, but not the only one. Per Wikipedia: "John von Neumann is the first person known to have discussed a 'singularity' in technological progress. Stanislaw Ulam reported in 1958 that an earlier discussion with von Neumann 'centered on the accelerating progress of technology and changes in human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue'"[1]. That is, a time when the people before it would be unable to predict what came after, because it was so different. (And which I argue in another comment[2] is not a specific cutoff moment, but a long historical trend of the future becoming harder to predict over shorter and shorter timeframes.)
Apart from AGI or von Neumann's accelerationism, I also understand it as augmenting human intelligence: "once we become cyborgs and enhance our abilities, nobody can predict what comes next"; or artificial 'life': "if we make self-replicating nano-machines (that can undergo Darwinian natural selection?), all bets about the future are off"; or brain emulation: "once we can simulate human brains in a machine, even if we can't understand how they work, we can run tons of them at high speed".
> and usher in a golden age. (Either for itself or us. That part's a bit under contention, from my understanding.)
Arguably, we have built weakly-superhuman entities, in the form of companies. Collectively they can solve problems that individual humans can't, live longer than humans, deploy and exploit more resources over larger areas and longer timelines than humans, and have shown a tendency to burn through workers and ruin the environment that keeps us alive even while supposedly guided by human intelligence. I don't have very much hope that a non-human AGI would be more aligned with our interests than companies made up of us are.
[1] https://en.wikipedia.org/wiki/Technological_singularity
[2] https://news.ycombinator.com/item?id=46935546
If you look at the rapid acceleration of progress and conclude this way, well, de Nile ain't just a river in Egypt.
Also, yes, LLMs are indeed AGI: https://www.noemamag.com/artificial-general-intelligence-is-...
This was Peter Norvig's take. AGI is a low bar because most humans are really stupid.
> If you look at the rapid acceleration of progress
I don’t understand this perspective. There are numerous examples of technological progress that then stalls out; just look at batteries. Or where advancements are too expensive for widespread use (e.g., why no one flies the Concorde any more).
Why is previous progress a guaranteed indicator of future progress?
Just think of this as risk management.
If AGI doesn't happen, then good. You get to keep working and playing and generally screwing off in the way that humans have for generations.
On the other hand, if AGI happens, especially any time soon, you are exceptionally fucked along with me. The world changes very rapidly and there is no getting off Mr. Bones' Wild Ride.
>Why is previous progress a guaranteed indicator of future progress?
In this case, because nature already did it. We're not inventing and testing something out of whole cloth. And we know there are still massive efficiencies to be gained.
For me the Concorde is an example of how people look at this stuff incorrectly. In the past we had to send people places very quickly to do things, which was expensive and inefficient. I don't need to get on a plane to have an effect just about anywhere else in the world now; the internet and digital mediums give me a presence at other locations that is very close to being there. We didn't need planes that fly at the speed of sound; we needed strings that communicate at the speed of light.
If you think AGI is at hand, why are you trying to sway a bunch of internet randos who don’t get it? :) Use those god-like powers to make the life you want while it’s still under the radar.
How do you take over the world if you have access to 1000 normal people? AGI, by the original definition (long forgotten by now), means surpassing the MEDIAN human at almost all tasks. How the rebranding of ASI into AGI happened without anyone noticing is kind of insane.
>If you look at the rapid acceleration of progress and conclude this way
There's no "rapid acceleration of progress". If anything there's a decline, and even an economic decline.
Take away the financial bubbles built on deregulation and a huge explosion of debt, and the last 40 years of "economic progress" are, in actual advancement terms, just air filling a huge bubble - unlike the previous millennia.
That’s completely wrong. There was barely any progress in previous millennia. There was even an economics Nobel Prize for showing why!
The GDP per capita of the world has been slowly increasing for several millennia. Same for advancements in core technology.
The Industrial Revolution increased the pace, but the trend was already there, not flat or randomly fluctuating (think ancient hominids, vs. early agriculture, vs. the Bronze Age, vs. ancient Babylon and Assyria, vs. later Greece and Persia, then Rome, then the Renaissance, and so on).
Post 1970s most of the further increase has been based on mirages due to financialization, and doesn't reflect actual improvement.
> Post 1970s most of the further increase has been based on mirages due to financialization, and doesn't reflect actual improvement.
Of course it does. It would be good if you would try to actually support such controversial claims with data.
Lol, wut?
The world can produce more things cheaper and faster than ever and this is an economic decline? I think you may have missed the other 6 billion people on the planet getting massive improvements in their quality of life.
>I think you may have missed the other 6 billion people on the planet getting massive improvements in their quality of life.
I think you have missed that it's easy to get "massive improvements in your quality of life" if you start from just-post-revolution China, 1950s Africa, or colonial India.
Much less so if you've plateaued, as the US and Europe have, and lived off increased debt ever since the 1970s.
And yet in the US I can currently survive an illness, by means of technology, that would have killed me in the 70s. It can be really hard to see the forest for the trees when everything around us is rapidly changing technology.
Increased debt mostly goes to the goods that technology cannot, at least yet, reproduce. For example, they aren't making new land. Taste, NIMBYism, and current laws stop us from increasing housing density in a lot of places too. Healthcare in the US is still quite limited by laws and made expensive because of them.
> rapid acceleration
Who was it who stated that every exponential was just a sigmoid in disguise?
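Whoever it was, the math behind the quip checks out: a logistic curve with ceiling $L$,

$$f(t) = \frac{L}{1 + e^{-k(t - t_0)}} \approx L\,e^{k(t - t_0)} \quad \text{while } f(t) \ll L,$$

is numerically indistinguishable from a pure exponential in its early phase, so data from before the inflection point can't tell you whether there's a ceiling at all.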
> most humans are really stupid.
Statistically, don't we all sort of fit somewhere along a bell curve?
The bell curve of IQ and being stupid probably don't have much to do with each other.
Think of stupidity as interacting with one's environment and getting negative outcomes. In a simple environment with few ways to go wrong, even someone with an 80 IQ may not be considered stupid. But if the environment rapidly grows more complex, and the amount of thinking required for positive outcomes increases, then even someone with a 110 IQ may quickly find themselves in trouble.
Yes, and that's why surpassing it doesn't lead to a singularity except over an infinite timeframe. This whole thing was stupid in the first place.
What rapid acceleration?
I look at the trajectory of LLMs, and the shape I see is one of diminishing returns.
The improvements in the first few generations came fast, and they were impressive. Then subsequent generations took longer, improved less over the previous generation, and required more and more (and more and more) resources to achieve.
I'm not interested in one guy's take that LLMs are AGI, regardless of his computer science bona fides. I can look at what they do myself, and see that they aren't, by most reasonable definitions of AGI.
If you really believe that the singularity is happening now...well, then, shouldn't it take a very short time for the effects of that to be painfully obvious? Like, massive improvements in all kinds of technology coming in a matter of months? Come back in a few months and tell me what amazing new technologies this supposed AGI has created...or maybe the one in denial isn't me.
> I look at the trajectory of LLMs, and the shape I see is one of diminishing returns
It seems even more true if you compare OpenAI's funding up through the initial public release in 2022 with how exponentially spending has increased to deliver improvements since. We’re now talking upwards of $600B/yr of industry-wide spending on LLM-based AI infrastructure in 2026.
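One way to make "exponentially more spend for incremental gains" concrete (an illustration using published power-law fits, not a figure from this thread): empirical scaling laws fit language-model loss as $L(C) \propto C^{-\alpha}$ in compute $C$, with $\alpha$ around 0.05 in Kaplan et al. (2020). Under that fit, halving the loss costs

$$\frac{C'}{C} = 2^{1/\alpha} \approx 2^{20} \approx 10^6$$

times the compute, i.e. each fixed increment of improvement gets multiplicatively more expensive.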
In my opinion, LLMs provide one piece of AGI. The only intelligence I’ve directly experienced is my own. I don’t consciously plan what I’m saying (or writing right now).
Instead, a subconscious process assembles the words to support my stream of consciousness. I think that LLMs are very similar, if not identical.
Stream of thought is accomplishing something superficially similar to consciousness, but without the ability to be innovative.
At any rate, until there’s an artificial human level stream of consciousness in the mix for each AI, I doubt we’ll see a group of AIs collaborating to produce a significantly improved new generation of AI hardware and software minus human involvement.
Once that does happen, the Singularity is at hand.