If you look at the rapid acceleration of progress and conclude this way, well, de Nile ain't just a river in Egypt.
Also yes LLMs are indeed AGI: https://www.noemamag.com/artificial-general-intelligence-is-...
This was Peter Norvig's take. AGI is a low bar because most humans are really stupid.
> If you look at the rapid acceleration of progress
I don't understand this perspective. There are numerous examples of technical progress that then stalls out. Just look at batteries, for example. Or cases where the advance is too expensive for widespread use (e.g., why no one flies Concorde any more).
Why is previous progress a guaranteed indicator of future progress?
Just think of this as risk management.
If AGI doesn't happen, then good. You get to keep working and playing and generally screwing off in the way that humans have for generations.
On the other hand, if AGI happens, especially any time soon, you are exceptionally fucked along with me. The world changes very rapidly and there is no getting off Mr. Bones' Wild Ride.
>Why is previous progress a guaranteed indicator of future progress?
In this case, because nature already did it. We're not inventing and testing something out of whole cloth. And we know there are still massive efficiencies to be gained.
For me the Concorde is an example of how people look at this stuff incorrectly. In the past we had to send people places very quickly to do things. This was very expensive and inefficient. I don't need to get on a plane to have an effect just about anywhere else in the world now. The internet and digital media give me a presence at other locations that is very close to being there. We didn't need planes that fly faster than the speed of sound; we needed strings that communicate at the speed of light.
If you think AGI is at hand why are you trying to sway a bunch of internet randos who don’t get it? :) Use those god-like powers to make the life you want while it’s still under the radar.
How do you take over the world if you have access to 1000 normal people? AGI, by the original definition (long forgotten by now), means surpassing the MEDIAN human at almost all tasks. How the rebranding of ASI into AGI happened without anyone noticing is kind of insane.
>If you look at the rapid acceleration of progress and conclude this way
There's no "rapid acceleration of progress". If anything there's a decline, and even an economic decline.
Take away the financial bubbles based on deregulation and a huge explosion of debt, and the last 40 years of "economic progress" are, in actual advancement terms, just a mirage: a huge bubble filled with air, unlike the previous millennia.
That's completely wrong. There was barely any progress in previous millennia. There was even an economics Nobel prize for showing why!
The GDP per capita of the world has been slowly increasing for several millennia. Same for the advancements in core technology.
The industrial revolution increased the pace, but the trend was already there, not flat or randomly fluctuating (think ancient hominids, versus early agriculture, versus the Bronze Age, versus ancient Babylon and the Assyrian empires, versus later Greece and Persia, later Rome, later the Renaissance, and so on).
Post 1970s most of the further increase has been based on mirages due to financialization, and doesn't reflect actual improvement.
> Post 1970s most of the further increase has been based on mirages due to financialization, and doesn't reflect actual improvement.
Of course it does. It would be good if you actually tried to support such a controversial claim with data.
Lol, wut?
The world can produce more things, cheaper and faster, than ever before, and this is an economic decline? I think you may have missed the other 6 billion people on the planet getting massive improvements in their quality of life.
>I think you may have missed the other 6 billion people on the planet getting massive improvements in their quality of life.
I think you have missed that it's easy to get "massive improvements in your quality of life" if you start from post-revolution China, or 1950s Africa, or colonial India.
Much less so if you plateaued, as the US and Europe did, and have lived off increased debt ever since the 1970s.
And yet in the US I can currently survive an illness, by means of technology, that would have killed me in the 70s. It can be really hard to see the forest for the trees when everything around us is rapidly changing technology.
Increased debt mostly goes to the goods that technology cannot, at least yet, reproduce. For example, they aren't making new land. Taste, NIMBYism, and current laws stop us from increasing housing density in a lot of places too. Healthcare in the US is still quite limited by laws and made expensive because of them.
> rapid acceleration
Who was it who stated that every exponential was just a sigmoid in disguise?
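Whoever said it, the quip has teeth. As a minimal sketch (every parameter here is an arbitrary, purely illustrative assumption), a logistic curve is numerically almost indistinguishable from a true exponential while you're still on its early, rising leg:

    import math

    # Logistic curve f(t) = L / (1 + e^(-k(t - t0))).
    # For t well below the midpoint t0, the e^(-k(t - t0)) term dominates
    # the denominator, so f(t) ~ L * e^(k(t - t0)): pure exponential growth.
    L, k, t0 = 100.0, 1.0, 10.0   # ceiling, growth rate, midpoint (all made up)

    def logistic(t):
        return L / (1.0 + math.exp(-k * (t - t0)))

    def early_exponential(t):
        return L * math.exp(k * (t - t0))

    for t in range(0, 8):
        s, e = logistic(t), early_exponential(t)
        print(f"t={t}  logistic={s:.5f}  exponential={e:.5f}  rel. err={(e - s) / s:.3%}")

Every data point on the early leg is consistent with both curves; the ceiling only announces itself near the midpoint, which is exactly why extrapolating from the steep part is risky.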
> most humans are really stupid.
Statistically, don't we all sort of fit somewhere along a bell curve?
The bell curve of IQ and being stupid probably don't have much to do with each other.
Think of stupidity as the consequence of interacting with one's environment with negative outcomes. If you have a simple environment with few negative outcomes, then even someone with an 80 IQ may not be considered stupid. But if your environment rapidly grows more complex and the amount of thinking you have to do for positive outcomes increases, then even someone with a 110 IQ may quickly find themselves in trouble.
Yes, and that's why surpassing the median human doesn't lead to a singularity except over an infinite timeframe. This whole thing was stupid in the first place.
What rapid acceleration?
I look at the trajectory of LLMs, and the shape I see is one of diminishing returns.
The improvements in the first few generations came fast, and they were impressive. Then subsequent generations took longer, improved less over the previous generation, and required more and more (and more and more) resources to achieve.
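As a rough illustration of that shape (the power-law form and all the constants below are assumptions I'm making for the sketch, loosely inspired by published neural scaling laws, not measurements of any real model):

    # Toy model: loss falls as a power law in compute, loss(C) = a * C^(-beta).
    # Every generation doubles compute; each doubling buys a smaller absolute
    # improvement while costing twice as much. Constants are arbitrary.
    a, beta = 10.0, 0.1

    def loss(compute):
        return a * compute ** -beta

    prev = loss(1)
    for gen in range(1, 9):
        c = 2 ** gen                  # compute doubles per generation
        cur = loss(c)
        print(f"gen {gen}: compute x{c:4d}  loss={cur:.4f}  improvement={prev - cur:.4f}")
        prev = cur

Under that assumed shape, the improvement per generation shrinks even as the spending per generation doubles, which is the trajectory described above.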
I'm not interested in one guy's take that LLMs are AGI, regardless of his computer science bona fides. I can look at what they do myself, and see that they aren't, by most reasonable definitions of AGI.
If you really believe that the singularity is happening now...well, then, shouldn't it take a very short time for the effects of that to be painfully obvious? Like, massive improvements in all kinds of technology coming in a matter of months? Come back in a few months and tell me what amazing new technologies this supposed AGI has created...or maybe the one in denial isn't me.
> I look at the trajectory of LLMs, and the shape I see is one of diminishing returns
It seems even more true if you look at OpenAI's funding up through the initial public release in 2022, versus how exponentially spending has increased to deliver improvements since. We're now talking upwards of $600B/yr of spending on LLM-based AI infrastructure across the industry in 2026.