> Still, while it is basically impossible to predict with any confidence what AI will do in 20 or 30 years, one can say something about the next decade, because most of these near-term economic effects must involve existing technologies and improvements to them.

I think you would be hard pressed to find someone who was making adequate predictions about where we would be now back in 2020, much less 2015, and if you did, I doubt many people would have taken them seriously.

I’d argue that we can currently speak with some level of confidence about what things will be like in three years. After that, who knows?

Yup. Just because someone's a Nobel Laureate (he's an economist) doesn't mean they're right. Just like I won't let my doctor inform me on tabs vs. spaces.

Economists, businesspeople & their ilk have proven time & time again that 99% of them just throw darts at a board & see what sticks. The only ingredients required are money, connections, and extroversion (height helps too). That's not to say that most scientists don't do the same thing; that is science, after all.

I doubt many people at all would have expected even the success of LLMs before Google's attention paper. NLP experienced a huge jump: previous models always seemed to me like handwritten sets of statistical rules stringing together text, and now we have trained sets of statistical rules orders of magnitude more complex... I have no idea what we'll end up with next.

> I doubt many people at all would have expected even the success of LLMs before Google's attention paper. NLP experienced a huge jump

AI doing fantastically better on AI benchmarks is different from AI greasing the wheels of the economy towards greater productivity. Acemoglu doesn't have much to say about the former (he's an economist, after all) and is focusing on the latter.

It is even debated whether and how personal computing has influenced productivity: https://en.wikipedia.org/wiki/Productivity_paradox

Suffice it to say that even though these technologies might make life feel radically different, it remains to be seen how that ultimately snowballs into overall productivity. Of course, this is also complicated by questions of whether we're measuring productivity correctly.

> Just like I won't let my doctor inform me on tabs vs. spaces.

Tabs for indentation, spaces for alignment. 100% all the way. Anything else is Heresy... ;)
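Concretely, something like this (a contrived C sketch just to illustrate the convention; the names are made up):

```c
#include <stdio.h>

/* Contrived example of "tabs for indentation, spaces for alignment".
 * Each body line below is indented with one tab; the wrapped printf
 * arguments are then lined up with spaces under the opening '(',
 * so the layout survives any tab-width setting. */
int main(void)
{
	int apples  = 3;   /* a tab indents the statement            */
	int oranges = 14;  /* spaces align the '=' and these comments */

	printf("apples=%d, oranges=%d\n",
	       apples, oranges);  /* continuation aligned with spaces */
	return 0;
}
```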

Oh God, I knew that offhand remark would provoke a response like this, ha ha. Four-space tabs for indentation. Everything space, just like the universe around us.

Hahahaha! Okay. Down-vote fully accepted as totally justifiable... I clearly risked a flame-war by wading into religious territory like "tabs vs spaces"... :rofl:

(Seriously though, tabs all the way for me... It's just fewer key-presses.)

> Just because someone's a Nobel Laureate (he's an economist) doesn't mean they're right.

https://fivethirtyeight.com/features/the-economics-nobel-isn...

"Yup. Just because someone's a Nobel Laureate"

It's also worth pointing out that merely using technology is not the same as belonging to the cohort of people who spend their whole lives building and working with technology and dreaming about where it can go.

"It is reasonable to suppose that AI’s biggest impact will come from automating some tasks and making some workers in some occupations more productive."

This person needs the Ghosts of AI Present and Future to come show him a bit more of this tech first-hand. (Try out Google Flow and then try to make a statement like the one above; you won't be able to.)

---

And oddly, this was just recommended to me on YouTube:

The AI Revolution Is Underhyped | Eric Schmidt (former Google CEO) | TED

https://www.youtube.com/watch?v=id4YRO7G0wE

Really?

Provocative question, for sure, but how much have things changed since 2020? Or even 2015?

I'm talking about changes in the real economy. Apart from the huge systemic shock that was Covid, not that much.

Yeah, it's interesting to think about change in terms of change in the economy. It might be rose-tinted glasses when looking at the past, but when "the internet" went mainstream, i.e. when it had its "ChatGPT 3.5" moment, within two years there was more significant economic impact than this round of AI has had, as I recall. And I'm thinking of the normal economy, not the VC hype money sloshing around. If that's true, I'm guessing the cost of AI versus the internet is also a substantial factor.

EDIT: I see that someone on the thread posted that Krugman apparently doesn't think the internet brought real economic change either.

I will kindly point out that the article isn't merely about culture or technology; it's specifically about AI's impact on the economy. That isn't just an "insightful observation", it's the article's whole point.

> you would be hard pressed to find someone who was making adequate predictions about where we would be now back in 2020, much less 2015

Macroeconomic and productivity forecasts from 10-15 years ago have turned out to be pretty accurate, and if anything too optimistic on the productivity front, but there was certainly nothing wrong with taking them seriously.

Macro forecasts are generally much easier than those tied to specific technologies. We can be much more specific & confident about predicting next year's inflation rate than next year's ChatGPT+ pricing, for example.

Yup, a lot of micro factors average, cancel, or smooth out in aggregate at the macro level, which is why effects at the macro level are more muted and much more predictable.

Is there a good source that tracks the performance of these forecasts? I’d be particularly interested in seeing what things looked like in, say, 2005, looking ahead ten years, and then maybe 2008. That would be right before and right after the smartphone boom started, which might be our best recent basis for comparison.

Modern AI started with NLP, computer vision, and speech recognition, and it was expected that, as chips got more powerful and faster, software and AI people would figure out how to utilize the massive new computing capabilities. My prediction would've been the early 2010s for something like LLMs to emerge, but I guess I was too optimistic. And if it weren't for Google and their enormous spending on R&D, we would probably be seeing LLMs not in the early 2020s but in the early 2030s.

A ton of people saw today coming in 2016, based on one "if": a second term. Not enough people listened, or they like those predictions.

For accurate predictions, or the lack thereof, it can be educational to look back in time. People in the late nineteenth century wrote down a lot of what, in retrospect, was hyperbole, nonsense, and rubbish. Some of it is pretty entertaining. The most outrageous ones actually got some of it right while completely missing the point at the same time. Jules Verne, for example, had a pretty lively imagination. We went to the moon. But not by cannon ball. And there wasn't a whole lot there to see and do. And flying around the world takes a lot less than 80 days. Even in a balloon it can be done a lot quicker.

I was born in the seventies. Much of what is science fact today was science fiction then. And much of that was pretty naive and enlightening at the same time.

My point is that nothing has changed when it comes to people's ability to predict the future. The louder people claim to know what it all means or rush to man-splain that to others, the more likely it is that they are completely and utterly missing the point. And probably in ways that will make them look pretty foolish in a few decades. Most people are just flailing around in the dark. And some of the crazier ones might actually be the ones to listen to. But you'd be well advised to filter out their interpretations and attempts to give meaning to it all.

HAL, Marvin the Paranoid Android, KITT, C-3PO, R2-D2, Skynet, Data, and all the other science-fiction AIs from my youth are now pretty much science fact. Some of them actually look a bit slow-witted in comparison. Are we going to build better versions of these? I'd be very disappointed in the human race if we didn't. And I'd also be disappointed if that ends up resembling the original fantasies of those things. I don't think many people are capable of imagining anything more coherent than versions of themselves dressed up in some glossy exterior. Which is of course what C-3PO is: very relatable, a bit stupid, and clownish. But also, why would you want such a thing? And the angry Austrian bodybuilder version of that of course isn't any better.

I think the raw facts are that we've invented some interesting software that passes the Turing test pretty much with flying colors. For much of my life that was the gold standard for testing AIs. I don't think anyone has bothered to actually deal with the formalities of letting AIs take that test and documenting the results in a scientific way; that test obviously became obsolete before people even thought of doing so. We now worry about AIs being abused to deceive entire populations, pretending to be human and manipulating people. You might actually have a hard time convincing people who have been abused in that way that what they saw and heard wasn't real. We imagined it would be hard to convince them that AIs are human. We failed to imagine that the job of convincing them they are not would be much harder.

Really? Mansplain? Why bring gender-wars terms into this?
