I feel that the article draws a false equivalence between skepticism and doomsaying. If anything, thinking AI is as dangerous as a nuclear weapon signals a true believer.
TFA doesn't even draw an "equivalence" between those two positions; it merely misuses the word "skeptic" to mean "true believer in the Singularity."
TFA mourns the disappearance of true believers — those pundits saying LLMs would quickly achieve AGI and then go on to basically destroy the world. As that prediction became more obviously false, the pundits quietly stopped repeating it.
"Skeptics" is not, and never was, the label for those unbridled believers/evangelists; the label was "AI doomers." But an essay titled "Where have all the AI doomers gone?" wouldn't get clicks because the title question pretty much answers itself.
Exactly. “AI will take over the world because it’s dangerously smart” is the exact opposite of skepticism!
There are different arguments as to why AI is bad, and they’re not all coming from the same people! There’s the resource argument (it’s expensive and bad for the environment), the quality argument (hallucinations, etc.), the ethical argument (stealing copyrighted material), the moral argument (displacing millions of jobs is bad), and probably more I’m forgetting.
Sam Altman talking about the dangers of AI in front of Congress accomplishes two things: It’s great publicity for AI’s capabilities (what CEO doesn’t want to possess the technology that could take over the world?), and it sets the stage for regulatory capture, protecting the big players from upstarts by making it too difficult/expensive to compete.
That’s not skepticism, that’s capitalism.
I am also tired of this whole "hallucination" nonsense.
These LLMs are buggy as hell. They're sold as being able to do certain things - reasoning, coding, summarizing, research, etc. - but they can't; they mangle those jobs. They are full of bugs, and the teams behind them have shown they can't debug them. They thought scaling laws would fix it, but that proved as unfruitful as it was illogical.
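For what it's worth, the diminishing returns are visible right in the published scaling fits. A rough sketch below uses the loss form from Hoffmann et al.'s Chinchilla paper, L(N, D) = E + A/N^alpha + B/D^beta, with their fitted constants; the parameter and token counts are just illustrative, not a claim about any particular model:

    # Diminishing returns under the Chinchilla scaling fit
    # (Hoffmann et al. 2022): L(N, D) = E + A/N^alpha + B/D^beta.
    # Constants are the paper's published fit; purely illustrative.
    E, A, B = 1.69, 406.4, 410.7
    ALPHA, BETA = 0.34, 0.28

    def loss(n_params, n_tokens):
        return E + A / n_params**ALPHA + B / n_tokens**BETA

    n, d = 70e9, 1.4e12  # roughly Chinchilla-scale: 70B params, 1.4T tokens
    prev = loss(n, d)
    for _ in range(4):
        n, d = 2 * n, 2 * d  # double params AND data
        cur = loss(n, d)
        print(f"doubling -> loss {cur:.3f} (improved by {prev - cur:.3f})")
        prev = cur

Each doubling buys a smaller drop in loss than the one before, which is the whole problem with "just add more compute and data."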
What class of software can work this bad and still have people convinced the only solution is to double the amount of compute and data they need, again?
> What class of software can work this bad and still have people convinced the only solution is to double the amount of compute and data they need, again?
Cloud providers :)))
And chip producers
My biggest worry (and I still have some of those other concerns) is school-age children using it instead of learning to read for information and to write in their own words.
To everyone who argues, "naysayers said letting schoolchildren use calculators would ruin their minds, but it didn't": how many people do you know who can make a good estimate of their share of a restaurant bill without pulling out their phones? Now think about how that translates to how well they grasp, at a glance, what they're getting into with Klarna, car loans, etc.
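To be concrete, the skill at stake is a couple of rounding steps, nothing more. A toy sketch (the bill, tip rate, and party size are all made up):

    # Back-of-the-envelope bill split, the way you'd do it in your head:
    # round the bill to a friendly number, add a round tip, divide.
    bill, party_size = 87.40, 4     # hypothetical numbers
    rounded = 90                    # $87.40 ~ $90
    with_tip = rounded * 1.20       # 20% tip: $90 + $18 = $108
    share = with_tip / party_size   # $108 / 4 = $27 each
    print(f"~${share:.0f} per person")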
The article also seems interested only in what tech CEOs have to say - people who were as disingenuous about their doom-mongering as they were about their gold-rush mentality.