> Let's also not forget that "AI" has been grinding on for at least half a century, with nothing even coming close to AGI.

"Not there yet" isn't an argument against achieving anything. AGI and fusion included. It just means we are not there yet.

That argument is a classic fallacy.

Second, neural network capabilities have made steady gains since the '80s, in lockstep with compute power, with additional benefits from architecture and algorithm improvements.

From a consumer's, VC's, or other technologist's point of view, it might have seemed like nothing was happening. Successful applications were much smaller and rarely sexy. But problem sizes grew exponentially nevertheless.

You are arguing that decades of unrelenting neural network progress, tightly coupled to three quarters of a century of exponential growth in computing, will suddenly stop or pause, because ... you didn't track the field until recently?

Or is it that the recent step was so big (models that can converse about virtually any topic known to humankind, in novel combinations, with hardly an intermediate step) that it is easy to overlook the steady progress that enabled it?

Gains don't always look the same, because of threshold effects. And the threshold at which a technology becomes good enough for a given problem is very hard to predict. But increases in compute capacity are (relatively) easy to predict.

It may be that for a few years, efficiency gains and specialized capabilities are consolidated before another big step in general capability. Or another big step is just around the corner.

The only thing that has changed in terms of steady progress is that the intensity and number of minds working on improvements is now 1,000 times what it was even a few years ago.

> That isn't a useful lens

Yet that is the exact "lens" you chose to view fusion with. It applies equally to AGI.

> you didn't track the field until recently?

Okay, now you're trolling. You don't know me and are making assumptions based on what exactly??

This conversation is over. I'm not going back and forth when you're going to make attacks like this.

My apologies if pressing my point came across too strongly. I will take that as a lesson.

I have not made any claims that fusion is unachievable. Just pointed out that the history of the two technologies couldn’t be more different.

There is no reason to believe fusion isn’t making progress, or won’t be successful, despite the challenges.

> you didn't track the field until recently?

That was a question, not an assumption.

If you are aware of the progression of actual neural network use (not just research) from the '80s on, or over any significant shorter time scale, great. Then you know its first successes were on toy problems, but that there has been a steady drumbeat of larger, more difficult, more general problems getting solved with higher-quality solutions every year since. Often quirky problems in random fields. Then came significant acceleration with Nvidia's introduction of general-purpose compute on graphics cards, and with researcher-hosted leaderboards tracking progress more visibly. Only recently has the field been solving problems of a magnitude relevant to the general public.

Either you were not aware of that, which is no crime, or you were dismissing it, or perhaps simply not addressing it directly.

If you have a credible reason why continued compute growth and architecture exploration will fail to sustain what has been 40 years of exponential progress, I want to hear it.

(Not being aggressive; I really would be interested in that, even if it's novel and/or conjectural.

Chalk up any steam on my part to my being highly motivated to consider alternative reasoning. This isn't an attempt to change your mind, but to understand.)