We already know fusion has been a grind. For decades.

We already know that computation has grown exponentially in scale and cost for decades, with breakthrough software tech showing up reliably as it harnesses that greater computational power, even if unpredictably and sparsely.

If we view AI across decades, it has reliably improved, even through its “winters”. It just took time to hit thresholds of usefulness, which makes relatively smooth progress look very random from a consumer's point of view.

As computational power grows, and current models become more efficient, the hardware overhang for the next major step is growing rapidly.

That just hasn’t been the case for fusion.

>We already know fusion has been a grind. For decades.

Let's also not forget that "AI" has been grinding on for at least half a century, with nothing even coming close to AGI. No, LLMs are not AGI and cannot be AGI. They may be a component of an eventual AGI, but we're still far, far away from artificial general intelligence. LLMs are not intelligent, but they do fool a lot of people.

With the paltry amount of money spent on fusion, it's no wonder it has been delayed for so long. Maybe try putting into it the $1 trillion that "AI" has attracted in its short windfall, and let's see what happens with fusion.

>If we view AI across decades, it has reliably improved

That's your opinion. It's still not much better than ELIZA. And LLMs lie half the time, just to tell the user what they want to hear.

>As computational power grows, and current models become more efficient, the hardware overhang for the next major step is growing rapidly.

And it's still going to "hallucinate" and give wrong answers a lot of the time because the fundamental tech behind LLMs is a bit flawed and is in no way even close to AGI.

>That just hasn’t been the case for fusion.

And yet we have actual fusion plasmas being sustained for 22 minutes now. We're practically on the threshold of having fusion power, but still there is nowhere near the money being spent on it compared to LLMs, which won't be capable of AGI in spite of the marketing line you've been told.

> Let's also not forget that "AI" has been grinding on for at least half a century, with nothing even coming close to AGI.

"Not there yet" isn't an argument against achieving anything. AGI and fusion included. It just means we are not there yet.

That argument is a classic fallacy.

Second, neural network capabilities have made steady gains since the '80s, in lockstep with compute power, with additional benefits from architecture and algorithm improvements.

From a consumer's, VC's, or other technologist's point of view, nothing might have seemed to be happening. Successful applications were much smaller, and infrequently sexy. But problem sizes grew exponentially nevertheless.

You are arguing that unrelenting neural network progress over decades, highly coupled to three quarters of a century of exponential growth in computing, will suddenly stop or pause, because ... you didn't track the field until recently?

Or the recent step was so big (models that can converse about virtually any topic known to humankind, in novel combinations, with hardly an intermediate step) that it is easy to overlook the steady progress that enabled it?

Gains don't always look the same, because of threshold effects, and the threshold at which a technology becomes a usable solution is very hard to predict. But increases in compute capacity are (relatively) easy to predict.

It may be that for a few years, efficiency gains and specialized capabilities are consolidated before another big step in general capability. Or another big step is just around the corner.

The only thing that has changed, in terms of steady progress, is that the intensity and number of minds working on improvements is now 1,000 times what it was even a few years ago.

>That isn't a useful lens

Yet that is the exact "lens" you chose to view fusion with. It applies equally to AGI.

> you didn't track the field until recently?

Okay, now you're trolling. You don't know me and are making assumptions based on what exactly??

This conversation is over. I'm not going back and forth when you're going to make attacks like this.

My apologies if pressing my point came across too strongly. I will take that as a lesson.

I have not made any claims that fusion is unachievable. Just pointed out that the history of the two technologies couldn’t be more different.

There is no reason to believe fusion isn’t making progress, or won’t be successful, despite the challenges.

> you didn't track the field until recently?

That was a question not an assumption.

If you are aware of the progression of actual neural network use (not just research) from the '80s on, or over any shorter significant time scale, great. Then you know its first successes were on toy problems, but that there has been a steady drumbeat of larger, more difficult, more general problems getting solved with higher-quality solutions every year since, often quirky problems in random fields. Then came significant acceleration with Nvidia's introduction of general-purpose compute on graphics cards, and researcher-hosted leaderboards tracking progress more visibly. Only recently has the field been solving problems of a magnitude that is relevant to the general public.

Either you were not aware of that, which is no crime, or you were dismissing it, or perhaps simply not addressing it directly.

If you have a credible reason to believe that continued compute growth and architecture exploration will stop delivering what has been exponential progress for 40 years, I want to hear it.

(Not being aggressive; I mean I really would be interested in that, even if it's novel and/or conjectural.

Chalk up any steam on my part to being highly motivated to consider alternative reasoning. Not an attempt to change your mind, but to understand.)