I am confused, since even factoring 21 is apparently so difficult that it "isn’t yet a good benchmark for tracking the progress of quantum computers." [0]

So the "useful quantum computing" that is "imminent" is not the kind of quantum computing that involves the factorization of nearly prime numbers?

[0] https://algassert.com/post/2500

Factoring will be fine for tracking progress later; it's just a bad benchmark now. Factoring benchmarks give little visibility into fault tolerance spinning up, which is where the important progress is happening right now. Factoring becoming a reasonable benchmark is strongly tied to quantum computing becoming useful.

Perhaps? The sort of quantum computers that people are talking about now are not general purpose. So you might be able to make a useful quantum computer that never runs Shor's algorithm.

Simulating the Hubbard model for superconductors at large scales is significantly more likely to happen sooner than factoring RSA-2048 with Shor’s algorithm.

Google has been working on this for years.

Don't ask me whether they've got the top supercomputers beat, ask Gemini :)

I always find this argument a little silly.

Like if you were building one of the first normal computers, "how big a number can you multiply" would be a terrible benchmark, since once you have figured out how to multiply small numbers it's fairly trivial to multiply big ones. The challenge is making the computer multiply numbers at all.

This isn't a perfect metaphor, as scaling is harder in a quantum setting, but we are mostly at the stage where we are trying to get the things to work at all. Once we reach the stage where we can factor small numbers reliably, the time to go from smaller numbers to bigger numbers will probably be relatively short.

From my limited understanding, that's actually the opposite of the truth.

In QC systems, the engineering "difficulty" scales very badly with the number of gates or steps of the algorithm.

It's not like addition, where you can repeat a process in parallel and bam, ALU. From what I understand as a layperson, the size of the inputs is absolutely part of the scaling.

But the reason factoring numbers is used as the quantum benchmark is exactly that we have a quantum algorithm for that problem (Shor's) which is expected to scale better than any known algorithm on a classical computer.

So right now it seems to take an exponentially bigger device to factor 21 than 15, and 35 than 21, and so on; but if I understand right, at some point this levels out and it's only a little harder, relatively speaking, to factor say 10^30 than 10^29.
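For what it's worth, the expectation of a leveling-out comes from the asymptotics: Shor's algorithm needs a number of gates that grows only polynomially (roughly cubically) in the number of bits of N, while the best known classical methods are sub-exponential in that bit length. Here's a minimal, purely illustrative Python sketch of the classical half of Shor's algorithm, with the quantum order-finding subroutine stood in by brute force (the stand-in and the example numbers are assumptions for illustration, not any real implementation):

    from math import gcd
    from random import randrange

    def order(a, n):
        # Brute-force order finding. This is the only step a quantum
        # computer actually speeds up, via period finding.
        r, x = 1, a % n
        while x != 1:
            x = (x * a) % n
            r += 1
        return r

    def factor(n):
        # Classical reduction from factoring to order finding.
        # Assumes n is an odd composite that is not a prime power.
        while True:
            a = randrange(2, n)
            g = gcd(a, n)
            if g > 1:
                return g, n // g      # lucky guess already shares a factor
            r = order(a, n)
            if r % 2:
                continue              # need an even order
            y = pow(a, r // 2, n)
            if y == n - 1:
                continue              # trivial square root of 1; retry
            p = gcd(y - 1, n)
            return p, n // p          # y^2 = 1 (mod n), y != +-1, so p is proper

    print(factor(21))  # e.g. (7, 3)

The classical wrapper is trivial; all of the engineering difficulty people are discussing upthread lives inside the order-finding step when it has to be done quantumly on numbers too big to brute-force.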

Why are we so confident this is true given all of the experience so far trying to scale up from factoring 15 to factoring 21?

The fact that it does appear to be so difficult to scale things up would suggest that the argument isn't silly.

Actually yes, how many numbers you can crunch per second and how big they are were among the first benchmarks for actual computers. Also, these prototypes were almost always immediately useful (think of the machines that cracked Enigma).

In comparison, there is no realistic path forward for scaling quantum computers. Anyone serious who is not trying to sell you QC will tell you that quantum systems become exponentially less stable the bigger they are and the longer they live. That is a fundamental physical truth. And since they're still struggling to do anything at all with a quantum computer, don't get your hopes up too much.
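To put rough numbers on that instability (these are assumed, illustrative figures, not measurements from any real device): without error correction, every additional gate multiplies in another chance of failure, so the probability an uncorrected run finishes cleanly falls off exponentially with circuit size.

    # Illustrative only: assumed per-gate error rate and circuit size.
    p = 1e-3                  # error probability per gate (assumed)
    gates = 10_000            # gates in the circuit (assumed)
    print((1 - p) ** gates)   # ~4.5e-5 chance the whole run is error-free

That exponential falloff is the instability being described, and it's what the fault-tolerance work mentioned upthread is trying to beat with error correction.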

This is quite fallacious and wrong. The first computers were built to immediately solve problems that were already being solved, slowly, by manual methods. There was never a period where people built computers so slow that they were slower than adding machines and slide rules, just because they seemed cool and might one day be much faster.