More important is that the new algorithm's running time has a multiplicative factor on m (the edge count), so it's only efficient for extremely sparse graphs.

Comparing against Dijkstra's O(m + n log n): an O(m log^{2/3} n) bound comes out worse once m log^{2/3} n > n log n, i.e. if m > n (log n)^{1/3}, then this algorithm is slower.

For 1 million nodes, if the average degree is above ~3.5, the new algorithm has worse complexity (ignoring unstated constant factors).
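
A quick sanity check of that figure, as a sketch: this assumes the new bound is O(m log^{2/3} n), compares against Dijkstra's O(m + n log n), and ignores all constant factors.

    import math

    # Rough crossover check: solve m * log(n)^(2/3) = m + n * log(n) for m,
    # then report it as an average degree m/n. The constants are unstated,
    # so both sides are treated as exactly their leading terms.
    n = 1_000_000
    log_n = math.log2(n)  # the log base shifts the exact number, not the asymptotics

    crossover_degree = log_n / (log_n ** (2 / 3) - 1)
    print(f"n = {n:,}: crossover average degree m/n ~ {crossover_degree:.2f}")
    # Prints ~3.14 with base-2 logs; same ballpark as the ~3.5 above, with
    # the gap down to the choice of log base and the ignored constants.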

"Any sufficiently sparse graph is indistinguishable from a linked list" comes to mind ;)

A linked list is sparse by the metric of minimizing the maximum degree: no node exceeds degree 2.

A maximally sparse connected graph by edge/node ratio (m/n, i.e. half the mean degree) is any tree (m/n ~ 1), not necessarily a linked list.
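
To make the two metrics concrete, a toy comparison (n chosen arbitrarily; both graphs are trees, so both have the minimal edge/node ratio, but their maximum degrees diverge):

    # Two trees on n nodes, both with m = n - 1 edges (edge/node ratio ~1).
    n = 10
    path_edges = [(i, i + 1) for i in range(n - 1)]  # the "linked list"
    star_edges = [(0, i) for i in range(1, n)]       # one hub, n - 1 leaves

    def max_degree(edges, n):
        deg = [0] * n
        for u, v in edges:
            deg[u] += 1
            deg[v] += 1
        return max(deg)

    for name, edges in (("path", path_edges), ("star", star_edges)):
        print(name, "m/n =", len(edges) / n, "max degree =", max_degree(edges, n))
    # path: m/n = 0.9, max degree = 2
    # star: m/n = 0.9, max degree = 9

So a star is exactly as sparse as a linked list by edge/node ratio, but as far from it as possible by maximum degree.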

Yeah, just based on this article, that really stood out. It seems to be for a different use case than Dijkstra's. An average degree of 3.5 seems like an extremely practical and common case in real life, so I just don't see any reason to put it and Dijkstra's against each other in a head-to-head comparison.