As an experimental physicist, I refuse to get excited about a new theory until the proponent gets to an observable phenomenon that can settle the question.

This is why I'm skeptical of theories like Wolfram's: it feels like an overfit. It reproduces all sorts of known theories (special relativity, parts of QM, gravity, etc.), but doesn't make new testable predictions or offer new fundamentals. When I see 10 predictions emerge from a theory, and they all happen to be ones we already knew of... overfit.
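
(To make the overfitting analogy concrete, here's a minimal curve-fitting sketch in Python - purely illustrative, nothing to do with Wolfram's models specifically. A model flexible enough to reproduce every already-known data point exactly can still be useless on the first point it hasn't seen.)

    import numpy as np

    rng = np.random.default_rng(0)
    x_known = np.linspace(0.0, 1.0, 10)
    y_known = np.sin(2 * np.pi * x_known) + 0.05 * rng.normal(size=10)

    # 10 parameters for 10 data points: the fit "explains" everything already known.
    coeffs = np.polyfit(x_known, y_known, deg=9)
    print(np.max(np.abs(np.polyval(coeffs, x_known) - y_known)))  # ~0 error on the knowns

    # ...but it falls apart on a point outside the data it was built from.
    x_new = 1.15
    print(np.polyval(coeffs, x_new), np.sin(2 * np.pi * x_new))  # wildly off vs. the truth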

But that means we'd prefer whichever theory our species had landed on first. Basing our preference for a theory on that timing seems kind of arbitrary to me. If they're the same in other respects, I'd take a look at both sides to see if there are other compelling reasons to focus on one or the other, such as which is simpler. Of course if they make different predictions that'd be even better, time to get to testing :)

A positive example would be the periodic table - the pattern to the elements made sense, but also exposed 'holes' in the table which were later filled in as we discovered additional elements. Wolfram may be closer to inventing epicycles to explain orbits - which is interesting and technically challenging and probably tickles his mind, but doesn't actually generate new knowledge.

Not quite apples to apples tho because you have to take into consideration what was known at the time each theory was developed (the input), not just the output.

Theory A: fits 7 known predictions but also makes a not-yet-verified prediction

Theory B: fits 8 known predictions and offers no new ones

In this example wouldn't Theory A be better, because all else equal it is less likely the product of overfitting and required more insight and effort to discover? In other words, Theory A used a different process that we know has a higher likelihood of novel discovery.

(Maybe this is a restatement of the simplicity argument, in that Theory A requires fewer predictions to discover it, ergo it is simpler)

> In this example wouldn't Theory A be better, because all else equal it is less likely the product of overfitting and required more insight and effort to discover?

No, Theory A might simply be a dead end with no new insights to offer. And alas: the universe does not care about insights, efforts, or simplicity.

All else equal if Theory B is easier to teach - easier for more people to understand - it might have value for that reason. It might also be valuable to teach multiple ways to understand the same underlying phenomenon.

> In other words, Theory A used a different process that we know has a higher likelihood of novel discovery.

How would we measure "likelihood of novel discovery"?

Now to call myself out here: the best way to answer any of these questions is to probe both theories at their limits to find differences in predictions that we can test. It may be that we don't have the right equipment or haven't designed experiments sufficient to do that currently.

Remember that Einstein's GR was validated by its prediction of light bending by the Sun, though his initial 1911 prediction was wrong and he refined it in 1915. The 1919 Eddington measurements confirmed the refined value.

We should remember though: that only worked out because the 1912 attempt to make the observations (which would have invalidated Einstein) got rained out. Who knows how Einstein's career would have turned out if the 1912 observations had succeeded. Perhaps people would have said he simply over-fit his theory to the observations.

> requires fewer predictions to discover

I don’t think that is implied. It was discovered first, but that doesn’t mean it is necessarily simpler or required less data to discover. Take Newton/Leibniz calculus as a clear example of near-simultaneous discovery, arriving at the same result via different approaches. Leibniz technically started after Newton, and yet his notation is the preferred one today.

Especially if theory B is equivalent to theory A, then using it as a replacement for theory A seems perfectly fine (well, as long as there are other benefits).

From a scientific standpoint it might be pointless in some cases, because the goal is “not-yet-known” predictions, but viewed through a mathematical lens it seems like a valid area of study.

Maybe the process behind creating theory A is more generalisable towards future scientific discovery, but that would make the process worthwhile, not the theory.

Jonathan Gorard goes through a handful of testable predictions for the hypergraph stuff here: https://www.youtube.com/watch?v=XLtxXkugd5w

gorard is one to watch

I don't know anything about Wolfram's theory, but one general way to address this is to compare the Akaike information criterion (or similar measures).

These metrics attempt to balance a model's ability to fit the data against the number of parameters required. For equally well-fitting models, they prefer the one with fewer params.

If Wolfram's theory fits as well but has fewer params, it should be preferred. I'm not sure if fewer "concepts" counts, but it's something to consider.
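
(A rough sketch of how that comparison works in practice - the log-likelihoods and parameter counts below are made up, it's just the textbook formula AIC = 2k - 2 ln L:)

    def aic(log_likelihood: float, n_params: int) -> float:
        """Akaike information criterion: 2k - 2*ln(L_hat); lower is better."""
        return 2 * n_params - 2 * log_likelihood

    # Hypothetical: both models reach the same maximized log-likelihood on the
    # same data, but model B states itself with fewer free parameters.
    aic_a = aic(log_likelihood=-1042.3, n_params=19)
    aic_b = aic(log_likelihood=-1042.3, n_params=12)
    print("prefer:", "A" if aic_a < aic_b else "B")  # -> B: same fit, fewer params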

The problem with emergent theories like this is that they _derive_ Newtonian gravity and General Relativity so it’s not clear there’s anything to test. If they are able to predict MOND without the need for an additional MOND field then they become falsifiable only insofar as MOND is.

Deriving existing theories of gravity is an important test of the theory, it's not a problem at all. It's only a problem if you can only do this with more free parameters than the existing theory and/or the generalized theory doesn't make any independent predictions. Seems like in the article the former may be true but not the latter.

If such a theory makes no new predictions but is simple / simpler than the alternative, then it is a better theory.

Please, how is the article related to MOND-type theories?

In general, they’re not. But if the only thing emergent theories predict is Newtonian dynamics and General Relativity, then that’s a big problem for falsifiability. If they modify Newtonian dynamics in some way, then we do have something to test.

From https://news.ycombinator.com/item?id=43738580 :

> FWIU this Superfluid Quantum Gravity [SQG, or SQR Superfluid Quantum Relativity] rejects dark matter and/or negative mass in favor of supervaucuous supervacuum, but I don't think it attempts to predict other phases and interactions like Dark fluid theory?

From https://news.ycombinator.com/item?id=43310933 re: second sound:

> - [ ] Models fluidic attractor systems

> - [ ] Models superfluids [BEC: Bose-Einstein Condensates]

> - [ ] Models n-body gravity in fluidic systems

> - [ ] Models retrocausality

From https://news.ycombinator.com/context?id=38061551 :

> A unified model must: differ from classical mechanics where observational results don't match classical predictions, describe superfluid 3Helium in a beaker, describe gravity in Bose-Einstein condensate superfluids, describe conductivity in superconductors and dielectrics, not introduce unobserved "annihilation", explain how helicopters have lift, describe quantum locking, describe paths through fluids and gravity, predict n-body gravity experiments on earth in fluids with Bernoulli's and in space, [...]

> What else must a unified model of gravity and other forces predict with low error?

u/lewdwig's point was that if an emergent gravity theory made the sorts of predictions that MOND is meant to, then that would be a prediction that could be tested. The MOND thing is just an example of predictions that an emergent theory might make.

They both have to do with very weak gravitational fields.

Sometimes I wonder: imagine if our physics never allowed for black holes to exist. How would we know to stress-test our theories? Black holes are like standard candles in cosmology, which allow us to make theoretical progress.

And each new type of candle becomes a source of fine-tuning or revision, giving us new ways to find the next candles - cosmological or microscopic.

Which kinda points to the fact that we’re not smart enough to make these steps without “hints”. It’s quite possible that our way of working will lead to a theory of everything in the asymptote, when everything is observed.

We don't know much about black holes, and there are theories which don't allow for proper black holes but do allow for objects that look like black holes in the limit (but e.g. don't produce Hawking radiation).

Between two models, the one with the shorter Minimum Description Length (MDL) will more likely generalize better.
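
(A toy two-part MDL comparison in Python, with made-up coding choices - 32 bits per stated parameter, residuals coded with a Gaussian code at fixed precision. It only illustrates that the sum "bits to state the model + bits to encode the data given the model" penalizes extra parameters:)

    import numpy as np

    def two_part_mdl(residuals, n_params, bits_per_param=32, precision=1e-3):
        n = len(residuals)
        sigma2 = max(float(np.mean(residuals**2)), 1e-12)
        # Code length for the data given the model (Gaussian code, quantized at `precision`).
        data_bits = 0.5 * n * np.log2(2 * np.pi * np.e * sigma2) - n * np.log2(precision)
        model_bits = n_params * bits_per_param  # cost of stating the model itself
        return model_bits + data_bits

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 1.0, 200)
    y = 3.0 * x + rng.normal(scale=0.1, size=x.size)

    # Model A: degree-9 polynomial (10 params). Model B: straight line (2 params).
    res_a = y - np.polyval(np.polyfit(x, y, 9), x)
    res_b = y - np.polyval(np.polyfit(x, y, 1), x)
    print(two_part_mdl(res_a, 10), two_part_mdl(res_b, 2))  # B's total description is shorter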

But, think of all the fun math we get to do before someone shows it's an unworkable idea.

An observable is always the strongest evidence, but there's also the improbability of a mathematical coincidence that can serve as almost as strong evidence. For example, the fact that the Bekenstein-Hawking entropy of an Event Horizon comes out to be exactly one quarter of its surface area in Planck-length-squared units seems to me almost a proof that nothing really falls into EHs. They only make it to the surface. I'm not saying Holographic Theory is correct, but it's on the right track.
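
(For reference, the standard form of the relation being alluded to, with A the horizon area and \ell_P the Planck length:)

    S_{\mathrm{BH}} \;=\; \frac{k_B\, c^3\, A}{4\, G\, \hbar} \;=\; k_B\,\frac{A}{4\,\ell_P^2},
    \qquad \ell_P = \sqrt{\frac{G\hbar}{c^3}}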

My favorite conjecture is that things effectively lose a dimension when they reach the EH surface and become like a "flatlander" (2D) universe, having only two degrees of freedom on the surface. For such a 2D universe, the special "orthogonal" dimension they'd experience as "time" would be the surface normal. Possibly time only moves forward for them when something new "falls in", causing the sphere to expand.

This view of things also implies the Big Bang is wrong, if our universe is a 3D EH. Because if you roll the clock on an EH back to when it "formed", you don't end up at some distant past singularity; you simply get back to a time when stuff began to clump together from higher dimensions. The universe isn't exploding from a point, it's merely expanding because it itself is a 3D version of what we know as Event Horizons.

Just like a fish can't tell it's in water, we can't tell we're on a 3D EH. But we can see 2D versions of them embedded in our space.