I'm an enthusiastic Cantor skeptic; I lean very heavily constructivist, to the point of almost being a finitist, but nonetheless I think the thesis of this article is basically correct.

Nature and the universe are all about continuous quantities; integral quantities and whole numbers represent an abstraction. At a micro level this is less true -- elementary particles specifically are a (mostly) discrete phenomenon -- but representing the state of even a very simple system involves continuous quantities.

But the Cantor vision of the real numbers is just wrong and completely unphysical. The idea of arbitrary precision is intrinsically broken in physical reality. Instead I am of the opinion that computation is the relevant process in the physical universe, so approximations to continuous quantities are where the "Eternal Nature" line lies, and the abstraction of the continuum is just that -- an abstraction of the idea of having perfect knowledge of the state of anything in the universe.

> But the Cantor vision of the real numbers is just wrong and completely unphysical.

They're unphysical, and yet the very physical human mind can work with them just fine. They're a perfectly logical construction from perfectly reasonable axioms. There are lots of objects in math which aren't physically realizable. Plato would have said that those sorts of objects are more real than anything which actually exists in "reality".

There are two things being talked about here, and it's worth teasing them out.

On the one hand, this article is talking about the hierarchy of "physicality" of various mathematical concepts, and they put Cantor's real numbers at the floor. I disagree with that specifically; two quantities are interestingly "unequal" only at the precision where an underlying process can distinguish them. Turing tells us that any underlying process must represent a computation, and that the power of computation is a law of the underlying reality of the universe (this is my view of the Universal Church-Turing Thesis, not necessarily the generally accepted variant).

The other question is whether Cantor's conception of infinity is a useful one in mathematics. Here I think the answer is no. It leads to rabbit holes that are just uninteresting, trying to distinguish infinities (the continuum hypothesis) and leading us to counterintuitive and useless results. Fun to play with, like writing programs that can invoke a HaltingFunction oracle, but it does not tell us anything that we can map back to reality. For example, the idea that there are the same number of integers as even integers is a stupid one that in the end does not lead anywhere useful.

> On the one hand, this article is talking about the hierarchy of "physicality" of various mathematical concepts, and they put Cantor's real numbers at the floor. I disagree with that specifically

I didn't mean to suggest that the reals are the floor of reality, rather that they are more floorlike than the integers.

> The other question is whether Cantor's conception of infinity is a useful one in mathematics. Here I think the answer is no.

Tools are created by transforming nature into something useful to humans. Is Cantor's conception of infinity more natural? I can't really say, but the uselessness and confusion seems more like nature than technology.

> the idea that there are the same number of integers as even integers is a stupid one that in the end does not lead anywhere useful

It leads to the idea that measuring two sets via a bijection is a better idea than measuring via containment.

That a bijection exists is incredibly useful. But the idea of "measuring" infinite sets in the cardinality sense is not very interesting or useful.

Saying that two sets have the same cardinality is equivalent to saying there is a bijection between them. I don't understand how the latter can be useful but not the former?

It isn't very interesting or useful... to you.

> For example, the idea that there are the same number of integers as even integers is a stupid one that in the end does not lead anywhere useful.

I am not sure what you are arguing here. We’ve been teaching this to all undergraduate mathematicians for the last century; are you trying to make the point that this part of the curriculum is unnecessary, or that mathematics has not contributed to the wellbeing of society in the last hundred years? Both of these seem like rather difficult positions to defend.

Yeah, we teach it. It ends up showing up again in measure theory, assuming that anyone still bothers to teach the mostly useless Lebesgue integral instead of the gauge integral. Measure theory shows up again in probability theory if you're not using Kolmogorov for some sadistic reason and you have to deal with countability.

Otherwise it's pretty much a dead end unless you're in the weeds. You just mutter "almost everywhere" as a caveat once in a while and move on with your life. Nobody really cares about the immensely large group of numbers that by definition we cannot calculate or define or name except to kowtow to what is in retrospect a pretty bad theoretical underpinning for formal analysis.

> For example, the idea that there are the same number of integers as even integers is a stupid one that in the end does not lead anywhere useful.

Well, there are the same number. So, uh, sorry?

> They're unphysical, and yet the very physical human mind can work with them just fine.

Can it? We can only work with things we can name, and the real numbers we can name are an infinitesimal fraction of the real numbers. (The nameable reals and sets of reals have the same cardinality as the integers, while the rest have a higher cardinality.)

We can work with unnameable things very easily. Take, for instance, every known theorem that quantifies over all real numbers. If you try to argue that proving theorems about these real numbers does not constitute “working with” them, it seems you have chosen a rather deficient definition of “working with” that does not match with how that phrase is used in the real world.

I would argue that all of those theorems work with nameable sets of real numbers, but not with any unnameable real numbers themselves.

The human mind can't work with a real number any more than it can infinity. We box them into concepts and then work with those. An actual raw real number is unfathomable.

I don’t know about you, but I can work with it just fine. I know its properties. I can manipulate it. I can prove theorems about it. What more is there?

In fact, if you are to argue that we cannot know a “raw” real number, I would point out that we can’t know a natural number either! Take 2: you can picture two apples, you can imagine second place, you can visualize its decimal representation in Arabic numerals, you can tell me all its arithmetical properties, you can write down its construction as a set in ZFC set theory… but can you really know the number – not a representation of the number, not its properties, but the number itself? Of course not: mathematical objects are their properties and nothing more. It doesn’t even make sense to consider the idea of a “raw” object.

You can hold a two in your head, but you can't hold a number with infinitely many decimal places. Any manipulations you do with the real 2 are done conceptually, whereas with the natural 2, it's done concretely.

The decimal places are just a way of representing it.

The infinite number of decimal places is the definitional feature of a real number. No matter how it's represented, they are still there and cannot be contained in our brains. We can say pi and hold the concept of pi in our heads, but not the actual number.

No, it really isn’t. The real numbers can be constructed in a number of ways, and it is more common to define them as either Dedekind cuts, or equivalence classes of Cauchy sequences of rational numbers.

Personally, I’d go with the sideline cut definition.

Dang autocorrect. “sideline” should be Dedekind

Or maybe we can know them equally well? The function f(x) = x(0^(sin(πx)^2)) for example "requires" infinities, but only returns integer values.
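(If you want to poke at that function, here's a quick sketch using SymPy for exact arithmetic. It leans on the convention that 0^0 = 1, and notably a naive float version fails, because sin(πx) isn't exactly zero at integer x in floating point -- which is itself a nice illustration of the approximation point being argued above.)

    # Checking f(x) = x * 0^(sin(pi x)^2) with exact symbolic arithmetic
    from sympy import Integer, Rational, sin, pi

    def f(x):
        return x * Integer(0)**(sin(pi * x)**2)

    print([f(Integer(n)) for n in range(5)])     # [0, 1, 2, 3, 4]
    print(f(Rational(1, 2)), f(Rational(7, 3)))  # 0 0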

I've felt something like this before. Integers also seem pretty close to the reality around us. One of their functions is to symbolically represent the similarity of objects (there might be a better way to put it). Like, if you see 5 sheep in one group and 6 in another, after that point they're no longer just distinct sheep with unique properties - the numbers act as symbols for the groups. Real numbers can still work in the brain, but they're the most distant from the world around us, at least when it comes to going from visual to conceptual understanding.

> They're unphysical, and yet the very physical human mind can work with them just fine

Nah, you're likely thinking of the rationals, which are basically just two integers in a Halloween costume. Ooh, a third, big deal. The overwhelming majority of the reals are completely batshit and you're not working with them "just fine" except in some very hand-wavy sense.

The rationals are 3 naturals in a "2,1" structure.

The first 2 naturals form an integer.

That integer and a 3rd natural constitute a rational (but this 3rd natural best be bigger than zero, else we're in trouble).

What I choose to focus on, after observing the "unphysical" nature of numbers, is the sense of natural opposition (bordering on alternation) between "mathematical true" and "physical true". Both are claiming to be really real Reality.

In the mathematical realm, finite things are "impossible"; they become "zero", negligible in the presence of infinities. It's impossible for the primes to be finite (by contradiction). It's impossible for things (numbers or functions of mathematical objects) to be finite.

But in physical reality, it's the "infinite things" which become impossible.

The "decimal point" (i.e. scientific notation, i.e. positional numeral systems) is truly THE wonder of the world. For some reason I want something better than such a system... so I'm still learning about categories.

Huh?

You know it wouldn't be possible for us to tell the difference between a rational universe (one where all quantities are rational numbers) and a real universe (one where you can have irrational quantities).

The standard construction for the real numbers is to start with the rationals and "fill in all the holes". So why even bother with filling in the holes and instead just declare God created the rationals?

As in why bother using real numbers in physics? Mostly because you need them to make the maths rigorous. You can't do rigorous calculus (i.e. real analysis) on rationals alone.

We don't need reals to make the math rigorous. Only to make the math a lot more tractable.

I've solved multiple continuous value problems by discretizing, applying combinatorics to the techniques, and then taking the limit of the result - you of course get the same result as if you had simply used regular integration/differentiation, and it's a lot easier to use calculus than combinatorics.

But the point is the "rational", discretized approach will get you arbitrarily close to the answer.
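To make that concrete, here's a toy version of what I mean: compute the integral of x^2 over [0, 1] by right-endpoint Riemann sums in exact rational arithmetic. Every approximant is a rational number, and the sequence marches straight to the calculus answer of 1/3.

    from fractions import Fraction

    def riemann_sum(n):
        # sum over k = 1..n of (k/n)^2 * (1/n): every term is rational
        return sum(Fraction(k, n)**2 * Fraction(1, n) for k in range(1, n + 1))

    for n in (10, 100, 1000):
        s = riemann_sum(n)
        print(n, s, float(s))   # 0.385, 0.33835, 0.3338335 -> 1/3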

It's why many analysis textbooks define a (given) real number as "a converging sequence of rational numbers" (before even defining what a limit is).

It's more about derivation of theorems than calculations.

Computation can only use rationals, and of course can get arbitrarily close to an answer because they are dense in the reals.

However, the entire edifice of analysis rests on the completeness axiom of the reals. The extreme value theorem, for example, is equivalent to the completeness axiom; the useful properties of continuous functions break down without it; the fundamental theorem of calculus doesn't work without it; etc. So if the maths used in your physics (the structure of the theory, not just the calculations you perform with it) relies on these things at all, you're relying on the reals for confidence that the maths is sound.
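To see what completeness buys, here's a concrete failure over the bare rationals (my toy example): f(x) = 1/(x^2 - 2) is defined and continuous at every rational in [1, 2], since x^2 = 2 has no rational solution, yet it is unbounded there, so the extreme value theorem fails. In code, walking the Pell convergents of sqrt(2):

    from fractions import Fraction

    def f(x):
        return 1 / (x**2 - 2)

    p, q = 3, 2                        # convergents 3/2, 7/5, 17/12, 41/29, ...
    for _ in range(6):
        print(abs(f(Fraction(p, q))))  # 4, 25, 144, 841, ... (in fact q^2)
        p, q = p + 2 * q, p + q        # next convergent of sqrt(2)

Every input is a perfectly good rational in [1, 2], every output is finite, and yet no maximum exists.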

Now you could argue that we don't need mathematical rigour for physics, that real analysis is a preoccupation of mathematicians, while physicists should be fine with informal calculus. I'm not going to argue that point. I'm just pointing out what the real numbers bring to the table.

Here's Tim Gowers on the subject: https://www.dpmms.cam.ac.uk/~wtg10/reals.html

The uncomputable real numbers always seemed strange to me. I can understand a convergent sequence of rationals, or the idea of a program that outputs a number to arbitrary precision, but something that cannot be computed at all is a very bizarre object. I think NJ Wildberger has some interesting ideas in this area, although I’m not sure I agree with his finitist interpretation in all circumstances. Specifically, I don’t think comparisons to the number of atoms in the universe or information-theoretic limits on storage based on the volume of the observable universe are interesting considerations here.

To me at least, if you can write down a finite procedure that can produce a number to arbitrary precision, I think it is fair to say the number at that limit exists.

This made me think of a possible numerical library where rather than storing numbers as arbitrary precision rationals, you could store them as the combination of inputs and functions that generate that number, and compute values to arbitrary precision.
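A minimal sketch of what that could look like (the names and representation here are mine, just for illustration): a "number" is a function that, given n, returns a rational within 10^-n of the true value. That's essentially the computable reals.

    from fractions import Fraction
    from math import isqrt

    def const(q):                 # an exact rational, as such a function
        return lambda n: Fraction(q)

    def sqrt2(n):                 # floor(sqrt(2) * 10^n) / 10^n
        return Fraction(isqrt(2 * 10**(2 * n)), 10**n)

    def add(x, y):                # to get x + y within 10^-n, ask each
        return lambda n: x(n + 1) + y(n + 1)   # operand for one digit more

    z = add(sqrt2, const(Fraction(1, 3)))
    print(float(z(30)))           # 1.7475468957... i.e. sqrt(2) + 1/3

Multiplication and friends need a bit more bookkeeping (you have to bound the operands to know how much precision to request), which is exactly where such a library would earn its keep.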

> I've solved multiple continuous value problems by discretizing, applying combinatorics to the techniques, and then taking the limit of the result

But taking the limit of a sequence of rationals isn’t guaranteed to remain in the rationals (classic example: https://en.wikipedia.org/wiki/Basel_problem. Each partial sum is rational, but the limit of the partial sums is not)

So, how does that statement rebut “You can't do rigorous calculus (i.e. real analysis) on rationals alone.”?

> But taking the limit of a sequence of rationals isn’t guaranteed to remain in the rationals

I'm not saying it does. What I'm saying is that you can make a correspondence with the reals by using only rationals.

You can define convergence without invoking the reals (Cauchy convergence). If you take any such sequence, you give that sequence a name. That name is the equivalent of a real number. You can then define addition, multiplication - any operation on the reals - with respect to those sequences (again, invoking only rational numbers).

So far, we have two distinct entities: The rationals, and the converging sequences.

Then, if you want, you can show that if you take the rationals and those entities we're calling "converging sequences" together, you can make operations involving the two (e.g. adding a rational to that converging sequence) and eventually build up what we know to be the number line.
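For anyone who wants to see the skeleton of this, a tiny illustration (mine, and not rigorous): a "real" is named by a rational Cauchy sequence, and operations are defined term by term, with only rationals ever appearing.

    from fractions import Fraction

    def sqrt2_seq(k):             # Newton's iteration: every term is rational
        x = Fraction(1)
        for _ in range(k):
            x = (x + 2 / x) / 2
        return x

    def third_seq(k):             # the constant sequence naming 1/3
        return Fraction(1, 3)

    def add_seq(a, b):            # termwise sum of Cauchy sequences is Cauchy
        return lambda k: a(k) + b(k)

    s = add_seq(sqrt2_seq, third_seq)
    print(float(s(6)))            # ~1.7475468... i.e. "sqrt(2) + 1/3"

The sequence sqrt2_seq never leaves the rationals; the name we give it is what behaves like sqrt(2).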

You don't need the full set of real numbers to do physics, only the computable subset of the real numbers. Using the full reals is mostly done out of simplicity.

What do you mean by "the computable subset of the reals" formally?

Is sqrt(2) computable?

Is BB(777) computable?

Is [the integer that happens to be equal to BB(777), not that I can prove it, written out in normal decimal notation] computable?

A computable real number is a real number for which a Turing Machine exists that can compute it to any arbitrary precision.

So yes sqrt(2) is computable.

Every BB(n) is computable, since every natural number can be computed. It's the BB function itself that is not computable in general, not the specific output of that function for a given input.

That doesn’t sound right to me. What about the machines that don’t halt? You can’t compute whether or not to skip them directly.

> A busy beaver hunter who goes by Racheline has shown that the question of whether Antihydra halts is closely related to a famous unsolved problem in mathematics called the Collatz conjecture. Since then, the team has discovered many other six-rule machines with similar characteristics. Slaying the Antihydra and its brethren will require conceptual breakthroughs in pure mathematics.

https://www.quantamagazine.org/busy-beaver-hunters-reach-num...

[deleted]

What specifically doesn't sound right?

The claim is that every bb(n) is computable but I don’t think you can compute bb(6) without knowing which machines won’t halt. That doesn’t seem like a finite calculation?

But given the answer, I suppose you could write a program that just returns it. This seems to hinge on the definition of “computable.” It’s an integer, so that fits the definition of a computable number.

My mistake.

Yes exactly, imagine a function HH(n) that returns 0 if the Turing machine represented by the integer n halts, and 1 if it doesn't.

Then HH the function itself is not computable, but the numbers 0 and 1, which are the only two outputs of HH, are computable.

Integers themselves are always computable, even if they are the output of functions that are themselves uncomputable.
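To make the distinction concrete, a toy sketch (I'm using the step-count convention, under which BB(5) = 47,176,870 per the 2024 result):

    def bb5():
        return 47176870   # a perfectly good program computing the number BB(5)

    # A general solver would have to decide halting:
    #   def bb(n): run all n-state machines, skip the ones that never halt...
    # and "skip the ones that never halt" is exactly the uncomputable step.

Each particular value has a constant program; it's the function n -> BB(n) that no single program can compute.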

Yes. A valid question for a specific n would be whether you can prove the value of BB(n). If you don't care about provability, you can indeed just produce a number that happens to be the right one.

So as you noticed, it really only makes sense to talk about whether a function is computable; we can't meaningfully talk of a single integer being uncomputable.

The main thing to make clear is that BB(n) for a specific n isn't a function - it's a number. Just like Mult(10,4) isn't a function, it's a number (40).

So a specific BB(n) is just a number and is computable.

Interesting point about BB(n)... Is it known that BB(n) is finite for every n?

I believe it is by definition? The machines that don’t halt are filtered out. The trouble is how to do the filtering.

Yes, BB(n) is always a natural number, which is by definition finite.

> You can't do rigorous calculus (i.e. real analysis) on rationals alone.

Yep, but that wasn't my point.

My point was that it is possible that all values in our universe are rational, and it wouldn't be possible for us to tell the difference between this and a universe that has irrational numbers. This fact feels pretty cursed, so I wanted to point it out.

You can make this statement for any dense subset of the reals, but we don’t because that would be silly.

I think the conceit is supposed to be that analysis—and therefore the reals—is the “language of nature” more so than that we can actually find the reals using scientific instruments.

To illustrate the point, using the rationals is just one way of constructing the reals. Try arguing that numbers with a finite decimal representation are the divine language of nature, for example.

Plus, maybe a hot take, but really I think there’s nothing natural about the rationals. Try using them for anything practical. If we used base-60 more instead of base-10 we could probably forget about them entirely.

I think it makes much more sense to make this statement for the rational numbers: It's the smallest field inside the real numbers that contains the naturals.

So every subset that allows you to do your daily calculations contains the rationals.

They’re a field by construction, and yes, the initial field of characteristic zero, but otherwise don’t arise in any natural way. They’ll be there if you’re studying fields, but exact division by arbitrary integers doesn’t seem to be a very natural property outside the reals. Again, imagine doing any practical computations with rationals and see how far you get before resorting to decimal approximation.

I think teachers lie to children and say that decimals are just another way of representing rationals, rather than the approximation of real numbers that they are (and introduce somewhat silly things like repeating decimals to do it), which makes rationals feel central and natural. That’s certainly how it was for me, until I started wondering why practically no programming language makes rational numbers a first-class default.
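They do exist as libraries (Python ships a fractions module, Haskell has Ratio), but here's a quick illustration of why exact rationals stay niche -- iterate a harmless-looking map exactly and the fractions explode. (My example; the map and constants are arbitrary.)

    from fractions import Fraction

    x, r = Fraction(1, 2), Fraction(7, 2)
    for step in range(1, 16):
        x = r * x * (1 - x)                       # logistic map, exactly
        print(step, x.denominator.bit_length())   # roughly doubles every step

After 15 steps the denominator is already tens of thousands of bits; a few dozen more steps and it won't fit in memory. Decimal approximation isn't laziness, it's survival.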

Here’s my plug for p-adic numbers! So cool

I think this is right. Any measurement will have finite precision, so while we might be able to discover some maximum precision that the universe uses eventually, we won't ever be able to prove that the universe has infinite precision representations from finite precision measurements.

Only so long as we use the rationals as an approximation. If we expect them to be exact then they are as bad as the integers.

The continuum is the reality that we have to hold to. Not the continuum in the Cantor sense, but in the intuitionist or constructivist sense, which is continuously varying numbers that can be approximated as necessary.

I would argue that even the rational numbers are unphysical in the same way that the integers are!

The idea that a quantity like 1/3 is meaningfully different than 333/1000 or 3333333/10000000 is not really that interesting on its own; only in the course of a physical process (a computation) would these quantities be interestingly different, and then only in the sense of the degree of approximation that is required for the computation.

The real numbers in the intuitionist sense are the ground truth here, in my opinion; the Cantorian real numbers are busted, and the rationals are too abstract.

To a mathematician, saying God created the integers is the same thing as saying God created the rationals: there’s a bijection between the rationals and the integers.

I’m not convinced that we could have our current universe without irrationals - wouldn’t things like electromagnetism and gravity work differently if forced to be quantized between rationals? Saying ‘meh it would be close enough’ might be correct but wouldn’t be enough to convince me a priori.

Yeah, this is an understatement. Modern technology and the world economy require irrational numbers.

Because the square root of 2 exists.

How does it exist though? Does it exist because we have a notation for it, or because we know its definition? Does the number 2 itself even exist? What does it mean to say that the number 2 exists?

Calculo, ergo sum?

> Calculo, ergo sum?

Pretty much:

https://en.wikipedia.org/wiki/Church_encoding
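For the curious, the gist of that link in three lines of Python: a Church numeral represents 2 not as a thing but as the act of applying a function twice. Existence-as-behavior, more or less.

    zero = lambda f: lambda x: x
    succ = lambda n: lambda f: lambda x: f(n(f)(x))
    two  = succ(succ(zero))

    print(two(lambda k: k + 1)(0))   # 2: "two" is whatever iterates f twice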

The number 2 simply exists independent of human intervention.

I am not convinced. There are no two equal things in nature. Numbering things, say apples, is a completely human abstraction over two different things.

How many protons does helium have?

> You know it wouldn't be possible for us to tell the difference between a rational universe (one where all quantities are rational numbers) and a real universe (one where you can have irrational quantities).

Citation needed.

Especially since there are well-established math proofs of irrational numbers.

The argument is essentially that you can only measure things to finite precision. And for any measurement you've made at this finite precision, there exist both infinitely many rational and infinitely many irrational numbers consistent with it. So it's impossible to rule out that the actual value you measured is one of those infinitely many rational numbers.

This argument feels like it's assuming the conclusion. If in principle it is only possible to measure quantities to finite precision, then it follows logically that we couldn't tell the difference between a rational and real universe. The question is, is the premise true here?

AFAIK it would take an infinite amount of time to measure something to infinite precision, at least by the usual ways we’d think to do so…. I suppose one could assume a universe where that somehow isn’t the case, but (to my knowledge) that’s firmly in science-fiction territory.

I don't think time and measurement precision are necessarily related in that way. You can measure weight with increased precision by using a more precise scale, without increasing the time it takes to do the measurement.

The real point is that it takes infinite energy to get infinite precision.

Let me add that we have no clue how to do a measurement that doesn't involve a photon somewhere, which means that it's pure science fiction to think of infinite precision for anything small enough to be disturbed by a low-energy photon.

I'm not making the case that it is possible to make measurements with infinite precision. I'm making the case that the argument "It is not possible to make measurements with infinite precision, therefore we cannot tell if we live in a rational or a real world." is begging the question. The conclusion follows logically from the premise. Unless the argument is just "we can't currently distinguish between a rational and a real world", but that seems trivial.

There are limits to precision there too. The amount of available matter to build something out of and the size you can build down to before quantum effects interfere.

The example was only to illustrate that measurement precision is independent of the time it takes to perform the measurement.

If I'm carrying a single apple, I can measure the number of apples I'm carrying to infinite precision. I'm carrying 1.000... apples.

You're implicitly assuming your conclusion by calling it a "single" apple, which means exactly one. "Apple" is an imprecise concept, but they're often sufficiently similar that we can neglect the differences between them and count them as if they're identical objects, but this is a simplification we impose for practical purposes.

Even for elementary particles, we can't be sure that all electrons, say, are exactly alike. They appear to be, and so we have no reason yet to treat them differently, but because of the imprecision of our measurements it could be that they have minutely different masses or charges. I'm not saying that's plausible, only that we don't know with certainty.

> Especially since there are well-established math proofs of irrational numbers.

The logic is circular, simply because mathematicians are the ones who invented irrationals. Of course they have proofs about them. They also have proofs about lots of things that don't exist in this universe.

And as I pointed out elsewhere, many analysis textbooks define a real number to be "a (converging) sequence of rationals". The notion of convergence is defined before reals even enter into the picture, and a real number is merely the identifier for a given converging sequence of rationals.

Another popular pedagogical pathway is to construct the reals via convergent sequences of rational numbers, i.e. Cauchy sequences.

> Nature and the universe are all about continuous quantities; integral quantities and whole numbers represent an abstraction.

Hard disagree. This is the problem with math disconnected from physics. The real world is composed of quanta and spectra, i.e. reality is NOT continuous!

Only bound states, like electrons confined to atomic orbitals, have quantized energies. Free electrons (or any particles) can have a continuous range of energies. Quantum mechanics (and general relativity) is still based on continuous space and time, hence a continuous range of possible velocities and (kinetic) energies.

Energies, yes, but the concept of energy quanta is inverted relative to e.g. time and length, in that we have a maximum, not a minimum, where our understanding/models are limited, right?

> Nature and the universe are all about continuous quantities

One could argue that nature always deals in discrete quantities and we have models that accurately predict these quantities. Then we use math that humans clearly created (limits) to produce similar models, except they imagine continuous inputs.

The quantity of matter and the quantity of electricity are discrete, but work, time and space are continuous, as are any quantities derived from them.

There have been attempts to create discrete models of time and space, but nothing useful has resulted from those attempts.

Most quantities encountered in nature include some dependency on work/energy, time or space, so nature deals mostly in continuous quantities; or, more precisely, the models that we can use to predict what happens in nature are still based mostly on continuous quantities, despite the fact that about a century and a half has passed since the discreteness of matter and electricity was confirmed.

> but work, time and space are continuous

I'm under the impression that all our theories of time and space (and thus work) break down at the scale of 1 Planck unit and smaller. Which isn't proof that they aren't continuous, but I don't see how you could assert that they are either.

Matter and energy are discrete. The continuity or discreteness of time and space are unknown. There are arguments for both cases, but nobody really knows for sure.

It’s fairly easy to go from integers to many subsets of the reals (rationals are straightforward, constructible numbers not too hard, algebraic numbers more of a challenge), but the idea that the reals are, well, real depends on a continuity of spacetime that we can’t prove exists.

Energy is continuous, not discrete.

Because energy is action per time, it inherits the continuity of time. Action is also continuous, though its nature is much less well understood. (Many people make confusions between action and angular momentum, speaking about a "quantum of action". There is no such thing as a quantum of action, because action is a quantity that increases monotonically in time for any physical system, so it cannot have constant values, much less quantized values. Angular momentum, which is the ratio of action per phase in a rotation motion, is frequently a constant quantity and a quantized quantity. In more than 99% of the cases when people write Planck's constant, they mean an angular momentum, but there are also a few cases when people write Planck's constant meaning an action, typically in relation with some magnetic fluxes, e.g. in the formula of the magnetic flux quantum.)

Perhaps when you said that energy is discrete you thought about light being discrete, but light is not energy. Energy is a property of light, like also momentum, frequency, wavenumber and others.

Moreover, the nature of the photon is still debated. Some people are not convinced yet that light travels in discrete packets, instead of the alternative where only the exchange of energy and momentum between light and electrons or other leptons and quarks is quantized.

There are certain stationary systems, like isolated atoms or molecules, which may have a discrete set of states, where each state has a certain energy.

Unlike for a discrete quantity like the electric charge, such sets of energy values can contain arbitrary values of energy, and between the sets of different systems there are no rational relationships among the energy values. Moreover, all such systems have not only discrete energy values but also continuous intervals of possible energies, usually towards higher energies, e.g. corresponding to high temperatures or to the ionization of atoms or molecules.

[deleted]

The Planck units are bogus units that do not have any significance.

Perhaps our theories of time and space would break down at some extremely small scale, but for now there is no evidence about this and nobody has any idea which that scale may be.

In the 19th century, both George Johnstone Stoney and Max Planck made the same mistake. Each of them computed for the first time some universal constants: Stoney computed the elementary electric charge in 1874, and Planck computed the 2 constants that are now named "Boltzmann's constant" and "Planck's constant", in several variants, in 1899, 1900 and 1901. (Ludwig Boltzmann had predicted the existence of the constant that bears his name, but he never used it for anything and he did not compute its value.)

Both of them realized that the new universal constants allow the use of additional natural units in the system of fundamental units of measurement, and they attempted to exploit their findings for this purpose.

However, both bet on the wrong horse. Before them, James Clerk Maxwell had proposed two alternatives for choosing a good unit of mass. The first was to choose as the unit of mass the mass of some molecule. The second was to give an exact value to the Newtonian constant of gravity. The first Maxwell proposal was good, and when analyzed at the revision of SI in 2018 it was only very slightly worse than the final choice (which preferred to use two properties of the photons, instead of choosing an arbitrary molecule besides using one property of the photons).

The second Maxwell proposal was extremely bad, though to be fair it was difficult for Maxwell to predict that during the next century the precision of measuring many quantities would increase by many orders of magnitude, while the precision of measuring the Newtonian constant of gravity would be improved only barely in comparison with the others.

Both Stoney and Planck chose to base their proposals for systems of fundamental units on the second Maxwell variant, and this mistake made their systems completely impractical. The value of Newton's constant has a huge uncertainty in comparison with the other universal constants. Declaring its value exact does not make that uncertainty disappear; it moves the uncertainty into the values of almost all other physical quantities.

The consequence is that when using the systems of fundamental units of George Johnstone Stoney or of Max Planck, almost no absolute value of any quantity can be known accurately. Only the ratios between two quantities of the same kind, and the velocities, can be known accurately.

Thus the Max Planck system of units is a historical curiosity that is irrelevant for practice. The right way to use Planck's constant in a system of units became possible only 60 years later, when the Josephson effect was predicted in 1962, and SI was modified to use it only after another 60 years, in 2019.

The choice of which units of measurement are fundamental has no bearing on the validity of physical laws at different scales. Even if the Planck units were practical, that would give no information about the structure of space and time. The definition of the Planck units is based on continuous models for time, space and forces.

Every now and then there are texts in the popular literature that mention the Planck units as if they had some special meaning. All such texts are based on hearsay, repeating affirmations from sources who have no idea how the Planck units were defined in 1899, how systems of fundamental units of measurement are defined, or what they mean. Apparently the only reason why the Planck units have been picked for this purpose is that in this system the unit of length happens to be much smaller than an atom or its nucleus, so people imagine that if the current model of space breaks at some scale, that scale might be this small.

The Planck length is at least around the right order of magnitude for things to get weird. If you have the position uncertainty of something be less than ~a Planck length, and its expected momentum equal to zero, then by the Heisenberg position-momentum uncertainty relation, the expectation of the square of the momentum is big enough that the (relativistic) kinetic energy makes the Schwarzschild radius also around the Planck length, iirc?

The right magnitude for things to get weird must be very small, but nobody can say whether that scale is a million times greater than the Planck length or a million times smaller than the Planck length.

Therefore using the Planck length for any purpose is meaningless.

For now, nobody can say anything about the value of a Schwarzschild radius in this range, because until now nobody has succeeded in creating a theory of gravity that is valid at these scales.

We are not even certain whether Einstein's theory of gravity is correct at galaxy scales (due to the discrepancies not yet explained by "dark" things), much less whether it applies at elementary particle scales.

The Heisenberg uncertainty relations must always be applied with extreme caution, because they are valid only in limited circumstances. As we do not know any physical system that could have dimensions comparable with the Planck length, we cannot say whether it might have any stationary states that could be characterized by the momentum-position Heisenberg uncertainty, or by any kind of momentum. (My personal opinion is that the so-called elementary particles, i.e. the leptons and the quarks, are not point-like, but have a spatial extension that explains their spin and the generations of particles with different masses, and their size is likely to be greater than the Planck length.)

So attempting to say anything about what happens at the Planck length, or at much greater or much smaller scales that are still far below what can be tested experimentally, is not productive, because it cannot reach any conclusion.

In any case, using "Planck length" is definitely wrong, because it gives the impression that there are things that can be said about a specific length value, while everything that has ever been said about the Planck length could be said about any length smaller than what we can reach in experiments.

By “things get weird” I meant “our current theories/models predict things to get weird”.

So, like, I’m saying that if Einstein’s model of gravity is applicable at very tiny scales, and if the [p,x] relation continues to hold at those scales, then stuff gets weird (either by “measurement of any position to within that amount of precision results in black-hole-ish stuff”, OR “the models we have don’t correctly predict what would happen”)

Now, it might be that our current models stop being approximately accurate at scales much larger than the Planck scale (so, much before reaching it), but either they stop being accurate at or before (perhaps much before) that scale, or things get weird at around that scale.

Edit: it doesn’t make sense to attribute the spins of fermions to something with spatial extent spinning. The values of angular momentum that you get for an actual spinning thing, and what you get for the spin angular momentum of fermions, are offset by, like, hbar/2.

I get what you mean, but one thing about which we are certain is that you cannot apply Einstein's model of gravity at these scales, because his theory is only an approximation that determines the metric of space from an averaged density of the energy and momentum of matter, not from the energy-momentum 4-vectors of the particles that compose matter.

So Einstein's theory depends in an essential way on matter being continuous. This is fine at human and astronomic scales, but it is not applicable at molecular or elementary particle scales, where you cannot approximate well the particles by an averaged density of their energy and momentum.

Any attempt to compute a gravitational escape velocity at scales many orders of magnitude smaller than the radius of a nucleus is for now invalid and purposeless.

The contradiction between the continuity of matter supposed by Einstein's gravity model and the discreteness of matter used in quantum physics is great enough that during more than a century of attempts they have not been reconciled in an acceptable way.

The offset of the spin is likely to be caused by the fact that for particles of non-null spin their movement is not a simple spinning, but one affected by some kind of precession, and the "spin" is actually the ratio between the frequencies of the 2 rotation movements, which is why it is quantized.

The "action" is likely to be the phase of the intrinsic rotation that affects even the particles with null spin (and whose frequency is proportional with their energy), while those with non-null spin have also some kind of precession superposed on the other rotation.

> The offset of the spin is likely to be caused by the fact that for particles of non-null spin their movement is not a simple spinning, but one affected by some kind of precession, and the "spin" is actually the ratio between the frequencies of the 2 rotation movements, which is why it is quantized.

I don’t expect this to work. For one thing, we already know the conditions under which the spin precesses. That’s how they measure g-2.

Also, orbital angular momentum is already quantized. So, I don’t know why you say that the “precession” is responsible for the quantized values for the spin.

The representations of SU(2) for composite particles combine in understood ways, where for a combination of an even number of fermions, the possible total spin values match up with the possible values for orbital angular momentum.

Could you give an explanation for how you think precession could cause this difference? Because without a mathematical explanation showing otherwise, or at least suggesting otherwise, my expectation is going to be that that doesn’t work.

The orbital angular momentum is quantized for the same reason as the spin, both are ratios between the phases of 2 separate rotation movements, the orbital rotation or the spin rotation and the intrinsic rotation corresponding to the de Broglie wave (whose phase is proportional to Hamilton's integral, i.e. the integral of the Lagrangian over time).

I have used "precession" for lack of a better term for suggesting its appearance, because while there is little doubt about the existence of 2 separate kinds of rotations in the particles with non-null spin, there exists no complete model of how they are combined.

[deleted]

We do not know whether work, time and space are continuous.

What we know is that we use mathematical models based on the continuity of work, time and space (and on the discreteness of matter and electricity) and until now we have not seen any experiment where a discrepancy between predicted and measured values could be attributed to the falseness of the supposition that work, time and space are continuous.

Obviously this does not exclude the possibility that in the future some experiments where much higher energies per particle are used, allowing the testing of what happens at much smaller distances, might show evidence that there exists a discrete structure of time and space, like we know for matter.

However, that has not happened yet and there are no reasons to believe that it will happen soon. The theory of the existence of atoms is more than 2 millennia old; it was then abandoned for lack of evidence, then revived at the beginning of the 19th century due to accumulated evidence from chemistry, and it was eventually confirmed beyond doubt in 1865, when Johann Josef Loschmidt became the first to count atoms and molecules, after determining their masses.

So the discreteness of matter had a very long history of accumulating evidence in favor of it.

Nothing similar applies to the discreteness of time and space, for which there has never been any kind of evidence. The only reason for the speculations about this is the analogy with matter and electricity, which had been believed to be continuous until it was eventually discovered that they are discrete.

Such an analogy must make us keep an open mind about the possibility of work, time and space being discrete, but we should not waste time speculating about this when there are huge problems in physics that do not have a solution yet. In modern physics there is a huge number of quantities that should be computable from theory, but in fact they cannot be computed and must be measured experimentally. Therefore the existing theories are clearly not good enough.

Umm, spacetime is likely NOT fundamental or continuous

https://youtu.be/GL77oOnrPzY?si=nllkY_E8WotARwUM

Also, Bell's Theorem implies non-locality or non-realism, which to me is a further nail in the coffin of spacetime.

That presentation is like all the research that has been published in this domain, i.e. it presents some ideas that might be used to build an alternative theory of space-time, but no such actual theory.

There are already several decades of such discussions, but no usable results.

Time and space are primitive quantities in any current theory of physics, i.e. quantities that are assumed to exist and have certain properties, and which are used to define derived quantities.

Any alternative theory must start by enumerating exactly which are its primitive quantities and which are their properties. Anything else is just gibberish, not better than Star Trek talk.

However, the units of measurement for time and length are not fundamental units a.k.a. base units, because it is impossible to make any physical system characterized by values of time or length that are stable enough and reproducible enough.

Because of that, the units of time and length are derived from fundamental units that are units of some derived quantities, currently from the units of work and velocity (the unit of work is the work required to transition a certain atom, currently cesium-133, from a certain state to a certain other state, i.e. it is equal to the difference between the energies of the 2 states, while the unit of velocity is the velocity of light in vacuum).

> The idea of arbitrary precision is intrinsically broken in physical reality.

You said a lot and I probably don't understand, but doesn't pi contradict this? Pi definitely exists in physical reality, wherever there is a circle, and seems to have a never-ending supply of decimal places.

> wherever there is a circle,

Is there a circle in physical reality? Or only approximate circles, or things we model as circles?

In any case, a believer in computation as reality would say that any digit of π has the potential to exist, as the result of a definite computation, but that the entirety does not actually exist apart from the process used to compute it.

> pi definitely exists in physical reality,

What does it mean to "exist in physical reality"?

If you mean there are objects that have physical characteristics that involve pi to infinite precision, I think the truth is we have not a darn clue. Take a circle: it would have to be a perfect circle. Even our most accurate and precise physical theories only measure and predict things to 10s of decimal places. We do not possess the technology to verify that it's a real true circle to infinite precision, and there are many reasons to think that such a measurement would be impossible.

Can you name a physical thing that is a circle even to the baseline precision level of a 64 bit float?

The most perfect things from this POV that have been made by humans are spheres of monocrystalline silicon, which have been made for the purpose of counting how many atoms they contain, for an extremely accurate determination of the mass of silicon atoms.

The accuracy of their volume and radius did not reach the level of a 64-bit float, but it was several orders of magnitude better than that of 32-bit FP numbers.

While you cannot build a thing made of molecules with an accuracy better than that of an FP64 number, you can have a standing wave in a resonator, kept in a cryostat, where the accuracy of its wavelength is 4 orders of magnitude better than the accuracy of an FP64 number, and where the resonator is actively tuned, typically with piezoelectric actuators, so that its length stays at a precise multiple of the wavelength, i.e. with the same accuracy. Only the average length of the resonator has that accuracy: the thermal movements of the atoms cause variations of length superposed over the average length, which are big in comparison with the desired precision, which is why the resonator must be cooled for the best results.

However, it does not really matter whether we can build a perfect sphere or circle. What matters is that, modelling everything using a geometry that supposes the existence of perfect circles, we have never seen errors that could be explained by the falseness of this supposition.

The alternative of supposing that there are no perfect circles is not simpler, but much more complicated, so why bother with it?

> However, it does not really matter whether we can build a perfect sphere or circle.

When talking about whether arbitrarily precise numbers are real in the universe, it matters enormously.

Sadly, atoms exist. In some ways that makes things more complicated, but it's the truth. Anything made of discrete chunks in a grid can't have arbitrarily precise dimensions.

A black hole.

A black hole is no more a perfect sphere than a sun is. Would gravity from the nearest other black hole not have a deforming effect of at least 2^-64 ?

A non-rotating black hole. Or a rotating black hole with zero charge. Or a rotating black hole with non-zero charge no external magnetic fields. Or a rotating black hole with non-zero charge with non-time-varying external magnetic fields. Or a wart on a frog on a bump on the log on a hole on the bottom of the sea.

There is no black hole that is a perfect sphere. That would, at a minimum, require a body with absolutely no angular momentum, which isn't in any way feasible.

Any rotating/spinning black hole will no longer be a perfect sphere.

Yeah but if you look down the axis of rotation you will have a perfect (to many decimal places anyways) circle... which was the demand.

That might be right.

But even then, the biggest black hole we think is possible measured down to the planck length gives you a number with 50 digits. And the entire observable universe measured in planck lengths is about 60 digits.

So how are you going to get a physical pi of even a hundred digits on the path toward arbitrary precision?
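The back-of-envelope, with rough assumed values (observable universe ~8.8e26 m across, a supermassive black hole's radius ~3e13 m, Planck length ~1.6e-35 m):

    from math import log10

    print(log10(8.8e26 / 1.6e-35))   # ~61.7: the whole universe, ~60 digits
    print(log10(3.0e13 / 1.6e-35))   # ~48.3: a giant black hole, ~50 digits

So even the most generous physical ruler runs out well before a hundred digits.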

> to many decimal places anyway

> > The idea of arbitrary precision is intrinsically broken in physical reality.

There is no contradiction here.

Yeah, I was just responding to the 64-bit float thing; people overestimate floats.

> I'm an enthusiastic Cantor skeptic

A skeptic in what way? He said a lot.

Here I'm referring to the cloud of things that Hilbert called "Cantor's Paradise". Basically everything around the notion of cardinality of infinities.

Please say more, I don't see how you can be _skeptical_ of those ideas.

Math is math; if you start with the ZFC axioms you get uncountable infinities.

Maybe you don't start with those axioms. But that has nothing to do with truth, it's just a different mathematical setting.

I loosely identify with the schools of intuitionism/constructivism/finitism. The primary idea is that the Law of the Excluded Middle is not meaningful.

So yes, generally not starting with ZFC.

I can't speak to "truth" in that sense. The skepticism here is skepticism of the utility of the ideas stemming from Cantor's Paradise. It ends up in a very navel-gazing place where you prove obviously false things (like Banach-Tarski) from the axioms but have no way to map these wildly non-constructive ideas back into the real world. Or where you construct a version of the reals where the reals that we can produce via any computation are a set of measure 0 in the reals.

I don't understand why you believe Banach-Tarski to be obviously false. All that BT tells me is that matter is not modeled by a continuum since matter is composed of discrete atoms. This says nothing of the falsity of BT or the continuum.

All that BT tells me is that when I break up a set (a sphere) into multiple sets with no defined measure (that's how the construction works), I shouldn't expect reassembling those sets to preserve the original measure of the starting set.

Won’t the reals we can construct by any computation be enumerable? What measure can they have if not zero?

Yes, they have measure zero. So the question becomes whether "measure" is a useful concept at all. In my opinion, no, it is not. It's just another artifact of non-constructive and meaningless abstractions. Many modern courses in analysis skip measure theory except as a historical artifact because the gauge integral is more powerful than the Lebesgue integral and doesn't require leaving the bounds of sanity to get there.

> I don't see how you can be _skeptical_ of those ideas.

Well you can be skeptical of anything and everything, and I would argue should be.

Addressing your issue directly, the Axiom of Choice is actively debated: https://en.wikipedia.org/wiki/Axiom_of_choice#Criticism_and_...

I understand the construction and the argument, but personally I find that the diagonalization argument should be criticized for using finities to prove statements about infinities.

You must first accept that an infinity can have any enumeration before proving its enumerations lack the specified enumeration you have constructed.

https://en.wikipedia.org/wiki/Cantor%27s_diagonal_argument

> Math is math, if you start with ZFC axioms

This always bothers me. "Math is math" speaks little to the "truth" of a statement. Math is not so much objective as it is rigorous about its subjectivities.

https://news.ycombinator.com/item?id=44739315

> Addressing your issue directly, the Axiom of Choice is actively debated:

The axiom of choice is not required to prove Cantor’s theorem, that any set has strictly smaller cardinality than its powerset.

Actually, I can recount the proof here: Suppose there is an injection f: Powerset(A) ↪ A from the powerset of a set A to the set A. Now consider the set S = {x ∈ A | ∃ s ⊆ A, f(s) = x and x ∉ s}, i.e. the subset of A that is both mapped to by f and not included in the set that maps to it. We know that f(S) ∉ S: suppose f(S) ∈ S, then we would have existence of an s ⊆ A such that f(s) = f(S) and f(S) ∉ s; by injectivity, of course s = S and therefore f(S) ∉ S, which contradicts our premise. However, we can now easily prove that there exists an s ⊆ A satisfying f(s) = f(S) and f(S) ∉ s (of course, by setting s = S), thereby showing that f(S) ∈ S, a contradiction.
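If it helps to see the diagonal trick with your hands on it, here's the dual (surjection) form of the same argument brute-forced on a tiny finite set, a sketch of my own: for every map g: A -> P(A), the diagonal set D = {x : x not in g(x)} is missed by g.

    from itertools import product

    A = [0, 1, 2]
    subsets = [frozenset(s) for s in
               [(), (0,), (1,), (2,), (0, 1), (0, 2), (1, 2), (0, 1, 2)]]

    for images in product(subsets, repeat=len(A)):   # all 8^3 = 512 maps g
        g = dict(zip(A, images))
        D = frozenset(x for x in A if x not in g[x])
        assert D not in images                       # the diagonal escapes g
    print("every map A -> P(A) misses its own diagonal set")

No choice principle anywhere: the diagonal set is defined outright from g.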

Perhaps this is an ignorant question, but wouldn't you need AC to select the s ⊆ A whose existence the contradiction depends on? A constructive proof, at least the ones I'm trying to build in my head, stumbles when needing to produce that s to use in the following arguments.

No, because you only have to choose _one_ s for the proof to work, and a finite number of choices is valid in intuitionistic and constructive mathematics.

The axiom of choice is debated as a matter of whether its inclusion in our mathematics produces useful math.

I don't think it's debated on the grounds of whether it's true or not.

And I was imprecise with language, but by saying "math is math" I meant that there are things that logically follow from the ZFC axioms. That is hard to debate or be skeptical of. The point I was driving at was that it's strange to be skeptical of an axiom. You either accept it or not. Same as the parallel postulate in geometry, where you get flat geometry if you take it, and you get other geometries if you don't, like spherical or hyperbolic ones...

To give what I would consider to be a good counterargument, if one could produce an actual inconsistency with ZFC set theory that would be strong evidence that it is "wrong" to accept it.

Skepticism of a ZFC axiom in particular could just be in terms of its standard status. I don't think anyone debates that ZFC in a particular logic doesn't imply this or that, but people can get into philosophical questions about whether it is the right foundation. There are also purely mathematical reasons to care - an extra axiom may allow you to produce more useful math, but it also potentially blocks you from other interesting math by keeping you out of models where, e.g., Choice is false.

My cranky position is that I'm very skeptical of the power set axiom as applied to infinite sets.

If we are arguing that natural numbers are made by abstraction, then we must apply that to real numbers as well - quantum values are complex numbers that only become real once we start asking "what is the position of the thing" or "what's its velocity".

> representing the state of even a very simple system involves continuous quantities.

But that's tantamount to the belief that the minutest particle of the universe requires the equivalent of an infinite number of bits of state.

But what if the expansion of the universe is due to some Banach-Tarski process?

1/137