This work is more focused on proof assistants, i.e. programming languages designed for highly generic algorithms (mathematical proofs). I don't see how miscommunication or bugs are possible in this context: the assumptions and conclusions are all laid out in the types. Barring a bug in the type checker or use of e.g. `sorry`, a proof that compiles shouldn't be able to contain a bug.

This is true in the sense that such a proof can't have "implementation bugs". But it can very much still have "specification bugs", where the formal specification doesn't actually match the fuzzy intention of whoever asked for the work.

For a very simplistic example, say I ask for a trusted program that can add up all of my expenses. The person who writes the spec then defines this as `add_expenses xs = reduce - xs`. They then write a program and prove that it matches this specification. The program will be "bug free" in the sense that it perfectly matches the spec. But it will certainly not be what I asked for, so from my point of view, it would be an extremely buggy program.
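To make that concrete, here's a minimal Lean sketch of the situation (`add_expenses_spec` and `add_expenses_impl` are illustrative names, not from any real codebase):

```lean
-- The spec author wrote `-` where the customer meant `+`.
def add_expenses_spec (xs : List Int) : Int :=
  xs.foldl (· - ·) 0   -- specification bug: subtracts instead of adding

def add_expenses_impl (xs : List Int) : Int :=
  xs.foldl (· - ·) 0

-- The implementation provably matches the spec...
theorem impl_matches_spec (xs : List Int) :
    add_expenses_impl xs = add_expenses_spec xs := rfl

-- ...yet it computes the wrong thing from the customer's point of view:
#eval add_expenses_impl [10, 20]   -- -30, not the intended 30
```

The proof is airtight and the implementation is "bug free" against the spec; the bug lives entirely in the spec.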

For a more subtle example, consider encryption algorithms. It's generally fairly easy to specify the encryption scheme, and then check that a program implements that exact algorithm. And yet, it's very likely that a program built only to such a spec will actually leak secrets through various side-channels in practice (key-dependent timing or even key-dependent CPU usage, among others). Again, it would be fair both to say that the implementation is bug-free (it perfectly matches the spec) and that it is deeply buggy (it doesn't actually protect your secrets).
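A hedged sketch of why (the `Cipher` structure here is illustrative): a typical functional-correctness spec constrains only input/output behavior, so a type like the following says nothing about timing, power draw, or cache behavior.

```lean
-- An illustrative functional spec: decrypting an encrypted message
-- must return the original message.
structure Cipher (Key Msg : Type) where
  encrypt : Key → Msg → Msg
  decrypt : Key → Msg → Msg
  roundtrip : ∀ k m, decrypt k (encrypt k m) = m

-- Any implementation with a `roundtrip` proof is "bug-free" against
-- this spec, including one whose running time leaks bits of the key.
```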

None of this has anything to do with proof assistants. What you're saying is that mathematical modeling doesn't perfectly capture real-world phenomena; everyone knows this. The statement I was replying to was

> I feel that massive effort is being misdirected because of some fundamental misunderstandings

The misunderstanding here is on their end, about what they're looking at. The author doesn't discuss business problems, but does list this as a motivation in their "basic" summary:

> Category theory is a powerful and highly-general branch of mathematics, which is applied for all kinds of purposes. It has been formalized in several existing proof assistants, but only as a result of a ton of difficult work. The type system of (1,1)-directed type theory is specifically optimized for category theory; it manages to automate a lot of the difficult "bureaucracy" of formalizing category-theoretic results, so it doesn't require as much work.

i.e. it makes certain proofs easier.

A formal proof really can't have a specification bug. You say in the declaration what proposition you intend to prove, and if you succeed in instantiating that type then you have succeeded in your proof.

The system also will only allow that proof to be used for the proposition you have declared, so even if you somehow declared the wrong thing and proved it without realising, that wouldn't affect the consistency of the system at all.
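A minimal Lean illustration with a toy proposition:

```lean
-- The proposition proved is pinned down by the declared type.
theorem two_add_two : 2 + 2 = 4 := rfl

-- The proof can only be used where a proof of `2 + 2 = 4` is expected;
-- even if this wasn't the proposition you meant to prove, the checker
-- will never let it stand in for anything else.
example : 2 + 2 = 4 := two_add_two
```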

A formal proof can contain a "miscommunication bug" if it proves not the exact thing that it was intended to prove. This is just like an ordinary program that doesn't exactly do what it is supposed to do.

The types say exactly what is proved. There's no real room for confusion. With a written proof you could accidentally make some assumption in the middle (like using some lemma that assumes a thing is finite or compact or whatever when you did not state that assumption), but a computer proof cannot do this. It won't typecheck.
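A minimal sketch of that point, using only core Lean: every assumption a proof uses has to appear in the stated type.

```lean
-- The hypothesis `h` is stated explicitly, and the proof depends on it.
example (n : Nat) (h : n ≤ 5) : n ≤ 10 :=
  Nat.le_trans h (by decide)

-- Deleting `(h : n ≤ 5)` from the signature makes this fail to
-- typecheck: there is no way to silently use an unstated assumption.
```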

Reading mathematical sentences (types, basically) is like step one in maturing from baby to toddler in math.

> The types say exactly what is proved.

To whom? Fully formalized mathematics is extremely dense and difficult, and few mathematicians actually practice it, especially for complex problems. Just because the symbols are theoretically unambiguous doesn't mean that any human can actually understand what they mean, and whether they accurately encode some less formal system you were hoping to study.

And this becomes much more problematic once you go from the realm of pure mathematics into other realms, like engineering or economics or anything else that seeks to model real-world phenomena. A model can be perfectly self-consistent and still be a very bad, even useless, model of a real-world thing. And it can be highly non-trivial to figure out to what extent and under what constraints a model actually matches the phenomenon you intended to capture.

For examples of both kinds of issues, look at theoretical physics. Lots of major physics theories are only partly formalized, and rely on unproven, or sometimes even known-to-be-unsound, mathematical foundations (like the Dirac delta at the time it was first introduced, or renormalization in quantum physics). And, of course, there are still open questions about how well these theories actually model various physical phenomena (like GR and black-hole curvature singularities).

Sometimes, and for some people, the right language can actually simplify problems and hide complexity that isn't immediately relevant to the problem at hand. The whole idea is to abstract away the models and worry about them separately. This is one of the motivations for the various synthetic theories that have been researched over the last several years; the one mentioned here happens to be for category theory.

This doesn't work for everyone though; some people find it easier to stay grounded in specific models.

I replied to the other stuff elsewhere, but FWIW, for the amount I've played with typing up undergrad-tier knowledge in Lean, I've found it to be pretty straightforward. Fully formalized math isn't dense; actually it's the opposite: full of the tedious bookkeeping that you learn to formalize in an intro logic/proofs course. The hard part is making it more dense so that you can read it, and something like Lean actually makes that less tedious than you'd expect.

> The types say exactly what is proved. There's no real room for confusion.

Since Lean code isn't the same as natural language or even unobservable human intention, there might always be a mismatch between what the author of a Lean proof claims (or believes) it proves, and what it actually proves.

Anyone capable of reading and writing proofs can also read/write propositions. That's step one. Writing propositions in formal language is exactly why there's no mismatch: it proves what it says it proves. There is no ambiguity. There's nothing else to believe.

Like if you have some f : R → R, a : R in context and write ∀ϵ>0, ∃δ>0, ∀x : R, |x−a|<δ ⟹ |f(x)−f(a)|<ϵ, no one is confused about what you mean.

If you instead write ∃δ>0, ∀ϵ>0, ∀x : R, |x−a|<δ ⟹ |f(x)−f(a)|<ϵ, you've written something different. Everyone in the field knows this, and actually it's even more obvious when you look at the types in something like Lean: roughly ϵ → (δ, h) vs. (δ, ϵ → h).
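As a hedged sketch (assuming Mathlib for ℝ and the absolute value; the two names are illustrative), the statements are literally two different types:

```lean
import Mathlib

-- Pointwise continuity at `a`: the δ may depend on the ε.
def ContinuousAtPt (f : ℝ → ℝ) (a : ℝ) : Prop :=
  ∀ ε > 0, ∃ δ > 0, ∀ x : ℝ, |x - a| < δ → |f x - f a| < ε

-- The quantifier-swapped statement: a single δ must work for every ε.
def SwappedVersion (f : ℝ → ℝ) (a : ℝ) : Prop :=
  ∃ δ > 0, ∀ ε > 0, ∀ x : ℝ, |x - a| < δ → |f x - f a| < ε
```

The two definitions differ exactly in where the ∃δ sits relative to the ∀ϵ, which is the ϵ → (δ, h) vs. (δ, ϵ → h) shape above.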

Since it's clearly not always trivial to translate conjectures (let alone whole proofs) into Lean code, translating Lean code back into understandable conjectures can't always be trivial either.