I think you're missing a more internecine debate within software correctness/formal methods circles than the more general one about "writing good/correct software". Deductive theorem proving was all the rage in the 1970s, to the point that prominent people (Tony Hoare, Dijkstra) believed it was the only way to write correct software. Over the years, the focus has shifted to other methods [1], from model-checking to unsound methods such as concolic testing and even property-based testing and code reviews, which have proven more effective than the deductive-proof proponents had envisioned (Tony Hoare admitted his mistake in the nineties).
The problem with deductive proofs is that it's all-or-nothing, i.e. either you prove the theorem or you don't, and it's much harder to turn a knob to trade off cost for confidence and vice-versa than with other methods. Consequently, theorem proving has not benefitted from automation as much as its alternatives, and is now viewed as having too low a bang for the buck. But if LLMs can help automate deductive theorem proving, maybe that particular approach, which has gradually become less popular within formal methods circles but still has its fans, could make a comeback.
Of course, specification is equally important in all methods, but the particular method of theorem proving has been disadvantaged compared to other formal and semi-formal methods, and I think that's what the author wishes could change.
[1]: Deductive theorem proving may be more prominent on HN than other formal methods, but as with most things, HN is more representative of things that spark people's imagination than of things that work well. I'm not counting that as a strike against HN, but it's something to be aware of. HN is more Vogue/GQ than the Times, i.e. it's more news about what people wish they were doing than what they're actually doing.
> The problem with deductive proofs is that it's all-or-nothing, i.e. either you prove the theorem or you don't, and it's much harder to turn a knob to trade off cost for confidence and vice-versa than with other methods.
This is not really true though. Sound type checking is a kind of deductive proof, but it's practically usable because it establishes very simple properties and is generally "local", i.e. it follows the same structure as program syntax.
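To make the "locality" point concrete, here's a minimal sketch (Haskell, a toy example of my own rather than anything from the parent comment): checking the body of `pipeline` needs only the declared types of `parse` and `render`, not their definitions and not any whole-program analysis; the proof obligation decomposes along the syntax tree.

    -- Hypothetical helpers, assumed only for illustration.
    parse :: String -> Int
    parse = length        -- dummy definition

    render :: Int -> String
    render = show

    -- Type-checking this composition consults only the two signatures
    -- above: render consumes the Int that parse produces, so the whole
    -- expression is well-typed. Nothing outside it is examined.
    pipeline :: String -> String
    pipeline = render . parse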
Well, that's true for simple type systems. Dependent types are not so practically usable. You're absolutely right that simple types prove some "simple" properties, but we know what characterises them: they're inductive (or composable) invariants, i.e. P(a ∘ b) ⇔ P(a) ∧ P(b), for some program composition operator ∘ (I'm papering over some important but nuanced details). It's just that most program correctness properties we're interested in are not inductive, so the "proving power" of simple types is not strong (another way of describing it is that it's not powerful enough to express propositions with interleaved universal and existential quantifiers, e.g. "every request is eventually followed by a matching response", a ∀…∃ statement).
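A sketch of the contrast (Haskell again, and again my example rather than the parent's): well-typedness composes in exactly the P(a ∘ b) ⇔ P(a) ∧ P(b) sense, whereas a behavioural property of the same code does not.

    import Data.List (sort)

    -- Composable property: well-typedness. The composition type-checks
    -- because each piece does, i.e. P(take n ∘ sort) follows from
    -- P(take n) ∧ P(sort).
    smallestN :: Int -> [Int] -> [Int]
    smallestN n = take n . sort

    -- Non-composable property: "smallestN returns the n smallest
    -- elements". That depends on how the pieces interact (sorting before
    -- taking); (sort . take n) has exactly the same type but not the
    -- property. A simple type can't even state it -- you'd need
    -- refinement/dependent types or a separate proof.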
So when we talk about formal methods, we're usually interested in those that can express and/or "verify" (to some degree of confidence) non-composable properties and propositions that involve interleaved quantifiers.