The fall of the theorem economy (David Bessis)
AI can prove theorems, but its proofs lack human insight.
Mathematician David Bessis, in a post on LessWrong, critiques the rise of AI-driven theorem proving, arguing that the process of proving a theorem—and the human-usable intuitions it generates—is more valuable than the proof itself. He notes that while AI can formalize complex results in Lean, such as Math Inc's recent autoformalization of Maryna Viazovska's Fields Medal-winning sphere-packing proof, these proofs are often sloppy and lack clear abstractions. The formal math community has pushed back: experts like Alex Kontorovich and Patrick Massot warn that AI-generated proofs, such as a 200,000-line 'vibe-coded blob,' undermine the field's goals of improving understanding and accessibility.
Bessis emphasizes that mathematics thrives on a living community that shares insights, not just on verified theorems. He points out that AI companies, by capturing prizes for first formalizations, leave no incentive to clean up messy proofs, potentially creating a 'radioactive wasteland' that discourages future work. This echoes older controversies like the four-color theorem, whose 1976 computer-assisted proof was accepted as correct but left mathematicians unsatisfied because it explained nothing. The core tension is between correctness and intelligibility: AI can produce correct but opaque proofs, which may hinder rather than help human learning.
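The correctness-versus-intelligibility gap is easy to see in Lean itself. The toy example below (not from Bessis's post or the Math Inc formalization; a hypothetical illustration) proves the same statement twice: once with a structured proof whose steps name the ideas being used, and once by handing everything to an automation tactic that the kernel accepts but that conveys no reusable insight.

```lean
-- A readable proof: the induction structure and each rewrite
-- step record *why* addition on naturals commutes.
theorem add_comm_readable (a b : Nat) : a + b = b + a := by
  induction a with
  | zero => simp
  | succ n ih => rw [Nat.succ_add, ih, Nat.add_succ]

-- A correct but opaque proof of the same statement: the `omega`
-- decision procedure certifies it, but a human reading this
-- learns nothing transferable about the arithmetic involved.
theorem add_comm_opaque (a b : Nat) : a + b = b + a := by
  omega
```

Both theorems typecheck equally well; a 200,000-line autoformalization built in the second style is verified yet, on Bessis's argument, adds nothing to human understanding.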
- David Bessis argues AI proofs in Lean lack human-usable intuitions, prioritizing correctness over understanding.
- Math Inc's autoformalization of Viazovska's sphere-packing proof is a 200,000-line 'vibe-coded blob' with no clear API.
- Experts warn AI sloppiness could make formal mathematics a 'radioactive wasteland' for future human insight.
Why It Matters
AI's correct but opaque proofs risk undermining mathematical understanding, not advancing it.