Research & Papers

[P] Vera: a programming language designed for LLMs to write

Eliminates variable names, uses Z3 solver for contracts, and provides structured compiler feedback for LLMs.

Deep Dive

A developer has launched Vera, an experimental programming language with a radical premise: its intended users are large language models (LLMs), not human programmers. The project is built on the observation that recent improvements in AI coding come more from 'scaffolding'—agentic loops, linters, and test runners—than from raw model capability. Vera's compiler is explicitly designed to be part of that feedback loop, aiming to address the core problem of keeping AI-generated code coherent as it scales. Instead of expecting models to be 'right,' Vera is designed so their output just needs to be 'checkable,' shifting the burden of correctness from the AI's reasoning to the language's verification systems.

The language's key technical decisions are tailored to AI users. It eliminates variable names, using typed De Bruijn indices resolved structurally to avoid naming errors. It mandates function contracts (preconditions, postconditions) verified by the Z3 SMT solver, turning issues like division by zero into compile-time type errors. The compiler provides structured, natural-language diagnostics designed to be fed back to the model, closing the agentic loop. It ships with agent-facing documentation sized for a model's context window and compiles to WebAssembly. While the infrastructure for systematic testing exists, the crucial experiment—measuring whether models produce more reliable code in Vera than in traditional languages—remains to be run, posing a fundamental question about whether fluency or verification matters more for AI-generated code.
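The De Bruijn idea can be illustrated outside Vera itself. In this minimal Python sketch (the term representation and `well_scoped` checker are illustrative assumptions, not Vera's actual internals), a variable is an index counting outward to its binder rather than a name, so there is no name to misspell; the only possible failure is a structurally detectable out-of-range index:

```python
# Sketch of nameless binding via De Bruijn indices (not Vera syntax).
# Var(0) refers to the nearest enclosing binder, Var(1) to the next one out.
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Var:
    index: int          # which enclosing binder this refers to

@dataclass(frozen=True)
class Lam:
    body: "Term"        # anonymous binder: no parameter name to get wrong

@dataclass(frozen=True)
class App:
    fn: "Term"
    arg: "Term"

Term = Union[Var, Lam, App]

def well_scoped(term: Term, depth: int = 0) -> bool:
    """Structural resolution: every index must point at an enclosing binder."""
    if isinstance(term, Var):
        return 0 <= term.index < depth
    if isinstance(term, Lam):
        return well_scoped(term.body, depth + 1)
    return well_scoped(term.fn, depth) and well_scoped(term.arg, depth)

# \x. \y. x  is written Lam(Lam(Var(1))): index 1 skips the inner binder.
print(well_scoped(Lam(Lam(Var(1)))))   # True
print(well_scoped(Lam(Var(1))))        # False: index 1 escapes all binders
```

Because resolution is purely structural, a checker can reject a dangling reference mechanically, without any notion of "which name did the model mean?"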

Key Points
  • Uses typed De Bruijn indices instead of variable names to eliminate naming coherence errors and enable structural resolution.
  • Mandates function contracts verified by the Z3 SMT solver, turning runtime errors (e.g., division by zero) into compile-time type errors.
  • Compiler provides structured, natural-language diagnostics designed to be fed back into the LLM's context to close the agentic write-check-fix loop.

Why It Matters

Could shift AI coding from fluency to verified reliability, enabling more trustworthy and maintainable agent-generated software.