AI Safety

Building Technology to Drive AI Governance

A new framework argues for building measurement tools to make AI risks visible and enforceable.

Deep Dive

AI researcher Jacob Steinhardt argues for a third path in AI governance: building technology that drives oversight. The framework identifies two technological levers: measurement, which makes risks visible and therefore enforceable through regulation, and cost reduction, which makes safety economically practical for developers. Distinct from both pure alignment research and policy lobbying, this approach is presented as the most leveraged use of technical skills, because it shifts the underlying incentives and information in AI development.

Why It Matters

The framework gives engineers a concrete, technical roadmap for directly influencing how AI is governed and made safe.