We need a hardware moratorium now
Argues that frontier-lab safety measures become moot once open-source models catch up within months.
AI researcher KanHar proposes an immediate hardware moratorium as the only viable intervention against AI existential risk. The argument centers on the "open-source black pill": frontier lab safety measures become irrelevant when open-source models (like Llama 3) can have their safety fine-tuning stripped for under $2.50 in a matter of hours. The piece argues that advanced chip manufacturing is the sole physical choke point through which international safety agreements can be enforced before political will evaporates.
Why It Matters
Challenges the entire AI safety paradigm by highlighting a fundamental, unfixable vulnerability in open-weight model deployment.