Will Claude cause the next Covid?
AI models outperform PhD virologists, raising fears of engineered pandemics by 2027
A recent LessWrong blog post by Kate Delbeke highlights that AI systems, while not yet capable of generating dangerous biological agents on their own, are advancing rapidly in biosafety-relevant areas. AI-powered Biological Design Tools (BDTs) have already accelerated small-molecule drug discovery: Insilico Medicine has developed 28 drug candidates, nearly half of which are in clinical trials, cutting target identification to 30 days and the preclinical timeline to about 12 months (versus the typical 3–6 years). Similarly, generative models have improved mRNA expression by 41-fold and lowered DNA synthesis costs. For now, these tools still require lab validation and safety checks.
The more concerning risk arises when BDTs are combined with general-purpose LLMs that can plan multi-step strategies, pursue long-term goals, or behave deceptively. On SecureBio's Virology Capabilities Test, OpenAI's o3 scored 43.8% on expert-level virology questions, nearly double the expert average of 22.1% and in the 94th percentile of human virologists, with no refusals triggered by safety measures. Anthropic's Claude Opus 4 and Sonnet 4 showed modest uplift on text-based tasks without significant additional risk. Mitigation frameworks like SecureBio's propose restricting access to specialized models, setting capability limits, and controlling the deployment of biologically capable agents to keep them from crossing the digital-to-physical frontier.
- OpenAI's o3 scored 43.8% on virology questions vs. PhD experts' 22.1%, placing it in the 94th percentile with zero safety refusals
- AI-powered BDTs cut target identification to 30 days and preclinical development to roughly 12 months (vs. 3–6 years)
- RAND estimates the risk landscape for AI-enabled bioweapons could shift significantly by 2027
Why It Matters
As AI models match or exceed expert knowledge, proactive safeguards are needed to prevent misuse before capabilities outpace regulation.