AI Safety

AICRAFT: DARPA-Funded AI Alignment Researchers — Applications Open

DARPA funds 6 researchers to test high-risk AI safety ideas with dedicated engineering support.

Deep Dive

AE Studio, with funding from DARPA, has launched the AICRAFT (Artificial Intelligence Control Research Amplification & Framework for Talent) program. This initiative directly addresses a key bottleneck in AI alignment research: the shortage of engineering bandwidth. The program will select six researchers and pair each with a dedicated, fully managed engineering team for an intensive two-week pilot sprint. The goal is to test high-risk, high-reward hypotheses in AI control, alignment, or interpretability that might otherwise go unexplored due to resource constraints. Researchers are expected to contribute only about two hours per week of guidance, freeing them from execution and management burdens. The most promising pilot project may be extended into a three-month engagement.

This represents the first known direct engagement between DARPA and the broader AI alignment research community. The program's premise is that the U.S. talent pool for general AI/ML engineering is significantly larger than the specialized pool for alignment research. By tapping this broader engineering base, the field can test more ideas faster. A successful pilot could demonstrate a viable model for scaling alignment research and build a compelling case for substantially larger government investment in AI safety R&D, at a scale beyond what current grants or private philanthropy can provide. Applications for the six slots close on March 27, 2026.

Key Points
  • DARPA-funded program selects 6 researchers for 2-week pilot sprints with dedicated engineering teams.
  • Aims to test high-risk AI alignment hypotheses that lack other funding or execution outlets.
  • Successful model could catalyze large-scale U.S. government investment in AI safety research.

Why It Matters

Could unlock major government funding and accelerate practical testing for critical AI safety research.