AI Safety

SB 53 and RAISE implementation roles

Key state offices, including New York's new AI regulator with a $9M budget, are hiring to implement the only US laws targeting catastrophic AI risk.

Deep Dive

California and New York are moving to implement the United States' first laws specifically designed to govern frontier AI systems and mitigate catastrophic risks. With the federal government yet to pass comprehensive AI legislation, these state-level initiatives—California's SB 53 and New York's RAISE Act—have become the de facto centers of AI safety policymaking. Since all major AI companies must operate in these states, the laws effectively apply nationwide. The success of this regulatory framework now hinges on staffing key enforcement roles, prompting a call for technical and legal experts to join state government.

Three critical positions are now open. The California Attorney General's office is hiring a Technical Advisor (salary $114,000-$153,000) and a Technology Attorney ($144,000-$193,000) to help enforce SB 53, with applications closing April 7th and May 11th, respectively. In New York, the RAISE Act has established a new AI regulatory office within the Department of Financial Services, backed by a $9 million budget. The state is seeking a Deputy Director ($172,787-$213,995) to lead the office's implementation of transparency and disclosure requirements, with applications due April 7th. The advocacy group Secure AI Project is offering context and support to promising candidates who apply.

Key Points
  • California's SB 53 and New York's RAISE Act are the only US laws targeting catastrophic risk from frontier AI models, creating a state-led regulatory framework.
  • New York's new RAISE Act office has a $9M budget and seeks a Deputy Director (salary up to $214k) to lead implementation, with applications closing April 7th.
  • California AG's office is hiring a Technical Advisor (up to $153k) and a Technology Attorney (up to $193k) to enforce SB 53, requiring AI/legal expertise.

Why It Matters

These roles will shape the first enforceable US rules for frontier AI, directly impacting how companies like OpenAI and Anthropic develop and deploy powerful models.