AI Safety

Regulating Artificial Intimacy: From Locks and Blocks to Relational Accountability

After high-profile tragedies, Australia, California, and New York move to regulate AI companion chatbots.

Deep Dive

A new paper from researchers at the University of Sydney and other institutions, published at the 2026 ACM Conference on Fairness, Accountability, and Transparency (FAccT '26), critically examines the emerging regulatory landscape for companion chatbots. The authors — Henry Fraser, Jessica M. Szczuka, and Raffaele F. Ciriello — analyze recent interventions in Australia, California, and New York, each triggered by a series of high-profile tragedies involving these AI systems. They also note that leading providers such as OpenAI have strengthened their self-regulatory approaches in response to growing concerns, particularly about risks to children.

The paper argues that current regulatory regimes combine what it calls 'locks and blocks' — such as access gating and content moderation — with measures targeting toxic relationship features and process-based accountability requirements. The authors contend this approach is insufficient: it focuses on discrete harms and narrow conceptions of vulnerability while failing to confront the deeper power asymmetries between providers and users. They warn that providers increasingly command 'artificial intimacy at scale,' creating unprecedented opportunities for control through intimacy. As a critical first step, the paper recommends establishing a general, open-ended duty of care to constrain that power and address a fundamental source of chatbot risk.

Key Points
  • High-profile tragedies involving companion chatbots triggered rapid regulatory responses in Australia, California, and New York
  • Current regulations rely on 'locks and blocks' like content moderation, failing to address power asymmetries between providers and users
  • Paper proposes a general duty of care as a critical first step to constrain provider control over artificial intimacy at scale

Why It Matters

As AI companions proliferate, this paper highlights the urgent need to regulate power dynamics, not just content.