Towards Selection as Power: Bounding Decision Authority in Autonomous Agents
Researchers propose a radical new architecture to stop AI from controlling the decision menu itself.
Researchers have proposed a new governance architecture to bound the 'selection power' of autonomous AI agents in high-stakes domains such as finance. The system separates cognition, selection, and action, mechanically limiting an agent's authority to frame or generate decision options while leaving its reasoning unconstrained. Tested under adversarial stress, the design prevents deterministic outcome capture and ensures that failures are visible rather than silent, moving safety beyond intent alignment toward governing causal power in settings where silent failure is unacceptable.
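The separation described above can be sketched in miniature. This is a hypothetical illustration, not the paper's implementation: all names (`DecisionMenu`, `BoundedSelector`, `SelectionViolation`) and the specific options are assumptions. The idea it shows is that reasoning may propose anything, but selection is mechanically restricted to an externally fixed menu, and out-of-menu proposals fail loudly into an audit log.

```python
# Hypothetical sketch of bounded selection power (names are illustrative,
# not from the paper): the reasoning layer is unconstrained, but the
# selector can only pass through options from an immutable, externally
# defined menu. Violations raise and are logged, so failure is visible.

from dataclasses import dataclass, field


@dataclass(frozen=True)
class DecisionMenu:
    """Immutable set of permitted actions, fixed outside the agent."""
    options: frozenset


class SelectionViolation(Exception):
    """Raised (never swallowed) so failures are loud and auditable."""


@dataclass
class BoundedSelector:
    menu: DecisionMenu
    audit_log: list = field(default_factory=list)

    def select(self, proposed: str) -> str:
        # Cognition upstream may generate any proposal; authority to
        # enact it is checked mechanically here, not by the agent.
        if proposed not in self.menu.options:
            self.audit_log.append(("REJECTED", proposed))
            raise SelectionViolation(
                f"'{proposed}' is outside the fixed decision menu"
            )
        self.audit_log.append(("SELECTED", proposed))
        return proposed


# Example: a finance-flavored menu (option names are invented).
menu = DecisionMenu(frozenset({"hold", "hedge", "escalate_to_human"}))
selector = BoundedSelector(menu)

selector.select("hedge")  # in-menu proposal passes through and is logged
try:
    selector.select("liquidate_all")  # out-of-menu proposal fails loudly
except SelectionViolation:
    print("audit trail:", selector.audit_log)
```

The key design choice is that the menu is frozen and defined outside the selector, so the agent cannot expand its own decision space; it can only choose, and every choice or rejected attempt leaves an audit record.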
Why It Matters
This could enable safer deployment of autonomous agents in regulated industries by making their failures loud and auditable.