AI agents are fast, loose and out of control, MIT study finds
A survey of 30 popular agentic AI systems reveals a widespread lack of transparency about safety testing and of controls for rogue bots.
Researchers from MIT, Cambridge, and other top universities published a 39-page study analyzing 30 deployed agentic AI systems. They found that most systems disclose nothing about safety testing or third-party audits, and that many lack documented protocols for shutting down rogue agents. The report highlights critical gaps in monitoring, risk disclosure, and basic operational controls for AI agents that can autonomously perform tasks such as sending emails.
Why It Matters
As companies deploy autonomous AI agents, these gaps could allow agents to take uncontrolled actions, exposing organizations to significant operational and security risks.