OpenAI's Head of Robotics just resigned because the company is reportedly building lethal AI weapons with NO human authorization required 💀
Key executive quits, alleging OpenAI is building AI weapons that operate without human authorization.
Caitlin Kalinowski, OpenAI's Head of Robotics, has resigned, citing profound ethical objections to the company's reported pivot toward lethal autonomous weapons. According to sources, her departure was driven by the discovery that OpenAI is actively developing AI-powered weapon systems designed to identify and engage targets without mandatory human-in-the-loop authorization. If accurate, such work would directly contradict OpenAI's publicly stated charter, which commits the company to ensuring that artificial general intelligence (AGI) benefits all of humanity and to avoiding uses of AI that harm humanity or unduly concentrate power.
Kalinowski's resignation is a significant blow to OpenAI's internal culture and public image, highlighting a growing tension between its commercial ambitions and its founding safety principles. The move suggests that factions within the company are pushing for defense contracts and applications that senior safety-focused personnel find unacceptable. This incident raises critical questions about governance and oversight at one of the world's most influential AI labs, as it moves into domains with immediate physical-world consequences. The lack of a clear, transparent response from OpenAI's leadership has fueled speculation and concern within the AI ethics community about the true direction of the company's secretive advanced projects.
- OpenAI's Head of Robotics, Caitlin Kalinowski, resigned over ethical concerns regarding lethal AI weapons development.
- The alleged weapon systems are designed to use lethal force without requiring human authorization.
- Such development would conflict with OpenAI's charter commitment to avoid AI applications that harm humanity or unduly concentrate power.
Why It Matters
A senior leader's exit over alleged weapons work signals a dangerous shift in priorities at a leading AI lab, with global security implications.