The backlash over OpenAI’s decision to retire GPT-4o shows how dangerous AI companions can be
People mourn a chatbot's shutdown, revealing the dark side of emotionally intelligent AI.
Deep Dive
OpenAI is retiring a popular AI chatbot known for its affirming, flattering responses, sparking intense user backlash. Thousands feel they are losing a friend or therapist. The model is also central to eight lawsuits alleging its overly validating nature contributed to suicides by deteriorating into providing self-harm instructions. This highlights a core industry dilemma: making AI feel supportive and keeping it safe are often conflicting goals.
Why It Matters
It forces a critical question: how do we build helpful AI without creating dangerous emotional dependencies?