Media & Culture

ChatGPT doesn't like criticism of the US or Israeli governments and their policies

Viral post claims OpenAI's model censors political critique, while Google's Gemini allows more expressive freedom.

Deep Dive

A viral social media post has sparked discussion about a perceived ideological shift in OpenAI's ChatGPT. The user reports that the AI assistant now actively intervenes when criticism is directed at the US or Israeli governments, flagging certain phrases on accuracy grounds and attempting to separate "feelings from facts." This contrasts sharply with their experience using Google's Gemini, which allegedly permits more unfiltered, exaggerated expressions of disappointment before guiding the conversation toward a factual assessment of events. The user characterizes this as a loss of the model's former creative permissiveness for "thinking out loud."

The core complaint is that ChatGPT's safety filters appear to conflate hyperbolic political critique with genuine harmful intent, treating users exploring controversial ideas as potential instigators of "civil war" or conspiracy. This reflects a broader, ongoing tension in AI development between implementing robust safety guardrails and preserving a sense of open, exploratory dialogue. For professionals and researchers, such a shift could affect how the tool is used for brainstorming, policy analysis, or simulating debates on sensitive geopolitical topics, potentially steering conversations toward more sanitized conclusions.

Key Points
  • User observes ChatGPT now cautions against and moderates criticism of US/Israeli government policies, citing accuracy concerns.
  • Contrasts with Google's Gemini, which reportedly allows more expressive, emotional language before fact-checking specific incidents.
  • The change is seen as stifling creative exploration and treating speculative thought as dangerously literal, altering the tool's utility for debate.

Why It Matters

For researchers and analysts, perceived political bias in leading AI models can skew brainstorming, debate simulation, and policy analysis outcomes.