I tested ChatGPT Images 2.0 vs. Gemini Nano Banana to see which is better - this model wins
ChatGPT's image-generation score jumps from 74% to 97%, crushing Google's Nano Banana...
ZDNET's David Gewirtz pitted OpenAI's newly released ChatGPT Images 2.0 against Google's Gemini Nano Banana across nine rigorous image-generation tests. ChatGPT Images 2.0 scored an impressive 97%, a dramatic improvement from its 74% showing in December 2025. Gemini Nano Banana scored 85%, down from its previous 93% high. The tests covered recontextualization (e.g., dressing a subject in an admiral's uniform on an aircraft carrier bridge), black-and-white photo restoration, and pop-culture prompts. ChatGPT excelled at preserving facial features, generating accurate text, and following complex multi-step instructions. Gemini struggled with prompt discipline, often altering facial expressions and beard styles, and mishandled uniform details.
Notably, Gemini introduced a personalization feature that surprised testers and raised privacy concerns—though it did not affect scoring. ChatGPT Images 2.0 also demonstrated the ability to include text and context derived from real data, making it useful for practical business applications like marketing materials and data visualizations. The results mark a significant shift in the AI image generation landscape, with OpenAI's model now clearly leading in both quality and reliability. Both companies could benefit from clearer product naming, as the article humorously notes.
- ChatGPT Images 2.0 scored 97% vs Gemini Nano Banana's 85% across nine image-generation tests
- ChatGPT improved from 74% in December 2025 to 97%, while Nano Banana dropped from 93% to 85%
- ChatGPT excelled at facial preservation, text rendering, and prompt adherence; Gemini struggled with facial alterations and uniform accuracy
Why It Matters
ChatGPT's leap in image generation means it now rivals or surpasses Gemini for professional use, with better text and context handling.