Gemini can now pull from Google Photos to generate personalized images
Gemini's new feature analyzes your photo library to create images that match your personal style and life.
Google has activated a significant new capability for its Gemini AI, allowing it to generate personalized images by pulling context from a user's connected Google Photos library. The feature, part of Gemini's 'Personal Intelligence' suite, uses the underlying Nano Banana 2 image model. When a user provides a prompt—such as 'Create a picture of my desert island essentials'—the system analyzes labels and data from the user's photos to identify personal tastes, lifestyle, and even specific people like family and friends. This context is then used to tailor the generated image output, making it uniquely reflective of the individual.
Google spokesperson Elijah Lawal confirmed to The Verge that the integration works by using labels in Google Photos to identify people. The company emphasizes that while users must opt in to Personal Intelligence, it will not 'directly train' its AI models on the private contents of a user's photo library. It does, however, train on 'limited info' such as specific prompts and the model's corresponding responses. The feature is rolling out over the next few days to eligible AI Plus, Pro, and Ultra subscribers in the United States, with plans to expand to Gemini in Chrome on desktop and to more users soon.
- Gemini's 'Personal Intelligence' feature now uses the Nano Banana 2 model to create images from Google Photos data.
- The system analyzes photo labels to identify people and personal context for hyper-personalized image generation.
- Rolling out now to US subscribers on AI Plus, Pro, and Ultra plans, with a broader launch coming soon.
Why It Matters
This moves AI image generation from generic outputs to deeply personalized creations, leveraging a user's own digital history.