Attackers prompted Gemini over 100,000 times while trying to clone it, Google says
A single session tried to reverse-engineer Google's flagship AI model through brute force.
Google revealed that "commercially motivated" actors attempted "model extraction" on its Gemini AI, with one adversarial session prompting the model over 100,000 times in various languages. The goal was to collect outputs to train a cheaper copycat model, a practice known as distillation. Google frames this as intellectual property theft, despite its own models being trained on scraped web data. The company identified the campaign and adjusted Gemini's defenses but didn't name suspects.
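The mechanics behind this kind of campaign are simple: pair each prompt with the target model's response, then use those pairs as supervised training data for a smaller "student" model. A minimal sketch of the data-collection step is below; `query_teacher` is a hypothetical stand-in for API calls to the proprietary model, not any real endpoint, and no model is actually trained here.

```python
def query_teacher(prompt: str) -> str:
    # Hypothetical stand-in for a call to the proprietary model's API.
    return f"teacher answer to: {prompt}"

def collect_distillation_data(prompts):
    """Pair each prompt with the teacher's output, producing
    (input, target) examples for training a cheaper student model."""
    return [(p, query_teacher(p)) for p in prompts]

# An extraction campaign repeats this at scale; the reported session
# issued over 100,000 prompts, in many languages.
prompts = ["What is distillation?", "Qu'est-ce que la distillation ?"]
dataset = collect_distillation_data(prompts)
# Each pair becomes one supervised example: the student learns to map
# the prompt to the teacher's output.
```

Note that the attacker never needs the model's weights or training data; the public question-and-answer interface is the only access required.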
Why It Matters
This exposes a new front in the AI arms race: a model can be cloned through its public outputs alone, without access to its weights, source code, or training data.