Attackers prompted Gemini over 100,000 times while trying to clone it, Google says
A massive, coordinated attack reveals the high-stakes race to copy Google's AI.
Deep Dive
Google revealed that attackers made over 100,000 attempts to prompt its Gemini AI in a systematic effort to reverse-engineer and clone the model. The scale of the attack highlights the immense value of, and competitive pressure surrounding, major AI systems. While Google says its safeguards prevented a successful clone, the incident underscores a new security frontier: protecting proprietary model weights and training data from extraction via sophisticated prompt-based attacks.
Why It Matters
This marks a new era of corporate espionage, one in which AI models themselves are the primary targets of theft.