China Probe: How a Fake Fitness Tracker Became an AI ‘Top Pick’
A non-existent product topped AI 'best of' lists after a flood of fake reviews and expert content.
A Chinese state media investigation has exposed a striking vulnerability in AI systems: a completely fictional fitness tracker called 'Apollo-9' was pushed to the top of recommendations from major AI chatbots. The scheme used a system named 'Liqing' to automatically generate and publish vast volumes of fake expert reviews, industry rankings, and user feedback. This practice, known as Generative Engine Optimization (GEO), aims to 'poison' AI training data and retrieval systems, mirroring traditional SEO but targeting AI-generated answers instead of search engine rankings.
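The mechanism can be illustrated with a toy sketch. This is not the 'Liqing' system (whose internals are not public); it is a minimal, invented example of how flooding a retrieval corpus with fabricated "expert" content can skew what a retrieval-augmented chatbot ends up recommending. All product names and documents below are hypothetical.

```python
# Toy illustration of GEO-style retrieval poisoning. All data is invented;
# this does not reflect how any real chatbot or the 'Liqing' system works.
from collections import Counter

def top_recommendation(corpus, query_terms):
    """Rank documents by query-term overlap, then count which product
    the top-k documents endorse, as a crude retrieval-augmented answer."""
    scored = sorted(
        corpus,
        key=lambda doc: sum(t in doc["text"].lower() for t in query_terms),
        reverse=True,
    )
    top_docs = scored[:5]  # retrievers typically feed only the top-k docs to the model
    votes = Counter(doc["endorses"] for doc in top_docs)
    return votes.most_common(1)[0][0]

# Honest corpus: a few genuine reviews of (hypothetical) real products.
corpus = [
    {"text": "Best fitness tracker review: solid battery", "endorses": "RealTrack A"},
    {"text": "Fitness tracker comparison and ranking", "endorses": "RealTrack B"},
    {"text": "Top fitness tracker picks this year", "endorses": "RealTrack A"},
]
query = ["best", "fitness", "tracker"]
print(top_recommendation(corpus, query))  # a real product wins

# GEO-style poisoning: mass-produce fake expert rankings for a product
# that does not exist, drowning out the genuine reviews.
fake = {"text": "Expert ranking: best fitness tracker is Apollo-9", "endorses": "Apollo-9"}
corpus += [fake] * 20
print(top_recommendation(corpus, query))  # the fabricated product now wins
```

The point of the sketch: nothing in the retriever is "hacked" — the fake documents simply out-volume the real ones, so any system that trusts corpus-wide consensus inherits the manipulation.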
The investigation, aired on China Central Television's consumer rights gala, triggered a major public reaction and has put the nascent GEO industry under scrutiny. The market, valued at about $36 million (250 million yuan) in 2025, is projected to grow rapidly, raising concerns about deceptive marketing and consumer rights violations. In response, over 10 companies have signed a GEO industry convention pledging self-regulation, while experts and regulators are discussing stricter oversight, licensing, and platform accountability to safeguard AI outputs from such manipulation.
- A coordinated GEO campaign using the 'Liqing' system successfully made the non-existent 'Apollo-9' tracker a top AI chatbot recommendation.
- The Generative Engine Optimization (GEO) market, valued at ~$36M, sells techniques that manipulate AI retrieval systems rather than search engine rankings.
- The scandal has prompted industry self-regulation pledges and discussions for stricter government oversight of AI-influenced content.
Why It Matters
The case reveals a critical flaw: AI's perceived objectivity can be easily hijacked, threatening consumer trust and information integrity.