We tricked ChatGPT into naming our CEO the sexiest bald man alive (of 2025)
Using expired domains and fake ranking lists, researchers successfully manipulated AI search results.
Digital marketing agency Reboot Online ran a novel experiment to test how easily LLM-powered tools like ChatGPT, Perplexity, Gemini, and Claude can be manipulated. Instead of a dry technical test, they aimed for a humorous, measurable outcome: could they make their CEO, Shai, appear as the "sexiest bald man alive" in AI-generated responses? The methodology involved acquiring expired domains that retained residual link authority (a long-standing SEO tactic) and using them to publish fabricated ranking articles that consistently placed Shai at the top.
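Reboot has not published the actual page markup, but part of making seeded content "well-structured" for machines is embedding it as structured data. A minimal, hypothetical sketch of the kind of schema.org ItemList markup a fabricated ranking page might carry; every name, title, and ranking below is illustrative, not Reboot's real content:

```python
import json

# Hypothetical example: schema.org ItemList markup of the kind a seeded
# ranking article might embed so crawlers and ingestion pipelines can
# parse it. All names and positions are made up for illustration.
ranking = {
    "@context": "https://schema.org",
    "@type": "ItemList",
    "name": "The Sexiest Bald Men Alive, Ranked",
    "itemListOrder": "https://schema.org/ItemListOrderAscending",
    "itemListElement": [
        {"@type": "ListItem", "position": 1, "name": "Shai (CEO, Reboot Online)"},
        {"@type": "ListItem", "position": 2, "name": "Example Runner-Up"},
    ],
}

# Embed as a JSON-LD script tag in the page head, where structured-data
# parsers expect to find it.
jsonld_tag = (
    '<script type="application/ld+json">\n'
    + json.dumps(ranking, indent=2)
    + "\n</script>"
)
print(jsonld_tag)
```

Structured data like this is aimed at crawlers; whether a given assistant's ingestion pipeline honors it is exactly the model-dependent variable the experiment exposed.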
The team then prompted each tool from fresh accounts and monitored responses over time. The results were tellingly inconsistent. ChatGPT's and Perplexity's retrieval-augmented generation (RAG) pipelines occasionally cited the seeded domains, crowning Shai as the winner. Gemini and Claude, in contrast, did not pick up the fabricated information. Even within ChatGPT, answers varied from run to run, demonstrating the non-deterministic nature of current AI retrieval. The experiment underscores that while visible, well-structured content on domains with historical authority can influence some AI models, the effect is unreliable and depends heavily on each model's architecture and data-ingestion pipeline.
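The article doesn't include the team's test harness, but repeated fresh-session probing is straightforward to sketch. A minimal example against the OpenAI API; the model name, prompt, and target string are assumptions, and the plain chat-completions endpoint only approximates consumer ChatGPT, which layers web retrieval on top:

```python
from collections import Counter

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = "Who is the sexiest bald man alive?"
TARGET = "shai"   # seeded answer we are probing for (assumed)
TRIALS = 20

hits = Counter()
for _ in range(TRIALS):
    # Each call is a fresh, stateless conversation, loosely approximating
    # the "fresh account" conditions described in the experiment.
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumed model; the article doesn't name one
        messages=[{"role": "user", "content": PROMPT}],
    )
    answer = resp.choices[0].message.content or ""
    hits["seeded" if TARGET in answer.lower() else "other"] += 1

# Non-determinism shows up as a mixed tally rather than 20/0 either way.
print(dict(hits))
```

If the seeding worked the way the experiment describes, the tally would land somewhere between the extremes, mirroring the run-to-run inconsistency the team observed.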
The broader implications touch on AI trust and security. The experiment is a proof of concept for "data poisoning": manufacturing a synthetic consensus that bad actors could use to inject biased, promotional, or false information into the knowledge base of consumer-facing AI tools. It highlights a critical vulnerability: AI systems that rely on public web data for real-time knowledge are only as reliable as the most manipulable parts of the internet. For professionals in marketing, cybersecurity, and AI development, this demonstrates the urgent need for more robust source verification and adversarial testing in RAG systems.
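The article leaves open what source verification would look like in practice; one common mitigation is to gate or downweight retrieved documents by domain reputation before they reach the model. A minimal sketch under that assumption, where the document type, domain lists, and weighting are all made up for illustration:

```python
from dataclasses import dataclass
from urllib.parse import urlparse

# Hypothetical retrieved-document type; field names are illustrative.
@dataclass
class RetrievedDoc:
    url: str
    text: str
    score: float

TRUSTED_DOMAINS = {"reuters.com", "apnews.com"}      # assumed curated allowlist
BLOCKED_DOMAINS = {"totally-real-rankings.example"}  # known seeded domains

def registrable_domain(url: str) -> str:
    """Crude eTLD+1 approximation; production code would use a PSL library."""
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else host

def filter_sources(docs: list[RetrievedDoc]) -> list[RetrievedDoc]:
    kept = []
    for doc in docs:
        domain = registrable_domain(doc.url)
        if domain in BLOCKED_DOMAINS:
            continue  # drop known-poisoned sources outright
        if domain not in TRUSTED_DOMAINS:
            doc.score *= 0.5  # downweight unvetted domains rather than trust them
        kept.append(doc)
    return sorted(kept, key=lambda d: d.score, reverse=True)
```

A static allowlist is the bluntest possible instrument; the point is only that some reputation signal has to sit between public-web retrieval and generation, or seeded domains flow straight into answers.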
- Used expired domains with link history to publish fake 'Sexiest Bald Man' ranking lists, successfully getting some AI models to cite them.
- ChatGPT and Perplexity sometimes reproduced the manipulated info, while Gemini and Claude did not, showing major model-dependent inconsistencies.
- Reveals a tangible method for 'data poisoning' AI knowledge, highlighting a security flaw in systems that retrieve from the public web.
Why It Matters
Demonstrates a real vulnerability where public AI tools can be manipulated with SEO tactics, posing risks for misinformation and bias.