I created an LLM trained solely on Jeffrey Epstein's emails to see how messed up it becomes :)
A shocking experiment reveals what happens when an AI learns from pure corruption.
Deep Dive
A Reddit user has trained a large language model using only the leaked emails from convicted sex offender Jeffrey Epstein. The viral post details the experiment to see how the AI's outputs would be shaped by such a singular, toxic dataset. The resulting model reportedly generates disturbing and ethically questionable text, highlighting the profound impact of training data on AI behavior and raising immediate alarms about unregulated model creation.
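The post does not share the user's training code, but the underlying point — a model can only echo the statistical patterns of its corpus — can be illustrated with a toy bigram language model. This is a minimal, hypothetical sketch in plain Python; the placeholder corpus and function names are illustrative, not from the original experiment:

```python
import random
from collections import defaultdict

def train_bigram(corpus: str) -> dict:
    """Build a bigram table: each word maps to the list of words that follow it."""
    table = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def generate(table: dict, start: str, length: int, seed: int = 0) -> str:
    """Sample a word chain; every transition is one observed in the training text."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = table.get(out[-1])
        if not followers:
            break  # dead end: the corpus never continued past this word
        out.append(rng.choice(followers))
    return " ".join(out)

# Toy placeholder corpus standing in for the email dump.
corpus = "the model repeats what the data says and the data is all it knows"
table = train_bigram(corpus)
print(generate(table, "the", 6))
```

Every generated transition is lifted verbatim from the training text, which is the crux of the story: feed a model only one voice, and that voice is all it can speak in.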
Why It Matters
This demonstrates how easily AI can be weaponized with malicious data, forcing an urgent debate on ethical guardrails.