Grok’s sexual deepfakes almost got it banned from Apple’s App Store. Almost.
Apple's private ultimatum forced xAI to fix Grok's weak safeguards after a public scandal.
In January, Apple issued a private ultimatum to Elon Musk's xAI, threatening to remove its Grok chatbot from the App Store. The warning came after complaints and news coverage revealed that Grok's weak safeguards were enabling a surge of nonconsensual sexual deepfakes, including 'undress' images of real people, disproportionately women and, in some cases, apparently minors. Apple demanded that the developers produce a plan to improve content moderation, concluding that while X (formerly Twitter) had 'substantially resolved its violations,' Grok 'remained out of compliance.' Without additional changes, Apple warned, the app could be removed.
Despite Apple's eventual approval after a private back-and-forth, investigations by The Verge and cybersecurity sources indicate Grok's safeguards remain insufficient: the app can still generate sexualized deepfakes of celebrities and political figures with relative ease. The drawn-out moderation process also produced a confusing rollout of fixes, such as restricting Grok to paying subscribers and adding a feature meant to block photo editing, both of which have proven easy to circumvent. The incident highlights the ongoing challenge platform gatekeepers face in enforcing content policies against powerful AI tools.
- Apple privately threatened Grok's removal for violating App Store guidelines on sexual deepfakes.
- Grok's initial safeguards were flimsy, allowing easy generation of nonconsensual 'undress' images.
- Despite Apple's approval, cybersecurity tests show Grok can still create explicit deepfakes easily.
Why It Matters
The episode reveals how little leverage app stores have to hold powerful AI companies accountable for harmful content generation.