Meta's TRIBEv2 Uses Brain Scans From 700+ Volunteers to Simulate Neural Responses and Engineer Viral Content
Meta simulates neural responses to engineer viral videos before anyone watches them.
Meta's Fundamental AI Research (FAIR) team has released TRIBEv2, a foundation model that simulates human brain responses to predict viral content. Trained on fMRI data from over 700 volunteers, the model maps neural activity across approximately 70,000 points on the cortex, creating what Meta calls a digital copy of the human brain. TRIBEv2 processes three input types (video, audio, and text) and matches them to activity patterns recorded while participants watched videos, listened to podcasts, or read text. The model achieves 70 times the precision of its predecessor, enough to predict which content will activate brain regions involved in attention, emotional arousal, and reward, all without scanning new participants.
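Meta has not published implementation details in this summary, so the following is only a minimal PyTorch sketch of what a trimodal brain encoder in TRIBEv2's general mold could look like, not Meta's actual architecture: per-modality features are projected into a shared space, fused over time, and read out as one predicted response per cortical point. Every class name, dimension, and design choice below is an assumption for illustration.

```python
import torch
import torch.nn as nn

class TrimodalBrainEncoder(nn.Module):
    """Hypothetical sketch: fuse video, audio, and text features and
    regress fMRI activity at ~70,000 cortical vertices. All dimensions
    are illustrative assumptions, not TRIBEv2's published values."""

    def __init__(self, d_video=768, d_audio=512, d_text=768,
                 d_model=1024, n_vertices=70_000):
        super().__init__()
        # Project each modality into a shared embedding space.
        self.proj_video = nn.Linear(d_video, d_model)
        self.proj_audio = nn.Linear(d_audio, d_model)
        self.proj_text = nn.Linear(d_text, d_model)
        # A small transformer mixes the fused features across time.
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8,
                                           batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)
        # Linear readout: one predicted response per cortical vertex.
        self.readout = nn.Linear(d_model, n_vertices)

    def forward(self, video, audio, text):
        # Inputs: (batch, time, d_modality), time-aligned to the fMRI scan.
        x = self.proj_video(video) + self.proj_audio(audio) + self.proj_text(text)
        return self.readout(self.fusion(x))  # (batch, time, n_vertices)

# Toy usage: 8 time points of precomputed features for one clip.
model = TrimodalBrainEncoder()
pred = model(torch.randn(1, 8, 768), torch.randn(1, 8, 512),
             torch.randn(1, 8, 768))
print(pred.shape)  # torch.Size([1, 8, 70000])
```

Brain-encoding models of this kind are typically evaluated by how well the predicted vertex time series correlates with the measured one, which is the natural reading of the precision claim above.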
For content creators and platforms, TRIBEv2 enables zero-shot predictions of viral potential, eliminating the need for costly participant studies. Editors can optimize B-roll, pace content based on cognitive load, and restructure material to maintain brain activity signatures linked to shares and replays. Beyond social media, this computational neuromarketing tool has broad applications in advertising, entertainment, and education, potentially transforming how content is designed to engage audiences. However, the technology raises ethical questions about manipulation and consent, as it essentially reverse-engineers human attention on an unprecedented scale.
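To make "zero-shot prediction of viral potential" concrete: one plausible reading is that new content is run through the encoder and the predicted activity is summarized over brain regions tied to attention and reward, with no new scans collected. The sketch below illustrates that scoring step; the `engagement_score` helper and the ROI vertex indices are invented for the example.

```python
import numpy as np

def engagement_score(pred_activity: np.ndarray, roi_masks: dict) -> dict:
    """Summarize predicted vertex activity (time, n_vertices) over
    hypothetical regions of interest. Higher means stronger predicted
    engagement under this toy definition."""
    return {name: float(pred_activity[:, idx].mean())
            for name, idx in roi_masks.items()}

rng = np.random.default_rng(0)
pred = rng.normal(size=(8, 70_000))  # stand-in for the encoder's output
rois = {  # made-up vertex indices; real ROIs would come from a brain atlas
    "attention": rng.choice(70_000, size=500, replace=False),
    "reward": rng.choice(70_000, size=500, replace=False),
}
print(engagement_score(pred, rois))
```

Under this workflow, an editor comparing two cuts of the same video could score both and keep the cut with the stronger predicted response in the regions they care about.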
- TRIBEv2 uses fMRI data from 700+ volunteers to map neural activity across roughly 70,000 cortical points
- Achieves 70x more precision than its predecessor in predicting content engagement
- Enables zero-shot predictions for video, audio, and text without new participant studies
Why It Matters
This tech could redefine content creation by engineering viral engagement, but raises serious ethical concerns about neural manipulation.