From dots to faces: Individual differences in visual imagery capacity predict the content of Ganzflicker-induced hallucinations
Researchers used NLP to analyze hallucination descriptions, revealing how your mind's eye shapes what you see.
A research team led by Ana Chkhaidze has published a study using natural language processing (NLP) to decode the content of visual hallucinations. By analyzing free-text descriptions from over 4,000 participants who viewed a rapidly alternating red-and-black display (Ganzflicker), they found a systematic relationship between an individual's visual imagery capacity and the complexity of their hallucinations. The findings provide a novel, data-driven window into the subjective nature of perception.
Using topic modeling on the massive dataset, the researchers found a clear spectrum of experience. Participants who self-reported having strong, vivid visual imagery described seeing complex, naturalistic content such as faces and detailed scenes. In stark contrast, individuals with weak or absent imagery predominantly reported simple geometric patterns like dots and lines. The team further quantified this by applying crowd-sourced sensorimotor norms, revealing that strong imagers used language with richer perceptual associations.
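The two-step analysis described above, clustering free-text reports into topics and then scoring their language against perceptual-strength norms, can be sketched with off-the-shelf tools. This is a minimal illustration, not the study's actual pipeline: the toy reports, the scikit-learn LDA setup, and the `norms` ratings are all invented for demonstration (real sensorimotor norms are crowd-sourced word ratings).

```python
# Hypothetical sketch of the two analysis steps; NOT the study's real data or code.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy free-text reports, mimicking the two ends of the imagery spectrum
docs = [
    "flickering dots and thin lines forming a grid",
    "spinning geometric shapes, checkerboard patterns and zigzag lines",
    "a vivid face emerging from a forest scene with moving figures",
    "detailed landscape with buildings, people and animal faces",
]

# Step 1: bag-of-words counts, then a 2-topic LDA
# (roughly: simple geometric content vs. complex naturalistic content)
vec = CountVectorizer(stop_words="english")
counts = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)  # rows = per-document topic mixtures

# Step 2: score each report's words against perceptual-strength norms.
# These ratings are made up for illustration only.
norms = {"dots": 3.1, "lines": 3.0, "grid": 3.2,
         "face": 4.8, "faces": 4.8, "scene": 4.5, "landscape": 4.6}

def mean_perceptual_strength(text):
    """Average the norm ratings of the words in a report that have one."""
    rated = [norms[w] for w in text.lower().split() if w in norms]
    return sum(rated) / len(rated) if rated else 0.0

scores = [mean_perceptual_strength(d) for d in docs]
```

Under this toy setup, the "complex scene" reports come out with higher mean perceptual strength than the "dots and lines" reports, mirroring the paper's finding that strong imagers used language with richer perceptual associations.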
The study, titled "From dots to faces," suggests these differences may stem from variations in how early visual brain areas coordinate with higher-order regions. By applying computational linguistics tools to a cognitive neuroscience question, the research demonstrates how AI methods can systematically analyze and categorize subjective human experiences that were previously difficult to measure at scale.
- Used NLP topic modeling on 4,000+ hallucination descriptions to categorize content.
- Found strong imagers see complex scenes (faces) while weak imagers see simple patterns (dots, lines).
- Linked findings to potential neural coordination differences between visual brain regions.
Why It Matters
Provides an AI-driven framework for quantifying subjective experience, with implications for understanding perception, mental imagery disorders, and even AI model interpretability.