An Approach to Simultaneous Acquisition of Real-Time MRI Video, EEG, and Surface EMG for Articulatory, Brain, and Muscle Activity During Speech Production
Researchers overcome MRI interference to record EEG, EMG, and real-time MRI video simultaneously for the first time.
A research collaboration led by Jihwan Lee and Shrikanth Narayanan of the University of Southern California, with 18 other authors, has achieved a significant advance in speech neuroscience: the first demonstrated simultaneous acquisition of real-time (dynamic) MRI video, electroencephalography (EEG), and surface electromyography (EMG). This tri-modal system captures the entire speech production chain—from neural planning in the brain (EEG), to muscle activation (EMG), to the physical articulatory movements (real-time MRI)—in a single, synchronized recording session. The work, published on arXiv, addresses the long-standing challenge that the acoustic speech signal alone does not reveal its underlying neurophysiological causes.
This achievement required solving substantial technical hurdles, chief among them the intense electromagnetic interference generated by the MRI scanner's gradient switching, which traditionally corrupts EEG and EMG signals, along with myogenic and motion artifacts from the speaking participant. The team's core contribution is a novel artifact suppression pipeline engineered specifically for this multimodal environment. Once fully developed, the framework is poised to generate large, time-aligned datasets that map neural activity to muscle movements to sound. The implications are profound for understanding speech disorders, advancing neuroprosthetics, and creating next-generation brain-computer interfaces (BCIs) that could one day restore or augment communication by decoding the brain's intent with unprecedented fidelity.
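The article does not detail the team's actual pipeline, but a common baseline for removing the scanner's periodic gradient artifact from EEG/EMG is average artifact subtraction (AAS): because the gradient interference repeats nearly identically with each acquisition, averaging epochs locked to the scanner trigger yields a template that can be subtracted from each epoch. The sketch below is illustrative only; the function name and parameters are hypothetical, not from the paper.

```python
import numpy as np

def average_artifact_subtraction(signal, trigger_idx, epoch_len, n_avg=20):
    """Illustrative AAS: remove a periodic MRI gradient artifact from one
    EEG/EMG channel by subtracting a sliding-window average template.

    signal      : 1-D array of raw samples from one channel
    trigger_idx : sample indices where each scanner acquisition begins
    epoch_len   : samples per artifact epoch (e.g. TR * sampling rate)
    n_avg       : number of neighbouring epochs averaged into the template
    """
    cleaned = signal.astype(float).copy()
    for i, start in enumerate(trigger_idx):
        if start + epoch_len > len(signal):
            break  # final partial epoch: leave untouched
        # Average a window of neighbouring epochs centred on epoch i;
        # uncorrelated neural/muscle activity cancels, the artifact remains.
        lo = max(0, i - n_avg // 2)
        hi = min(len(trigger_idx), lo + n_avg)
        epochs = [signal[t:t + epoch_len]
                  for t in trigger_idx[lo:hi]
                  if t + epoch_len <= len(signal)]
        template = np.mean(epochs, axis=0)
        cleaned[start:start + epoch_len] -= template
    return cleaned
```

In practice, published AAS variants add trigger-jitter correction, template interpolation, and adaptive noise cancellation; this minimal version only conveys the core idea of trigger-locked template subtraction.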
- First-ever simultaneous recording of real-time MRI, EEG, and surface EMG captures the full speech production pipeline.
- Novel artifact suppression pipeline overcomes critical MRI-induced electromagnetic interference that corrupts neural and muscle signals.
- The aligned dataset maps brain activity to muscle activation to articulator movement, a goldmine for speech neuroscience and BCIs.
Why It Matters
Provides a complete map from thought to speech, accelerating treatments for communication disorders and next-gen silent BCIs.