One Brain, Omni Modalities: Towards Unified Non-Invasive Brain Decoding with Large Language Models
A new LLM architecture bridges the gap between non-invasive brain-signal modalities, achieving higher decoding accuracy by fusing electromagnetic and metabolic data.
A research team led by Changli Tang has developed NOBEL (Neuro-Omni-modal Brain-Encoding Large Language Model), a breakthrough architecture that unifies traditionally isolated non-invasive brain-recording modalities. By integrating high-frequency electromagnetic signals (EEG/MEG) and low-frequency metabolic signals (fMRI) within an LLM's semantic embedding space, the system bridges the extreme discrepancy between the two signal families, a step toward the holistic interpretation of brain activity that fragmented, modality-specific analysis pipelines have so far prevented.
The NOBEL architecture pairs a unified encoder for EEG/MEG with a novel dual-path strategy for fMRI, aligning heterogeneous brain signals and external sensory stimuli in a shared token space before an LLM serves as the universal decoding backbone. Extensive evaluations show the model is a robust generalist across standard single-modal tasks, and that synergistic fusion of electromagnetic and metabolic signals yields higher decoding accuracy than unimodal baselines. The system also exhibits strong stimulus-aware decoding, interpreting visual semantics from multi-subject fMRI data on the NSD and HAD datasets while uniquely leveraging direct stimulus inputs to verify causal links between sensory signals and neural responses.
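The paper's exact encoder designs are not detailed here, but a minimal PyTorch sketch can illustrate the general shape of the pipeline. Everything below is an assumption for illustration: the module names (`SpectroTemporalEncoder`, `DualPathFMRIEncoder`), the dimensions, and the choice of temporal-conv and linear projections are placeholders, not NOBEL's actual implementation.

```python
import torch
import torch.nn as nn

D_MODEL = 1024  # illustrative LLM embedding width, not from the paper


class SpectroTemporalEncoder(nn.Module):
    """Hypothetical unified encoder for high-frequency EEG/MEG signals:
    a temporal conv patchifies the raw recording into tokens, and a
    small Transformer contextualizes them in the shared token space."""

    def __init__(self, n_channels: int, n_layers: int = 4):
        super().__init__()
        # Downsample the fast time axis into patch tokens of width D_MODEL.
        self.patchify = nn.Conv1d(n_channels, D_MODEL, kernel_size=32, stride=16)
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time) -> (batch, n_tokens, D_MODEL)
        tokens = self.patchify(x).transpose(1, 2)
        return self.encoder(tokens)


class DualPathFMRIEncoder(nn.Module):
    """Hypothetical dual-path fMRI encoder: one path embeds the slow
    metabolic signal itself, a second path embeds the external stimulus
    (e.g. image features), letting the decoder check stimulus-response
    links. Both paths emit tokens in the same D_MODEL space."""

    def __init__(self, n_voxels: int, d_stimulus: int):
        super().__init__()
        self.signal_path = nn.Linear(n_voxels, D_MODEL)      # metabolic path
        self.stimulus_path = nn.Linear(d_stimulus, D_MODEL)  # stimulus path

    def forward(self, fmri: torch.Tensor, stimulus: torch.Tensor) -> torch.Tensor:
        sig = self.signal_path(fmri)         # (batch, n_TRs, D_MODEL)
        stim = self.stimulus_path(stimulus)  # (batch, n_stim_tokens, D_MODEL)
        return torch.cat([sig, stim], dim=1)


if __name__ == "__main__":
    eeg = torch.randn(2, 64, 1024)    # (batch, channels, time samples)
    fmri = torch.randn(2, 8, 4096)    # (batch, TRs, voxel/ROI features)
    stim = torch.randn(2, 16, 768)    # e.g. precomputed stimulus features
    eeg_tokens = SpectroTemporalEncoder(n_channels=64)(eeg)
    fmri_tokens = DualPathFMRIEncoder(n_voxels=4096, d_stimulus=768)(fmri, stim)
    print(eeg_tokens.shape, fmri_tokens.shape)  # both end in D_MODEL
```

The design point the sketch preserves is that every path, electromagnetic, metabolic, and stimulus, terminates in the same `D_MODEL`-sized token space, which is what lets a single LLM backbone consume them interchangeably.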
- NOBEL unifies EEG, MEG, and fMRI signals in LLM semantic space using a dual-path fMRI strategy
- Model achieves higher decoding accuracy through synergistic fusion than unimodal approaches (see the fusion sketch after this list)
- Demonstrates strong stimulus-aware decoding on NSD and HAD datasets with causal link verification
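To make the fusion claim concrete, below is a hedged sketch of how tokens from both signal families could prefix a text prompt for the LLM backbone. `fuse_and_decode` is a hypothetical helper, and `llm` is assumed to be a HuggingFace-style causal LM that accepts `inputs_embeds`; NOBEL's actual decoding interface may differ.

```python
import torch

def fuse_and_decode(llm, eeg_tokens: torch.Tensor,
                    fmri_tokens: torch.Tensor,
                    prompt_embeds: torch.Tensor) -> torch.Tensor:
    # Synergistic fusion: because both encoders emit tokens in the same
    # semantic space, electromagnetic and metabolic tokens can simply be
    # concatenated as a soft prefix ahead of the text prompt.
    prefix = torch.cat([eeg_tokens, fmri_tokens, prompt_embeds], dim=1)
    return llm(inputs_embeds=prefix).logits

# A unimodal baseline would drop one modality from the prefix, e.g.
# torch.cat([eeg_tokens, prompt_embeds], dim=1); the paper reports that
# the fused prefix decodes more accurately than such baselines.
```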
Why It Matters
Advances brain-computer interfaces and neuroscience research by unifying traditionally fragmented signal modalities within a single decoding model.