Audio & Speech

Dementia classification from spontaneous speech using wrapper-based feature selection

A new study classifies Alzheimer's disease from spontaneous speech, using minimal computing power

Deep Dive

A team of researchers from the University of Jyväskylä in Finland has published a new study on arXiv demonstrating a machine learning framework that classifies dementia from spontaneous speech. The approach uses acoustic features extracted from entire audio recordings of picture description tasks, rather than only speech-active segments, reducing the number of feature vectors and improving computational efficiency without sacrificing classification performance. The study analyzed data from the ADReSS and Pitt Corpus datasets, which include recordings from cognitively healthy individuals and people with Alzheimer's disease.

Using classifier-based wrapper feature selection, the team identified diagnostically relevant acoustic characteristics. Among the models tested, the Extreme Minimal Learning Machine achieved competitive classification accuracy at substantially lower computational cost, a consequence of its distance-based formulation and simple training procedure. The results suggest the framework is computationally efficient, interpretable, and well suited as a supportive tool for speech-based dementia assessment, addressing the need for noninvasive, cost-effective, and scalable methods for detecting cognitive deficiencies in aging populations.
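Wrapper-based feature selection scores candidate feature subsets by the accuracy of the classifier itself, rather than by a model-independent statistic. A minimal sketch of one common wrapper strategy, greedy forward selection: the nearest-centroid classifier, the leave-one-out criterion, and the toy data below are illustrative assumptions, not the study's actual setup.

```python
# Illustrative wrapper-based forward feature selection (not the authors' code).
# The wrapped classifier is a simple nearest-centroid model; the wrapper's
# criterion is leave-one-out classification accuracy.

def nearest_centroid_predict(train_X, train_y, x, features):
    # Compute per-class centroids over the selected feature subset,
    # then assign x to the class with the closest centroid.
    centroids = {}
    for label in set(train_y):
        rows = [r for r, lab in zip(train_X, train_y) if lab == label]
        centroids[label] = [sum(r[f] for r in rows) / len(rows) for f in features]
    def sq_dist(label):
        c = centroids[label]
        return sum((x[f] - c[i]) ** 2 for i, f in enumerate(features))
    return min(centroids, key=sq_dist)

def loo_accuracy(X, y, features):
    # Leave-one-out accuracy of the wrapped classifier on a feature subset.
    hits = 0
    for i in range(len(X)):
        train_X, train_y = X[:i] + X[i + 1:], y[:i] + y[i + 1:]
        if nearest_centroid_predict(train_X, train_y, X[i], features) == y[i]:
            hits += 1
    return hits / len(X)

def forward_select(X, y):
    # Greedy wrapper: repeatedly add the feature whose inclusion most
    # improves leave-one-out accuracy; stop when nothing improves.
    selected = []
    remaining = list(range(len(X[0])))
    while remaining:
        best = max(remaining, key=lambda f: loo_accuracy(X, y, selected + [f]))
        if selected and loo_accuracy(X, y, selected + [best]) <= loo_accuracy(X, y, selected):
            break  # no further improvement
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy data: feature 0 separates the classes; feature 1 is noise.
X = [[0.1, 5.0], [0.2, 1.0], [0.15, 3.0], [0.9, 4.0], [1.0, 2.0], [0.95, 0.5]]
y = ["control", "control", "control", "AD", "AD", "AD"]
print(forward_select(X, y))  # selects only the informative feature: [0]
```

Because the selection criterion is the classifier's own accuracy, the chosen subset is tailored to the model that will ultimately be deployed, which is what makes wrapper methods attractive for identifying diagnostically relevant features.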

Key Points
  • Used openSMILE toolkit to extract acoustic features from entire recordings, not just speech-active segments
  • Extreme Minimal Learning Machine achieved high accuracy with low computational cost
  • Analyzed ADReSS and Pitt Corpus datasets with picture description tasks from Alzheimer's patients and healthy controls
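The article does not include the model's implementation. As background, here is a minimal sketch of an EMLM-style classifier under the commonly described formulation: Euclidean distances to a set of reference points serve as the feature map, and the output weights come from a single regularized linear solve, which is what keeps training cheap. The toy data, regularization value, and the choice of the training samples as reference points are illustrative assumptions, not the study's settings.

```python
import numpy as np

def fit_emlm(X, Y, refs, reg=1e-3):
    # Distance-based feature map: each sample is represented by its
    # Euclidean distances to the reference points.
    H = np.linalg.norm(X[:, None, :] - refs[None, :, :], axis=2)
    # Ridge-regression output weights via one linear solve -- the
    # closed-form step behind the model's low training cost.
    W = np.linalg.solve(H.T @ H + reg * np.eye(H.shape[1]), H.T @ Y)
    return W

def predict_emlm(X, refs, W):
    H = np.linalg.norm(X[:, None, :] - refs[None, :, :], axis=2)
    return (H @ W).argmax(axis=1)

# Toy two-class data standing in for acoustic feature vectors;
# labels are one-hot encoded for the linear solve.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, size=(20, 4)),
               rng.normal(1.5, 0.3, size=(20, 4))])
y = np.array([0] * 20 + [1] * 20)
Y = np.eye(2)[y]

W = fit_emlm(X, Y, refs=X)   # training samples double as reference points
pred = predict_emlm(X, X, W)
print((pred == y).mean())
```

Unlike iterative gradient-based training, everything above reduces to distance computations and one matrix solve, which is consistent with the low computational cost the study reports.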

Why It Matters

This could enable low-cost, scalable dementia screening using just a voice recording, aiding early diagnosis.