Research & Papers

Feature Anchors for Time-Series Sensor-Based Human Activity Recognition

New model boosts activity-recognition mF1 scores by up to 14.6 points

Deep Dive

Researchers Ruijie Yao, Chenhang Li, Danyang Zhuo, Tingjun Chen, and Xiaoyue Ni from Duke University and other institutions have introduced TCNet (Temporal Conditioning Network for Feature Anchors), a novel architecture for wearable Human Activity Recognition (HAR). The key innovation is treating handcrafted time-series features (TSFs) as 'feature anchors'—explicit intermediate representations that remain inside the model and are adjusted by neural context, rather than being discarded as fixed preprocessing outputs. TCNet extracts these anchors, encodes complementary time-domain and frequency-domain context from raw IMU windows, and predicts context-conditioned scale, bias, and gating parameters to modulate anchor groups directly in feature space.
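
To make the anchor-modulation idea concrete, here is a minimal sketch, not the authors' code: handcrafted TSFs (here, per-channel mean, standard deviation, and spectral energy — illustrative choices, not necessarily TCNet's feature set) are extracted from a raw IMU window and then rescaled, shifted, and gated by context-conditioned parameters. In TCNet those parameters come from learned time- and frequency-domain context encoders; the placeholders below merely show the mechanism, and all function names are hypothetical.

```python
import numpy as np

def extract_anchors(window):
    """Handcrafted TSFs per channel: mean, std, and spectral energy.
    `window` is a (T, C) array of raw IMU samples; returns a flat
    anchor vector of length 3*C. Feature choice is illustrative."""
    mean = window.mean(axis=0)
    std = window.std(axis=0)
    spec_energy = (np.abs(np.fft.rfft(window, axis=0)) ** 2).mean(axis=0)
    return np.concatenate([mean, std, spec_energy])

def modulate(anchors, scale, bias, gate):
    """FiLM-style modulation: context-conditioned scale and bias,
    plus a sigmoid gate controlling how much each anchor contributes."""
    g = 1.0 / (1.0 + np.exp(-gate))  # gate squashed into (0, 1)
    return g * (scale * anchors + bias)

rng = np.random.default_rng(0)
window = rng.standard_normal((128, 6))   # 128 samples, 6 IMU channels
anchors = extract_anchors(window)        # 18 anchor values stay in-model

# In TCNet, scale/bias/gate would be predicted per anchor group by the
# context encoders; these constants are stand-ins of the right shape.
scale = np.ones_like(anchors)
bias = np.zeros_like(anchors)
gate = np.full_like(anchors, 4.0)        # large gate: anchors pass through

out = modulate(anchors, scale, bias, gate)
```

The point of the mechanism is that the handcrafted features are never frozen: the same anchor can be amplified, shifted, or effectively switched off depending on the neural context of the current window.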

On five standard HAR benchmarks, TCNet achieves strong results, including 70.2% mF1 on USC-HAD, 85.1% mF1 on Daphnet, 93.9% mF1 on MHealth, and 94.5% mF1 on PAMAP2. Relative to the prior state-of-the-art rTsfNet, it improves by 4.5 points on USC-HAD, 14.6 points on Daphnet, and 6.5 points on MHealth. Ablation studies confirm that gains come primarily from anchor guidance rather than simple branch fusion, and feature-space analyses reveal that several discriminative TSF families are not reliably accessible in standard latent representations. The code is available on GitHub.

Key Points
  • TCNet achieves 94.5% mF1 on PAMAP2 and 93.9% mF1 on MHealth
  • Outperforms prior SOTA rTsfNet by 4.5 to 14.6 points across three datasets
  • Uses handcrafted features as explicit, adaptable anchors instead of fixed preprocessing

Why It Matters

Bridges the gap between interpretable handcrafted features and adaptable deep learning for wearable sensors.