Research & Papers

A Taxonomy and Resolution Strategy for Client-Level Disagreements in Federated Learning

Rosendal and Oprescu's multi-track approach guarantees strict client exclusion in federated learning.

Deep Dive

Researchers Daan Rosendal and Ana Oprescu, publishing in IEEE BigData 2025, tackle a critical gap in Federated Learning (FL): the assumption of unconditional collaboration. Their paper introduces a taxonomy of 'client-level disagreements'—scenarios where clients must exclude each other for strategic, regulatory, or competitive reasons. To resolve these, they propose a multi-track resolution strategy in which the server creates and manages isolated model update paths ('tracks'), guaranteeing strict client exclusion while preventing cross-contamination and unfairness.
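The core idea of isolating disagreeing clients onto separate tracks can be sketched as a constraint-satisfaction step on the server. The greedy scheme, the function name, and the client names below are illustrative assumptions, not the paper's actual algorithm:

```python
# Hypothetical sketch of server-side track assignment: each disagreement is a
# pair of clients that must never contribute to the same model track. A greedy
# pass places every client on the lowest-numbered track containing none of its
# excluded peers, so no disagreeing pair ever shares an update path.

def assign_tracks(clients, disagreements):
    """Map each client to a track id so that no disagreeing pair shares a track."""
    excluded = {c: set() for c in clients}
    for a, b in disagreements:
        excluded[a].add(b)
        excluded[b].add(a)

    tracks = {}  # client -> track id
    for client in clients:
        # Track ids already claimed by this client's excluded peers.
        taken = {tracks[peer] for peer in excluded[client] if peer in tracks}
        track = 0
        while track in taken:
            track += 1
        tracks[client] = track
    return tracks


assignment = assign_tracks(
    clients=["bank_a", "bank_b", "bank_c", "regulator"],
    disagreements=[("bank_a", "bank_b"), ("bank_b", "bank_c")],
)
# bank_a and bank_b land on different tracks, as do bank_b and bank_c.
```

A per-round pass like this runs in time linear in the number of clients and constraints, which is consistent with the sub-millisecond server-side overhead the evaluation reports.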

Empirical evaluation across 34 scenarios using the MNIST and N-CMAPSS datasets validates the approach for permanent, temporal, and overlapping disagreement patterns. The server-side algorithm adds negligible overhead (<1ms per round) even under heavy load, while a submodel reuse strategy mitigates the extra client-side training cost of participating in multiple tracks. This work enhances FL's practical applicability for policy compliance and strategic control.
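One way such submodel reuse could work on the client side is to cache the expensive local training step once per round and reuse the result for every track the client belongs to. This is a minimal caching sketch under that assumption; the paper's actual reuse mechanism may share parameters at a finer granularity:

```python
# Illustrative sketch: a client on several tracks runs the costly local
# training step once per round and reuses the cached result per track,
# instead of retraining from scratch for every track it participates in.
# Class and method names are assumptions for illustration only.

class Client:
    def __init__(self, name):
        self.name = name
        self.training_runs = 0   # counts expensive local training invocations
        self._cached_update = None
        self._cached_round = None

    def local_train(self, round_id):
        """Expensive step: executed at most once per round."""
        self.training_runs += 1
        return f"update({self.name}, round {round_id})"

    def update_for_track(self, track_id, round_id):
        # Recompute only when a new round starts; otherwise reuse the cache.
        if self._cached_round != round_id:
            self._cached_update = self.local_train(round_id)
            self._cached_round = round_id
        return self._cached_update


client = Client("bank_a")
updates = [client.update_for_track(t, round_id=7) for t in (0, 1, 2)]
# Three tracks requested updates, but local training ran only once.
```

Under this sketch, client-side cost stays roughly constant in the number of tracks, matching the paper's finding that submodel reuse keeps multi-track participation affordable.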

Key Points
  • Server-side overhead <1ms per round under heavy load.
  • Validated across 34 scenarios with MNIST and N-CMAPSS datasets.
  • Submodel reuse strategy reduces client-side training load.

Why It Matters

Enables scalable FL in competitive or regulated environments where policy compliance is non-negotiable.