A Privacy-Preserving Machine Learning Framework for Edge Intelligence: An Empirical Analysis
Differential privacy keeps speed but can slash accuracy by up to 35% on complex models.
Researchers Quoc Lap Trieu, Bahman Javadi, and Jim Basilakis have published a new privacy-preserving machine learning (PPML) framework specifically designed for Edge Intelligence (EI) applications. The framework comprises a four-layer system architecture and supports three leading privacy approaches: Differential Privacy (DP), Secure Multi-party Computation (SMC), and Fully Homomorphic Encryption (FHE). The team evaluated these methods on real hardware and trace-based simulations, measuring model accuracy, response time, and energy consumption across various neural network architectures.
Key findings reveal stark trade-offs. DP offers near-plaintext throughput and latency but suffers accuracy degradation—up to 35% on complex models like AlexNet, though under 18% on simpler ones like LeNet for the FordA dataset. SMC's performance is network-bound: doubling link capacity from 250 to 500 Mbps cuts latency by roughly 30%. FHE proved the most computationally intensive, with response times approximately 1000 times slower than DP. The study also examines privacy-utility-extractability trade-offs, noting that DP reduces an attacker's data efficiency in model-stealing attacks, while SMC and FHE require additional output controls to achieve similar resistance.
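The DP accuracy cost stems from the mechanism itself: calibrated noise is injected so that any single record's influence on the output is bounded, and stronger privacy (smaller epsilon) means more noise and more utility loss. A minimal sketch of the classic Laplace mechanism illustrates this relationship; the function name and parameters below are our own illustration, not the paper's implementation.

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng):
    # Release `value` with epsilon-DP by adding Laplace noise
    # with scale = sensitivity / epsilon (standard Laplace mechanism).
    return value + rng.laplace(0.0, sensitivity / epsilon)

rng = np.random.default_rng(0)
# Weak privacy (large epsilon) -> small noise; strong privacy -> large noise.
weak   = [laplace_mechanism(0.0, 1.0, 10.0, rng) for _ in range(10_000)]
strong = [laplace_mechanism(0.0, 1.0, 0.1, rng) for _ in range(10_000)]
print(np.std(weak) < np.std(strong))  # True: stronger privacy, noisier output
```

The same privacy-utility tension carries over to DP training of neural networks, which is consistent with the larger accuracy drops the authors report on deeper models.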
- DP retains throughput close to plaintext but accuracy falls by up to 35% on complex models like AlexNet
- SMC latency depends on network: increasing link from 250 to 500 Mbps reduces latency by about 30%
- FHE incurs roughly 1000x response time increase compared to DP, making it impractical for real-time edge tasks
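The SMC numbers above are consistent with a simple additive latency model: total response time is local compute plus network transfer, so doubling the link only halves the network term. A toy calculation (our illustrative model, not the paper's simulator; the 0.4 s compute and 150 Mbit payload figures are assumed) shows how a 2x bandwidth increase yields roughly a 30% end-to-end reduction when transfer dominates:

```python
def smc_latency(compute_s, payload_mbit, link_mbps):
    # Toy model: latency = local compute + time to move SMC traffic.
    return compute_s + payload_mbit / link_mbps

# Assumed round: 0.4 s compute, 150 Mbit of secret-shared traffic.
base = smc_latency(0.4, 150, 250)  # 0.4 + 0.6 = 1.0 s
fast = smc_latency(0.4, 150, 500)  # 0.4 + 0.3 = 0.7 s
print(round(1 - fast / base, 2))   # 0.3 -> ~30% latency reduction
```

The compute term puts a floor on SMC latency, which is why bandwidth upgrades give diminishing returns once transfer no longer dominates.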
Why It Matters
Guides developers in balancing privacy, accuracy, and speed for real-world edge AI deployments.