Research & Papers

PowerLens: Taming LLM Agents for Safe and Personalized Mobile Power Management

A new multi-agent system uses LLMs to personalize Android power management, achieving 81.7% action accuracy.

Deep Dive

A research team led by Xingyu Feng and Huanqi Yang has developed PowerLens, a system that uses Large Language Models (LLMs) to personalize mobile power management on Android devices. Unlike static rules or coarse-grained heuristics, PowerLens employs a multi-agent architecture that bridges the semantic gap between user activities and system parameters: it recognizes user context from UI semantics and generates holistic power policies across 18 device parameters, enabling zero-shot, context-aware policy generation that adapts to individual preferences through implicit feedback.
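The shape of context-aware policy generation can be sketched as follows. This is a minimal illustration, not the paper's implementation: the parameter names, context labels, and settings are hypothetical stand-ins, and a simple lookup substitutes for the LLM's commonsense reasoning over UI semantics.

```python
# Hypothetical subset of the 18 tunable parameters (names and defaults
# are illustrative assumptions, not taken from the paper).
DEFAULT_POLICY = {
    "cpu_governor": "schedutil",
    "screen_brightness_pct": 60,
    "refresh_rate_hz": 120,
    "network_mode": "5g",
}

def generate_policy(ui_context: str) -> dict:
    """Sketch of zero-shot, context-aware policy generation.

    A real system would prompt an LLM with the recognized UI semantics;
    here a hard-coded mapping stands in for the model's reasoning.
    """
    policy = dict(DEFAULT_POLICY)
    if ui_context == "video_playback":
        # Sustained decode workload: cap refresh rate, relax CPU boosting.
        policy.update(refresh_rate_hz=60, cpu_governor="powersave")
    elif ui_context == "ebook_reading":
        # Mostly static screen: low refresh rate and brightness suffice.
        policy.update(refresh_rate_hz=60, screen_brightness_pct=40,
                      network_mode="lte")
    return policy
```

The point of generating a whole policy dictionary at once, rather than toggling one knob at a time, is that it lets the system reason holistically about how the parameters interact for a given activity.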

PowerLens incorporates a PDL-based constraint framework that verifies every action before execution, ensuring strong safety guarantees. A two-tier memory system learns individualized preferences from implicit user overrides through confidence-based distillation, requiring no explicit configuration and converging within 3-5 days. Extensive experiments on rooted Android devices show PowerLens achieves 81.7% action accuracy and 38.8% energy savings over stock Android, outperforming both rule-based and LLM-based baselines. The system itself consumes only 0.5% of daily battery capacity while delivering high user satisfaction and fast preference convergence.
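The two mechanisms above can be illustrated with a short sketch: a pre-execution safety check in the spirit of the constraint framework, and a preference memory that distills repeated implicit overrides into stable rules once they cross a confidence threshold. All ranges, parameter names, and the promotion threshold are assumptions for illustration, not values from the paper.

```python
# Assumed safe ranges per parameter; a PDL-style framework would declare
# such constraints formally and check every action against them.
SAFE_RANGES = {
    "screen_brightness_pct": (10, 100),
    "refresh_rate_hz": (60, 120),
}

def verify_action(param: str, value: float) -> bool:
    """Reject any proposed setting outside its declared safe range."""
    if param not in SAFE_RANGES:
        return False  # unknown parameter: fail closed
    lo, hi = SAFE_RANGES[param]
    return lo <= value <= hi

class PreferenceMemory:
    """Two-tier sketch: raw overrides accumulate per-value counts, and a
    preference observed often enough is distilled into a stable rule."""

    PROMOTE_AT = 3  # assumed confidence threshold

    def __init__(self):
        self.observations = {}  # (context, param) -> {value: count}
        self.distilled = {}     # (context, param) -> preferred value

    def record_override(self, context: str, param: str, value) -> None:
        counts = self.observations.setdefault((context, param), {})
        counts[value] = counts.get(value, 0) + 1
        if counts[value] >= self.PROMOTE_AT:
            self.distilled[(context, param)] = value
```

Counting repeated overrides before promoting them is one simple way to realize confidence-based distillation: a single correction may be noise, but the same correction in the same context several times is treated as a learned preference.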

The research demonstrates how LLMs' commonsense reasoning can be effectively "tamed" for practical system optimization tasks, moving beyond traditional AI applications to address real-world energy efficiency challenges. By learning from implicit user behavior rather than requiring explicit configuration, PowerLens represents a significant advancement in personalized system optimization that could extend beyond power management to other adaptive system behaviors.

Key Points
  • Achieves 38.8% energy savings over stock Android with 81.7% action accuracy
  • Learns user preferences through implicit feedback, converging within 3-5 days without explicit configuration
  • Uses multi-agent LLM architecture to adjust 18 device parameters based on UI semantics and user context

Why It Matters

Demonstrates practical LLM application for system optimization, potentially extending battery life significantly through personalized, adaptive management.