Agentic AI-Based Joint Computing and Networking via Mixture of Experts and Large Language Models
New framework uses LLMs to dynamically select and combine specialized optimization experts.
Future 6G mobile networks will rely on diverse, specialized optimization experts for tasks like resource allocation and delay minimization. But orchestrating these experts based on high-level operator goals remains a challenge. Researchers from multiple institutions propose an agentic AI framework that combines mixture-of-experts (MoE) architectures with large language models (LLMs). The LLM serves as a semantic gate, reasoning over human-readable network intents (e.g., 'minimize delay') and dynamically selecting the right combination of optimization agents. The framework is model-agnostic and bridges high-level objectives with low-level resource allocation decisions, enabling flexible optimization across heterogeneous conditions.
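The gating idea can be illustrated with a minimal sketch. Everything here is hypothetical: the `Expert` class, the keyword-based `semantic_gate` (a stand-in for the LLM's reasoning over intents), and the toy expert library are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of LLM-as-semantic-gate expert selection.
# All names (Expert, semantic_gate, agentic_moe) are illustrative
# assumptions, not taken from the paper.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Expert:
    name: str
    objective: str                  # e.g. "delay", "throughput", "fairness"
    robust: bool                    # tuned for worst-case conditions?
    solve: Callable[[dict], Dict[str, float]]  # state -> per-user allocation

def semantic_gate(intent: str, library: List[Expert]) -> List[Expert]:
    """Stand-in for the LLM gate: map a human-readable intent to experts.
    A real system would prompt an LLM to reason over the intent text."""
    intent = intent.lower()
    chosen = [e for e in library if e.objective in intent]
    if "robust" in intent or "worst-case" in intent:
        chosen = [e for e in chosen if e.robust]
    return chosen or library        # fall back to the full library

def agentic_moe(intent: str, state: dict,
                library: List[Expert]) -> Dict[str, float]:
    """Combine the allocations proposed by the gated-in experts
    (here: a simple average over the selected experts)."""
    experts = semantic_gate(intent, library)
    combined: Dict[str, float] = {}
    for e in experts:
        for user, share in e.solve(state).items():
            combined[user] = combined.get(user, 0.0) + share / len(experts)
    return combined

# Toy library: each expert proposes a bandwidth share per user.
library = [
    Expert("delay-min", "delay", False, lambda s: {"u1": 0.7, "u2": 0.3}),
    Expert("fair-share", "fairness", False, lambda s: {"u1": 0.5, "u2": 0.5}),
]
alloc = agentic_moe("minimize delay", {}, library)
print(alloc)  # only the delay expert is gated in: {'u1': 0.7, 'u2': 0.3}
```

The keyword match is a deliberate simplification; the point of the framework is that an LLM replaces such brittle rules, reasoning over free-form operator intents before composing the experts' low-level decisions.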
As a concrete test, the team applied the framework to a joint communication and computing network. They built a library of experts covering throughput, fairness, and delay-driven objectives under both regular and robust operating conditions. Numerical simulations showed that the agentic MoE consistently came close to the best expert combination found by exhaustive search, while outperforming any single expert. The results suggest that LLM-driven orchestration can make 6G networks more adaptive and efficient without requiring manual tuning for every scenario.
- LLM acts as a semantic gate to interpret operator intent and dynamically compose optimization agents.
- Framework is model-agnostic, bridging human-readable network goals with low-level resource allocation.
- Numerical results approach the best exhaustively searched expert combination, beating individual experts on delay and throughput.
Why It Matters
This could enable fully autonomous 6G networks that adapt to operator goals in real time without manual reconfiguration.