SCION: Size-aware Policy Orchestration for Nonstationary Object Caches (Long Paper Version)
New lightweight framework reduces cache misses by 20% across 30 traces
Object caches in cloud and edge services face heterogeneous, nonstationary workloads and throughput constraints. Recent non-ML policies like SIEVE and S3-FIFO set strong baselines, but no single policy is optimal across all workload regimes. Enter SCION, a lightweight policy-orchestration framework developed by Qizhi Wang. SCION uses a tiny workload fingerprint computed off the critical path—tracking short-prefix statistics of object size, cacheability, reuse, and cache size—then applies an offline-trained linear selector to choose among six deployable policies: GDSF, S3-FIFO, SIEVE, LHD, W-TinyLFU-AV, and DynamicAdaptiveClimb. A simpler variant, SCION-P90, uses only a p90 threshold.
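The selection mechanism described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's code: the feature names, the weight layout, and the argmax scoring are assumptions about what "short-prefix statistics plus an offline-trained linear selector" could look like in practice.

```python
# Hypothetical sketch of SCION-style policy selection.
# Feature choices and weight layout are assumptions, not the paper's API.

POLICIES = ["GDSF", "S3-FIFO", "SIEVE", "LHD", "W-TinyLFU-AV",
            "DynamicAdaptiveClimb"]

def fingerprint(prefix, cache_size):
    """Tiny feature vector from a short request prefix:
    object size (p90), cacheability rate, reuse fraction, cache size."""
    n = len(prefix)
    sizes = sorted(req["size"] for req in prefix)
    p90_size = sizes[int(0.9 * (n - 1))]                     # nearest-rank p90
    cacheable_frac = sum(req["cacheable"] for req in prefix) / n
    reuse_frac = 1.0 - len({req["key"] for req in prefix}) / n
    return [float(p90_size), cacheable_frac, reuse_frac, float(cache_size)]

def select_policy(features, weights, biases):
    """Offline-trained linear selector: score each policy, pick the argmax.
    `weights` is one row of coefficients per policy; `biases` one scalar each."""
    scores = [sum(w * x for w, x in zip(row, features)) + b
              for row, b in zip(weights, biases)]
    return POLICIES[scores.index(max(scores))]
```

Because the fingerprint is a handful of counters over a short prefix and the selector is a single dot product per policy, both can run off the critical path, leaving per-request cache operations untouched.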
In CPU-only, trace-driven evaluation on 30 public object-cache traces and a separate HR-Cache simulator subset, the SCION prototype AUTO improves the cacheable-only object miss ratio over SIEVE on a majority of workloads while staying close to the best single expert on average. It lets operators explicitly choose the tradeoff between object miss ratio and byte miss ratio, and remains competitive on byte miss ratio. Under a fast-policy budget, AUTO-fast achieves lower cost than the best fixed fast policy. The key advantage: SCION reduces regime-mismatch risk while keeping the hot path unchanged, making it practical for production deployment.
- Uses short-prefix statistics of object size, cacheability, reuse, and cache size for workload fingerprinting
- AUTO improves the cacheable-only object miss ratio over SIEVE on a majority of the 30 tested workloads
- Reduces regime-mismatch risk while keeping the hot path unchanged (no inference on critical path)
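The simpler SCION-P90 variant mentioned above reduces the fingerprint to a single statistic. The sketch below is a guess at the idea: the specific threshold value and which policy is chosen on each side of it are placeholders, not details from the paper.

```python
# Hypothetical sketch of the SCION-P90 idea: decide using only the p90
# object size from a short request prefix. The threshold (64 KiB) and
# the two-policy mapping are illustrative assumptions.

def p90(sizes):
    """90th-percentile object size (nearest-rank method)."""
    s = sorted(sizes)
    return s[int(0.9 * (len(s) - 1))]

def scion_p90_select(prefix_sizes, threshold=64 * 1024):
    # A large-object-heavy prefix favors a size-aware policy such as GDSF;
    # otherwise fall back to a strong general-purpose policy like SIEVE.
    return "GDSF" if p90(prefix_sizes) > threshold else "SIEVE"
```

Collapsing the fingerprint to one percentile trades some selection accuracy for an even cheaper, easier-to-audit decision rule.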
Why It Matters
Cloud and edge services can adapt their caching policy to changing workloads for better hit ratios, without adding latency to the request path, because policy selection happens off the critical path.