Image & Video

Learning from Compressed CT: Feature Attention Style Transfer and Structured Factorized Projections for Resource-Efficient Medical Image Analysis

New framework analyzes JPEG-compressed chest CTs nearly as well as models working from uncompressed volumes.

Deep Dive

A team of researchers (Yousuf et al.) has proposed CT-Lite, a resource-efficient framework for analyzing JPEG-compressed chest CT volumes without full decompression. The approach addresses a critical bottleneck in medical AI: volumetric CT data is massive, often hundreds of megabytes per scan, making deployment on edge devices or in bandwidth-constrained settings impractical.

CT-Lite combines two novel components: Feature Attention Style Transfer (FAST) and Structured Factorized Projection (SFP). FAST is a distillation framework that uses Gram-matrix-based attention style preservation and dual-attention feature alignment to transfer activation patterns from high-fidelity (uncompressed) representations to a spatiotemporal encoder operating on compressed inputs, allowing the model to extract meaningful features even from degraded JPEG volumes. SFP leverages Block Tensor Train decomposition to replace dense projection layers, cutting the projection-head parameter count by nearly half while maintaining representational power.
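The paper's exact loss is not reproduced here, but a Gram-matrix style-preservation term of the kind FAST builds on typically looks like the sketch below, where a frozen teacher encoder sees uncompressed volumes and a student encoder sees the JPEG-compressed ones (the dual-attention alignment term is omitted). All names and shapes are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def gram_matrix(feats: torch.Tensor) -> torch.Tensor:
    """Channel-by-channel Gram matrix of a feature map.

    feats: (B, C, *spatial) activations from one encoder stage.
    Returns (B, C, C), normalized by the number of spatial positions
    so the loss is independent of volume resolution.
    """
    b, c = feats.shape[:2]
    flat = feats.reshape(b, c, -1)                       # (B, C, N)
    return flat @ flat.transpose(1, 2) / flat.shape[-1]

def fast_style_loss(student_feats, teacher_feats):
    """Gram-matrix "style" distillation term.

    student_feats / teacher_feats: lists of activations taken at
    matching stages (same channel counts) of the compressed-input
    student and the frozen uncompressed-input teacher.
    """
    loss = 0.0
    for s, t in zip(student_feats, teacher_feats):
        loss = loss + F.mse_loss(gram_matrix(s), gram_matrix(t.detach()))
    return loss / len(student_feats)
```

The SFP idea of swapping a dense projection for a tensor-train-factorized one can likewise be illustrated with a minimal two-core layer; the factor shapes, rank, and class name below are assumptions, and the paper's Block Tensor Train layout may differ.

```python
import torch
import torch.nn as nn

class TTProjection(nn.Module):
    """Two-core tensor-train factorization of a dense projection.

    The input dim is factored as m1*m2 and the output dim as n1*n2;
    the dense (m1*m2) x (n1*n2) weight is replaced by two small cores
    joined by a rank-r bond, shrinking the parameter count from
    m1*m2*n1*n2 to m1*n1*r + r*m2*n2.
    """

    def __init__(self, m=(32, 24), n=(16, 32), rank=16):
        super().__init__()
        m1, m2 = m
        n1, n2 = n
        self.m = m
        self.core1 = nn.Parameter(torch.randn(m1, n1, rank) * 0.02)
        self.core2 = nn.Parameter(torch.randn(rank, m2, n2) * 0.02)
        self.bias = nn.Parameter(torch.zeros(n1 * n2))

    def forward(self, x):                       # x: (B, m1*m2)
        b = x.shape[0]
        x = x.reshape(b, *self.m)               # (B, m1, m2)
        y = torch.einsum('bij,inr,rjm->bnm', x, self.core1, self.core2)
        return y.reshape(b, -1) + self.bias     # (B, n1*n2)
```

With these toy shapes (768 → 512), a dense projection needs about 393K weights while the two cores total roughly 20K; larger ranks or a block structure trade some of that saving back for expressiveness, which is how a factorized head can land near the roughly-half reduction the paper reports.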

Tested on three chest CT datasets (CT-RATE, NIDCH, Rad-ChestCT), CT-Lite achieves an AUROC within 5-7% of the uncompressed-input baseline across all datasets, a notable result given that the inputs are JPEG-compressed and the model uses significantly fewer parameters. On the training side, the contrastive learning pipeline incorporates SigLIP-based multimodal alignment. The practical implications are substantial: hospitals with limited computational resources could run AI triage on compressed scans, data transfer for telemedicine becomes faster, and the reduced parameter count lowers memory and energy requirements. The work demonstrates that compression-aware architectures can close much of the gap to uncompressed performance, accelerating the path to scalable AI diagnostics in resource-constrained environments.
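For context, SigLIP replaces the usual softmax contrastive objective with an independent sigmoid over every cross-modal pair. A minimal sketch of that loss follows; how CT-Lite pairs its volume embeddings with the other modality is not detailed here, so the argument names are assumptions.

```python
import torch
import torch.nn.functional as F

def siglip_loss(vol_emb, txt_emb, log_temp, bias):
    """Pairwise sigmoid contrastive loss in the style of SigLIP.

    vol_emb, txt_emb: (B, D) embeddings of the two modalities, where
    row i of each tensor is a matched pair.
    log_temp, bias: learnable scalar tensors (log-temperature, logit bias).
    Each (i, j) pair is scored independently: matched pairs get label +1,
    every mismatched pair gets label -1.
    """
    vol_emb = F.normalize(vol_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = vol_emb @ txt_emb.t() * log_temp.exp() + bias      # (B, B)
    labels = 2.0 * torch.eye(logits.shape[0], device=logits.device) - 1.0
    return -F.logsigmoid(labels * logits).sum(dim=-1).mean()
```

The temperature and bias are learnable scalars; the original SigLIP recipe initializes them near log(10) and -10 so that training starts stable even though most pairs in a batch are negatives.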

Key Points
  • CT-Lite analyzes JPEG-compressed chest CT volumes, reducing storage and bandwidth requirements.
  • Feature Attention Style Transfer (FAST) preserves high-fidelity activation patterns in compressed inputs using Gram-matrix attention.
  • Structured Factorized Projection (SFP) uses Block Tensor Train decomposition to cut projection-head parameters by nearly half, while the full framework keeps AUROC within 5-7% of uncompressed-input baselines.

Why It Matters

Enables AI-powered chest CT diagnosis on low-resource devices and faster data transfer for telemedicine, with only a modest accuracy trade-off relative to uncompressed inputs.