Semantic Segmentation for Histopathology using Learned Regularization based on Global Proportions
No pixel-level labels needed—just disease spread estimates.
Researchers have developed Variational Segmentation from Label Proportions (VSLP), a two-stage framework that produces detailed pixel-wise segmentations of histopathology images using only global label proportions (such as the percentage of tumor tissue), without any pixel-level annotations. The first stage uses a pre-trained transformer with test-time augmentation to produce pixel-wise confidence estimates. In the second stage, these estimates are fused by solving a variational optimization problem that combines a Wasserstein data-fidelity term with a learned regularizer. Unlike end-to-end networks, this variational formulation exposes the fidelity-regularization energy for visualization, making the segmentation more interpretable.
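The second-stage fusion can be sketched as a small energy minimization. The snippet below is an illustrative stand-in, not the paper's implementation: a quadratic proportion-matching penalty stands in for the Wasserstein fidelity (for binary proportions the Wasserstein-1 distance reduces to |mean(s) - p|), a quadratic smoothness term stands in for the learned regularizer, and `fuse_confidences` and all weights are hypothetical.

```python
import numpy as np

def fuse_confidences(conf, target_prop, lam=0.25, mu=5.0, lr=0.05, steps=400):
    """Fuse a stage-1 confidence map into a soft segmentation whose global
    foreground proportion matches a target (illustrative sketch only).

    conf        : (H, W) array in [0, 1], pixel-wise confidence estimate
    target_prop : scalar in [0, 1], global label proportion (e.g. % tumor)

    Minimizes, by projected gradient descent on a soft label map s in [0, 1]:
        E(s) = ||s - conf||^2                    # data fidelity to stage 1
             + lam * smoothness(s)               # proxy for learned regularizer
             + mu  * N * (mean(s) - target_prop)^2  # proportion matching
    """
    s = conf.astype(float).copy()
    for _ in range(steps):
        g = 2.0 * (s - conf)                        # grad of data term
        # discrete Laplacian (periodic boundary, for simplicity)
        lap = (-4.0 * s
               + np.roll(s, 1, 0) + np.roll(s, -1, 0)
               + np.roll(s, 1, 1) + np.roll(s, -1, 1))
        g -= 2.0 * lam * lap                        # grad of smoothness term
        g += 2.0 * mu * (s.mean() - target_prop)    # grad of proportion term
        s = np.clip(s - lr * g, 0.0, 1.0)           # projection onto [0, 1]
    return s
```

In this toy form, the proportion weight `mu` trades off fidelity to the stage-1 confidences against matching the global proportion; the paper's actual objective and learned regularizer are not reproduced here.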
The method was validated on two public datasets, where it outperformed existing weakly supervised and unsupervised methods. For one dataset, proportions were estimated by an experienced pathologist to provide a realistic benchmark. VSLP also scaled to an in-house dataset with noisy pathologist-provided proportions, substantially outperforming state-of-the-art methods. This demonstrates practical applicability in clinical settings where fine-grained annotations are scarce but global tissue-proportion estimates are routinely available. The code and data will be made publicly available upon acceptance.
- VSLP requires only global label proportions (e.g., 30% tumor), not pixel-level annotations.
- Uses a pre-trained transformer + test-time augmentation for initial confidence estimates.
- Outperforms state-of-the-art weakly supervised and unsupervised methods on two public datasets and a noisy in-house dataset.
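The test-time-augmentation step above can be sketched as averaging predictions over the eight dihedral transforms (flips and 90° rotations). The `model` callable and `tta_confidence` helper below are hypothetical stand-ins for the paper's pre-trained transformer, assumed here to map an (H, W, C) image to a (H, W) map of per-pixel foreground probabilities.

```python
import numpy as np

def tta_confidence(model, image):
    """Average a model's per-pixel probabilities over the 8 dihedral
    transforms, mapping each prediction back to the original frame.

    model : callable, (H, W, C) image -> (H, W) probabilities (assumed)
    image : (H, W, C) array
    """
    preds = []
    for k in range(4):                              # 0, 90, 180, 270 degrees
        rot = np.rot90(image, k, axes=(0, 1))
        for flip in (False, True):
            view = rot[:, ::-1] if flip else rot    # optional horizontal flip
            p = model(view)
            if flip:                                # undo flip on prediction
                p = p[:, ::-1]
            p = np.rot90(p, -k, axes=(0, 1))        # undo rotation
            preds.append(p)
    return np.mean(preds, axis=0)                   # pixel-wise average
```

Because each prediction is mapped back to the original frame before averaging, a transform-equivariant model leaves the output unchanged, while a non-equivariant one yields a smoothed confidence map.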
Why It Matters
Enables accurate tissue segmentation in pathology where only coarse disease spread estimates exist, reducing annotation cost.