Fixing Unsupervised Hyperbolic Contrastive Loss [D]
Euclidean beats hyperbolic, 64% vs. 57%, in an unsupervised learning benchmark.
A Reddit user (u/arjun_r_kaushik) shared experimental results comparing an unsupervised hyperbolic contrastive loss against a standard Euclidean cosine contrastive loss on ImageNet-1k. Using a Lorentzian manifold embedding with expmap() and projx() projections, the hyperbolic variant reached only 57% 1-NN accuracy, while the plain Euclidean version reached 64% under the same batch size (2048) and learning rate (1e-4). The posted code uses distance-based logits with a temperature of 0.07.
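The full training script isn't reproduced in the post, but the Lorentz-model operations it names can be sketched in NumPy. This is a minimal sketch assuming unit curvature; the function names mirror the post's expmap()/projx() but the exact signatures are our assumption, not the poster's code:

```python
import numpy as np

def lorentz_inner(x, y):
    """Lorentzian inner product <x, y>_L = -x0*y0 + <x_1:, y_1:>."""
    return -x[..., 0] * y[..., 0] + np.sum(x[..., 1:] * y[..., 1:], axis=-1)

def expmap0(v):
    """Exponential map at the hyperboloid origin (1, 0, ..., 0).

    v is a Euclidean vector treated as the tangent vector (0, v) at the
    origin; the result satisfies <x, x>_L = -1 (it lies on the hyperboloid).
    """
    norm = np.clip(np.linalg.norm(v, axis=-1, keepdims=True), 1e-7, None)
    return np.concatenate([np.cosh(norm), np.sinh(norm) * v / norm], axis=-1)

def projx(x):
    """Project an ambient point back onto the hyperboloid by recomputing x0."""
    x0 = np.sqrt(1.0 + np.sum(x[..., 1:] ** 2, axis=-1, keepdims=True))
    return np.concatenate([x0, x[..., 1:]], axis=-1)

def lorentz_dist(x, y):
    """Geodesic distance d(x, y) = arccosh(-<x, y>_L), clamped for stability."""
    return np.arccosh(np.clip(-lorentz_inner(x, y), 1.0 + 1e-7, None))
```

The clamps in expmap0 and lorentz_dist matter: arccosh is undefined below 1 and its gradient blows up near 1, which is one of the instability sources discussed below.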
This result challenges the growing interest in hyperbolic representations for hierarchical data (such as ImageNet's natural taxonomies). Potential culprits include improper manifold optimization (e.g., using Euclidean rather than Riemannian SGD), numerical instability, or the fact that contrastive learning is understood to work by aligning positives and spreading features uniformly over the hypersphere, properties with no established analogue in hyperbolic space. The community is now debating whether the geometry itself is unsuited to unsupervised contrastive objectives or whether better hyperparameter tuning could close the gap.
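To make the "Riemannian SGD vs. Euclidean" point concrete, here is a minimal sketch of one Riemannian SGD step on the unit-curvature Lorentz model, the kind of update libraries like geoopt provide. This is an illustration of the standard construction, not the poster's optimizer:

```python
import numpy as np

def lorentz_inner(x, y):
    """Lorentzian inner product <x, y>_L = -x0*y0 + <x_1:, y_1:>."""
    return -x[..., 0] * y[..., 0] + np.sum(x[..., 1:] * y[..., 1:], axis=-1)

def rsgd_step(x, egrad, lr):
    """One Riemannian SGD step on the unit-curvature Lorentz hyperboloid.

    x:     point with <x, x>_L = -1
    egrad: ordinary Euclidean gradient of the loss at x
    """
    # 1. Ambient Riemannian gradient: flip the sign of the time coordinate.
    g = egrad.copy()
    g[..., 0] = -g[..., 0]
    # 2. Project onto the tangent space at x: u = g + <x, g>_L * x.
    u = g + lorentz_inner(x, g)[..., None] * x
    # 3. Exponential map along -lr * u keeps the iterate on the manifold.
    w = -lr * u
    wn = np.sqrt(np.clip(lorentz_inner(w, w), 1e-14, None))[..., None]
    return np.cosh(wn) * x + np.sinh(wn) * w / wn
```

A plain Euclidean SGD step followed by projx() is a retraction, not this update; skipping the metric correction in step 1 silently changes both the direction and the scale of the descent step, which is one way a hyperbolic run can underperform without any visible error.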
- Hyperbolic contrastive loss achieved only 57% 1-NN accuracy vs 64% for Euclidean on ImageNet-1k
- Used Lorentzian manifold with expmap/projx projections, batch size 2048, LR 1e-4
- Result questions whether hyperbolic geometry benefits unsupervised contrastive learning for hierarchical data
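The two objectives being compared can be sketched side by side. This is a minimal NumPy sketch assuming "distance-based logits" means logits_ij = -d(x_i, x_j) / τ with τ = 0.07 and positives on the diagonal; the actual posted code may differ:

```python
import numpy as np

def _xent_diag(logits):
    """Softmax cross-entropy where the positive for row i is column i."""
    logits = logits - logits.max(axis=1, keepdims=True)
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(logp))

def euclidean_info_nce(z1, z2, t=0.07):
    """Standard cosine-similarity contrastive loss between two views."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    return _xent_diag(z1 @ z2.T / t)

def hyperbolic_info_nce(x1, x2, t=0.07):
    """Distance-based logits on the Lorentz model: logits = -d_L / t."""
    # Pairwise Lorentzian inner products, then geodesic distances.
    inner = -np.outer(x1[:, 0], x2[:, 0]) + x1[:, 1:] @ x2[:, 1:].T
    dist = np.arccosh(np.clip(-inner, 1.0 + 1e-7, None))
    return _xent_diag(-dist / t)
```

Note the asymmetry in dynamic range: cosine logits are bounded in [-1/t, 1/t], while hyperbolic distances are unbounded, so the same temperature of 0.07 implies very different logit scales in the two losses.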
Why It Matters
Could signal fundamental limitations of hyperbolic spaces for self-supervised learning, impacting future representation learning research.