Research & Papers

[R] Joint Embedding Variational Bayes (TMLR ’26)

New method uses Student-t likelihood to stabilize non-contrastive representation learning.

Deep Dive

Published in TMLR, a new paper adds operational variational semantics to joint-embedding architectures for non-contrastive representation learning. It makes three coupled choices: factorizing the embedding likelihood into directional and radial terms, anchoring the posterior variance to the likelihood scale, and using a heavy-tailed Student-t likelihood in place of a Gaussian. Together, these let the model learn anisotropic, feature-wise uncertainty, which the authors evaluate in downstream OOD detection experiments.
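A minimal sketch of how the three choices might compose in a training loss (tensor names, shapes, and the degrees-of-freedom value are hypothetical; the paper's exact parameterization may differ):

```python
import torch
import torch.nn.functional as F

def student_t_nll(x, loc, scale, df=4.0):
    """Per-feature Student-t negative log-likelihood, summed over features.

    Heavy tails (small df) mean an outlier residual contributes roughly
    logarithmically rather than quadratically, so a few bad embeddings
    cannot dominate the loss the way they would under a Gaussian.
    The df value is illustrative, not from the paper.
    """
    t = torch.distributions.StudentT(df, loc=loc, scale=scale)
    return -t.log_prob(x).sum(dim=-1)

def joint_embedding_loss(z_pred, z_target, log_scale):
    """z_pred: (B, D) online-branch embeddings; z_target: (B, D) embeddings
    from a stop-gradient/EMA branch, as in standard non-contrastive setups.
    log_scale: (D,) learned per-feature log-scales (anisotropic uncertainty);
    tying the likelihood scale to these learned parameters is one way to
    anchor posterior variance. All names here are hypothetical."""
    # Factorize each embedding into a unit direction and a norm.
    u_pred, r_pred = F.normalize(z_pred, dim=-1), z_pred.norm(dim=-1)
    u_tgt, r_tgt = F.normalize(z_target, dim=-1), z_target.norm(dim=-1)

    scale = log_scale.exp()

    # Directional term: angular alignment, scored per feature on the sphere.
    directional = student_t_nll(u_tgt, loc=u_pred, scale=scale)
    # Radial term: agreement of norms, decoupled from direction.
    radial = student_t_nll(r_tgt.unsqueeze(-1), loc=r_pred.unsqueeze(-1),
                           scale=scale.mean(dim=-1, keepdim=True))
    return (directional + radial).mean()
```

Splitting the loss this way means a norm mismatch cannot masquerade as an angular error, and vice versa.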

Key Points
  • Factorizes the embedding likelihood into directional and radial terms, decoupling angular alignment from the representation norm.
  • Uses a heavy-tailed Student-t likelihood instead of a Gaussian: the heavier tails keep outlier residuals from dominating the objective, which stabilizes training and guards against catastrophic failure.
  • Enables learning of anisotropic, feature-wise uncertainty, validated on OOD detection tasks against VI-SimSiam (see the scoring sketch after this list).
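One plausible way the learned feature-wise scales could drive an OOD score; the aggregation and threshold below are assumptions, not the paper's reported protocol:

```python
import torch

@torch.no_grad()
def ood_score(z, loc, scale, df=4.0):
    """Uncertainty-aware OOD score: summed per-feature Student-t NLL.

    Anisotropic scales weight each feature's residual by its learned
    noise level, so confident low-variance features contribute more
    to the score. Names and df are illustrative, not from the paper.
    """
    t = torch.distributions.StudentT(df, loc=loc, scale=scale)
    return -t.log_prob(z).sum(dim=-1)  # higher = more out-of-distribution

# Usage sketch (hypothetical calibration): compute scores on held-out
# in-distribution embeddings, then flag the upper tail at test time.
# scores_val = ood_score(val_emb, mu, sigma)
# threshold = scores_val.quantile(0.95)
# is_ood = ood_score(test_emb, mu, sigma) > threshold
```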

Why It Matters

Brings uncertainty quantification to non-contrastive learning, improving OOD detection reliability in unsupervised settings.