TY - GEN
T1 - Tight Mutual Information Estimation With Contrastive Fenchel-Legendre Optimization
AU - Guo, Qing
AU - Chen, Junya
AU - Wang, Dong
AU - Yang, Yuewei
AU - Deng, Xinwei
AU - Carin, Lawrence
AU - Li, Fan
AU - Huang, Jing
AU - Tao, Chenyang
N1 - KAUST Repository Item: Exported on 2023-07-10
Acknowledgements: The authors would like to thank the anonymous reviewers for their insightful comments. Q Guo gratefully appreciates the support of an Amazon Fellowship. X Deng would like to thank the Advanced Research Computing program at Virginia Tech and Virginia's Commonwealth Cyber Initiative (CCI) AI testbed for providing computational resources, and also appreciates the CCI and CCI-Coastal grants to Virginia Tech. Part of this work was done before C Tao joined Amazon, during which he was funded by National Science Foundation Grant No. 1934964. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1548562 [78], specifically the PSC Bridges-2 and SDSC Expanse resources through allocations TG-ELE200002 and TG-CIS210044.
PY - 2022/1/1
Y1 - 2022/1/1
AB - Successful applications of InfoNCE (Information Noise-Contrastive Estimation) and its variants have popularized the use of contrastive variational mutual information (MI) estimators in machine learning. While featuring superior stability, these estimators crucially depend on costly large-batch training, and they sacrifice bound tightness for variance reduction. To overcome these limitations, we revisit the mathematics of popular variational MI bounds from the lens of unnormalized statistical modeling and convex optimization. Our investigation yields a new unified theoretical framework encompassing popular variational MI bounds, and leads to a new simple and powerful contrastive MI estimator we name FLO. Theoretically, we show that the FLO estimator is tight, and it converges under stochastic gradient descent. Empirically, the FLO estimator overcomes the limitations of its predecessors and learns more efficiently. The utility of FLO is verified using extensive benchmarks, and we further inspire the community with novel applications in meta-learning. Our presentation underscores the foundational importance of variational MI estimation in data-efficient learning.
UR - http://hdl.handle.net/10754/692835
UR - https://proceedings.neurips.cc/paper_files/paper/2022/hash/b5cc526f12164b2144bb2e06f2e84864-Abstract-Conference.html
UR - http://www.scopus.com/inward/record.url?scp=85163142696&partnerID=8YFLogxK
M3 - Conference contribution
SN - 9781713871088
BT - 36th Conference on Neural Information Processing Systems, NeurIPS 2022
PB - Neural Information Processing Systems Foundation
ER -