Abstract
This paper proposes an efficient approach to learning disentangled representations with causal mechanisms, based on the difference between conditional probabilities in the original and new distributions. We approximate this difference with models' generalization abilities so that it fits in the standard machine learning framework and can be computed efficiently. In contrast to the state-of-the-art approach, which relies on the learner's adaptation speed to the new distribution, the proposed approach only requires evaluating the model's generalization ability. We provide a theoretical explanation for the advantage of the proposed method, and our experiments show that the proposed technique is 1.9-11.0× more sample efficient and 9.4-32.4× quicker than the previous method on various tasks. The source code is available at https://github.com/yuanpeng16/EDCR.
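The core idea can be illustrated with a minimal sketch (an assumption-laden toy example, not the paper's EDCR implementation): fit the two candidate conditionals p(Y | X) and p(X | Y) on the original distribution, then compare how much each one's error grows on data drawn from the shifted distribution. The conditional aligned with the causal direction generalizes with little degradation, and no adaptation steps are needed. The linear-Gaussian data, the regression models, and the degradation ratio below are all illustrative assumptions.

```python
# Illustrative sketch only: a toy linear-Gaussian cause-effect pair,
# NOT the paper's EDCR algorithm. All choices here are assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def sample_pair(n, x_mean):
    """X causes Y; shifting x_mean simulates an intervention on the cause."""
    x = rng.normal(x_mean, 1.0, size=(n, 1))
    y = 2.0 * x + rng.normal(0.0, 1.0, size=(n, 1))
    return x, y

x_tr, y_tr = sample_pair(5000, 0.0)   # original (training) distribution
x_va, y_va = sample_pair(5000, 0.0)   # held-out data from the original distribution
x_te, y_te = sample_pair(5000, 3.0)   # new distribution (intervention on X)

# Fit both candidate conditionals on the original distribution only.
m_xy = LinearRegression().fit(x_tr, y_tr)   # models p(Y | X)
m_yx = LinearRegression().fit(y_tr, x_tr)   # models p(X | Y)

def mse(model, inp, out):
    return float(np.mean((model.predict(inp) - out) ** 2))

# Generalization gap: error on the new distribution relative to held-out error
# on the original one. The causal conditional p(Y | X) is invariant under an
# intervention on X, so its error barely grows; p(X | Y) degrades.
gap_xy = mse(m_xy, x_te, y_te) / mse(m_xy, x_va, y_va)
gap_yx = mse(m_yx, y_te, x_te) / mse(m_yx, y_va, x_va)

print(f"X->Y degradation: {gap_xy:.2f}")   # close to 1.0
print(f"Y->X degradation: {gap_yx:.2f}")   # noticeably larger than 1.0
print("Inferred direction:", "X -> Y" if gap_xy < gap_yx else "Y -> X")
```

Comparing relative degradation (rather than raw errors) keeps the two directions on a comparable scale, since each conditional predicts a different variable.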
Original language | English (US) |
---|---|
Pages | 54-71 |
Number of pages | 18 |
State | Published - 2024 |
Event | 1st Conference on Parsimony and Learning, CPAL 2024 - Hong Kong, China |
Duration | Jan 3 2024 → Jan 6 2024 |
Conference
Conference | 1st Conference on Parsimony and Learning, CPAL 2024 |
---|---|
Country/Territory | China |
City | Hong Kong |
Period | 01/3/24 → 01/6/24 |
ASJC Scopus subject areas
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability