Learning disentangled representations from unlabelled data is a fundamental problem in machine learning. Solving it may unlock other problems, such as generalization, interpretability, or fairness. Although remarkably challenging to solve in theory, disentanglement is often achieved in practice through prior matching. Furthermore, recent works have shown that prior matching approaches can be enhanced by leveraging geometrical considerations, e.g., by learning representations that preserve geometric features of the data, such as distances or angles between points. However, matching the prior while preserving geometric features is challenging, as a mapping that fully preserves these features while aligning the data distribution with the prior generally does not exist. To address these challenges, we introduce a novel approach to disentangled representation learning based on quadratic optimal transport. We formulate the problem using Gromov-Monge maps that transport one distribution onto another with minimal distortion of predefined geometric features, preserving them as much as can be achieved. To compute such maps, we propose the Gromov-Monge Gap (GMG), a regularizer quantifying whether a map moves a reference distribution with minimal geometry distortion. We demonstrate the effectiveness of our approach for disentanglement across four standard benchmarks, outperforming other methods leveraging geometric considerations.
*Equal contribution
**Equal advising
†CREST-ENSAE
‡Helmholtz Munich
§TU Munich
¶MCML
††Tübingen AI Center
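A minimal sketch of the quadratic (Gromov-Monge) transport objective referred to in the abstract, assuming a squared distortion of pairwise costs; the notation ($\mu$, $\nu$, $\rho$, $c_{\mathcal{X}}$, $c_{\mathcal{Y}}$, $T$, $f$) is illustrative and may differ from the paper's exact formulation:

% Gromov-Monge problem: among maps T pushing mu onto nu (T_sharp mu = nu),
% find the one that distorts pairwise geometric features the least.
\begin{equation*}
  \min_{T \,:\, T_\sharp \mu = \nu}
  \int_{\mathcal{X}} \int_{\mathcal{X}}
  \bigl( c_{\mathcal{X}}(x, x') - c_{\mathcal{Y}}\bigl(T(x), T(x')\bigr) \bigr)^2
  \, \mathrm{d}\mu(x) \, \mathrm{d}\mu(x'),
\end{equation*}
% where c_X and c_Y encode the geometric features to be preserved,
% e.g. pairwise distances or angles. The Gromov-Monge Gap of a map f can
% then be read as the excess distortion of f over this minimum, evaluated
% on a reference distribution rho (zero iff f moves rho with minimal
% geometry distortion):
\begin{equation*}
  \mathrm{GMG}_\rho(f) \;=\;
  \mathrm{Distortion}_\rho(f)
  \;-\;
  \min_{T \,:\, T_\sharp \rho = f_\sharp \rho} \mathrm{Distortion}_\rho(T).
\end{equation*}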