Sampled Gromov Wasserstein

Authors: Tanguy Kerdoncuff, Rémi Emonet, Marc Sebban

Abstract

Optimal Transport (OT) has proven to be a powerful tool to compare probability distributions in machine learning, but dealing with probability measures lying in different spaces remains an open problem. To address this issue, the Gromov Wasserstein distance (GW) only considers intra-distribution pairwise (dis)similarities. However, for two (discrete) distributions with N points, the state-of-the-art solvers have an iterative O(N⁴) complexity when using an arbitrary loss function, making most real-world problems intractable. In this paper, we introduce a new iterative way to approximate GW, called Sampled Gromov Wasserstein, which uses the current estimate of the transport plan to guide the sampling of cost matrices. This simple idea, supported by theoretical convergence guarantees, comes with an O(N²) solver. A special case of Sampled Gromov Wasserstein, which can be seen as the natural extension of the well-known Sliced Wasserstein to distributions lying in different spaces, reduces the complexity even further, to O(N log N). Our contributions are supported by experiments on synthetic and real datasets.
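The core idea sketched in the abstract — use the current transport plan to guide which cost-matrix slices are sampled — can be illustrated with a minimal toy implementation. Everything below is a hedged sketch, not the authors' exact algorithm: the function name `sampled_gw_sketch`, the absolute-difference loss, the greedy row-wise surrogate for the OT step, and the convex update rule are all illustrative assumptions chosen to keep each iteration at O(N²).

```python
# Hedged sketch of the sampling idea: at each iteration, draw an anchor
# pair (a, b) with probability proportional to the current plan T[a, b],
# build the O(N^2) cost slice C[i, k] = |D1[a, i] - D2[b, k]|, and take
# a small convex step toward a cheap feasible plan for that slice.
import numpy as np

def sampled_gw_sketch(D1, D2, n_iter=200, step=0.05, seed=0):
    """D1, D2: intra-distribution pairwise distance matrices (n x n, m x m)."""
    rng = np.random.default_rng(seed)
    n, m = D1.shape[0], D2.shape[0]
    T = np.full((n, m), 1.0 / (n * m))      # uniform initial plan
    for _ in range(n_iter):
        # Sample an anchor pair (a, b) according to the current plan.
        idx = rng.choice(n * m, p=T.ravel())
        a, b = divmod(idx, m)
        # Cost slice induced by the sampled anchors (absolute-difference loss).
        C = np.abs(D1[a][:, None] - D2[b][None, :])
        # Cheap surrogate for the inner OT step: greedy row-wise assignment.
        P = np.zeros_like(T)
        P[np.arange(n), C.argmin(axis=1)] = 1.0 / n
        T = (1 - step) * T + step * P       # convex update keeps T a plan
    return T
```

The convex update keeps the iterate a valid (sub)coupling at every step; a faithful implementation would replace the greedy assignment with an exact 1D OT or linear-programming step, as described in the paper.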

Keywords: Optimal transport, Gromov Wasserstein, Convergence guarantees


Paper link: https://doi.org/10.1007/s10994-021-06035-1