Recent preprints and publications

Complete list of publications here.

Learning maps between data samples is fundamental. Applications range from representation learning, image translation and generative …

In this paper, we show that smoothness of solutions of optimal transport can actually be leveraged to define statistical estimators …

Comparing metric measure spaces (i.e. a metric space endowed with a probability distribution) is at the heart of many machine learning …

Continuous-depth neural networks can be viewed as deep limits of discrete neural networks whose dynamics resemble a discretization of …
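
As a standard illustration of this viewpoint (not specific to the paper), a residual layer can be read as one explicit Euler step of an ordinary differential equation, with step size 1/L for a network of depth L:

\[
x_{k+1} = x_k + \tfrac{1}{L}\, f_{\theta_k}(x_k)
\quad\longleftrightarrow\quad
\dot{x}(t) = f_{\theta(t)}(x(t)),
\]

so a continuous-depth model appears in the limit L \to \infty.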

The squared Wasserstein distance is a natural quantity to compare probability distributions in a non-parametric setting. This quantity …
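
For reference, the quantity in question is

\[
W_2^2(\mu, \nu) \;=\; \inf_{\pi \in \Pi(\mu, \nu)} \int \|x - y\|^2 \, \mathrm{d}\pi(x, y),
\]

where \Pi(\mu, \nu) denotes the set of couplings of \mu and \nu.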

Large deformation diffeomorphic metric mapping (LDDMM) is a popular approach for deformable image registration with nice mathematical …

Talks

We explain the formulation of the Wasserstein-Fisher-Rao distance, introduced a few years ago as the natural extension of the Wasserstein L2 metric to the space of positive Radon measures. We present the equivalence between the dynamic formulation and a static formulation in which the marginal constraints are relaxed with relative entropy, and then a connection with standard optimal transport on a cone space. The second part of the talk is motivated by this optimal transport distance: we study a generalized Camassa-Holm equation and the existence of minimizing generalized geodesics, à la Brenier.
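
For context, a common way to write the dynamic formulation is (with a length-scale parameter \delta; the normalization of the source term varies across papers)

\[
\mathrm{WFR}_\delta^2(\mu_0, \mu_1) \;=\; \inf_{(\rho, v, \alpha)} \int_0^1 \!\!\int \Big( |v_t(x)|^2 + \tfrac{\delta^2}{4}\, \alpha_t(x)^2 \Big) \, \mathrm{d}\rho_t(x) \, \mathrm{d}t,
\]

where the infimum runs over curves of measures satisfying the continuity equation with source, \partial_t \rho_t + \mathrm{div}(\rho_t v_t) = \alpha_t \rho_t, with \rho_0 = \mu_0 and \rho_1 = \mu_1.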

After presenting some background on optimal transport, we explain why the curse of dimensionality is encountered when estimating optimal transport distances. We then present computational techniques for solving optimal transport, in particular entropic regularization. We introduce Sinkhorn divergences and show how they have been proven to overcome the curse of dimensionality, but not for approximating optimal transport distances. We then show that the smoothness of solutions of optimal transport can actually be leveraged to define statistical estimators which are amenable to computation. We use the dual formulation of optimal transport, whose minimizers enjoy a particular structure: these solutions can be written as a sum of squares in a reproducing kernel Hilbert space. As is standard in the SOS literature, the resulting problem can be solved via an SDP formulation. By using a soft penalty in Sobolev spaces on the optimal potential and a trace-class positive self-adjoint operator, we define an estimator of optimal transport which is both statistically and computationally efficient.
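
As a concrete illustration of the entropic regularization mentioned above, here is a minimal Sinkhorn sketch for discrete measures (the variable names, iteration count and toy data are illustrative only, not taken from the talk):

import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iters=500):
    """Entropic-regularized OT between discrete measures a and b with cost matrix C."""
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)                # scaling update for the second marginal
        u = a / (K @ v)                  # scaling update for the first marginal
    P = u[:, None] * K * v[None, :]      # regularized transport plan
    return P, np.sum(P * C)              # plan and associated transport cost

# Toy example: two random point clouds in R^2 with uniform weights.
rng = np.random.default_rng(0)
x = rng.normal(size=(50, 2))
y = rng.normal(size=(60, 2)) + 1.0
C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1)   # squared Euclidean cost
a = np.full(50, 1 / 50)
b = np.full(60, 1 / 60)
plan, cost = sinkhorn(a, b, C)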

We show how to break the curse of dimensionality for the estimation of the optimal transport distance between two smooth distributions with the squared Euclidean cost. The approach relies on essentially one tool: representing the inequality constraints in the dual formulation of OT as equality constraints involving a sum of squares in a reproducing kernel Hilbert space. By showing that this representation is tight in the variational formulation, one can then leverage smoothness to break the curse.
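
In symbols (the notation here is illustrative), the dual constraint u(x) + v(y) \le c(x, y), with c the squared Euclidean cost, is replaced by the equality

\[
c(x, y) - u(x) - v(y) \;=\; \langle \phi(x, y),\, A\, \phi(x, y) \rangle, \qquad A \succeq 0,
\]

where \phi is the feature map of a reproducing kernel Hilbert space and A is a trace-class positive self-adjoint operator, so that the right-hand side is a sum of squares.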

Contact

Experience

Professor

Université Gustave Eiffel

Sep 2018 – Present, Noisy-Champs
Member of the research lab LIGM (Laboratoire d’informatique Gaspard Monge) and teaching signal processing.

Assistant professor

University Paris-Dauphine

Sep 2011 – Sep 2018, Paris
Member of the research lab Ceremade, UMR CNRS 7534, and teaching applied mathematics.

Research assistant

Imperial College

May 2009 – Sep 2011, London
Member of the Institute for Mathematical Sciences and the Math department.

PhD candidate

ENS Cachan (ENS Paris-Saclay)

Sep 2005 – May 2009, Paris area
Member of the lab CMLA.

PhD and HDR

Habilitation à diriger des recherches

Optimal transport, Diffeomorphisms and applications to imaging

PhD in applied mathematics

Hamiltonian formulation of diffeomorphic image matching