In this talk, I present two very different applications related by the simple idea of invertible transformations.
- The first topic is an analysis of the inverse consistency penalty in image matching, used in conjunction with neural networks. We show that neural networks favour the emergence of smooth transformations under the inverse consistency penalty; a sketch of the penalty is given after this list. Experimentally, we show that this behaviour is fairly stable with respect to the chosen architecture. This is joint work with H. Greer, R. Kwitt and M. Niethammer.
- The second topic is an analysis of the global convergence of residual networks when the residual blocks are parametrized by vector fields in a reproducing kernel Hilbert space. We prove that the resulting training problem satisfies the so-called Polyak-Lojasiewicz property (recalled below), which in particular ensures global convergence when the iterates remain bounded. We show that this property holds in a continuous limit as well as in the fully discrete setting. This is joint work with R. Barboni and G. Peyré.
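To fix ideas on the first topic, here is a minimal sketch of an inverse consistency penalty, with notation introduced here for illustration only (the maps $\Phi_{AB}$ and $\Phi_{BA}$ are not named in the abstract): writing $\Phi_{AB}$ and $\Phi_{BA}$ for the transformations predicted by the network between images $A$ and $B$ in the two directions, the penalty reads
$$ \mathcal{L}_{\mathrm{inv}} = \big\|\Phi_{AB} \circ \Phi_{BA} - \mathrm{Id}\big\|^2 + \big\|\Phi_{BA} \circ \Phi_{AB} - \mathrm{Id}\big\|^2 , $$
which vanishes exactly when the two predicted maps are inverses of each other.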
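For the second topic, recall the standard statement of the Polyak-Lojasiewicz property: a differentiable function $f$ with infimum $f^*$ satisfies the PL inequality with constant $\mu > 0$ if
$$ \tfrac{1}{2}\,\|\nabla f(\theta)\|^2 \;\ge\; \mu\,\big(f(\theta) - f^*\big) \quad \text{for all } \theta . $$
For a smooth objective, this inequality guarantees that gradient descent with a suitable step size drives $f$ to its global minimum value at a linear rate, even though $f$ need not be convex.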