
In Search of Projectively Equivariant Neural Networks

Time Wednesday 28 September, 2022 at 15:30 - 16:15
Place MIT.A.346, MIT-building

Abstract: A key concept within the field of Geometric Deep Learning is that of equivariance. Put simply, a network is equivariant towards a group of transformations if it reacts properly to the input being transformed. A prominent example is that of convolutional neural networks: here, a translation of the input causes the output to translate with it. In recent years, networks for a number of other transformation groups have been successfully constructed and applied. In this talk, we investigate the question of equivariance in a projective sense, and in particular its connection to equivariance in the standard sense. Our main motivation for studying projective equivariance is the pinhole camera model in computer vision, but other applications may be possible. As in many other works, we concentrate on equivariant multilayer perceptrons, and in particular their linear layers. Our main theoretical finding is that in several important special cases, the problem of finding projectively equivariant linear layers is equivalent to the standard equivariance problem. We also present some small proof-of-concept numerical experiments.
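The notion of equivariance mentioned in the abstract can be illustrated with the convolutional example it cites. The following is a minimal sketch (not taken from the talk) checking that a circular 1-D convolution, viewed as a linear layer, commutes with cyclic translations of the input, i.e. f(T x) = T f(x); the function names `conv_layer` and `shift` are illustrative choices, not part of the speakers' work.

```python
import numpy as np

def conv_layer(x, kernel):
    """Circular 1-D convolution: a translation-equivariant linear layer."""
    n = len(x)
    return np.array([
        sum(kernel[j] * x[(i - j) % n] for j in range(len(kernel)))
        for i in range(n)
    ])

def shift(x, s):
    """Cyclic translation of the signal by s positions."""
    return np.roll(x, s)

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
k = np.array([0.5, 0.25, 0.25])

# Equivariance: convolving the shifted input equals shifting the output.
lhs = conv_layer(shift(x, 2), k)
rhs = shift(conv_layer(x, k), 2)
print(np.allclose(lhs, rhs))  # True
```

A generic linear layer (a dense matrix) would fail this check; it is precisely the weight-sharing structure of the convolution that makes the two sides agree, which is the sense in which finding equivariant linear layers constrains the network architecture.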

This talk is based on joint work with Georg Bökman and Fredrik Kahl.

Event type: Seminar
Axel Flinth
Assistant professor
Antti Perälä