Our research in geometric deep learning investigates the role of symmetries and other geometric structures in deep learning.
The group studies deep learning models built on geometric constructions such as graphs, groups, and manifolds. Our aim is to develop a theoretical understanding of the mathematical foundations underlying modern high-performance machine learning models, e.g., deep neural networks.
Of particular interest are so-called equivariant models, which respect symmetry transformations of the data processed by the networks. An example of this property is the translation-equivariant processing of image data by convolutional neural networks, which makes object classification manifestly independent of an object's position in the image; the principle generalizes to arbitrary global symmetries.
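As a minimal illustration of equivariance, the following sketch checks numerically that a circular one-dimensional convolution commutes with translations: shifting the input and then convolving gives the same result as convolving and then shifting. This is an illustrative toy example in NumPy, not code from our group; the function names are our own.

```python
import numpy as np

def circ_conv(x, w):
    """Circular 1D convolution, which is exactly translation equivariant."""
    n = len(x)
    return np.array([
        sum(w[k] * x[(i - k) % n] for k in range(len(w)))
        for i in range(n)
    ])

rng = np.random.default_rng(0)
x = rng.normal(size=8)   # input signal
w = rng.normal(size=3)   # convolution filter
shift = 3

lhs = circ_conv(np.roll(x, shift), w)   # translate, then convolve
rhs = np.roll(circ_conv(x, w), shift)   # convolve, then translate

assert np.allclose(lhs, rhs)  # equivariance: the two orders agree
```

The same check fails for a generic dense layer in place of `circ_conv`, which is precisely why convolutional architectures are the natural choice when translation symmetry is present in the data.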
Another research interest is so-called neural differential equations, where one considers the limit of infinite depth for neural networks. The sequence of transformations implemented by the layers of the network is then viewed as a discretization of a continuous dynamic propagating information through the network, described by a system of ordinary differential equations.
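This correspondence can be sketched concretely: a residual update h ← h + dt·f(h) is exactly a forward-Euler step of the ODE dh/dt = f(h), so stacking many such layers discretizes a continuous flow. The sketch below, a toy example with an assumed tanh vector field (not a model from our research), shows that refining the step size, i.e., adding "layers", converges toward the same continuous solution.

```python
import numpy as np

def f(h, W, b):
    # Vector field defining the continuous dynamics (a single tanh layer here)
    return np.tanh(W @ h + b)

def euler_integrate(h0, steps, W, b, T=1.0):
    """Forward Euler on dh/dt = f(h); each step acts like one residual layer."""
    h, dt = h0.copy(), T / steps
    for _ in range(steps):
        h = h + dt * f(h, W, b)  # ResNet-style update h_{k+1} = h_k + dt * f(h_k)
    return h

rng = np.random.default_rng(1)
W = 0.5 * rng.normal(size=(4, 4))
b = 0.1 * rng.normal(size=4)
h0 = rng.normal(size=4)

h_coarse = euler_integrate(h0, steps=10, W=W, b=b)     # "shallow" network
h_fine = euler_integrate(h0, steps=1000, W=W, b=b)     # near-continuous limit

# Finer discretizations approximate the same continuous trajectory
assert np.linalg.norm(h_coarse - h_fine) < 0.5
```

Viewing depth as integration time in this way lets tools from dynamical systems, such as stability analysis and adaptive solvers, be brought to bear on network design.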