Machine learning is a key technique in many different areas and accounts for many of the recent successes in artificial intelligence. In most applications, however, data is scarce, which is why well-posed priors and penalties have been important for reining in the often high-dimensional problems considered.
Structured priors and penalties take this a step further: instead of penalising single variables in isolation, they penalise deviations from particular structures. Structured priors open up a toolbox for encoding general domain-specific knowledge into a machine learning model. This project will develop a generic modelling framework with potential applications in many areas of medicine, science, and technology. We will develop novel structured priors and sampling algorithms to improve interpretability, variable selection, and uncertainty estimation in machine learning.
A recurring problem in most application areas, however, is the lack of large amounts of high-quality data. With small amounts of data, there is a risk of discovering spurious relationships that do not reflect actual relationships between the measured variables and the target outputs, but instead result from overfitting the model to the training data. One way to handle small amounts of data in high-dimensional machine learning problems is to use prior information, usually in the form of prior distributions or penalties. Structured priors and penalties go further by penalising not single variables in isolation but deviations from particular structured relationships between the measured variables.
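As a minimal sketch of the penalty idea (illustrative only, not the project's methodology; the toy data, penalty weights, and function names are assumptions), ridge regression adds a quadratic penalty that corresponds to a Gaussian prior and keeps a small-sample, high-dimensional fit well-posed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy small-sample, high-dimensional setting: 20 observations, 50 features
# (all values here are illustrative, not from the project).
n, p = 20, 50
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]  # only a few variables truly matter
y = X @ beta_true + 0.1 * rng.normal(size=n)

def ridge_fit(X, y, lam):
    """Closed-form ridge estimate: argmin_b ||y - Xb||^2 + lam * ||b||^2.

    The quadratic penalty corresponds to a Gaussian prior on b and makes
    the n < p normal equations invertible."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# A heavier penalty shrinks the coefficients harder, trading
# variance (overfitting) for bias.
beta_light = ridge_fit(X, y, lam=0.1)
beta_heavy = ridge_fit(X, y, lam=10.0)
```

Here the penalty acts on each coefficient in isolation; the structured priors discussed below instead constrain relationships between coefficients.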
Encode prior knowledge
Structured relationships mean that we encode, i.e. express, domain-specific, expert, or prior knowledge about a problem as part of the machine learning model. For instance, pixels in a neighbourhood of an image can be encouraged to be similar, to reduce noise in a reconstructed image; homogeneous regions of a picture, rather than individual pixels, can be related to the target variable; or genetic pathways, rather than individual genes, can be selected when predicting phenotypes from transcriptomic data. By encoding a particular structure into the model, i.e. known structured relationships between the measured variables, the model can, for instance, select relevant groups of variables while reducing the risk of overfitting the training data. This also substantially improves our ability to interpret the model and understand the data by analysing the relationships between the measured variables.
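Two classical examples of such structured penalties can be sketched in a few lines (a hedged illustration; the coefficient vector, grouping, and weights below are hypothetical, and these are standard penalties rather than the novel priors the project will develop): a group lasso penalty that selects whole groups of variables together, and a total variation penalty that encourages neighbouring values to be similar.

```python
import numpy as np

def group_lasso_penalty(beta, groups, lam=1.0):
    """Group lasso: lam * sum over groups g of ||beta_g||_2.

    Penalising the unsquared norm of each group drives whole groups
    (e.g. all genes in a pathway) to zero together, rather than
    individual coefficients one by one."""
    return lam * sum(np.linalg.norm(beta[g]) for g in groups)

def total_variation_penalty(beta, lam=1.0):
    """Total variation: lam * sum_i |beta[i+1] - beta[i]|.

    Penalising differences between neighbours encourages
    piecewise-constant solutions, e.g. neighbouring pixels in an
    image taking similar values."""
    return lam * np.sum(np.abs(np.diff(beta)))

# Hypothetical coefficient vector with one inactive and one active group.
beta = np.array([0.0, 0.0, 0.0, 2.0, 2.1, 1.9])
groups = [np.arange(0, 3), np.arange(3, 6)]
# Only the second (non-zero) group contributes to the group penalty,
# and the TV penalty only charges the jump and the small wiggles.
```

In the Bayesian framing of the project, such penalties correspond to (negative log densities of) structured prior distributions over the coefficients.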
Moving the boundaries
Current methods for encoding prior information either do not allow structured relationships, do not guarantee that samples are drawn close to the true posterior, have sub-optimal convergence rates, or do not provide uncertainty estimates for the parameters or the model predictions.
The goal of this project is to solve these problems, and to develop a generic Bayesian machine learning framework within which structured relationships between measured variables can be encoded. This development has potential applications throughout medicine, science, and technology.
The project will develop new methods, theory, and algorithms through novel structured priors and sampling algorithms, and thereby improve interpretability, variable selection, and uncertainty estimation for a wide variety of machine learning methods. We will push the boundary of what can currently be expressed with a prior distribution, allow very general prior distributions (with many existing prior distributions as special cases), and open the way for wider use of structured prior distributions in Bayesian machine learning.
The methods we develop will be evaluated in medical imaging applications: reconstruction of quantitative magnetic resonance images, and predictive modelling of schizophrenia, bipolar disorder, and Alzheimer’s disease. The developed methodology will, however, have general applicability.