Bayesian inverse problems provide a coherent mathematical and algorithmic framework that enables researchers to combine mathematical models with data. The ability to solve such inverse problems depends crucially on the efficient calculation of quantities relating to the posterior distribution, which itself requires the solution of high-dimensional optimization and sampling problems. In this talk, we will study different algorithms for efficient sampling from the posterior distribution under two different prior modelling paradigms. In the first, we use specific non-smooth functions, such as the total variation norm, to model the prior. The main computational challenge in this case is the non-smoothness of the prior, which leads to “stiffness” in the corresponding stochastic differential equations that need to be discretised to perform sampling. We address this issue by using tailored stochastic numerical integrators, known as stochastic orthogonal Runge-Kutta Chebyshev (S-ROCK) methods, and show that the corresponding algorithms outperform current state-of-the-art methods. In the second modelling paradigm, the prior knowledge available is given in the form of training examples, and we use machine learning techniques to learn an analytic representation for the prior. We show numerically that this “data-driven” approach improves performance on a number of imaging tasks, such as image denoising and image deblurring.
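To make the first paradigm concrete, the sketch below illustrates one standard way a non-smooth prior is handled before discretising the Langevin stochastic differential equation: the non-smooth term is replaced by its Moreau–Yosida envelope, whose gradient is expressed through a proximal operator, and the resulting smooth drift is integrated with a plain Euler–Maruyama step. This is only a minimal illustration of the underlying sampling problem, not the S-ROCK integrators of the talk, and it assumes an ℓ1 prior as a simple stand-in for total variation; all names and parameter values are hypothetical.

```python
import numpy as np

def prox_l1(v, tau):
    # Soft-thresholding: proximal operator of tau * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def smoothed_langevin(y, sigma2, lam, gamma, step, n_iter, rng):
    """Unadjusted Langevin sampling for the posterior
        p(x | y) ∝ exp(-||y - x||^2 / (2 sigma2) - lam * ||x||_1),
    with the non-smooth prior replaced by its Moreau-Yosida envelope
    (smoothing parameter gamma). Illustrative sketch only."""
    x = y.copy()
    samples = []
    for _ in range(n_iter):
        grad_lik = (x - y) / sigma2                            # gradient of smooth likelihood term
        grad_prior = (x - prox_l1(x, gamma * lam)) / gamma     # gradient of Moreau envelope of lam*||.||_1
        # Euler-Maruyama step for the Langevin SDE dX = -∇U(X) dt + sqrt(2) dW
        x = x - step * (grad_lik + grad_prior) \
              + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
        samples.append(x.copy())
    return np.array(samples)

rng = np.random.default_rng(0)
y = np.array([2.0, -1.5, 0.1, 3.0])          # toy "observation"
s = smoothed_langevin(y, sigma2=0.5, lam=1.0, gamma=0.05,
                      step=0.01, n_iter=2000, rng=rng)
post_mean = s[500:].mean(axis=0)             # posterior mean, discarding burn-in
```

The small smoothing parameter `gamma` is exactly what makes the drift stiff: its Lipschitz constant grows like `1/gamma`, forcing a tiny `step` for explicit Euler–Maruyama. Stabilised integrators such as S-ROCK are designed to relax this step-size restriction.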