Computational statistics and simulation methods

Research group description
Many problems in modern statistics (and related areas such as machine learning, engineering and economics) involve data with substantial variability, missing observations and/or complicated inter-dependencies. Such data are usually best analysed with a large probabilistic model that connects all the observed quantities to the unknowns.
We aim to develop reliable general inference algorithms for such models, without imposing restrictive modelling assumptions. In the Bayesian setting, the most successful algorithms to date are based on sophisticated Monte Carlo simulation methods. Simulation is also useful in likelihood-based inference, where stochastic gradient type optimisation algorithms can be applied.
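To illustrate the kind of Monte Carlo simulation mentioned above, here is a minimal sketch of a random-walk Metropolis sampler, the textbook building block of MCMC. The toy log-posterior and all parameter values are illustrative choices, not the group's actual models or methods:

```python
import math
import random

def log_post(theta):
    # Toy log-posterior: a standard normal, standing in for a real model.
    return -0.5 * theta * theta

def random_walk_metropolis(log_post, n_iter=5000, step=1.0, seed=0):
    """Random-walk Metropolis: propose a Gaussian perturbation and
    accept with probability min(1, post(proposal) / post(current))."""
    rng = random.Random(seed)
    theta = 0.0
    lp = log_post(theta)
    samples = []
    for _ in range(n_iter):
        proposal = theta + rng.gauss(0.0, step)
        lp_prop = log_post(proposal)
        # Acceptance test done on the log scale for numerical stability.
        if math.log(rng.random()) < lp_prop - lp:
            theta, lp = proposal, lp_prop
        samples.append(theta)
    return samples

samples = random_walk_metropolis(log_post)
mean = sum(samples) / len(samples)  # approximates the posterior mean (0 here)
```

Because only pointwise evaluations of the (unnormalised) posterior are needed, this scheme applies to very general models, which is exactly why Monte Carlo methods dominate Bayesian computation.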
The group collaborates with applied researchers, seeking interesting applications with inferential challenges. We are always interested in new collaboration opportunities!
Most of our research contributions are either theoretical or methodological:
Analysis of Monte Carlo algorithms
We seek theoretical understanding of when certain Monte Carlo methods are efficient, and why. This understanding supports methodological development, helps in choosing the right methods for a given problem type, and guides optimal tuning of the algorithms.
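One concrete example of theory guiding tuning: classical results suggest targeting a particular acceptance rate for random-walk Metropolis (around 0.44 in one dimension), which can be achieved by a diminishing-step stochastic approximation of the proposal scale. The sketch below is a generic illustration of this idea, not the group's specific algorithms:

```python
import math
import random

def log_post(theta):
    # Toy log-posterior: a standard normal target.
    return -0.5 * theta * theta

def adaptive_rwm(log_post, n_iter=5000, target=0.44, seed=1):
    """Random-walk Metropolis with Robbins-Monro adaptation of the
    proposal scale toward a target acceptance rate."""
    rng = random.Random(seed)
    theta, lp = 0.0, log_post(0.0)
    log_step = 0.0  # log of the proposal standard deviation
    accepts = 0
    for i in range(1, n_iter + 1):
        proposal = theta + rng.gauss(0.0, math.exp(log_step))
        lp_prop = log_post(proposal)
        alpha = min(1.0, math.exp(lp_prop - lp))
        if rng.random() < alpha:
            theta, lp = proposal, lp_prop
            accepts += 1
        # Nudge the scale up when accepting too often, down otherwise;
        # the gain 1/i**0.6 decays so the adaptation stabilises.
        log_step += (alpha - target) / i**0.6
    return accepts / n_iter, math.exp(log_step)

rate, tuned_step = adaptive_rwm(log_post)
```

The decaying gain is what makes such adaptive schemes provably valid: the algorithm behaves more and more like a fixed, well-tuned sampler as iterations accumulate.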
Scalable Monte Carlo
Many Monte Carlo methods, including the popular Markov chain Monte Carlo (MCMC), work well with small data sets but become problematic as the data size grows. The project develops new Monte Carlo inference methods suitable for larger data sets, designed to run efficiently on parallel and distributed computing facilities.
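One family of approaches in this spirit splits the data across machines, samples each "subposterior" independently (an embarrassingly parallel step), and then combines the draws, as in consensus Monte Carlo. The sketch below is a hedged toy illustration with a Gaussian model whose subposteriors are available in closed form; it is not the project's actual methodology:

```python
import random

rng = random.Random(42)

# Toy data: 1000 observations from N(true_mu, 1); unknown mean, known variance.
true_mu = 2.0
data = [rng.gauss(true_mu, 1.0) for _ in range(1000)]

# Split the data across S shards (in practice, across machines).
S = 4
shards = [data[i::S] for i in range(S)]

def subposterior_draws(shard, n_draws, rng):
    # With a flat prior and unit observation variance, the subposterior
    # for mu on one shard is N(shard mean, 1 / shard size).
    m = sum(shard) / len(shard)
    sd = (1.0 / len(shard)) ** 0.5
    return [rng.gauss(m, sd) for _ in range(n_draws)]

n_draws = 2000
draws = [subposterior_draws(sh, n_draws, rng) for sh in shards]

# Consensus step: precision-weighted average of the i-th draw from each
# shard (precision equals shard size in this toy model).
weights = [len(sh) for sh in shards]
wsum = sum(weights)
combined = [sum(w * d[i] for w, d in zip(weights, draws)) / wsum
            for i in range(n_draws)]
est = sum(combined) / n_draws  # approximates the full posterior mean
```

The shard-level sampling requires no communication, which is what makes this kind of scheme attractive on distributed computing facilities; the hard research questions concern how well the combination step approximates the full posterior beyond simple models like this one.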