The London Meeting on Computational Statistics 2026 is a two-day workshop that will bring together researchers at the forefront of computational statistics to discuss recent advances in the field. A broad range of topics will be covered, with a focus on the intersection of computational statistics and machine learning. Examples of topics include (but are not limited to):

  • Monte Carlo methods
  • Gradient flows
  • Simulation-based inference
  • Variational inference

The workshop is scheduled alongside the UCL Institute of Mathematics and Statistical Sciences (IMSS) Annual Lecture, which will take place on 27 April 2026 and will feature Dr Lester Mackey as the keynote speaker.

Invited Speakers

  • Dennis Prangle (University of Bristol)
  • Arina Odnoblyudova (University College London)
  • Mathieu Gerber (University of Bristol)
  • Marina Riabiz (King's College London)
  • Sarah Filippi (Imperial College London)
  • Anna Korba (ENSAE, CREST, Institut Polytechnique de Paris)
  • Heishiro Kanagawa (Fujitsu Research)
  • Arnaud Doucet (University of Oxford)
  • Gilles Louppe (University of Liège)

Registration, Talks and Posters

Registration

Registration costs £30. Please register through this link. Coffee and lunch will be provided!


Contributed talks & posters

We welcome contributions on Monte Carlo methods, simulation-based inference, gradient flows, and variational inference.

Submit a talk or poster.

Deadline: 15 March 2026, 5pm GMT.

Schedule

Tuesday, April 28th 2026

9:00–9:30 Registration
9:30–9:45 👋 Welcome from the organisers
9:45–10:15 Title: Minimum distance summaries for robust neural posterior estimation
Dennis Prangle (University of Bristol)
Neural posterior estimation (NPE) enables approximate Bayesian inference using conditional density estimation from simulated prior-data pairs, typically reducing the data to low-dimensional summary statistics. NPE is susceptible to misspecification when observations deviate from the training distribution. We introduce minimum-distance summaries, a plug-in robust NPE method that adapts queried test-time summaries independently of the pretrained NPE. Leveraging the maximum mean discrepancy (MMD) as a distance between observed data and a summary-conditional predictive distribution, the adapted summary inherits strong robustness properties from the MMD. We demonstrate that the algorithm can be implemented efficiently with random Fourier feature approximations, yielding a lightweight, model-free test-time adaptation procedure. We provide theoretical guarantees for the robustness and consistency of our algorithm and empirically evaluate it on a range of synthetic and real-world tasks, demonstrating substantial robustness gains with minimal additional overhead.
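As a rough illustration of the random Fourier feature idea mentioned in the abstract (not the speakers' implementation), the sketch below estimates an MMD between observed and simulated data under a Gaussian kernel; the lengthscale, feature count, and toy data are illustrative assumptions.

```python
import numpy as np

def rff_features(x, freqs, phases):
    """Random Fourier features approximating a Gaussian kernel:
    phi(x) = sqrt(2/D) * cos(x W^T + b)."""
    n_features = freqs.shape[0]
    return np.sqrt(2.0 / n_features) * np.cos(x @ freqs.T + phases)

def rff_mmd(x_obs, x_sim, n_features=256, lengthscale=1.0, seed=0):
    """Approximate MMD between two samples: with feature map phi,
    MMD^2 ~ || mean(phi(x_obs)) - mean(phi(x_sim)) ||^2."""
    rng = np.random.default_rng(seed)
    dim = x_obs.shape[1]
    freqs = rng.normal(scale=1.0 / lengthscale, size=(n_features, dim))
    phases = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    mu_obs = rff_features(x_obs, freqs, phases).mean(axis=0)
    mu_sim = rff_features(x_sim, freqs, phases).mean(axis=0)
    return float(np.sqrt(np.sum((mu_obs - mu_sim) ** 2)))

# Toy usage: observed data versus draws from a shifted candidate model.
rng = np.random.default_rng(1)
print(rff_mmd(rng.normal(size=(200, 2)), rng.normal(loc=0.5, size=(200, 2))))
```

Because the random features are cheap to evaluate, the distance can be recomputed many times during test-time adaptation, consistent with the lightweight procedure the abstract describes.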
10:15–10:45 Title: A computationally-tractable measure of global sensitivity for Bayesian inference
Arina Odnoblyudova (University College London)
Bayesian inference should ideally not be overly sensitive to the choice of prior or hyperparameters, but even defining and measuring this sensitivity is challenging. Existing global sensitivity measures typically involve significant trade-offs between strength of the measure, interpretability, and computational tractability. Unfortunately, most methods are unable to serve the needs of modern Bayesian inference due to their high computational cost and poor performance in multiple dimensions. To address these limitations, we introduce a new approach to global sensitivity analysis which only requires a set of samples from a reference posterior and the ability to evaluate score functions, making it broadly computationally tractable. We demonstrate our proposed method on challenging Bayesian inference problems which are practically out of reach of existing approaches, including Bayesian inference for heavy-tailed time series, simulation-based inference for problems in telecommunications engineering, and generalised Bayesian inference for doubly-intractable models.
10:45–11:30 ☕ Coffee break
11:30–12:00 Title: TBA
Contributed Talk
12:00–12:30 Title: TBA
Contributed Talk
12:30–13:45 🥗 Lunch
13:45–14:15 Title: Convergence of a class of gradient-free optimisation schemes when the objective function is noisy, irregular, or both
Mathieu Gerber (University of Bristol)
We investigate the convergence properties of a class of iterative algorithms designed to minimize a potentially non-smooth and noisy objective function, which may be algebraically intractable and whose values may be obtained as the output of a black box. The algorithms considered can be cast under the umbrella of a generalised gradient descent recursion, where the gradient is that of a smooth approximation of the objective function. The framework we develop includes as special cases model-based and mollification methods, two classical approaches to zeroth-order optimisation. The convergence results are obtained under very weak assumptions on the regularity of the objective function and involve a trade-off between the degree of smoothing and the size of the steps taken in the parameter updates. As expected, additional assumptions are required in the stochastic case. We illustrate the relevance of these algorithms and our convergence results through a challenging classification example from machine learning.
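As a hedged illustration of one classical member of this family (Gaussian smoothing, a mollification-type method), the sketch below runs gradient descent on a black-box objective using a Monte Carlo estimate of the gradient of its smoothed surrogate. The objective, step size, and smoothing level are illustrative assumptions, not the specific schemes or conditions analysed in the talk.

```python
import numpy as np

def smoothed_grad(f, x, sigma, n_samples, rng):
    """Monte Carlo estimate of the gradient of the Gaussian-smoothed objective
    f_sigma(x) = E[f(x + sigma * u)], u ~ N(0, I), via
    grad f_sigma(x) ~ E[(f(x + sigma * u) - f(x)) * u] / sigma."""
    u = rng.normal(size=(n_samples, x.size))
    fx = f(x)
    diffs = np.array([f(x + sigma * ui) - fx for ui in u])
    return (diffs[:, None] * u).mean(axis=0) / sigma

def zeroth_order_descent(f, x0, steps=500, step_size=0.05, sigma=0.1,
                         n_samples=64, seed=0):
    """Gradient-free descent: follow the estimated gradient of a smooth
    approximation of f rather than a (possibly nonexistent) gradient of f."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x -= step_size * smoothed_grad(f, x, sigma, n_samples, rng)
    return x

# Toy usage: a noisy, non-smooth objective minimised at x = (1, 1, 1).
def noisy_abs(x, rng=np.random.default_rng(42)):
    return np.sum(np.abs(x - 1.0)) + 0.01 * rng.normal()

print(zeroth_order_descent(noisy_abs, x0=np.zeros(3)))
```

The trade-off flagged in the abstract is visible here: a larger sigma gives a smoother surrogate that is easier to follow but a more biased approximation of the original objective, while the step size controls how aggressively the surrogate gradient is followed.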
14:15–14:45 Title: TBA
Marina Riabiz (King's College London)
14:45–15:30 ☕ Coffee break
15:30–16:00 Title: TBA
Contributed Talk
16:00–16:30 Title: TBA
Sarah Filippi (Imperial College London)

Wednesday, April 29th 2026

9:00–9:30 ☕ Morning coffee
9:30–10:00 Title: TBA
Anna Korba (ENSAE, CREST, Institut Polytechnique de Paris)
10:00–10:30 Title: A computable measure of suboptimality for entropy-regularised variational objectives
Heishiro Kanagawa (Fujitsu Research)
Several emerging post-Bayesian methods target a probability distribution for which an entropy-regularised variational objective is minimised. This increased flexibility introduces a computational challenge, as one loses access to an explicit unnormalised density for the target. To mitigate this difficulty, we introduce a novel measure of suboptimality called gradient discrepancy, and in particular a kernel gradient discrepancy (KGD) that can be explicitly computed. In the standard Bayesian context, KGD coincides with the kernel Stein discrepancy (KSD), and we obtain a novel characterisation of KSD as measuring the size of a variational gradient. Outside this familiar setting, KGD enables novel sampling algorithms to be developed and compared, even when unnormalised densities cannot be obtained.
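The kernel gradient discrepancy itself is not spelled out in the abstract, so as background the sketch below computes the kernel Stein discrepancy that KGD coincides with in the standard Bayesian setting, assuming a Gaussian kernel and access to the target score function; it is a generic illustration rather than the speaker's implementation.

```python
import numpy as np

def ksd_gaussian(samples, score, lengthscale=1.0):
    """V-statistic estimate of the squared kernel Stein discrepancy between the
    empirical distribution of `samples` and a target with score function `score`,
    using the Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 h^2))."""
    x = np.asarray(samples, dtype=float)
    n, d = x.shape
    s = np.array([score(xi) for xi in x])             # (n, d) target scores
    h2 = lengthscale ** 2

    diff = x[:, None, :] - x[None, :, :]              # (n, n, d) pairwise x_i - x_j
    sqdist = np.sum(diff ** 2, axis=-1)               # (n, n)
    k = np.exp(-sqdist / (2.0 * h2))                  # kernel matrix
    grad_x_k = -diff / h2 * k[..., None]              # d/dx k(x, y)
    grad_y_k = -grad_x_k                              # d/dy k(x, y)
    trace_term = (d / h2 - sqdist / h2 ** 2) * k      # trace of d^2 k / (dx dy)

    stein_kernel = (
        (s @ s.T) * k
        + np.einsum("id,ijd->ij", s, grad_y_k)
        + np.einsum("jd,ijd->ij", s, grad_x_k)
        + trace_term
    )
    return float(stein_kernel.mean())

# Toy usage: samples checked against a standard normal target (score(x) = -x).
rng = np.random.default_rng(0)
print(ksd_gaussian(rng.normal(size=(300, 2)), score=lambda x: -x))
```

The estimate depends on the target only through its score function, so no normalising constant is needed; per the abstract, KGD extends this property beyond the setting where an unnormalised density is available.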
10:30–11:15 ☕ Coffee break
11:15–11:45 Title: Self-speculative masked diffusions
Arnaud Doucet (University of Oxford)
We present self-speculative masked diffusions, a new class of masked diffusion generative models for discrete data that require significantly fewer function evaluations to generate samples. Standard masked diffusion models predict factorized logits over currently masked positions. A number of masked positions are then sampled; however, the factorization approximation means that sampling too many positions in one go leads to poor sample quality. As a result, many simulation steps and therefore neural network function evaluations are required to generate high-quality data. We reduce the computational burden by generating non-factorized predictions over masked positions. This is achieved by modifying the final transformer attention mask from non-causal to causal, enabling draft token generation and parallel validation via a novel, model-integrated speculative sampling mechanism. This results in a non-factorized predictive distribution over masked positions in a single forward pass. We find that we can achieve a ~2x reduction in the required number of network forward passes relative to standard masked diffusion models.
11:45–12:15 Title: Scaling-up simulation-based inference with diffusion models
Gilles Louppe (University of Liège)
Deep generative models are transforming how we solve some of science's hardest puzzles: inverse problems where we must work backwards from noisy, incomplete observations to uncover hidden physical states. In this talk, we will explore three scales of application, from characterizing the atmospheres of distant exoplanets light years away, to reconstructing turbulent fluid dynamics from sparse measurements, to assimilating satellite data across the entire Earth's atmosphere in real time. We will see how normalizing flows, score-based diffusion models, and latent space compression allow us to tackle problems spanning tens to billions of variables, revealing not just single solutions but entire distributions of physically plausible states.
12:15–13:15 🥗 Lunch
13:15–14:45 🪧 Poster session
14:45–15:30 ☕ Coffee break
15:30–16:00 Title: TBA
Contributed Talk
16:00–16:15 👋 Closing remarks

Organisers

To learn more about our organisers, see the FSML research group webpage!

Acknowledgements

This workshop is supported financially through the EPSRC grant "Transfer Learning for Monte Carlo Methods" (EP/Y022300/1) and the UCL Department of Statistical Science's section on Computational Statistics and Machine Learning. The organisers are also particularly grateful to the UCL ELLIS unit and the Royal Statistical Society's section on Computational Statistics and Machine Learning for supporting this event.


Location

The workshop will be hosted at the London Mathematical Society in Central London.
The address is De Morgan House, 57-58 Russell Sq, London WC1B 4HS.

Google Maps