Abstract: Bayesian inference in applied fields of science and engineering can be challenging because, in the best-case scenario, the likelihood is a black box (e.g., mildly to very expensive to evaluate, with no gradients); more often than not, it is not available at all, and the researcher can only simulate data from the model. In this talk, I review a recent sample-efficient framework for approximate Bayesian inference, Variational Bayesian Monte Carlo (VBMC), which uses only a limited number of potentially noisy log-likelihood evaluations. VBMC produces both a nonparametric approximation of the posterior distribution and an approximate lower bound on the model evidence, useful for model selection. VBMC combines well with a technique we (re)introduced, inverse binomial sampling (IBS), which obtains unbiased, normally distributed estimates of the log-likelihood purely via simulation. VBMC has been tested on many real problems (up to 10 dimensions) from computational and cognitive neuroscience, both with and without analytically available likelihoods. Our method performed consistently well in reconstructing the ground-truth posterior and model evidence within a limited budget of evaluations, showing promise as a general tool for black-box, sample-efficient approximate inference, with exciting potential extensions to more complex cases.
A MATLAB toolbox is available at: https://github.com/lacerbi/vbmc
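To make the IBS idea concrete, here is a minimal Python sketch (not the toolbox's API): for each observation, one draws from the simulator until a sample matches the observed response, and if this takes K draws, the quantity -(1/1 + 1/2 + ... + 1/(K-1)) is an unbiased estimate of the log-likelihood of that observation. The Bernoulli "model" below is a hypothetical stand-in for an arbitrary black-box simulator.

```python
import math
import random

def ibs_loglik(simulate, observed, rng):
    """One inverse binomial sampling (IBS) estimate of log p(observed):
    draw from the simulator until a sample matches the observed response;
    K draws yield the unbiased estimate -sum_{k=1}^{K-1} 1/k."""
    k = 1
    while simulate(rng) != observed:
        k += 1
    return -sum(1.0 / j for j in range(1, k))

# Hypothetical stand-in for a black-box simulator: a Bernoulli model
# whose response is 1 with probability p, so the log-likelihood of
# observing "1" is exactly log(p).
p = 0.6
simulate = lambda rng: 1 if rng.random() < p else 0

rng = random.Random(0)
n = 20000
# Averaging many repeated IBS estimates recovers log(p) (unbiasedness);
# in practice one computes one estimate per trial and sums over the data.
mean_est = sum(ibs_loglik(simulate, 1, rng) for _ in range(n)) / n
target = math.log(p)  # the true log-likelihood, ≈ -0.511
```

Because each estimate is unbiased, summing per-trial estimates gives an unbiased estimate of the full log-likelihood, exactly the kind of noisy evaluation VBMC is designed to consume.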
Bio: Luigi Acerbi recently joined the Department of Computer Science of the University of Helsinki as Assistant Professor of Artificial and Human Intelligence. After studying theoretical physics and computer science in Milan (Italy), he obtained his doctorate in computational neuroscience from the University of Edinburgh (UK), building Bayesian models of supposedly-Bayesian brains. In his post-doctoral work at NYU (NY, USA) and Geneva (Switzerland), besides modeling human behavior and decision making, he investigated ideas from machine learning in computational neuroscience, which led him to his current line of research on Bayesian optimization and approximate inference. Luigi Acerbi developed Bayesian Adaptive Direct Search (BADS), a toolbox for model fitting via fast hybrid Bayesian optimization, currently in use in dozens of computational labs across the globe; and the VBMC toolbox, recently released. He is an affiliate researcher of the International Brain Laboratory and an off-site visiting scholar at New York University.
Speaker: Luigi Acerbi
Affiliation: Department of Computer Science, University of Helsinki
Place of Seminar: Zoom (now available on YouTube)