Abstract: Modern statistics and machine learning tools are being applied to increasingly complex phenomena, and as a result make use of increasingly complex models. A large class of such models are the so-called intractable likelihood models, where the likelihood is either too computationally expensive to evaluate, or impossible to write down in closed form. This creates significant issues for classical approaches such as maximum likelihood estimation or Bayesian inference, which rely entirely on evaluations of a likelihood. In this talk, we will cover several novel inference schemes which bypass this issue. These will be constructed from kernel-based discrepancies such as maximum mean discrepancies and kernel Stein discrepancies, and can be used in either a frequentist or a Bayesian framework. An important feature of our approach is that it is provably robust, in the sense that a small number of outliers or mild model misspecification will not have a significant impact on parameter estimation. In particular, we will show how the choice of kernel allows us to trade statistical efficiency for robustness. The methodology will then be illustrated on a range of intractable likelihood models in signal processing and biochemistry.
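To make the minimum-distance idea behind MMD-based estimation concrete, the following Python sketch is illustrative only, not the speaker's implementation: it assumes a Gaussian kernel with unit lengthscale and a toy Gaussian location model standing in for an intractable simulator, and estimates the parameter by minimising the squared MMD between simulated and observed samples.

    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(0)

    def gaussian_kernel(x, y, lengthscale=1.0):
        # k(x, y) = exp(-(x - y)^2 / (2 * lengthscale^2)), evaluated pairwise
        return np.exp(-(x[:, None] - y[None, :]) ** 2 / (2 * lengthscale ** 2))

    def mmd2(x, y, lengthscale=1.0):
        # Biased (V-statistic) estimate of the squared MMD between samples x and y
        kxx = gaussian_kernel(x, x, lengthscale).mean()
        kyy = gaussian_kernel(y, y, lengthscale).mean()
        kxy = gaussian_kernel(x, y, lengthscale).mean()
        return kxx + kyy - 2.0 * kxy

    # Observed data: mostly N(2, 1), contaminated with a few outliers at 20
    y_obs = np.concatenate([rng.normal(2.0, 1.0, 200), np.full(10, 20.0)])

    # Toy simulator for the model N(theta, 1); fixing the base randomness z
    # (common random numbers) makes the objective deterministic in theta
    z = rng.normal(0.0, 1.0, 200)
    def simulate(theta):
        return theta + z

    # Minimum MMD estimation: pick theta whose simulated samples are closest
    # to the observed data in MMD, using only simulation, never a likelihood
    res = minimize_scalar(lambda t: mmd2(simulate(t), y_obs),
                          bounds=(-10.0, 10.0), method="bounded")
    print("minimum MMD estimate:", res.x)  # close to 2 despite the outliers

Note how the outliers at 20 barely move the estimate away from the true value of 2, whereas the maximum likelihood estimate (here, the sample mean) would be pulled towards them; this is the robustness property the abstract refers to, with the choice of kernel (e.g. its lengthscale) governing the trade-off between statistical efficiency and robustness.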
Speaker: François-Xavier Briol
François-Xavier Briol is a Lecturer (equivalent to Assistant Professor) in the Department of Statistical Science at University College London, as well as a Group Leader at The Alan Turing Institute, the UK's national institute for data science and AI, where he is affiliated with the Data-Centric Engineering programme. His research interests lie at the interface of computational statistics, machine learning, and applied mathematics, and his work focuses on methodology for statistical computation and inference for large-scale and computationally expensive probabilistic models.
Affiliation: University College London & Alan Turing Institute
Place of Seminar: Zoom