Abstract: It is easy to propose a new algorithm for solving a machine learning problem. It is much harder to convince other people that the proposed algorithm actually works. The “gold standard” of tight theoretical guarantees is often out of reach. So what do we do? Typically, an algorithm is validated on a couple of test problems and its output is compared with that of algorithms that are known to work. This is not a great strategy.
In this talk, I will outline a general strategy for assessing whether an algorithm for approximate Bayesian computation works on a given problem. The method does not require evaluating the true posterior, and it also indicates the ways in which the computed posterior systematically deviates from the true posterior.
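The abstract does not name the strategy, but it plausibly refers to simulation-based calibration (SBC): draw a parameter from the prior, simulate data from it, run the inference algorithm, and record the rank of the true parameter among the posterior draws; if the algorithm targets the true posterior, these ranks are uniform. A minimal sketch under that assumption, using a hypothetical toy normal-normal model where the exact posterior is known:

```python
# Sketch of simulation-based calibration (SBC) on a toy model (an assumption,
# not necessarily the speaker's method): theta ~ N(0, 1), y | theta ~ N(theta, 1).
import numpy as np

rng = np.random.default_rng(0)

def posterior_draws(y, n_draws):
    # Stand-in for the inference algorithm under test. Here we use the exact
    # conjugate posterior, theta | y ~ N(y / 2, 1 / 2), so SBC should pass.
    return rng.normal(y / 2.0, np.sqrt(0.5), size=n_draws)

n_sims, n_draws = 1000, 99
ranks = np.empty(n_sims, dtype=int)
for s in range(n_sims):
    theta = rng.normal(0.0, 1.0)              # draw a parameter from the prior
    y = rng.normal(theta, 1.0)                # simulate data given that parameter
    draws = posterior_draws(y, n_draws)       # run the inference algorithm
    ranks[s] = np.sum(draws < theta)          # rank of true theta among the draws

# Exact inference gives ranks uniform on {0, ..., n_draws}; systematic shapes
# (skew, U-shape, hump) diagnose bias or over/under-dispersion in the computed
# posterior, without ever evaluating the true posterior density.
hist, _ = np.histogram(ranks, bins=10, range=(0, n_draws + 1))
print(hist)
```

Replacing `posterior_draws` with a deliberately biased approximation (e.g., shifting the mean) would produce a visibly non-uniform rank histogram, which is how the check flags systematic deviation.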
Speaker: Daniel Simpson
Affiliation: Professor of Statistical Sciences, University of Toronto
Place of Seminar: Aalto University