
Arno Solin: Neural networks that know what they do not know

Abstract: Deep feedforward neural networks have become an essential component of modern machine learning. These models are known to reinforce hidden data biases, making them unreliable and difficult to interpret. In Bayesian deep learning, the interest is two-fold: encoding prior knowledge into models and performing probabilistic inference under the specified model. This talk focuses on the former: we seek to build models that 'know what they do not know' by introducing inductive biases in the function space. This is done by studying the connection between random (untrained) networks and Gaussian process priors. We will focus on stationary models, which act as a proxy for capturing sensitivity. Stationarity means translation invariance: the joint probability distribution does not change when the inputs are shifted. This seemingly naive assumption has strong consequences, as it induces conservative behaviour across the input domain, both in-distribution and outside the observed data. The talk builds on two papers (https://arxiv.org/abs/2010.09494 and https://arxiv.org/abs/2110.13572) published at NeurIPS 2020 and 2021, respectively.
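
The translation-invariance property mentioned in the abstract can be illustrated with a small numerical check. The sketch below (an illustration under assumed choices, not code from the papers) evaluates a standard stationary Gaussian process covariance, the squared-exponential kernel, and verifies that shifting all inputs by a constant leaves the covariance matrix, and hence the joint prior over function values, unchanged. The kernel choice, lengthscale, and input grid are illustrative assumptions.

```python
# Illustrative sketch: a stationary covariance function depends on its
# inputs only through their difference, so translating all inputs by a
# constant leaves the Gaussian process prior unchanged.
import numpy as np

def rbf_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    """Squared-exponential (RBF) kernel, a standard stationary covariance."""
    d = x1[:, None] - x2[None, :]             # pairwise input differences
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

x = np.linspace(-3.0, 3.0, 50)                # input locations
shift = 7.5                                   # arbitrary translation

K = rbf_kernel(x, x)                          # prior covariance on x
K_shifted = rbf_kernel(x + shift, x + shift)  # prior covariance on shifted x

# Translation invariance: the two covariance matrices coincide, so the
# joint prior over function values is identical before and after the shift.
print(np.allclose(K, K_shifted))              # True
```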

Speaker: Arno Solin

Affiliation: Aalto University

Place of Seminar: Zoom
