Abstract: Tractable circuits are a recent development in machine learning and sit at the intersection of neural networks, traditional probabilistic models such as mixture models, and propositional logic. In contrast to traditional representations of probabilistic models, circuits use a low-level representation built from simple arithmetic operations, which lets us guarantee tractability (exact and efficient computation) of certain inferences based on structural properties of the circuit itself. Hence, they have become a valuable tool for reasoning about classes of distributions that allow tractable computation of, for example, marginal probabilities – a central object in many machine learning applications, such as Bayesian inference, model selection, and handling missing values. In this talk, I will briefly review tractable circuits as a tool for flexible and trustworthy probabilistic reasoning and showcase recent advancements. In particular, I will discuss recent endeavours in modelling heterogeneous data and in equipping mixture models with negative weights. Lastly, I will highlight future directions and how circuits can be useful in contemporary machine learning contexts.
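As an illustrative aside (not part of the talk material): the "simple arithmetic operations" and tractable marginals mentioned above can be sketched with a tiny, hand-built probabilistic circuit in Python. The structure, weights, and variable names below are invented for illustration; the key point is that marginalizing a variable only requires setting its indicator leaves to 1, so the same single feed-forward pass that computes a joint probability also computes exact marginals.

```python
# A minimal, hand-crafted smooth and decomposable probabilistic circuit
# over two binary variables X1, X2 (weights chosen arbitrarily).
# Leaves are indicators, product nodes combine disjoint variable scopes,
# and the top sum node mixes two product nodes with normalized weights.

def circuit(x1, x2):
    """Evaluate the circuit; pass None for a variable to marginalize it."""
    def leaves(x):
        # Indicator leaves [X=0], [X=1]; None -> (1, 1) marginalizes X.
        if x is None:
            return 1.0, 1.0
        return (1.0, 0.0) if x == 0 else (0.0, 1.0)

    x1_0, x1_1 = leaves(x1)
    x2_0, x2_1 = leaves(x2)

    # Two product nodes, each a product of univariate (Bernoulli) factors
    # over the disjoint scopes {X1} and {X2}.
    p_left  = (0.8 * x1_0 + 0.2 * x1_1) * (0.3 * x2_0 + 0.7 * x2_1)
    p_right = (0.4 * x1_0 + 0.6 * x1_1) * (0.9 * x2_0 + 0.1 * x2_1)

    # Sum node: a mixture with weights summing to 1.
    return 0.5 * p_left + 0.5 * p_right

# Joint probability of a complete assignment:
p_joint = circuit(1, 0)   # -> 0.30
# Exact marginal P(X1=1), obtained in one pass with X2 marginalized:
p_marg = circuit(1, None)  # -> 0.40
```

Because every node is just a sum or product, one evaluation of the circuit costs time linear in its size, and the indicator trick turns marginalization – exponential in general – into the same linear-time pass.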
Bio: Martin is an Academy of Finland postdoctoral researcher at Aalto CS and a member of the ELLIS society. He works with Prof. Arno Solin on topics related to probabilistic machine learning, tractable models, and Bayesian deep learning. Prior to that, he completed his PhD at Graz University of Technology, Austria, under the supervision of Franz Pernkopf and Robert Peharz. He received his master's degree in computational intelligence from Vienna University of Technology, Austria.
Time and date: Monday, Feb 12th, 2 pm at T4 (Otaniemi CS) and on Zoom