
Sara Tähtinen: Explainable AI in computer vision

Abstract: Deep learning models are a widely used technique that helps humans go through millions of images quickly. Automatic scanning of images is used almost everywhere: labelling users' images on their phones, deleting disturbing photos from social media, detecting diseases in medical images, checking passports ... the list goes on. The more power we give to these algorithms, the more important it is to make sure they are fair. And the more complex these models become, the harder it is to explain their results. But if we cannot understand our models, how can we make sure they work the way we intended them to?

In this talk I will go through why explainable AI is an important topic and what kinds of approaches the explainable AI literature suggests for images. The methods can be roughly divided into two categories: pre-model explainability (i.e. understanding the training data) and post-model explainability (i.e. understanding the model's results). The field of explainable AI is still developing, but interesting papers have already been published; the focus of this talk is to go through the most promising ones and understand the ideas behind them. Welcome to hear more!
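To give a flavour of what post-model explainability can look like in practice, the short sketch below computes a vanilla gradient saliency map, one of the simplest ways to ask which pixels a classifier's prediction is most sensitive to. This is an illustrative example only, not a method taken from the talk; it assumes PyTorch and torchvision with a pretrained ResNet-18, and uses a random tensor in place of a real, preprocessed image.

# Minimal sketch of post-model explainability: a vanilla gradient saliency map.
# Assumptions (not from the talk): PyTorch + torchvision, a pretrained ResNet-18,
# and a random tensor standing in for a real, preprocessed 224x224 RGB image.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Stand-in for a preprocessed photo, shape (batch, channels, height, width).
image = torch.rand(1, 3, 224, 224, requires_grad=True)

# Forward pass; pick the class the model is most confident about.
logits = model(image)
top_class = logits.argmax(dim=1).item()

# Backpropagate the top-class score down to the input pixels.
logits[0, top_class].backward()

# Saliency: sensitivity of the prediction to each pixel (max over colour channels).
saliency = image.grad.abs().max(dim=1)[0]   # shape (1, 224, 224)
print(saliency.shape)

Gradient-based saliency is only one of many post-model techniques; the talk surveys the literature more broadly.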

Speakers:  Sara Tähtinen
Sara Tähtinen has a background in research (PhD in theoretical particle physics) and works on an explainable AI project in her current job. Through her work as well as personal interest, she has gone extensively through the available papers on explainable AI for images, and she is happy to share insights on how to make models more understandable and fair.

Affiliation:  DAIN studios

Place of Seminar:  Zoom (Available afterwards on YouTube)