UH Guest lecture: Rethinking Interpretability and Adaptivity of Deep Learning
Date and time: October 24th, 2025, 11:15-12:00 EEST
Event type: Hybrid
Venue: Exactum, C323, University of Helsinki Kumpula Campus, Pietari Kalmin katu 5
Zoom: LINK, Passcode: 316936
Speaker: Prof. Plamen Angelov, Director of Research at Lancaster University, founding Director of the Lancaster Intelligent, Robotic and Autonomous Systems (LIRA) Centre, and Fellow of the IEEE, the IET, ELLIS and AAIA.
Abstract: The success of deep learning, fuelled by Large Language Models (LLMs), Transformers such as ViT, and Foundation Models, combined with the abundance of digital data, has led to the temptation to shortcut from data to predictions, bypassing the deeper insight, reasoning, semantics, causality and logic that are traditionally tied to a model's structure or architecture. Deep learning as we know it offers unparalleled accuracy, generalisation and class separability, but this comes at the cost of an opaque and amorphous internal structure that offers little to the increasing demands for human agency, oversight and interpretability with regard to the way decisions are made.
In this talk, the deep learning pipeline will be re-examined and compared to the traditional machine learning pipeline on the one hand, and to cognitive-science and agentic-AI pipelines on the other; some parallels with the brain and the way humans make decisions will also be sketched. Based on this, an alternative to the so-called "end-to-end" mantra will be discussed: a modular approach based on prototypes, which provides more degrees of freedom with regard to interpretability, human agency and oversight and, interestingly, with regard to adaptivity and continual learning. While adaptivity (not only in deep learning) is traditionally addressed through additive updates, this practice is critically analysed in the talk and identified as one of the causes of so-called "catastrophic forgetting". An alternative is considered instead: adaptation of atomic knowledge representations (KRs) in the form of prototypes and clusters. We argue that these are more suitable KRs than the weights of a deep neural network, which are sometimes suggested as KRs in the literature. We further demonstrate that such KRs are practically invariant to latent-space transformations and to further adaptation during a continual learning process.
Examples and applications are presented mostly as a proof of concept.
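To make the adaptation idea above concrete, here is a minimal Python sketch written for this announcement rather than taken from the talk: a nearest-prototype classifier whose continual learning happens by recursively updating (or spawning) class prototypes in a latent space, instead of additively updating network weights. It is a generic illustration of prototypes as knowledge representations, not the method of [1] or xDNN [2]; the class name, threshold and update rule are assumptions of this sketch.

    import numpy as np

    class PrototypeClassifier:
        """Hypothetical illustration: prototypes (latent-space centroids)
        as the unit of knowledge representation, adapted per sample."""

        def __init__(self, new_prototype_threshold=2.0):
            self.prototypes = []   # one centroid per prototype
            self.counts = []       # samples absorbed by each prototype
            self.labels = []       # class label of each prototype
            self.threshold = new_prototype_threshold  # assumed hyperparameter

        def partial_fit(self, z, y):
            # Absorb one latent vector z with label y (continual learning).
            same = [i for i, lab in enumerate(self.labels) if lab == y]
            if same:
                dists = [np.linalg.norm(z - self.prototypes[i]) for i in same]
                nearest = same[int(np.argmin(dists))]
                if min(dists) < self.threshold:
                    # Recursive mean update: only the winning prototype drifts
                    # toward z; all other prototypes are untouched, which is
                    # the argued remedy to catastrophic forgetting.
                    self.counts[nearest] += 1
                    self.prototypes[nearest] += (
                        (z - self.prototypes[nearest]) / self.counts[nearest]
                    )
                    return
            # z is far from all prototypes of class y: spawn a new prototype.
            self.prototypes.append(np.asarray(z, dtype=float).copy())
            self.counts.append(1)
            self.labels.append(y)

        def predict(self, z):
            # Nearest-prototype decision; the winning prototype doubles as an
            # interpretable, example-based explanation of the prediction.
            dists = [np.linalg.norm(z - p) for p in self.prototypes]
            return self.labels[int(np.argmin(dists))]

    # Toy usage with random 8-dimensional "latent" vectors:
    rng = np.random.default_rng(0)
    clf = PrototypeClassifier()
    for _ in range(100):
        clf.partial_fit(rng.normal(0.0, 0.5, 8), "cat")
        clf.partial_fit(rng.normal(3.0, 0.5, 8), "dog")
    print(clf.predict(np.full(8, 3.1)))  # -> "dog"

Because each update touches a single prototype, new classes or drifted data can be absorbed incrementally without rewriting the shared weights that encode everything else, which is the intuition behind the modular, prototype-based alternative discussed in the talk.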
References:
[1] A. Aghasanli, Y. Li, P. Angelov, "Prototype-Based Continual Learning with Label-free Replay Buffer and Cluster Preservation Loss," Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR 2025), pp. 6545-6554, 2025.
[2] P. Angelov, E. Soares, "Towards explainable deep neural networks (xDNN)," Neural Networks, vol. 130, pp. 185-194, 2020.
[3] A. Aghasanli, P. Angelov, "Fast Prototype-based t-SNE for Large-Scale and Online Data," Transactions on Machine Learning Research (TMLR), 2025, https://openreview.net/forum?id=7wCPAFMDWM
Speaker bio: Prof. Angelov holds a Chair in Intelligent Systems and leads the AI group at the School of Computing and Communications, Lancaster University, where he served as the School's Director of Research (2020-2025); he is on sabbatical this academic year.

He has over 400 publications in leading journals (such as TPAMI, Information Fusion and IEEE Transactions on Cybernetics) and peer-reviewed conference proceedings (such as CVPR, ICLR, AAAI, ICCV, ECCV and IEEE conferences), 3 granted US patents and 3 research monographs (Wiley, 2012; Springer, 2002 and 2018), cited over 18,500 times with an h-index of 66. He is ranked in the top 0.2% of AI researchers worldwide (731st out of 399,064 in 2024) according to Stanford University's "Top 2% Scientists" list and has 12 highly cited papers; around half of his publications are in top-10% venues according to SciVal. He has an active research portfolio in interpretable (explainable-by-design) deep learning, interests in adaptive and continual deep learning, and internationally recognised results in explainable deep learning, evolving systems for streaming data and computational intelligence. More recently, he published Recursive SNE, the fastest recursive SNE method for visualisation, which allows incremental, per-sample visualisation (TMLR paper: https://openreview.net/pdf?id=7wCPAFMDWM; code: https://github.com/Aghasanli-Angelov/RSNE).

Prof. Angelov leads numerous projects funded by UK research councils, the EC, the European Space Agency, DSTL, GCHQ, the Royal Society, the Faraday Institute and industry. He is a recipient of the Dennis Gabor Award (2020) for "outstanding contributions to engineering applications of neural networks", of IEEE "For Outstanding Services" awards (2013 and 2017) and of other awards. He is Editor-in-Chief of Springer's journal Evolving Systems (recipient of Editorial Excellence awards for 2020 and 2024) and Associate Editor of IEEE Transactions on Cybernetics, IEEE Transactions on Artificial Intelligence and other journals. He has given 40 keynote talks and was General co-Chair of a number of high-profile IEEE conferences, including IJCNN. He is founding Chair of the Technical Committee on Evolving Intelligent Systems of the IEEE SMC Society and previously chaired the Standards Committee of the IEEE Computational Intelligence Society (2010-2012), where he initiated and chaired Working Group P2976 on the IEEE standard on explainable AI. He is founding co-Director of one of the programmes funded by ELLIS (on human-centred machine learning) and has served on the International Program Committees of over 150 international conferences (primarily IEEE). More details can be found at www.lancs.ac.uk/staff/angelov