Abstract: This talk sheds light on hardware/software integration challenges in accelerating (large) AI models on (custom) edge AI hardware. We will start with neural networks, first detailing the computational steps required to execute inference tasks on general-purpose processors. We will then see how novel, dedicated hardware architectures, such as in-memory computing, enable more efficient hardware execution, and examine the new challenges these architectures raise from both hardware and software perspectives. Finally, we will look into more emerging models, such as probabilistic circuits, which are promising candidates for the next generation of edge AI devices (part of my group's current research focus).
Speaker: Martin Andraud
Martin Andraud is an assistant professor in the Department of Electronics and Nanoengineering at Aalto University, Helsinki, Finland. He received the Ph.D. degree in micro- and nanoelectronics from the TIMA Laboratory, University of Grenoble Alpes, France, in 2016. Between 2016 and 2019, he was a postdoctoral researcher, successively with TU Eindhoven, The Netherlands, and KU Leuven, Belgium. His current research interests include the design of low-power integrated circuits and mixed-signal edge AI hardware accelerators for self-adaptive and self-learning applications.
Affiliation: Aalto University
Place of Seminar: Zoom