Q. Vera Liao: Human-Centered AI Transparency: Bridging the Sociotechnical Gap
Date: May 14, 2025
Time: 12:00-13:00
Location: 1501 Sklodowska-Curie, Kide, Aalto University (Konemiehentie 1, Espoo)
Abstract: Transparency—enabling appropriate understanding of AI technologies—is considered a pillar of Responsible AI. The AI community has developed an abundance of techniques in the hope of achieving transparency, including explainable AI (XAI), model evaluation, and uncertainty quantification. However, there is an inevitable sociotechnical gap between these computational techniques and the nuanced, contextual human needs for understanding AI. Mitigating the sociotechnical gap has long been a mission of the HCI research community, but the age of AI has brought new challenges to this mission. In this talk, I will discuss these new challenges and some of our approaches to bridging the sociotechnical gap for AI transparency: conducting critical investigations into dominant AI transparency paradigms; studying people's transparency needs in diverse contexts; and shaping technical development by embedding sociotechnical perspectives in evaluation practices.
Bio: Q. Vera Liao is a Principal Researcher at Microsoft Research, where she is part of the FATE (Fairness, Accountability, Transparency, and Ethics in AI) group, and an incoming Associate Professor of Computer Science at the University of Michigan. Previously, she worked as a Research Staff Member at the IBM T.J. Watson Research Center. Her current research interests are in human-AI interaction and responsible AI, with an overarching goal of bridging emerging AI technologies and human-centered perspectives. Her work has received many paper awards at HCI and AI venues. She currently serves as co-editor-in-chief of the Springer HCI Book Series, and has served on the editorial or organizing teams of many conferences and journals, including CHI, CSCW, FAccT, IUI, and ACM TiiS.