Bridge program at AAAI focused on collaborative AI and modelling of humans

The Finnish Center for Artificial Intelligence FCAI, ELLIS Unit Helsinki and partners organized a one-day workshop at the AAAI meeting in Vancouver in February. The goal was to connect separate scientific communities whose joint efforts are needed to solve one of the most pressing questions in AI: how can humans and AI systems best work together to solve problems? This is one of the main research focuses and strengths of FCAI and ELLIS Unit Helsinki.

The logo of the CAIHu Bridge program, image via Freepik.com

With nearly 400 registrations, this was one of the largest Bridge programs at AAAI. Keynote speakers included Microsoft’s Chief Scientific Officer Eric Horvitz. For one of the organizers, the need for this Bridge was clear. “We need collaborative AI tools to solve tough problems, and these tools work better when they incorporate data and knowledge about humans,” said FCAI director and Aalto University professor Samuel Kaski. “Through simulation-based virtual laboratories, AI tools will become widely available across scientific fields.” Kaski and doctoral researcher Sebastiaan de Peuter gave a tutorial on user modelling for cooperative AI; a summary of the tutorial follows below.

For co-organizer Sammie Katt, a postdoctoral fellow at FCAI, the Bridge was an opportunity to gather and interact with the top researchers in the field. "The amount of interest and participation is telling," said Katt. "It was particularly instructive to be part of the discussion between otherwise disconnected communities, each with their own motivations and solution techniques. The support of the ELLIS network, especially the members from the units in Delft and Alicante, has been invaluable in organizing the event and bridging collaborative AI and human modelling."

The Bridge included two lively poster sessions in which researchers mingled in an informal setting, as well as an expert panel that stressed the importance of actively including and accounting for humans in real-world problems. Given the positive response from the community, the organizers are considering turning the Bridge into a recurring event.

This Bridge program at AAAI was also made possible by the CIFAR Pan-Canadian AI Strategy, the ELISE project, the Research Council of Finland and the UK Research and Innovation Turing AI World-Leading Researcher Fellowship.


User modelling for cooperative AI — summary of the tutorial

Samuel Kaski presented the broader picture of the research being done in the Probabilistic Machine Learning (PML) group at Aalto University and within FCAI at large. He talked about collaborative AI and why it requires user modelling: for an AI system to collaborate effectively, it needs to understand the user it is interacting with. Existing user models, however, need to be improved to better capture the strategic and suboptimal behaviours that humans exhibit. Better user models introduce their own complexities, and Kaski highlighted two areas where the PML group has worked on such issues:

1. Reducing the cost of computation of user models by learning computationally cheap surrogates
2. Being able to work with and adapt to users that have various levels of expertise in the given domain

Lastly, Kaski highlighted that improvements in collaborative AI and user modelling will likely be applicable across domains, and that it therefore makes sense to build an AI-assistance toolbox for virtual laboratories that makes maximal use of this transferability.

Sebastiaan de Peuter followed with a technical deep-dive into recent work on learning from preferences. In his part of the tutorial, Kaski had pointed out the need for improved user modelling and the potential of insights from cognitive science for building such models; de Peuter showed a specific instance of this. The work looks at preference learning, in which human users are asked to state their preferences over small sets of options (for example, to pick the best of three hotels). A user expresses preferences over many such sets, and an underlying objective function over the options is then inferred from all the collected stated preferences.
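To make the inference step concrete, here is a minimal sketch using a multinomial-logit (Luce) choice model, one of the simple user models of the kind the tutorial argues need improving. Everything in it (the feature dimensions, the weights, the learning rate) is hypothetical and not taken from the tutorial.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: options (e.g. hotels) described by feature vectors,
# and a user whose true objective is a linear utility w_true . x.
n_features = 3
w_true = np.array([1.0, -2.0, 0.5])

def simulate_choice(items, w, rng):
    """Luce / multinomial-logit choice: pick item i with prob proportional to exp(w . x_i)."""
    u = items @ w
    p = np.exp(u - u.max())
    p /= p.sum()
    return rng.choice(len(items), p=p)

# Collect many stated preferences over small sets of three options each.
queries, choices = [], []
for _ in range(500):
    items = rng.normal(size=(3, n_features))
    queries.append(items)
    choices.append(simulate_choice(items, w_true, rng))

# Infer w by maximising the logit log-likelihood with plain gradient ascent.
w = np.zeros(n_features)
lr = 0.05
for _ in range(300):
    grad = np.zeros(n_features)
    for items, c in zip(queries, choices):
        u = items @ w
        p = np.exp(u - u.max())
        p /= p.sum()
        grad += items[c] - p @ items  # observed minus expected features
    w += lr * grad / len(queries)

print(np.round(w, 2))  # should point in roughly the same direction as w_true
```

The gradient is the standard logit expression, the chosen item's features minus their probability-weighted expectation; since the log-likelihood is concave, plain gradient ascent suffices for this toy case.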

Current user models used to model how people make these choices, and thus to infer the underlying objective function that drives them, are simplistic. There is, however, extensive work in cognitive science on how people make such choices, and that work shows the existence of context effects that induce a number of biases. A context effect is a situation in which an option looks relatively better or worse than it would on its own because of the other options present: a hotel may look better than it would in isolation because another hotel is slightly more expensive and slightly worse. Current user models do not capture these biases. Kaski and de Peuter therefore used an existing model from cognitive science that does model them, and showed that when these biases are present, as we know they are in real humans, inferring an objective function with this model gives more accurate inferences. In other words, it leads to a better understanding of what the user wants, and therefore to better recommendations.
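A toy illustration of why simplistic choice models cannot capture context effects: the multinomial logit satisfies independence of irrelevant alternatives, so adding a third hotel never changes the relative preference between the first two. A context-sensitive mechanism such as divisive normalization (used here purely as an example of a cognitive-science model, not necessarily the one used in this work, and with made-up utilities) does shift that relative preference.

```python
import numpy as np

def logit_probs(u):
    """Standard multinomial-logit choice probabilities."""
    p = np.exp(u - u.max())
    return p / p.sum()

def normalized_probs(u, sigma=1.0, beta=10.0):
    # Divisive normalization: each utility is rescaled by the summed utility
    # of the whole choice set, so the same two options can look closer
    # together or further apart depending on what else is on offer.
    v = beta * u / (sigma + u.sum())
    return logit_probs(v)

u_two = np.array([2.0, 1.8])           # hotels A and B on their own
u_three = np.array([2.0, 1.8, 1.7])    # the same pair plus a decoy hotel C

for model in (logit_probs, normalized_probs):
    ratio_two = model(u_two)[0] / model(u_two)[1]
    ratio_three = model(u_three)[0] / model(u_three)[1]
    print(model.__name__, round(ratio_two, 3), round(ratio_three, 3))
```

Under the logit model the A-to-B preference ratio is identical with and without the decoy; under divisive normalization it changes, which is the kind of context dependence a richer user model can exploit during inference.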