Postdocs, research fellows and doctoral students in machine learning and artificial intelligence (16 funded positions)

 

Join us to work on new machine learning techniques at the Finnish Center for Artificial Intelligence FCAI! We have several exciting topics available – your work can be theoretical or applied, or both.

Nordic Probabilistic AI Summer School hosted in Helsinki in June 2022. Photo: Melanie Balaz / University of Helsinki

 

We are looking for multiple postdocs, research fellows and PhD students in machine learning. The positions are in the following areas of research:

1) Reinforcement learning
2) Probabilistic methods
3) Simulator-based inference
4) Privacy and federated learning
5) Multi-agent modeling

There are several specific topics related to each research area. You are also welcome to suggest other topics that relate to our core areas of research. Below are further descriptions.

Open positions and areas of research

1) Reinforcement learning

We develop reinforcement learning techniques to enable interaction across multiple agents, including AIs and humans, with potential applications ranging from AI-assisted design to autonomous driving. Methodological contexts of the research include deep reinforcement learning, inverse reinforcement learning, and hierarchical reinforcement learning, as well as multi-agent and multi-objective reinforcement learning.

Related positions:

AI-assisted design »

FCAI is working on a new paradigm of AI-assisted design that aims to cooperate with designers by supporting and leveraging their creativity and problem-solving. The challenge for such an AI is to infer designers' goals and then help them without being needlessly disruptive. We use generative user models to reason about designers' goals, reasoning, and capabilities. In this call, FCAI is looking for a postdoctoral scholar or research fellow to join our effort to develop AI-assisted design. Suitable backgrounds include deep reinforcement learning, Bayesian inference, cooperative AI, computational cognitive modeling, and user modeling.

Example publications by the team:

  1. https://arxiv.org/abs/2107.13074v1
  2. https://dl.acm.org/doi/abs/10.1145/3290605.3300863
  3. https://ieeexplore.ieee.org/abstract/document/9000519/
  4. http://papers.nips.cc/paper/9299-machine-teaching-of-active-sequential-learners

Supervision: Profs. Antti Oulasvirta, Samuel Kaski, Perttu Hämäläinen

Keywords: AI-assisted design, user modeling, cooperative AI

Level: Research fellow, postdoc

Computational rationality »

Computational rationality is an emerging integrative theory of intelligence in humans and machines (1) with applications in human-computer interaction, cooperative AI, and robotics. The theory assumes that observable human behavior is generated by cognitive mechanisms that are adapted to the structure not only of the environment but also of the mind and brain itself (2). Implementations use deep reinforcement learning to approximate the optimal policy under assumptions about the cognitive architecture and its bounds. Cooperative AI systems can use such models to infer the causes behind observable behavior and to plan actions and interventions in settings like semiautonomous vehicles, game-level testing, and AI-assisted design. FCAI researchers are at the forefront of developing computational rationality as a generative model of human behavior in interactive tasks (e.g., (3,4,5)) as well as suitable inference mechanisms (6). We collaborate with the University of Birmingham (Prof. Andrew Howes) and Université Pierre et Marie Curie (UPMC, CNRS; Dr. Julien Gori, Dr. Gilles Bailly).

In this call, we are looking for a talented postdoctoral scholar or research fellow to join our effort to develop computational rationality as a model of human behavior. Suitable backgrounds include deep reinforcement learning and computational cognitive modeling.

References:

  1. S. Gershman et al. Computational rationality: A converging paradigm for intelligence in brains, minds, and machines. Science 2015.
  2. R. Lewis, A. Howes, S. Singh. Computational Rationality: Linking Mechanism and Behavior Through Bounded Utility Maximization. Topics in Cognitive Science 2014.
  3. J. Jokinen et al. Touchscreen Typing as Optimal Supervisory Control. Proc. CHI'21, ACM Press.
  4. C. Gebhardt et al. Hierarchical Reinforcement Learning Explains Task Interleaving Behavior. Computational Brain & Behavior 2021.
  5. S. Roohi et al. Predicting Game Difficulty and Churn Without Players. Proc. CHI Play 2020.
  6. A. Kangasrääsiö et al. Parameter Inference for Computational Cognitive Models with Approximate Bayesian Computation. Cognitive Science 2019.
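
To make the modeling idea concrete, here is a deliberately toy sketch (not FCAI code) of reinforcement learning under a cognitive bound: a tabular Q-learning agent must act on observations passed through a noisy perceptual channel, with the noise level standing in for a bound of the simulated user. The task and all parameters are hypothetical; actual work in the team uses deep RL and much richer cognitive architectures.

    import numpy as np

    rng = np.random.default_rng(0)
    N, SIGMA = 11, 1.0         # line world with the goal at state 10; SIGMA = perceptual noise ("bound")
    Q = np.zeros((N, 2))       # tabular value estimates; actions: 0 = left, 1 = right

    def observe(s):
        # Bounded perception: the agent only ever sees a noisy, clipped state estimate.
        return int(np.clip(round(s + rng.normal(0.0, SIGMA)), 0, N - 1))

    for episode in range(2000):
        s = 0
        for t in range(50):
            o = observe(s)
            a = int(rng.integers(2)) if rng.random() < 0.1 else int(np.argmax(Q[o]))
            s = int(np.clip(s + (1 if a == 1 else -1), 0, N - 1))
            r = 1.0 if s == N - 1 else -0.01
            Q[o, a] += 0.1 * (r + 0.95 * np.max(Q[observe(s)]) - Q[o, a])
            if s == N - 1:
                break

    # The learned policy is (approximately) optimal *given* the noisy channel,
    # which is the sense in which computationally rational behavior is bounded.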

Supervision: Profs. Antti Oulasvirta, Andrew Howes (University of Birmingham), Samuel Kaski, Arto Klami, Perttu Hämäläinen

Keywords: Computational rationality, computational cognitive modeling, deep reinforcement learning

Level: Research fellow, postdoc

Explainable AI for virtual laboratories »

FCAI is actively developing methods and software for virtual laboratories to enable AI assistance of the research process itself. Efficient human-AI collaboration requires methods that are either inherently capable of providing explanations for their decisions, or that can explain the decisions of other AI models. For instance, the user needs to know why the AI is recommending a particular experiment or why it is predicting a particular outcome, and they should always be aware of the reliability of the AI models. We are looking for a candidate who can conduct research on explainable AI and uncertainty quantification. The project will be conducted in a team of AI researchers, with access to researchers specialized in various application areas. The applicant should be interested in incorporating the techniques into the general virtual laboratory software developed at FCAI, for broad applicability.

Supervision: Profs. Kai Puolamäki, Arto Klami

Keywords: Virtual laboratory, explainable AI, uncertainty quantification, human-AI collaboration

Level: Research fellow, postdoc, PhD student

Intrinsic motivation-driven user modeling »

AI-assisted decision making requires human-centric AI capable of inferring a user’s motivations and accurately predicting how their experience and behavior will change as the outcome of a decision on either side. Cognitive scientists commonly agree that much of our behavior and experience is driven not only by separable consequences or instrumental outcomes, but also by intrinsic motivations (1). Crucially, despite offering important benefits such as domain independence, computational models of intrinsic motivation have not been extensively leveraged for user modeling. This project will push this agenda further by addressing, among other questions, what constitutes psychologically plausible models of intrinsic motivation, which models can serve as predictors for certain types of experience and behavior, and how to infer the best model from interaction with the user. The project sets out to tackle these challenges in player modeling for videogames, as quintessential intrinsically motivating activities, and will then translate the insights into other domains of human-computer interaction. The supervisors have established the basis for this work through pioneering qualitative and quantitative proofs of concept (2,3) as well as theoretical studies (4).

The research fellow, postdoc or PhD student will design, implement and execute studies to push the state-of-the-art of user experience and behavior modeling. A strong candidate will have solid coding experience, good knowledge of deep reinforcement learning and an interest in cognitive modeling and videogames. Prior experience in conducting user studies is an asset.

References:

  1. Ryan & Deci. (2000). Self-Determination Theory and the Facilitation of Intrinsic Motivation, Social Development, and Well-Being. American Psychologist, 55(1), 68–78.
  2. Guckelsberger, Salge, Gow & Cairns. (2017). Predicting Player Experience Without the Player. An Exploratory Study. Proc. CHI Play, 305–315.
  3. Roohi, Guckelsberger, Relas, Heiskanen, Takatalo & Hämäläinen. (2021). Predicting Game Difficulty and Engagement Using AI Players. Proc. CHI Play, 1–17.
  4. Roohi, Takatalo, Guckelsberger & Hämäläinen. (2018). Review of Intrinsic Motivation in Simulation-Based Game Testing. Proc. CHI, 1–13.
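
One common computational account treats "surprise" under a learned world model as a reward signal in its own right. The minimal sketch below is a hypothetical illustration of that idea, not the supervisors' model: a count-based forward model assigns each observed transition an intrinsic reward equal to its negative log-probability, so novel transitions are rewarding and familiar ones are not.

    import numpy as np

    rng = np.random.default_rng(1)
    N_S, N_A = 10, 2
    counts = np.ones((N_S, N_A, N_S))   # Laplace-smoothed transition counts = learned world model

    def intrinsic_reward(s, a, s2):
        # Surprise: negative log-probability of the observed transition.
        p = counts[s, a] / counts[s, a].sum()
        return -np.log(p[s2])

    s = 0
    for step in range(1000):
        a = int(rng.integers(N_A))
        s2 = min(s + 1, N_S - 1) if a == 1 else max(s - 1, 0)
        r_int = intrinsic_reward(s, a, s2)   # high early on, decays as the model learns
        counts[s, a, s2] += 1
        # A full agent would optimize r_total = r_ext + beta * r_int with any RL algorithm.
        s = s2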

Supervision: Profs. Christian Guckelsberger, Perttu Hämäläinen

Keywords: Intrinsic motivation, user modeling, reinforcement learning, human-computer interaction, human-centric AI, cognitive science, videogames

Level: Research fellow, postdoc, PhD student

Learning for behavior as communication »

Communication is fundamental to the successful cooperation of humans and autonomous agents in shared environments. While explicit verbal communication forms the basis of many human-human interactions, it is not feasible in settings such as road traffic. There, we can instead identify three types of communication: formal communication between the road infrastructure and traffic participants (traffic lights), formal communication between traffic participants (turn signals), and informal or implicit communication between traffic participants (positioning in the lane, the distance between vehicles). Of these three, the last is the most challenging for autonomous vehicles (AVs), yet equally if not more important. The critical challenge in enabling AVs to participate in this type of communication is the lack of a formalised code, and hence the need to learn it through experience, which would be dangerous in the real world and must instead happen in simulation.

The topic will enable the candidate to work at the intersection of multi-agent reinforcement learning, autonomous driving, human-robot interaction and sim-to-real transfer. The candidate's role will be twofold. The first part is data-driven modeling of communication behaviors to build a realistic simulation environment; the key challenge here is to develop a method that, based on pre-recorded data, can generate unique, context-specific messages through vehicle behaviors. The second part is to enable the autonomous agent to automatically produce and understand behavior-based cues and to act and react appropriately, accomplished through reinforcement learning.
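
To give a flavor of the "understanding" half of the role, the sketch below infers a driver's intent from an observed gap-size trajectory by Bayesian updating over two hand-written behavior models. Everything here is hypothetical; in the project, such models would be learned from pre-recorded data.

    import numpy as np

    intents = ["yielding", "not_yielding"]
    posterior = np.array([0.5, 0.5])          # uniform prior over the two intents

    def likelihood(gap_delta, intent):
        # Yielding drivers tend to widen the gap; others tend to close it.
        mu = 0.5 if intent == "yielding" else -0.3
        return float(np.exp(-0.5 * ((gap_delta - mu) / 0.4) ** 2))

    for gap_delta in [0.4, 0.6, 0.3]:         # observed gap widening over three timesteps
        posterior *= [likelihood(gap_delta, i) for i in intents]
        posterior /= posterior.sum()

    print(dict(zip(intents, posterior.round(3))))   # high probability of "yielding"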

Supervision: Profs. Tomasz Kucner, Ville Kyrki, Joni Pajarinen, Laura Ruotsalainen

Keywords: Reinforcement learning, autonomous driving, intention communication

Level: Postdoc

Learning to handle rare events in autonomous driving »

In autonomous driving, rare events are events that are encountered very rarely in the real world and typically not at all in simulation. What makes rare events challenging is that they cannot be simulated exactly, due to the huge variety of possible events and the lack of real-world data. However, as human drivers demonstrate, responding reasonably to rare events is possible. To cope with rare events, this project focuses on learning conditional object-centric representations from unlabeled computer vision data and using these representations in models that can be quickly updated and conditioned on the current driving context. Further, the models will be extended into an adaptive contextual replanning framework that allows fast responses with non-stationary models. The learned models and replanning framework will be made robust in a simulation environment where adversarial agents learn to cause rare events. An ideal candidate has background knowledge in computer vision, deep learning, and/or reinforcement learning. Depending on their background, a successful candidate may focus on all of the parts above or only a subset.

Supervision: Profs. Juho Kannala, Alexander Ilin, Joni Pajarinen

Keywords: Autonomous driving, computer vision, deep learning, reinforcement learning, planning

Level: Research fellow, postdoc, PhD student

Multi-level simulation for sustainable autonomy »

To study future sustainable mobility, FCAI has built the Sustainable Autonomous Mobility virtual laboratory. The virtual laboratory will allow studying the effects of autonomous traffic, from the control of individual vehicles to environmental effects such as pollution and noise, as well as socio-economic effects. It will integrate several simulators, including an autonomous vehicle simulator and other simulators modeling relevant phenomena. A central challenge in the integration is the exchange of information between the individual simulation models, which have different parameterizations and objectives. We approach this as an AI challenge in which the parameters of all simulators are inferred jointly from pools of data for each model. The somewhat conflicting objectives of the different simulators require the development of multi-objective, multi-agent reinforcement learning methods.

Supervision: Profs. Laura Ruotsalainen, Ville Kyrki, Joni Pajarinen

Keywords: Multi-level simulation, sustainability, autonomous vehicles, multi-objective reinforcement learning, multi-agent reinforcement learning

Level: Research fellow, postdoc; exceptional PhD students with reinforcement learning development experience will be considered

Probabilistic multi-agent modeling for collaborative AI assistants »

We study how to build collaborative assistants that are able to help another agent perform their task. The assistant does not know the agent’s goal at the beginning and has to learn it as part of this “zero-shot” assistance scenario.

This is interesting both as a fundamental multi-agent modeling problem and for building collaborative AI assistants for human-AI research teams in decision making and design, formulated as sequential decision making. We are looking for a researcher interested in developing the theory and inference methods for this new task with us, or in applying the assistants, together with other FCAI researchers, to tough decision-making and design tasks.

The work will involve probabilistic modeling, multi-agent formulations, POMDPs and reinforcement learning, and inverse reinforcement learning.
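
As a minimal sketch of the core ingredient, under strong simplifying assumptions, the assistant below maintains a posterior over the agent's unknown goal and updates it from observed actions, modeling the agent as Boltzmann-rational. The toy task and all parameters are illustrative; the project targets full POMDP formulations of assistance.

    import numpy as np

    goals = np.arange(5)                  # candidate goals the other agent might have
    belief = np.ones(5) / 5               # assistant's prior over goals

    def action_likelihood(action, goal, beta=2.0):
        # Boltzmann-rational agent: higher-utility actions are more likely.
        utilities = -np.abs(np.arange(5) - goal)
        p = np.exp(beta * utilities)
        return (p / p.sum())[action]

    for observed_action in [3, 4, 4]:     # the agent keeps moving toward position 4
        belief *= [action_likelihood(observed_action, g) for g in goals]
        belief /= belief.sum()

    # The assistant can now plan its own helping actions against this belief,
    # e.g., by maximizing expected utility under the inferred goal distribution.
    print(belief.round(3), int(goals[np.argmax(belief)]))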

Supervision: Profs. Samuel Kaski, Frans Oliehoek (TU Delft), other FCAI professors

Keywords: Probabilistic modeling, multi-agent formulations, POMDP, reinforcement learning, inverse reinforcement learning

Level: Research fellow, postdoc; PhD students considered as well

2) Probabilistic methods

We develop AI tools using probabilistic programming, with our main expertise in Bayesian machine learning. The research is disseminated as modular open-source software, including software for Stan, the most popular probabilistic programming framework.

Related positions:

AI-assisted design »

FCAI is working on a new paradigm of AI-assisted design that aims to cooperate with designers by supporting and leveraging their creativity and problem-solving. The challenge for such an AI is to infer designers' goals and then help them without being needlessly disruptive. We use generative user models to reason about designers' goals, reasoning, and capabilities. In this call, FCAI is looking for a postdoctoral scholar or research fellow to join our effort to develop AI-assisted design. Suitable backgrounds include deep reinforcement learning, Bayesian inference, cooperative AI, computational cognitive modeling, and user modeling.

Example publications by the team:

  1. https://arxiv.org/abs/2107.13074v1
  2. https://dl.acm.org/doi/abs/10.1145/3290605.3300863
  3. https://ieeexplore.ieee.org/abstract/document/9000519/
  4. http://papers.nips.cc/paper/9299-machine-teaching-of-active-sequential-learners

Supervision: Profs. Antti Oulasvirta, Samuel Kaski, Perttu Hämäläinen

Keywords: AI-assisted design, user modeling, cooperative AI

Level: Research fellow, postdoc

AI-powered simulation, optimization and inference »

Recent advances in machine learning have shown how powerful emulators and surrogate models can be trained to drastically reduce the costs of simulation, optimization and Bayesian inference, with many trailblazing applications in the sciences. In this project, the candidate will join an active area of research within several FCAI groups to develop new methods for simulation, optimization and inference that combine state-of-the-art deep learning and surrogate-based kernel approaches (for example, deep sets and transformers, normalizing flows, and Gaussian and neural processes), with the goal of achieving maximal sample efficiency (in terms of the number of required model evaluations or simulations) and wall-clock speed at runtime (via amortization). The candidate will apply these methods to challenging problems involving statistical and simulator-based models that push the current state of the art, be it in the number of parameters (high-dimensional amortized inference), the number of available model evaluations (extreme sample efficiency), or the amount of data. The ideal candidate has expertise in both deep learning and probabilistic methods (e.g., Gaussian processes, Bayesian optimization, normalizing flows).

References:

  1. Acerbi (2018); NeurIPS: https://arxiv.org/abs/1810.05558
  2. Acerbi (2020); NeurIPS: https://arxiv.org/abs/2006.08655
  3. Järvenpää & Corander (2021); arXiv: https://arxiv.org/abs/2104.03942
  4. Cranmer et al. (2020); PNAS: https://doi.org/10.1073/pnas.1912789117
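
The surrogate idea in its simplest form, sketched with scikit-learn: spend a small budget of expensive simulator calls to fit a Gaussian process emulator, then query the emulator, with uncertainty, essentially for free. This one-dimensional example is purely illustrative; the project concerns far richer surrogates such as normalizing flows and neural processes.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(0)

    def expensive_simulator(x):
        # Stand-in for a simulation that might take minutes or hours per call.
        return np.sin(3 * x).ravel() + 0.1 * rng.normal(size=len(x))

    X_train = np.linspace(0, 2, 12).reshape(-1, 1)      # small simulation budget
    y_train = expensive_simulator(X_train)

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-2)
    gp.fit(X_train, y_train)

    X_query = np.linspace(0, 2, 200).reshape(-1, 1)
    mean, std = gp.predict(X_query, return_std=True)    # cheap predictions with uncertainty
    # 'std' can drive Bayesian optimization, i.e., decide where to spend the next simulator call.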

Supervision: Profs. Luigi Acerbi, Jukka Corander, Arno Solin, other professors involved in the topic

Keywords: Machine learning, emulators, amortized inference, Bayesian optimization, normalizing flows, simulator-based inference

Level: Research fellow, postdoc, PhD student

Explainable AI for virtual laboratories »

FCAI is actively developing methods and software for virtual laboratories to enable AI assistance of the research process itself. Efficient human-AI collaboration requires methods that are either inherently capable in providing explanations for the decisions, or methods that can explain decisions of other AI models. For instance, the user needs to know why AI is recommending a particular experiment to be conducted or why AI is predicting a particular outcome, and they should always be aware of the reliability of the AI models. We are looking for a candidate that can conduct research on explainable AI and uncertainty quantification. The project will be conducted in a team consisting of AI researchers with access to researchers specialized in various application areas. The applicant should be interested in incorporating the techniques as part of general virtual laboratory software developed at FCAI for broad applicability.

Supervision: Profs. Kai Puolamäki, Arto Klami

Keywords: Virtual laboratory, explainable AI, uncertainty quantification, human-AI collaboration

Level: Research fellow, postdoc, PhD student

Next-generation likelihood-free inference in ELFI »

ELFI (elfi.ai) is a leading software platform for likelihood-free inference of interpretable simulator-based models. The inference engine is built in a modular fashion and contains popular likelihood-free inference paradigms, such as ABC and synthetic likelihood, as well as more recent approaches based on classifiers and GP emulation for accelerated inference. We are looking for doctoral students, postdoctoral researchers and research fellows to spearhead the development of the next-generation version of the inference engine, supporting new inference methods (including the use of PyTorch and deep neural networks for amortized inference), and to use ELFI in cutting-edge applications from multiple fields of science, linked to work in several other FCAI teams. The ideal candidate has programming experience with modern deep learning frameworks (e.g., PyTorch) and familiarity with probabilistic and simulator-based inference.
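
For readers new to the paradigm, the snippet below implements the simplest likelihood-free method, ABC rejection, in plain NumPy; ELFI packages this and much more (e.g., synthetic likelihood and GP-accelerated BOLFI) behind a modular model-graph API. The Gaussian toy simulator is illustrative only.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulator(theta, n=100):
        # Pretend this black box has no tractable likelihood.
        return rng.normal(theta, 1.0, n)

    y_obs = simulator(2.0)
    summary = lambda y: y.mean()

    # ABC rejection: draw parameters from the prior, simulate, and keep the
    # draws whose simulated summary lands closest to the observed summary.
    thetas = rng.uniform(-5, 5, 20000)
    dists = np.array([abs(summary(simulator(t)) - summary(y_obs)) for t in thetas])
    posterior = thetas[dists < np.quantile(dists, 0.01)]    # accept the closest 1%
    print(posterior.mean(), posterior.std())                # concentrates around theta = 2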

Supervision: Profs. Jukka Corander, Luigi Acerbi

Keywords: Machine learning, emulators, likelihood-free inference, simulator-based inference

Level: Research fellow, postdoc, PhD student

Probabilistic modeling for assisting human decision making »

We develop the AI techniques needed for systems that can help their users make better decisions and design better solutions across a range of tasks, from personalized medicine to materials design. A core insight in developing such AIs is that they need world models for understanding the world and interacting with it, and user models for understanding the user and interacting with them. This project develops methods and tools for helping humans in decision-making tasks where they use models for the decision. The work will build on the Bayesian workflow for the world model, and on existing theory in cognitive science and human-computer interaction for the user model. The goal is to generate task-specific, informative visualizations and recommendations, tailored to the expertise of the specific user, to support better decisions.

Supervision: Profs. Samuel Kaski, Antti Oulasvirta, Aki Vehtari

Keywords: Probabilistic modeling, Bayesian inference, Bayesian workflow, decision making

Level: Research fellow, postdoc

Probabilistic multi-agent modeling for collaborative AI assistants »

We study how to build collaborative assistants which are able to help another agent perform their task. The assistant does not know the agent’s goal in the beginning and has to learn it as a part of this “zero-shot” assistance scenario.

This is interesting as a fundamental multi-agent modeling problem, and in building collaborative AI assistants for human-AI research teams in decision making and design, formulated as sequential decision making. We are looking for a researcher interested in developing with us the theory and inference methods for this new task, or applying the assistants with other FCAI researchers to solving tough decision making and design tasks.

The work will involve probabilistic modeling, multi-agent formulations, POMDPs and reinforcement learning, and inverse reinforcement learning.

Supervision: Profs. Samuel Kaski, Frans Oliehoek (TU Delft), other FCAI professors

Keywords: Probabilistic modeling, multi-agent formulations, POMDP, reinforcement learning, inverse reinforcement learning

Level: Research fellow, postdoc; PhD students considered as well

Representation learning for geometric computer vision »

Machine-learning-based approaches have enabled progress in many classical problems of geometric computer vision, such as stereo depth estimation, image-based 3D modeling and visual localization. One example of learned models is deep convolutional neural networks, which can provide useful priors for under-determined problems such as stereo depth estimation. Another recent example is the learning of neural radiance fields (NeRFs), implicit scene representations with potential applications in novel view synthesis, image-based modeling and visual localization. The aim of this project is to develop learning-based approaches for geometric vision problems that are relevant for machine perception and autonomous systems. We seek motivated candidates with a background in computer vision or machine learning, and an interest in applying one to the other.

Supervision: Profs. Juho Kannala, Arno Solin

Keywords: Computer vision, deep learning

Level: Research fellow, postdoc, PhD student

Synthetic psychologist: optimal experiment design for simulator models in cognitive science »

Theories in psychology are increasingly expressed as computational cognitive models that simulate human behavior. Such behavioral models are also becoming the basis for novel applications in areas such as human-computer interaction, human-centric AI, computational psychiatry, and user modeling. As models account for more aspects of human behavior, they increase in complexity. We aim to develop and apply methods that assist a researcher in dealing with complex and intractable cognitive models, for instance by developing optimal experiment design methods to help with model selection and parameter inference, or by using likelihood-free methods with cognitive models. This virtual lab will also encourage avenues of research relevant to cognitive modeling and AI assistance, which can be pursued in collaboration with other FCAI teams and virtual laboratories. We are looking for excellent candidates who are excited by cognitive models, Bayesian methods, probabilistic machine learning, and open-source software environments, in no particular order of preference.
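
To make "optimal experiment design" concrete, the sketch below scores candidate designs by their expected information gain (EIG), estimated with nested Monte Carlo for a toy one-parameter model, and picks the best one. The model, prior and sample sizes are hypothetical; for real cognitive models the closed-form likelihood would be replaced by simulation.

    import numpy as np

    rng = np.random.default_rng(0)

    def log_lik(y, theta, d, sigma=1.0):
        # Toy experiment: response y ~ Normal(theta * d, sigma), design d.
        return -0.5 * ((y - theta * d) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

    def eig(d, n_outer=300, n_inner=300):
        # Nested Monte Carlo estimate of E[log p(y | theta, d) - log p(y | d)].
        theta_inner = rng.normal(0, 1, n_inner)
        total = 0.0
        for theta in rng.normal(0, 1, n_outer):
            y = theta * d + rng.normal()                    # simulate one outcome
            marginal = np.mean(np.exp(log_lik(y, theta_inner, d)))
            total += log_lik(y, theta, d) - np.log(marginal)
        return total / n_outer

    designs = np.linspace(0, 2, 9)
    best = designs[int(np.argmax([eig(d) for d in designs]))]
    print(best)   # in this toy model, larger |d| separates parameter values better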

Supervision: Profs. Luigi Acerbi, Andrew Howes (University of Birmingham), Samuel Kaski, Antti Oulasvirta

Keywords: Virtual laboratory, cognitive science, simulator models, AI-assisted modeling

Level: Research fellow, postdoc

Uncertainty-aware language and speech understanding »

Large neural language models have pushed the frontiers of natural language understanding and generation. Modern large-scale transformer-based architectures are used in various downstream tasks such as machine translation, question answering and semantic reasoning. Neural models have so far been trained as mean-estimating networks and require huge amounts of data. Languages, on the other hand, are inherently ambiguous, and current systems do not take interpretation uncertainty into account. In this project, we integrate probabilistic components into state-of-the-art language models to enable faster learning and more appropriate natural language understanding with sufficient interpretation variance. We will analyze intrinsic representations and test models on downstream tasks such as SuperGLUE. The methodology will be based on various types of Bayesian approximations (e.g., SWAG, VAEs, normalizing flows) and new advances in Bayesian deep learning. We seek candidates who have experience with deep learning on sequential data and a background in Bayesian modeling. A background in natural language processing or speech technology is a plus.
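
As a minimal illustration of one of the listed techniques, the sketch below applies the SWAG idea, fitting a Gaussian to late SGD iterates and sampling weights to obtain a predictive distribution, to a toy linear model; in the project the same recipe would wrap a large transformer. All data and hyperparameters are made up.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200)

    w, snapshots = np.zeros(3), []
    for step in range(3000):                      # plain SGD on squared error
        i = int(rng.integers(200))
        w -= 0.01 * 2 * (X[i] @ w - y[i]) * X[i]
        if step > 1000 and step % 10 == 0:
            snapshots.append(w.copy())            # collect late-stage iterates

    snaps = np.asarray(snapshots)
    w_mean, w_var = snaps.mean(axis=0), snaps.var(axis=0) + 1e-8   # diagonal SWAG posterior

    x_new = np.array([0.5, 0.5, 0.5])
    weight_samples = w_mean + np.sqrt(w_var) * rng.normal(size=(100, 3))
    preds = weight_samples @ x_new
    print(preds.mean(), preds.std())              # prediction with an uncertainty estimate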

Supervision: Profs. Jörg Tiedemann, Luigi Acerbi, Arno Solin, Dr. Markus Heinonen

Keywords: Natural language understanding, Bayesian deep learning, probabilistic modeling, representation learning

Level: Research fellow, postdoc

Uncertainty quantification in deep vision models »

In the last decade, substantial progress has been made in the performance of computer vision systems, a significant part of it thanks to deep learning. However, most current models lack the ability to reason about the confidence of their predictions; integrating uncertainty quantification into vision systems will help recognize failure scenarios and enable robust applications. FCAI as a community has been active both in developing methods for Bayesian deep learning and in adapting them to applications such as perception in autonomous systems and reinforcement learning. We seek motivated candidates with a background in computer vision and/or probabilistic methods, and an interest in applying one to the other. This project has links to several research programs and teams within FCAI, and the candidate is expected to have a drive for methods development as well as an interest in collaborating with application-area experts.
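
One of the simplest Bayesian deep learning approximations, Monte Carlo dropout, is sketched below on a toy NumPy MLP: dropout stays active at test time, and the spread over several stochastic forward passes serves as a crude epistemic-uncertainty estimate. The network and weights are placeholders for a trained vision model.

    import numpy as np

    rng = np.random.default_rng(0)

    def forward(x, W1, W2, p_drop=0.5):
        h = np.maximum(0.0, x @ W1)                              # ReLU hidden layer
        mask = (rng.random(h.shape) > p_drop) / (1.0 - p_drop)   # inverted dropout, kept on at test time
        return (h * mask) @ W2

    W1 = 0.3 * rng.normal(size=(8, 32))       # placeholder "trained" weights
    W2 = 0.3 * rng.normal(size=(32, 1))
    x = rng.normal(size=(1, 8))               # stand-in input features

    samples = np.array([forward(x, W1, W2) for _ in range(50)]).ravel()
    print(samples.mean(), samples.std())      # predictive mean and an uncertainty proxy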

Supervision: Profs. Arno Solin, Juho Kannala, other professors involved in the topic

Keywords: Computer vision, uncertainty quantification, probabilistic methods, Bayesian deep learning

Level: Research fellow, postdoc

3) Simulator-based inference

We develop simulation-based methods to learn generative models from data, i.e., inference methods that replace the likelihood function with a data-generating simulator. Main initiatives include (1) ELFI, a leading software platform for likelihood-free inference of interpretable simulator-based models, and (2) numerous leading GAN-based technologies.

Related positions:

AI-powered simulation, optimization and inference »

Recent advances in machine learning have shown how powerful emulators and surrogate models can be trained to drastically reduce the costs of simulation, optimization and Bayesian inference, with many trailblazing applications in the sciences. In this project, the candidate will join an active area of research within several FCAI groups to develop new methods for simulation, optimization and inference that combine state-of-the-art deep learning and surrogate-based kernel approaches (for example, deep sets and transformers, normalizing flows, and Gaussian and neural processes), with the goal of achieving maximal sample efficiency (in terms of the number of required model evaluations or simulations) and wall-clock speed at runtime (via amortization). The candidate will apply these methods to challenging problems involving statistical and simulator-based models that push the current state of the art, be it in the number of parameters (high-dimensional amortized inference), the number of available model evaluations (extreme sample efficiency), or the amount of data. The ideal candidate has expertise in both deep learning and probabilistic methods (e.g., Gaussian processes, Bayesian optimization, normalizing flows).

References:

  1. Acerbi (2018); NeurIPS: https://arxiv.org/abs/1810.05558
  2. Acerbi (2020); NeurIPS: https://arxiv.org/abs/2006.08655
  3. Järvenpää & Corander (2021); arXiv: https://arxiv.org/abs/2104.03942
  4. Cranmer et al. (2020); PNAS: https://doi.org/10.1073/pnas.1912789117

Supervision: Profs. Luigi Acerbi, Jukka Corander, Arno Solin, other professors involved in the topic

Keywords: Machine learning, emulators, amortized inference, Bayesian optimization, normalizing flows, simulator-based inference

Level: Research fellow, postdoc, PhD student

Next-generation likelihood-free inference in ELFI »

ELFI (elfi.ai) is a leading software platform for likelihood-free inference of interpretable simulator-based models. The inference engine is built in a modular fashion and contains popular likelihood-free inference paradigms, such as ABC and synthetic likelihood, as well as more recent approaches based on classifiers and GP emulation for accelerated inference. We are looking for doctoral students, postdoctoral researchers and research fellows to spearhead the development of the next-generation version of the inference engine, supporting new inference methods (including the use of PyTorch and deep neural networks for amortized inference), and to use ELFI in cutting-edge applications from multiple fields of science, linked to work in several other FCAI teams. The ideal candidate has programming experience with modern deep learning frameworks (e.g., PyTorch) and familiarity with probabilistic and simulator-based inference.

Supervision: Profs. Jukka Corander, Luigi Acerbi

Keywords: Machine learning, emulators, likelihood-free inference, simulator-based inference

Level: Research fellow, postdoc, PhD student

Synthetic psychologist: optimal experiment design for simulator models in cognitive science »

Theories in psychology are increasingly expressed as computational cognitive models that simulate human behavior. Such behavioral models are also becoming the basis for novel applications in areas such as human-computer interaction, human-centric AI, computational psychiatry, and user modeling. As models account for more aspects of human behavior, they increase in complexity. We aim to develop and apply methods that assist a researcher in dealing with complex and intractable cognitive models, for instance by developing optimal experiment design methods to help with model selection and parameter inference, or by using likelihood-free methods with cognitive models. This virtual lab will also encourage avenues of research relevant to cognitive modeling and AI assistance, which can be pursued in collaboration with other FCAI teams and virtual laboratories. We are looking for excellent candidates who are excited by cognitive models, Bayesian methods, probabilistic machine learning, and open-source software environments, in no particular order of preference.

Supervision: Profs. Luigi Acerbi, Andrew Howes (University of Birmingham), Samuel Kaski, Antti Oulasvirta

Keywords: Virtual laboratory, cognitive science, simulator models, AI-assisted modeling

Level: Research fellow, postdoc

4) Privacy and federated learning

We develop methods for efficient privacy-preserving learning and inference using differential privacy. Our work targets probabilistic methods, federated learning, deep learning and data anonymisation through synthetic data.

Related positions:

Privacy-preserving data sharing and federated learning »

Many applications of machine learning suffer from limited training data availability because data holders cannot share their data. The aim of this project is to develop solutions to this fundamental problem through privacy-preserving data sharing using differentially private synthetic data as well as through efficient privacy-preserving federated learning methods. The security and privacy will be guaranteed by a combination of differential privacy and secure multi-party computation.

In this project, you will join our group in developing new learning methods operating under these guarantees, and applying them to real-world problems. Collaboration opportunities will enable testing the methods on academic and industrial applications. A strong candidate will have a background in machine learning or a related field. Experience in privacy-preserving techniques such as differential privacy or secure multi-party computation is an asset.
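
A minimal sketch of the flavor of method meant here, combining federated averaging with the Gaussian mechanism: each client clips its update to bound sensitivity, and the server adds calibrated noise to the aggregate. The noise scale, clipping norm and client simulation are placeholders; a real system would also track the privacy budget (epsilon, delta) and could add secure aggregation on top.

    import numpy as np

    rng = np.random.default_rng(0)
    DIM, N_CLIENTS, CLIP, SIGMA = 5, 10, 1.0, 0.8

    def client_update(w_global):
        # Stand-in for local training on one client's private data.
        return rng.normal(size=DIM) * 0.1

    w = np.zeros(DIM)
    for fl_round in range(100):
        updates = []
        for _ in range(N_CLIENTS):
            u = client_update(w)
            u = u * min(1.0, CLIP / (np.linalg.norm(u) + 1e-12))   # clip: bound each client's influence
            updates.append(u)
        # Gaussian mechanism: noise calibrated to CLIP masks any single client's contribution.
        noise = rng.normal(0.0, SIGMA * CLIP / N_CLIENTS, size=DIM)
        w = w + np.mean(updates, axis=0) + noise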

Supervision: Profs. Antti Honkela, Samuel Kaski

Keywords: Differential privacy, federated learning, synthetic data

Level: Research fellow, postdoc, PhD student

5) Multi-agent modeling

We develop complex and interactive user models using probabilistic methods and inference techniques, and deploy them in realistic assistance settings. These models treat human users as agents who collaborate with an AI assistant, rather than as passive sources of data. This includes, but is not limited to, creating user models that can assess the user’s tacit and changing goals, elicit their knowledge, and understand how the user interprets the actions of the AI.

Related positions:

AI-assisted design »

FCAI is working on a new paradigm of AI-assisted design that aims to cooperate with designers by supporting and leveraging their creativity and problem-solving. The challenge for such an AI is to infer designers' goals and then help them without being needlessly disruptive. We use generative user models to reason about designers' goals, reasoning, and capabilities. In this call, FCAI is looking for a postdoctoral scholar or research fellow to join our effort to develop AI-assisted design. Suitable backgrounds include deep reinforcement learning, Bayesian inference, cooperative AI, computational cognitive modeling, and user modeling.

Example publications by the team:

  1. https://arxiv.org/abs/2107.13074v1
  2. https://dl.acm.org/doi/abs/10.1145/3290605.3300863
  3. https://ieeexplore.ieee.org/abstract/document/9000519/
  4. http://papers.nips.cc/paper/9299-machine-teaching-of-active-sequential-learners

Supervision: Profs. Antti Oulasvirta, Samuel Kaski, Perttu Hämäläinen

Keywords: AI-assisted design, user modeling, cooperative AI

Level: Research fellow, postdoc

Computational rationality »

Computational rationality is an emerging integrative theory of intelligence in humans and machines (1) with applications in human-computer interaction, cooperative AI, and robotics. The theory assumes that observable human behavior is generated by cognitive mechanisms that are adapted to the structure not only of the environment but also of the mind and brain itself (2). Implementations use deep reinforcement learning to approximate the optimal policy under assumptions about the cognitive architecture and its bounds. Cooperative AI systems can use such models to infer the causes behind observable behavior and to plan actions and interventions in settings like semiautonomous vehicles, game-level testing, and AI-assisted design. FCAI researchers are at the forefront of developing computational rationality as a generative model of human behavior in interactive tasks (e.g., (3,4,5)) as well as suitable inference mechanisms (6). We collaborate with the University of Birmingham (Prof. Andrew Howes) and Université Pierre et Marie Curie (UPMC, CNRS; Dr. Julien Gori, Dr. Gilles Bailly).

In this call, we are looking for a talented postdoctoral scholar or research fellow to join our effort to develop computational rationality as a model of human behavior. Suitable backgrounds include deep reinforcement learning and computational cognitive modeling.

References:

  1. S. Gershman et al. Computational rationality: A converging paradigm for intelligence in brains, minds, and machines. Science 2015.
  2. R. Lewis, A. Howes, S. Singh. Computational Rationality: Linking Mechanism and Behavior Through Bounded Utility Maximization. Topics in Cognitive Science 2014.
  3. J. Jokinen et al. Touchscreen Typing as Optimal Supervisory Control. Proc. CHI'21, ACM Press.
  4. C. Gebhardt et al. Hierarchical Reinforcement Learning Explains Task Interleaving Behavior. Computational Brain & Behavior 2021.
  5. S. Roohi et al. Predicting Game Difficulty and Churn Without Players. Proc. CHI Play 2020.
  6. A. Kangasrääsiö et al. Parameter Inference for Computational Cognitive Models with Approximate Bayesian Computation. Cognitive Science 2019.

Supervision: Profs. Antti Oulasvirta, Andrew Howes (University of Birmingham), Samuel Kaski, Arto Klami, Perttu Hämäläinen

Keywords: Computational rationality, computational cognitive modeling, deep reinforcement learning

Level: Research fellow, postdoc

Explainable AI for virtual laboratories »

FCAI is actively developing methods and software for virtual laboratories to enable AI assistance of the research process itself. Efficient human-AI collaboration requires methods that are either inherently capable of providing explanations for their decisions, or that can explain the decisions of other AI models. For instance, the user needs to know why the AI is recommending a particular experiment or why it is predicting a particular outcome, and they should always be aware of the reliability of the AI models. We are looking for a candidate who can conduct research on explainable AI and uncertainty quantification. The project will be conducted in a team of AI researchers, with access to researchers specialized in various application areas. The applicant should be interested in incorporating the techniques into the general virtual laboratory software developed at FCAI, for broad applicability.

Supervision: Profs. Kai Puolamäki, Arto Klami

Keywords: Virtual laboratory, explainable AI, uncertainty quantification, human-AI collaboration

Level: Research fellow, postdoc, PhD student

Intrinsic motivation-driven user modeling »

AI-assisted decision making requires human-centric AI capable of inferring a user’s motivations and accurately predicting how their experience and behavior will change as the outcome of a decision on either side. Cognitive scientists commonly agree that much of our behavior and experience is driven not only by separable consequences or instrumental outcomes, but also by intrinsic motivations (1). Crucially, despite offering important benefits such as domain independence, computational models of intrinsic motivation have not been extensively leveraged for user modeling. This project will push this agenda further by addressing, among other questions, what constitutes psychologically plausible models of intrinsic motivation, which models can serve as predictors for certain types of experience and behavior, and how to infer the best model from interaction with the user. The project sets out to tackle these challenges in player modeling for videogames, as quintessential intrinsically motivating activities, and will then translate the insights into other domains of human-computer interaction. The supervisors have established the basis for this work through pioneering qualitative and quantitative proofs of concept (2,3) as well as theoretical studies (4).

The research fellow, postdoc or PhD student will design, implement and execute studies to push the state-of-the-art of user experience and behavior modeling. A strong candidate will have solid coding experience, good knowledge of deep reinforcement learning and an interest in cognitive modeling and videogames. Prior experience in conducting user studies is an asset.

References:

  1. Ryan & Deci. (2000). Self-Determination Theory and the Facilitation of Intrinsic Motivation, Social Development, and Well-Being. American Psychologist, 55(1), 68–78.
  2. Guckelsberger, Salge, Gow & Cairns. (2017). Predicting Player Experience Without the Player. An Exploratory Study. Proc. CHI Play, 305–315.
  3. Roohi, Guckelsberger, Relas, Heiskanen, Takatalo & Hämäläinen. (2021). Predicting Game Difficulty and Engagement Using AI Players. Proc. CHI Play, 1–17.
  4. Roohi, Takatalo, Guckelsberger & Hämäläinen. (2018). Review of Intrinsic Motivation in Simulation-Based Game Testing. Proc. CHI, 1–13.

Supervision: Profs. Christian Guckelsberger, Perttu Hämäläinen

Keywords: Intrinsic motivation, user modeling, reinforcement learning, human-computer interaction, human-centric AI, cognitive science, videogames

Level: Research fellow, postdoc, PhD student

Learning for behavior as communication »

Communication is fundamental to the successful cooperation of humans and autonomous agents in shared environments. While explicit verbal communication forms the basis of many human-human interactions, it is not feasible in settings such as road traffic. There, we can instead identify three types of communication: formal communication between the road infrastructure and traffic participants (traffic lights), formal communication between traffic participants (turn signals), and informal or implicit communication between traffic participants (positioning in the lane, the distance between vehicles). Of these three, the last is the most challenging for autonomous vehicles (AVs), yet equally if not more important. The critical challenge in enabling AVs to participate in this type of communication is the lack of a formalised code, and hence the need to learn it through experience, which would be dangerous in the real world and must instead happen in simulation.

The proposed topic will enable the candidate to work at the intersection of multi-agent reinforcement learning, autonomous driving, human-robot interaction and sim-to-real transfer. The candidate's role will be twofold. The first part is data-driven modeling of communication behaviors to build a realistic simulation environment; the key challenge here is to develop a method that, based on pre-recorded data, can generate unique, context-specific messages through vehicle behaviors. The second part is to enable the autonomous agent to automatically produce and understand behavior-based cues and to act and react appropriately, accomplished through reinforcement learning.

Supervision: Profs. Tomasz Kucner, Ville Kyrki, Joni Pajarinen, Laura Ruotsalainen

Keywords: Reinforcement learning, autonomous driving, intention communication

Level: Postdoc

Probabilistic modeling for assisting human decision making »

We develop the AI techniques needed for systems that can help their users make better decisions and design better solutions across a range of tasks, from personalized medicine to materials design. A core insight in developing such AIs is that they need world models for understanding the world and interacting with it, and user models for understanding the user and interacting with them. This project develops methods and tools for helping humans in decision-making tasks where they use models for the decision. The work will build on the Bayesian workflow for the world model, and on existing theory in cognitive science and human-computer interaction for the user model. The goal is to generate task-specific, informative visualizations and recommendations, tailored to the expertise of the specific user, to support better decisions.

Supervision: Profs. Samuel Kaski, Antti Oulasvirta, Aki Vehtari

Keywords: Probabilistic modeling, Bayesian inference, Bayesian workflow, decision making

Level: Research fellow, postdoc

Probabilistic multi-agent modeling for collaborative AI assistants »

We study how to build collaborative assistants that are able to help another agent perform their task. The assistant does not know the agent’s goal at the beginning and has to learn it as part of this “zero-shot” assistance scenario.

This is interesting both as a fundamental multi-agent modeling problem and for building collaborative AI assistants for human-AI research teams in decision making and design, formulated as sequential decision making. We are looking for a researcher interested in developing the theory and inference methods for this new task with us, or in applying the assistants, together with other FCAI researchers, to tough decision-making and design tasks.

The work will involve probabilistic modeling, multi-agent formulations, POMDPs and reinforcement learning, and inverse reinforcement learning.

Supervision: Profs. Samuel Kaski, Frans Oliehoek (TU Delft), other FCAI professors

Keywords: Probabilistic modeling, multi-agent formulations, POMDP, reinforcement learning, inverse reinforcement learning

Level: Research fellow, postdoc; PhD students considered as well

Synthetic psychologist: optimal experiment design for simulator models in cognitive science »

Theories in psychology are increasingly expressed as computational cognitive models that simulate human behavior. Such behavioral models are also becoming the basis for novel applications in areas such as human-computer interaction, human-centric AI, computational psychiatry, and user modeling. As models account for more aspects of human behavior, they increase in complexity. We aim to develop and apply methods that assist a researcher in dealing with complex and intractable cognitive models, for instance by developing optimal experiment design methods to help with model selection and parameter inference, or by using likelihood-free methods with cognitive models. This virtual lab will also encourage avenues of research relevant to cognitive modeling and AI assistance, which can be pursued in collaboration with other FCAI teams and virtual laboratories. We are looking for excellent candidates who are excited by cognitive models, Bayesian methods, probabilistic machine learning, and open-source software environments, in no particular order of preference.

Supervision: Profs. Luigi Acerbi, Andrew Howes (University of Birmingham), Samuel Kaski, Antti Oulasvirta

Keywords: Virtual laboratory, cognitive science, simulator models, AI-assisted modeling

Level: Research fellow, postdoc

Your profile

You have a strong research background in machine learning, statistics, artificial intelligence or a related field, preferably demonstrated by publications in the leading machine learning venues (e.g. AISTATS, ICML, JMLR, NeurIPS).

You should hold (or expect to receive shortly) a PhD (for postdoctoral applicants) or a Master’s degree (for PhD student applicants) in computer science, statistics, electrical engineering, mathematics or a related field. If you don’t yet have the degree at the time of application, please submit a plan for its completion. Experienced postdoctoral applicants, typically those who have worked successfully as postdoctoral researchers for several years, can be considered for research fellow positions.

The positions require the ability to work both independently and as part of a team in a highly collaborative and interdisciplinary environment. Any position-specific requirements are stated in the topic descriptions.

Our offer

1) Research environment

FCAI’s research mission is to create new types of AI that are data-efficient, trustworthy, and understandable. We work towards this by building AI systems capable of helping their users make better decisions and design sustainable solutions across a range of tasks from health applications to autonomous traffic.

You will join a community of machine learning researchers who all make important contributions to our common agenda, providing each other with new ideas, complementary methods, and attractive case studies. Your research can be theoretical, applied, or both. You will be part of a broader team of researchers studying similar topics, mentored by a group of several experienced professors. Our community is fully international, and the working language is English.

Our research environment provides you with a broad range of possibilities to work with companies and academic partners, and supports your growth as a researcher. FCAI, host of ELLIS Unit Helsinki, is a prominent part of the pan-European ELLIS network, which further strengthens our collaboration with other leading machine learning researchers in Europe. In addition, our local and national computational services give our researchers access to excellent computing facilities, spearheaded by the EuroHPC supercomputer LUMI, the third fastest in the world.

2) Job details

All positions are fully funded, and the salaries are based on the Finnish universities’ pay scale. The positions are based either at Aalto University or at the University of Helsinki.

Research fellow contracts are made for up to five years. Postdoc positions are typically two-year full-time contracts, with the possibility of a one-year extension. PhD positions are four-year full-time contracts, with an assessment checkpoint at two years.

Starting dates are flexible. All positions are negotiated on an individual basis and may include, for example, a relocation bonus, an independent travel budget, or research software engineering support.

We are strongly committed to offering everyone an inclusive and non-discriminatory working environment. We warmly welcome qualified candidates from all backgrounds to apply, and particularly encourage applications from women and other groups underrepresented in the field.

How to apply?

The deadline for postdoc/research fellow applications is August 21, 2022, and for PhD student applications August 28, 2022. Please submit your application through our recruitment system (links below).

This call is administered together with the Helsinki Institute for Information Technology (HIIT) and the Helsinki Doctoral Education Network in Information and Communications Technology (HICT). You can find the details on how to apply below:

  • Postdoc and research fellow positions: APPLY HERE

    • In the application form you are asked to select the focus area(s) you are interested in; select the “Finnish Center for Artificial Intelligence” tick box to apply for FCAI positions.

    • In your cover letter, please specify one or several FCAI positions to which you apply. 

  • Doctoral researcher positions: APPLY HERE

    • In the application form you are asked to select the research area(s) you are interested in; select the “Algorithms and machine learning” tick box to apply for FCAI positions. You are also asked to select the topic(s) you are interested in; select one or more of the topics described above.

    • In your cover letter, please also specify one or several FCAI positions to which you apply.

    • General FAQ for the doctoral student call: https://hict.fi/general-hict-call-faq/ 

If you are interested in applying but none of the specific topic descriptions match your interest, you are welcome to suggest other topics that relate to our core areas of research. Please specify this, as well as the PIs that you would like to work with, in your cover letter.

Required attachments:

  1. Cover letter (1–2 pages). Please specify one or several FCAI positions to which you apply and/or the supervisors with whom you want to work. This information is mandatory; without it, we cannot guarantee that your application receives the best possible review.

  2. CV

  3. List of publications (please do not attach full copies of publications)

  4. A transcript of your doctoral/master’s studies and the degree certificate of your latest degree. If you are applying for a postdoc position and don’t yet have a PhD degree, or for a PhD student position and don’t yet have a Master's degree, you must submit a plan of completion.

  5. Contact details of two senior academics as possible referees. We will contact your referees if we need recommendation letters.

All materials should be submitted in English, in PDF format. Note: you can upload multiple files to the recruitment system, each at most 5 MB.

 

Who we are

Finnish Center for Artificial Intelligence FCAI is a research community initiated by Aalto University, the University of Helsinki, and the Technical Research Centre of Finland VTT. We develop new types of AI that can work with humans in complex environments, and help renew industry and society.

FCAI is built on a long track record of pioneering machine learning research. Currently over 60 professors contribute to our research.

Our community organizes frequent seminars, e.g., the Machine Learning Coffee Seminar and the Seminar on Advances in Probabilistic Machine Learning. We offer high-quality collaboration opportunities with other leading research networks and companies. For instance, FCAI hosts ELLIS Unit Helsinki and has a joint research center with NVIDIA.

Local and national computational services spearheaded by the EuroHPC supercomputer LUMI, the fastest supercomputer in Europe, provide our researchers with access to excellent computing facilities.

About Finland

Finland is a great place to live, with or without family: it is a safe, politically stable and well-organized Nordic society, where equality is highly valued and extensive social security supports people in all life situations.

Finland's free, high-quality education system is also internationally renowned, and Finland has been ranked the happiest country in the world for the fifth year running.