Postdoc and PhD student positions in machine learning and AI
(19 funded positions)

 

Join us to work on new machine learning techniques at the Finnish Center for Artificial Intelligence FCAI! We have several exciting topics available – your work can be theoretical or applied, or both.

Nordic Probabilistic AI Summer School hosted in Helsinki in June 2022. Photo: Melanie Balaz / University of Helsinki

 

We are looking for multiple postdocs and PhD students in machine learning. The positions are in the following areas of research:

1) Reinforcement learning
2) Probabilistic methods
3) Simulator-based inference
4) Privacy and federated learning
5) Multi-agent learning

There are several specific topics related to each research area. Below are further descriptions.

Areas of research

1) Reinforcement learning

We develop reinforcement learning techniques to enable interaction across multiple agents including AIs and humans, with potential applications from AI-assisted design to autonomous driving. Methodological contexts of the research include deep reinforcement learning, inverse reinforcement learning, hierarchical reinforcement learning as well as multi-agent and multi-objective reinforcement learning.

Open positions →

2) Probabilistic methods

We develop AI tools using probabilistic programming, with our main expertise in Bayesian machine learning. The research is disseminated as modular open-source software, including contributions to Stan, the most popular probabilistic programming framework.

Open positions →

3) Simulator-based inference

We develop simulation-based methods to learn generative models from data, i.e., inference methods that replace the likelihood function with a data-generating simulator. Main initiatives include (1) ELFI, a leading software platform for likelihood-free inference of interpretable simulator-based models, and (2) numerous leading GAN-based technologies.
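
As a minimal illustration of the likelihood-free idea behind this area (a generic ABC rejection sketch under our own assumptions, not ELFI's API or the group's actual methods; simulate, summary and the tolerance eps are placeholder names), one keeps prior draws whose simulated summaries fall close to the observed data:

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate(theta, n=100):
        # Hypothetical black-box simulator: generates data given a parameter.
        return rng.normal(theta, 1.0, size=n)

    def summary(x):
        # Summary statistic used in place of an intractable likelihood.
        return x.mean()

    observed = simulate(2.0)            # stand-in for real observed data
    obs_summary = summary(observed)

    # ABC rejection sampling: accept prior draws whose simulated summary
    # lies within tolerance eps of the observed summary.
    eps, accepted = 0.1, []
    for _ in range(20000):
        theta = rng.uniform(-5.0, 5.0)  # draw from the prior
        if abs(summary(simulate(theta)) - obs_summary) < eps:
            accepted.append(theta)

    print(f"approximate posterior mean: {np.mean(accepted):.2f} ({len(accepted)} samples)")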

Open positions →

4) Privacy and federated learning

We develop methods for efficient privacy-preserving learning and inference using differential privacy. Our work targets probabilistic methods, federated learning, deep learning and data anonymisation through synthetic data.
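
As a toy illustration of the differential-privacy principle underlying this area (our own minimal sketch, not the group's methods; dp_mean and its parameters are illustrative), the Gaussian mechanism releases a clipped mean with noise calibrated to an (epsilon, delta) privacy budget:

    import numpy as np

    def dp_mean(values, clip=1.0, epsilon=1.0, delta=1e-5, rng=None):
        """Differentially private mean via the Gaussian mechanism."""
        rng = rng or np.random.default_rng()
        x = np.clip(values, -clip, clip)          # bound each individual's influence
        sensitivity = 2 * clip / len(x)           # L2 sensitivity of the clipped mean
        # Standard Gaussian-mechanism calibration for (epsilon, delta)-DP.
        sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
        return float(x.mean() + rng.normal(0.0, sigma))

    data = np.random.default_rng(0).normal(0.3, 1.0, size=10_000)
    print(round(dp_mean(data), 3))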

Open positions →

5) Multi-agent learning

We develop complex and interactive user models using probabilistic methods and inference techniques, and deploy them in realistic assistance settings. These models treat human users as agents who collaborate with an AI assistant, instead of as passive sources of data. This includes, but is not limited to, creating user models that can assess the user’s tacit and changing goals, elicit their knowledge, and understand how the user interprets the actions of the AI.

Open positions →

Open positions

You can find a list of our open positions below. Connections to the areas of research are indicated after each topic:

  • Reinforcement learning: RL

  • Probabilistic methods: PM

  • Simulator-based inference: SBI

  • Privacy and federated learning: PFL

  • Multi-agent learning: MAL

F1: AI-assisted design - RL, PM, MAL »

FCAI is working on a new paradigm of AI-assisted design that aims to cooperate with designers by supporting and leveraging their creativity and problem-solving. The challenge for such AI is how to infer designers' goals and then help them without being needlessly disruptive. We use generative user models to reason about designers' goals, reasoning, and capabilities. In this call, FCAI is looking for a postdoctoral scholar or research fellow to join our effort to develop AI-assisted design. Suitable backgrounds include deep reinforcement learning, Bayesian inference, cooperative AI, computational cognitive modelling, and user modelling.

Keywords: AI-assisted design, user modeling, cooperative AI

Level: Postdoctoral researcher, research fellow

Supervision: Profs. Antti Oulasvirta (Aalto University), Samuel Kaski (Aalto University), Perttu Hämäläinen (Aalto University)

F2: Amortized inference for experimental design and decision making - PM, SBI »

We develop amortized experimental design and inference techniques that take into account the down-the-line decision-making task. For example, this may include delayed-reward decision making where data has to be measured, at a cost, before making the decision. This problem occurs in the design-build-test-learn cycles that are ubiquitous in engineering system design, and in experimental design in the sciences and medicine. The solutions need Bayesian experimental design techniques able to work well with simulators, measurement data and humans in the loop, who are both information sources and the final decision makers. For online and real-time tasks, algorithmic recommendations need to come near-instantly, thus requiring amortization of both the experimental design and the decision-making suggestions. The assistive methods need to account for uncertainty in the inference process and possibly in the utility function itself.
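
For orientation, one standard way to formalize this (a textbook formulation, not a formula from the call) scores a candidate design d by its expected information gain about the parameters θ from the not-yet-observed outcome y, or by the expected utility of the downstream decision:

    \mathrm{EIG}(d) = \mathbb{E}_{p(y \mid d)}\big[\, H[p(\theta)] - H[p(\theta \mid y, d)] \,\big]

Amortization then means training a network that maps the data gathered so far directly to the next design or decision, so this expectation does not have to be re-estimated from scratch at deployment time.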

We are looking for a machine learning researcher familiar with probabilistic modelling, amortized inference via deep learning techniques, and/or Bayesian experimental design, interested in developing the new methods, with the option of applying the techniques to improve modelling in FCAI’s Virtual Laboratories.

Keywords: Sequential design of experiments, Bayesian experimental design, active learning, amortized inference

Level: Research fellow, postdoc and/or PhD student

Supervision: Luigi Acerbi (University of Helsinki), Samuel Kaski (Aalto University)

F3: Amortized surrogates for simulation and inference - PM, SBI »

Recent advances in machine learning have shown how powerful emulators and surrogate models can be trained to drastically reduce the costs of simulation, optimization and Bayesian inference, with many trailblazing applications in the sciences. In this project, the candidate will join an active area of research within several FCAI groups to develop new methods for simulation, optimization and inference that combine state-of-the-art deep learning and surrogate-based kernel approaches – including for example deep sets and transformers; normalizing flows; Gaussian and neural processes – with the goal of achieving maximal sample-efficiency (in terms of number of required model evaluations or simulations) and wall-clock speed at runtime (via amortization). The candidate will apply these methods to challenging problems involving statistical and simulator-based models that push the current state-of-the-art, be it for number of parameters (high-dimensional amortized inference), number of available model evaluations (extreme sample-efficiency) or amount of data. The ideal candidate has expertise in both deep learning and probabilistic methods (e.g., Gaussian processes, Bayesian optimization, normalizing flows).
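
As a rough sketch of what amortized inference means in this context (a standard neural posterior estimation objective, not the project's specific method), one trains a conditional density model q_φ(θ | x) on simulated parameter-data pairs so that, after training, a posterior for new data is obtained with a single forward pass:

    \phi^{*} = \arg\min_{\phi} \; \mathbb{E}_{\theta \sim p(\theta),\, x \sim p(x \mid \theta)}\big[ -\log q_{\phi}(\theta \mid x) \big], \qquad q_{\phi^{*}}(\theta \mid x_{\mathrm{obs}}) \approx p(\theta \mid x_{\mathrm{obs}})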

Keywords: Emulators, amortized inference, Bayesian optimization, normalizing flows, simulator-based inference

Level: Research fellow, postdoc and/or PhD student

Supervision: Profs. Luigi Acerbi (University of Helsinki), Jukka Corander (University of Helsinki)

F4: Automatic model quality assessment and improvement - RL, PM »

The objective of this research topic is to enable autonomous embodied systems to automatically assess and improve their internal models, which guide their interaction with the external world. The task is to develop application-independent methods that automatically evaluate the quality of learned models and propose and execute model improvements. The developed methods will be included in the broader work on autonomous driving.

The proposed research activity takes a step toward genuinely long-term operation of autonomous agents. The work will focus on developing methodologies that provide information about the correctness of an entire model or its parts. This information will later be utilized in an iterative or online learning process such that the model is selectively updated. Thus there is no need to retrain the entire model from scratch, and no risk of degrading the overall performance. In this work, the candidate will also look into the problem of explainability so that motivation for the proposed model changes can be provided.

The candidate should have a PhD in Computer Science, Machine Learning, AI, Robotics or a related field. They should have an excellent track record in machine learning or active perception; experience with explainable AI or robotic introspection is a plus.

Keywords: Model quality assessment, model introspection, long-term autonomy

Level: Research fellow or postdoc

Supervision: Profs. Tomasz Kucner (Aalto University); Joni Pajarinen (Aalto University)

F5: Computational rationality - RL, MAL »

Computational rationality is an emerging integrative theory of intelligence in humans and machines [1] with applications in human-computer interaction, cooperative AI, and robotics. The theory assumes that observable human behavior is generated by cognitive mechanisms that are adapted to the structure not only of the environment but also of the mind and brain itself [2]. Implementations use deep reinforcement learning to approximate optimal policies under assumptions about the cognitive architecture and its bounds. Cooperative AI systems can utilize such models to infer causes behind observable behavior and to plan actions and interventions in settings like semiautonomous vehicles, game-level testing, AI-assisted design, etc. FCAI researchers are at the forefront of developing computational rationality as a generative model of human behavior in interactive tasks (e.g., [3,4,5]) as well as suitable inference mechanisms [5]. We collaborate with the University of Birmingham (Prof. Andrew Howes) and Université Pierre et Marie Curie (UPMC, CNRS) (Dr. Julien Gori, Dr. Gilles Bailly).

In this call, we are looking for a talented postdoctoral scholar or research fellow to join our effort to develop computational rationality as a model of human behavior. Suitable backgrounds include reinforcement learning (especially deep RL) and computational cognitive modelling.

References:

  1. S. Gershman et al. Computational rationality: A converging paradigm for intelligence in brains, minds, and machines. Science 2015.
  2. R. Lewis, A. Howes, S. Singh. Computational rationality: Linking mechanism and behavior through bounded utility maximization. Topics in Cognitive Science 2014.
  3. J. Jokinen et al. Parameter Inference for Computational Cognitive Models with Approximate Bayesian Computation. Proc. CHI'21, ACM Press.
  4. C. Gebhardt et al. Hierarchical Reinforcement Learning Explains Task Interleaving Behavior. Computational Brain & Behavior 2021.
  5. J. Takatalo et al. Predicting Game Difficulty and Churn Without Players. Proc. CHI Play 2020.
  6. A. Kangasrääsiö et al. Parameter Inference for Computational Cognitive Models with Approximate Bayesian Computation. Cognitive Science 2019.

Supervision: Profs. Antti Oulasvirta (Aalto University), Andrew Howes (University of Birmingham), Samuel Kaski (Aalto University), Arto Klami (University of Helsinki), Perttu Hämäläinen (Aalto University)

Keywords: Computational rationality, computational cognitive modeling, deep reinforcement learning

Level: Postdoctoral researcher, research fellow

F6: Deep learning for material science - PM, SBI »

The goal of the project is to develop novel deep learning algorithms to answer open questions in nanoscale physics, such as predicting molecular structure in liquids or developing accurate but very fast models of water. In particular, we want to take advantage of the capabilities of Graph Neural Networks to provide robust and highly efficient simulators for molecular systems. These models will be coupled to state-of-the-art experimental characterisation, ultimately including a dynamic interaction where simulations are actively used to focus on information-rich regions during experiments. We are looking for applicants with a strong background in deep learning and/or physical simulations.
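
For context, graph neural networks of the kind mentioned above typically follow a generic message-passing scheme (standard background, not the project's specific architecture), in which atom features h_i are updated from neighbouring atoms j ∈ N(i) and edge features e_ij:

    h_i^{(l+1)} = \psi\Big( h_i^{(l)}, \sum_{j \in \mathcal{N}(i)} \varphi\big( h_i^{(l)}, h_j^{(l)}, e_{ij} \big) \Big)

with learned message and update functions φ and ψ; stacking such layers yields fast surrogates for energies and forces in molecular systems.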

Keywords: Deep learning, graph neural networks, material science, physics simulations

Level: Research fellow, postdoc and/or PhD student

Supervision: Profs. Adam Foster (Aalto University), Alexander Ilin (Aalto University)

F7: Deployment as a fundamental ML challenge - RL, PM, SBI, MAL »

Machine learning (ML) is now used widely in sciences and engineering, in prediction, emulation, and optimization tasks. Contrary to what we would like to think, it does not work well in practice. Why?

Because the conditions during deployment may radically differ from the training conditions. This has been conceptualized as distribution shift or the sim-to-real gap, and a particularly interesting challenge we will be tackling is changes due to unobserved confounders. Solving this challenge is imperative for widespread deployment of ML. In this project, we will treat deployment as a fundamental machine learning challenge, developing new principles and methods for a problem that is arguably the main show-stopper preventing machine learning from being seriously useful for the real problems we face in science, companies and society. We have exciting test cases in FCAI’s highlight problems in drug design, materials science, and health applications. We are looking for candidates with a strong background in probabilistic machine learning.

Keywords: ML deployment, distribution shift, out-of-distribution, unobserved confounders

Level: Research fellow, postdoc and/or PhD student

Supervision: Profs. Samuel Kaski (Aalto University), Vikas Garg (Aalto University)

F8: Efficient probabilistic modeling of speech - PM »

Probabilistic models are essential for capturing the stochastic properties in speech and audio signals. Speech synthesis in particular has recently emerged as a proving ground application for deep generative models. Autoregressive (AR) density models, such as WaveNet, achieve high quality, can be trained effectively using maximum likelihood, and have low algorithmic latency suitable for real-time applications. However, available implementations of AR inference using GPUs and popular Python libraries are slow, which has shifted research to feedforward models for parallel inference. These models can similarly achieve high quality, whether trained as Generative Adversarial Networks (GANs), Diffusion Models or Energy Based Models (with contrastive estimation). However, these training recipes often involve a complex mixture of training objectives, while feedforward models can further be relatively inefficient and introduce algorithmic latency. Meanwhile, Digital Signal Processing (DSP) methods for speech and audio contain valuable knowledge both in perceptually relevant metrics for similarity, and in design and implementation of models with feedback. First, we propose to leverage this knowledge to develop efficient DSP-based recurrent building blocks (with forward and backward stability guarantees), and integrate them into a deep learning system. Second, we aim to formulate a unified framework for evaluating the various probabilistic models for speech using Score Matching Networks, experiment on what is really needed to make a model work, and extend the current approaches to use perceptually relevant score matching functions. We are looking for applicants with a strong background in deep learning and probabilistic modeling.
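
For reference, the autoregressive density models mentioned above factorize the waveform likelihood sample by sample (a textbook formulation, not specific to this project), which is what makes maximum-likelihood training straightforward while naive sequential sampling remains slow:

    p_{\theta}(x_{1:T}) = \prod_{t=1}^{T} p_{\theta}\big( x_t \mid x_{1:t-1} \big)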

Keywords: Deep generative models, digital signal processing, probabilistic modeling, speech

Level: Research fellow, postdoc and/or PhD student

Supervision: Profs. Lauri Juvela (Aalto University), Alexander Ilin (Aalto University)

F9: Evaluating and improving posterior inference for difficult posteriors - PM, SBI »

Both MCMC and distributional approximation algorithms (variational and Laplace approximations) often struggle to handle complex posteriors, but we lack good tools for understanding how and why. We study diagnostics that identify the specific nature of the computational difficulty, e.g. whether it is caused by narrow funnels or strong curvature. We also develop improved inference algorithms that account for these challenges, e.g. via automated and semi-automated transformations that make the posterior easier, or by better accounting for the underlying geometry. We are looking for applicants with a strong inference background and an interest in working on improving inference for the hardest problems.
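
A classic example of the "narrow funnel" geometry referred to above is Neal's funnel (a standard textbook example, not a result of this project), where a non-centred reparametrization is exactly the kind of posterior-easing transformation meant here:

    v \sim \mathcal{N}(0, 3^2), \qquad \theta_i \mid v \sim \mathcal{N}\big(0, e^{v}\big) \quad \text{(variance } e^{v}\text{)}

Samplers handle this far better after rewriting θ_i = e^{v/2} z_i with z_i ∼ N(0, 1), so that the variables actually sampled are independent standard normals.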

Keywords: MCMC, variational approximation, differential geometry, inference diagnostics, Bayesian workflow

Level: Research fellow, postdoc and/or PhD student

Supervision: Profs. Aki Vehtari (Aalto University) and Arto Klami (University of Helsinki); primary/secondary depending on the candidate’s interests

F10: Explainable AI for virtual laboratories - RL, PM, PFL »

FCAI is actively developing methods and software for virtual laboratories to enable AI assistance in the research process. We are looking for a candidate to research explainable AI and uncertainty quantification. Efficient human-AI collaboration requires methods that are either inherently capable of providing explanations for their decisions or can explain the decisions of other AI models. For instance, the user needs to know why the AI recommends a particular experiment or predicts a specific outcome, and should always be aware of the reliability of the AI models. You will conduct the project with a team of AI researchers, with access to researchers specialised in various application areas. The applicant should be interested in incorporating the techniques into the virtual laboratory software developed at FCAI for broad applicability.

Keywords: Virtual laboratory, explainable AI, uncertainty quantification, human-AI collaboration

Level: Research fellow, postdoc and/or PhD student

Supervision: Profs. Arto Klami (University of Helsinki), Kai Puolamäki (University of Helsinki)

F11: Foundation models for interactive computing - RL, SBI »

Foundation models such as GPT-3 and CLIP are revolutionizing how AI is developed and applied, by providing reusable and general-purpose building blocks with unprecedented capabilities. Instead of training large-scale models from scratch for thousands or millions of GPU hours, one can solve novel tasks by combining pretrained foundation models in novel ways, or finetuning them for downstream tasks. The release of OpenAI’s CLIP, for instance, soon led to the emergence of CLIP-guided image generation models such as Disco Diffusion, Stable Diffusion, Midjourney, and DALL-E 2, as well as smaller-scale experiments such as ClipDraw and StyleClipDraw. CLIP also increasingly empowers various semantic search solutions across multiple industries.

The objective of this research topic is to study, develop, and test/validate foundation models for interactive computing, where they have received relatively less attention so far. The specific research foci depend on the hired researchers’ interests, but might include:

  • Large language models (LLMs) as textual human simulacra, e.g., in generating synthetic user research data, role-playing research participants in rapid research exploration and piloting.
  • Foundation models for simulated human movement control, such as AI “user simulators” that can be used to test interactive systems, or virtual AI motion capture actors that can follow choreographer instructions to generate animations or suggest ways to solve movement problems.
  • Multimodal extensions of the above, e.g., video-language reward models that determine how well an AI agent solves a task defined using natural language, based on visual observations, or user motivation and emotion models that can provide synthetic “think aloud” narrative based on user simulation data.
  • Generative models for designing interactive systems, e.g., generating visual designs, personas, user stories, or wireframes.

FCAI teams have already made progress in many of the directions above (e.g., https://github.com/aikkala/user-in-the-box, https://dl.acm.org/doi/abs/10.1145/3490100.3516464, https://dl.acm.org/doi/abs/10.1145/2858036.2858233, https://github.com/NVlabs/stylegan3), providing an excellent foundation for future research.

Keywords: Large Language Models, Foundation Models, User simulation, Generative models

Level: Research fellow, postdoc and/or PhD student

Supervision: Profs. Perttu Hämäläinen (primary, Aalto University); Robin Welsch (Aalto University), Christian Guckelsberger (Aalto University), Jaakko Lehtinen (Aalto University, NVIDIA)

F12: Interactive AI using multimodal communication - RL, PM, SBI, MAL »

Intelligent machines require not only an internal model of the world but also interaction with humans and their environment. They need to make use of contextualized information and must be able to adapt to the user. This project focuses on multimodal communication that is natural for humans but difficult for machines, and will deal with spontaneous spoken and written language, gestures and other forms of non-verbal interaction. The important aspect is to model those channels in combination, so as to deal with complementary information coming from audio-visual signals as well as traditional input from keyboards and touch-sensitive devices. We envision the development of a multimodal assistant that can interact with its users in a coherent, natural way. The project will include work on cross-modal attention and sequence models able to capture long-distance dependencies, and we will test our ideas in applications related to health and wellbeing. The framework for this work is based on modern neural architectures and deep learning, combining aspects of supervised, unsupervised and reinforcement learning. This research problem will be solved as a collaboration between three research groups: Aalto ASR, Aalto Video Content Analysis and Helsinki-NLP, all in FCAI.

Keywords: Multimodal NLP, speech technology, computer vision

Level: Research fellow or postdoc

Supervision: Profs. Mikko Kurimo (primary, Aalto University), Jorma Laaksonen (Aalto University), Jörg Tiedemann (University of Helsinki)

F13: Language-empowered world models for RL - RL »

Incorporating accurate causal knowledge about an environment, in terms of objects and their relationships, into the world model of a reinforcement learning agent can significantly reduce the amount of exploration required to solve new tasks and to generalize to new environments. Recently, causal representation learning has been proposed as a way of extracting and representing such causal knowledge from previous experience in the latent space. However, in practice data are extremely correlated, and it is not straightforward even to disentangle the relevant objects, let alone the causal relationships between them. On the other hand, large language models (LLMs) encompass a lot of information about objects and causal relationships, for example: "User: How can I turn the lights on? GPT-3: Depending on the type of light fixture in the room, you can usually turn the lights on by flipping a switch near the entrance to the room." Leveraging this type of information when learning world models for reinforcement learning (RL) and causal inference is still mostly underexploited, although the combination of LLMs with RL is a rapidly emerging topic in the machine learning community, see https://larel-workshop.github.io/. We are looking for a postdoc who would leverage and expand our previous work on RL agents with language understanding abilities to address this highly topical research theme. In particular, we focus on improving the state of the art on language-guided RL benchmarks by empowering the agent with deep latent-variable probabilistic world models that accurately capture uncertainty, serving as the foundation for reliable planning.

Keywords: Causality, language modeling, reinforcement learning, world models

Level: Research fellow, postdoc and/or PhD student

Supervision: Pekka Marttinen (Aalto University); Alexander Ilin (Aalto University)

F14: Long term planning with search graphs - RL »

In many complex sequential decision-making tasks, online planning is crucial for high performance. Monte Carlo Tree Search (MCTS) is an efficient online planning tool which employs a principled mechanism for trading off between exploration and exploitation. Following the success of MCTS in discrete control problems (such as the games of Go, Chess, and Shogi), various extensions of MCTS to continuous domains have been proposed. However, the inherent high branching factor and the resulting explosion of the search tree size limit existing methods. In this project, we investigate novel extensions of MCTS for continuous domains based on search graphs. Our approach is built on the idea that sharing the same action policy between several states can yield efficient planning and thus high performance. A limited number of stochastic action bandit nodes then produce a layered graph instead of an MCTS search tree, allowing for long-term planning. The designed algorithms can be used for robotic manipulation and navigation, for example with a Boston Dynamics Spot robot. We are looking for applicants with a strong background in reinforcement learning (especially model-based) and tree search algorithms.
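
For background, the exploration-exploitation mechanism referred to above is typically the UCT rule (standard MCTS material, not the project's new algorithm): at a node s with visit count N(s), select the child action

    a^{*} = \arg\max_{a} \Big[ Q(s, a) + c \sqrt{ \ln N(s) / N(s, a) } \Big]

where Q(s, a) is the current value estimate, N(s, a) the action's visit count, and c an exploration constant. In continuous action spaces every newly sampled action opens a new branch, which is the branching-factor explosion the project aims to avoid with layered search graphs.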

Supervision: Profs. Joni Pajarinen (Aalto University), Alexander Ilin (Aalto University)

Keywords: Continuous control, graph search, Monte Carlo tree search, online planning, reinforcement learning

Level: Research fellow, postdoc and/or PhD student

F15: ML4Science - RL, PM, SBI, MAL »

Machine learning is increasingly used as a key element in research, for instance to efficiently approximate computationally costly simulations, to automate the design of experiments, and to integrate analysis of experimental results with multi-fidelity simulations. Much of the practical work is done in the context of specific applications in science, but our interest lies in the more general question of how ML could be used as part of the research process, essentially to improve the results and the scientific process itself. We seek solutions that work across multiple disciplines and applications. We are looking for candidates interested in aspects such as (a) how to best incorporate domain knowledge into probabilistic ML models, (b) how to integrate ML models into a research process that also involves e.g. empirical experimentation, and (c) how to assist the research process itself using AI solutions. The work relates closely to our Virtual Laboratories initiative: since most fields now use computational tools, essentially doing experiments first virtually with simulations, AI methods that are cross-usable across fields can offer advantages of scale. The virtual laboratories give opportunities to demonstrate and validate the research contributions in several different natural science applications. An ideal candidate has expertise in both ML and some science domain, but candidates with a strong background in either one are also considered.

Keywords: Natural science, virtual laboratory, AI-assisted research

Level: Research fellow, postdoc and/or PhD student

Supervision: Profs. Samuel Kaski (Aalto University), Arto Klami (University of Helsinki), other professors (e.g. Patrick Rinke, Aalto University) depending on candidate’s interests/qualifications

F16: Multi-agent RL for collaborative AI - RL, MAL »

We develop the new ML principles and methods needed by AI assistants to help people make better decisions, with ongoing applications in science and engineering. We use multi-agent formalisms to define the assistance problems these assistants solve, including the human agent being assisted, and develop new multi-agent RL solutions for the problem. We are particularly interested in (1) how to build and (pre-)train models of human behaviour based on cognitive science, and (2) how to solve new ad-hoc teamwork problems with multi-agent RL.

We are looking for new members in our team, with experience in probabilistic machine learning and multi-agent reinforcement learning. No formal experience with cognitive science is required. Additional knowledge in any of the following will be helpful but not necessary - we have a great team to work with: game theory, Bayesian RL, computational rationality, and inverse reinforcement learning.

Recent publications by the team:

  1. https://arxiv.org/abs/2211.16277 (best paper award; HiLL@Neurips-22)
  2. https://arxiv.org/abs/2202.07364 (AAAI-23)
  3. https://arxiv.org/abs/2204.01160 (AAMAS-22)

Keywords: User modeling, multi-agent RL, human-AI interaction, cooperative AI

Level: Research fellow, postdoc and/or PhD student

Supervision: Samuel Kaski (Aalto University) and other professors

F17: Private federated learning - PFL »

Many applications of machine learning require training on distributed data while keeping the data private. Private federated learning enables this, but its communication requirements can be impractical. The aim of this project is to develop new approaches and methods for private learning on distributed data. A strong background in differential privacy and/or federated learning is an asset for this project.
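
As a minimal sketch of the kind of pipeline this topic concerns (a generic differentially private federated-averaging pattern under our own assumptions, not the project's method; all function names are placeholders), each client computes a local update, the update is clipped in norm, and the server averages the clipped updates with Gaussian noise added:

    import numpy as np

    rng = np.random.default_rng(0)

    def local_update(weights, data, lr=0.1):
        # Placeholder client step: one gradient step of a linear model on (X, y).
        X, y = data
        grad = X.T @ (X @ weights - y) / len(y)
        return weights - lr * grad

    def clip(update, max_norm=1.0):
        # Bound each client's influence so the privacy noise can be calibrated.
        return update * min(1.0, max_norm / (np.linalg.norm(update) + 1e-12))

    def dp_fedavg_round(weights, client_data, max_norm=1.0, noise_mult=1.0):
        updates = [clip(local_update(weights, d) - weights, max_norm) for d in client_data]
        noise = rng.normal(0.0, noise_mult * max_norm / len(updates), size=weights.shape)
        return weights + np.mean(updates, axis=0) + noise

    # Tiny synthetic example with three clients.
    clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(3)]
    w = np.zeros(3)
    for _ in range(20):
        w = dp_fedavg_round(w, clients)
    print(w.round(3))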

Keywords: Differential privacy, federated learning, deep learning

Level: Research fellow, postdoc and/or PhD student

Supervision: Profs. Antti Honkela (University of Helsinki), Samuel Kaski (Aalto University)

F18: Sustainability in computing - PM »

Sustainability is important in the context of AI: in particular, there exist AI/ML/DS methods for improving the sustainability of a given system. However, the computational methods used in AI are not always sustainable themselves, as they might require a lot of data or a lot of computational power to produce their results. The aim of this research project is to investigate sustainable AI methods, and in particular sustainable computational methods whose energy footprint is minimal. One possible approach that has recently been studied is the clever use of parallel and distributed algorithms to decrease the amount of energy per flop. We are looking for applicants with an interest in energy-efficient or otherwise sustainable computational methods in general.

Keywords: Sustainability in AI, energy-efficient machine learning, data-light computing, parallel computing

Level: Research fellow, postdoc and/or PhD student

Supervision: Profs. Simo Särkkä (Aalto University), Laura Ruotsalainen (University of Helsinki)

F19: Workflows for better priors - PM, SBI »

Bayesian models rely on prior distributions that encode knowledge about the problem, but specifying good priors is often difficult in practice. We are working on multiple fronts to make this easier, with contributions to e.g. prior elicitation, prior diagnostics, prior checking, and specification of priors in predictive spaces. We welcome applicants looking to work on any of these aspects and to contribute to both theoretical development and practical software for aiding the prior specification process.

Keywords: Prior elicitation, Bayesian workflow, priors on predictive space, default priors

Level: Research fellow, postdoc and/or PhD student

Supervision: Profs. Aki Vehtari (Aalto University) and Arto Klami (University of Helsinki); primary/secondary depending on the candidate’s interests

Your profile

You have a strong research background in machine learning, statistics, artificial intelligence or a related field, preferably demonstrated by publications in the leading machine learning venues (e.g. AISTATS, ICML, JMLR, NeurIPS).

You hold (or expect to shortly receive) a PhD (for postdoctoral applicants) or a Master’s degree (for PhD student applicants) in computer science, statistics, electrical engineering, mathematics or a related field. If you don’t have the PhD/Master’s degree at the time of application, please submit a plan of completion. Experienced postdoctoral applicants, typically those who have already worked successfully as postdoctoral fellows for several years, can be considered for research fellow positions.

The positions require the ability to work both independently and as a part of a team in a highly collaborative and interdisciplinary environment. Any position-specific requirements are stated in the topic descriptions. 

Our offer

1) Research environment

FCAI’s research mission is to create new types of AI that are data-efficient, trustworthy, and understandable. We work towards this by building AI systems capable of helping their users make better decisions and design sustainable solutions across a range of tasks from health applications to autonomous traffic.

You will join a community of machine learning researchers who all make important contributions to our common agenda, providing each other new ideas, complementary methods, and attractive case studies. Your research can be theoretical, applied, or both. You will be part of a broader team of researchers studying similar topics, mentored by a group of several experienced professors. Our community is fully international, and the working language is English.

Our research environment provides you with a broad range of possibilities to work with companies and academic partners, and supports your growth as a researcher. FCAI, host of ELLIS Unit Helsinki, is a salient part of the pan-European ELLIS network, which further strengthens our collaboration with other leading machine learning researchers in Europe. In addition, our local and national computational services give our researchers access to excellent computing facilities, spearheaded by the EuroHPC supercomputer LUMI, the third fastest in the world.

2) Job details

The positions are based either at Aalto University or at the University of Helsinki. All positions are fully funded, and the salaries are based on the Finnish universities’ pay scale. The contract includes occupational healthcare. 

Postdoc contracts are typically made for up to three years. Following the standard practice in the departments, the PhD position contract will initially be made for two years and then extended by another two years after a successful mid-term progress review.

Starting dates are flexible. All positions are negotiated on an individual basis and may include e.g. a relocation bonus, an independent travel budget or research software engineering support.

We are strongly committed to offering everyone an inclusive and non-discriminating working environment. We warmly welcome qualified candidates from all backgrounds to apply and particularly encourage applications from women and other underrepresented groups in the field. 

How to apply?

The deadline for postdoc/research fellow applications is January 15, 2023, and for PhD student applications January 29, 2023. Please send your application through our recruitment system; links are below.

This call is administered together with the Helsinki Institute for Information Technology (HIIT) and the Helsinki Doctoral Education Network in Information and Communications Technology (HICT). You can find the details on how to apply below:

  • Postdoc and research fellow positions: APPLY HERE

    • In the application form you are asked to select the focus area you are interested in; select the “Artificial Intelligence” tick box to apply for FCAI positions. You are also asked to select the topic(s) you are interested in; select one or more of the topics described above (F1–F19).

    • In your cover letter, please also specify one or several FCAI positions to which you apply.

  • Doctoral researcher positions: APPLY HERE

    • In the application form you are asked to select the research area(s) you are interested in; select the “Algorithms and machine learning” tick box to apply for FCAI positions. You are also asked to select the topic(s) you are interested in; select one or more of the topics described above. (Note that positions F1, F4, F5 and F12 are only at the postdoc/research fellow level.)

    • In your cover letter, please also specify one or several FCAI positions to which you apply.

    • General FAQ for the doctoral student call: https://hict.fi/open-positions/#faq

If you are interested in applying but none of the specific topic descriptions match your interests, you are welcome to suggest other topics that relate to our core areas of research. Select the topic/supervisors closest to your research interests and specify your research plan, as well as the PIs you would like to work with, in your cover letter.

Required attachments:

  1. Cover letter (1–2 pages). Please specify your motivation for one or several FCAI positions to which you apply and/or supervisors with whom you want to work.

  2. CV

  3. List of publications (please do not attach full copies of publications)

  4. A transcript of doctoral/master’s studies and the degree certificate of your latest degree. If you are applying for a postdoc position and don’t yet have a PhD degree or for a PhD student position and don’t have a Master's degree, a plan of completion must be submitted.

  5. Contact details of two possible referees (senior academics). We will contact your referees if we need recommendation letters.

All materials should be submitted in English in PDF format. Note: You can upload multiple files to the recruitment system, each max. 5 MB.

Who we are

Finnish Center for Artificial Intelligence FCAI is an international research hub initiated by Aalto University, the University of Helsinki, and the Technical Research Centre of Finland VTT. We are a part of the pan-European ELLIS AI network - we host ELLIS Unit Helsinki and coordinate the European network of AI excellence centers ELISE.

FCAI is built on a long track record of pioneering machine learning research. Currently, over 60 professors contribute to our research. We create methods and tools for AI-assisted decision-making, design and modeling, and use them to renew industry and society.

Our researchers have access to excellent computing facilities through local and national computational services, spearheaded by the EuroHPC supercomputer LUMI, the fastest supercomputer in Europe.

Our community organizes frequent seminars, e.g., the Machine Learning Coffee Seminar and the Seminar on Advances in Probabilistic Machine Learning. We offer high-quality collaboration opportunities with other leading research networks and companies. For instance, FCAI has a joint research center with NVIDIA and the Finnish IT Centre for Science CSC, and collaborates closely with the Alan Turing Institute.

About Finland

Finland is a great place for living with or without family: it is a safe, politically stable, and well-organized Nordic society, where equality is highly valued and extensive social security supports people in all life situations.

Finland's free high-quality education system is also internationally renowned. Finland has been listed as the happiest country in the world for the fifth year running. Find more information about living in Finland here and here.