Privacy-preserving and Secure AI

(The FCAI research programs are currently in a ramp-up phase. More information will be updated here later.)

The goal of FCAI’s research program Privacy-preserving and secure AI is to develop realistic adversary models and, building on them, effective tools and techniques that practitioners can use to create trustworthy and secure AI systems. Privacy-preserving and secure AI contributes mainly to FCAI research objective Trust and ethics (objective II), but strong privacy preservation will also ease the problem of data scarcity by encouraging more data sharing.

We are very active in developing differentially private machine learning methods, especially for Bayesian machine learning used in Agile Probabilistic AI. Our work also covers cryptographic and secure multi-party computation techniques for ensuring the security and privacy of the training of AI systems and their use in prediction. We cover a number of applications from health to generic deep learning and differentially private data release.
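To illustrate the differential privacy that underlies the methods mentioned above, here is a minimal sketch of the classical Laplace mechanism applied to a mean query. This is a generic textbook construction, not FCAI code; the function name, clipping bounds, and parameters are illustrative.

```python
import numpy as np

def laplace_mean(data, lower, upper, epsilon, rng=None):
    """Release a differentially private mean via the Laplace mechanism.

    Clipping each value to [lower, upper] bounds the sensitivity of the
    mean at (upper - lower) / n, so adding Laplace noise with scale
    sensitivity / epsilon yields epsilon-differential privacy.
    """
    rng = rng if rng is not None else np.random.default_rng()
    data = np.clip(np.asarray(data, dtype=float), lower, upper)
    n = len(data)
    sensitivity = (upper - lower) / n  # worst-case change from one record
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return data.mean() + noise
```

Smaller epsilon means stronger privacy but noisier output; practical differentially private machine learning builds on the same principle, with noise calibrated to the sensitivity of gradients or posterior updates rather than a simple mean.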

Coordinating professor: Antti Honkela – antti.honkela at


The groups of the following professors already take part in the research program Privacy-preserving and secure AI. The list is currently under construction. If your group is already involved but not yet listed here, please contact the program coordinator.