
Erik Härkönen: Discovering Interpretable GAN Controls

Abstract: Generative Adversarial Networks (GANs) offer exciting new ways to create and edit images. While the quality of their outputs is steadily improving, control over the generated images remains limited. Recently, supervised approaches have been proposed to define controls, but these require large amounts of labeled data.

This project describes a simple technique for analyzing GANs and creating interpretable controls for image synthesis. We identify important latent directions using Principal Component Analysis (PCA) and obtain interpretable edits by applying these directions layer-wise. Moreover, we show that BigGAN can be controlled with layer-wise inputs in a StyleGAN-like manner.
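For a rough sense of the idea, the sketch below estimates principal directions of a generator's latent distribution with scikit-learn's PCA and applies an edit to only a subset of layers. The generator interface (`sample_latents`, a per-layer latent code `w`) is a hypothetical placeholder for illustration, not the actual code linked above.

```python
import numpy as np
from sklearn.decomposition import PCA

def find_principal_directions(generator, n_samples=10_000, n_components=20):
    """Estimate principal directions of the generator's latent distribution.

    Assumes a hypothetical generator.sample_latents(n) returning an array of
    shape (n, latent_dim).
    """
    latents = generator.sample_latents(n_samples)
    pca = PCA(n_components=n_components)
    pca.fit(latents)
    return pca.components_  # (n_components, latent_dim), one direction per row

def edit_latent(w, direction, strength, layers):
    """Shift a per-layer latent code along one principal direction.

    `w` has shape (n_layers, latent_dim); restricting the offset to a few
    layers keeps the edit localized (e.g. affecting pose but not texture).
    """
    w_edited = np.array(w, copy=True)
    w_edited[layers] += strength * direction
    return w_edited
```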

Links: Paper, Video, Code

Bio: I'm a master's student working in Jaakko Lehtinen's research group on visual computing at Aalto University. I started my work on generative models during my 2019 internship at Adobe Research in Cambridge, MA. I'm also interested in photorealistic rendering and high-performance computing.

Speaker:  Erik Härkönen

Affiliation:  Aalto University

Place of Seminar:  Zoom