Ask a scientist: How will AI affect creativity?

The impact of creative AI is unfolding before our eyes, yet we struggle to understand it. It’s the perfect time to ask researchers what they see and think.

Artists have always augmented their creativity with technology. But since the 1950s, scientists have investigated whether machines could be creative, too. Christian Guckelsberger thinks we may now be at a critical juncture. Image: Matti Ahlgren/Aalto University

The idea of AI that can produce something creative has captivated the imagination of the general public and professionals alike for decades. Yet, it is the recent breakthroughs that have led to an intense public discussion about how we perceive creativity and what makes human creativity special. To explore this topic in more detail, we interviewed Christian Guckelsberger, assistant professor in creative technologies at the Aalto University Department of Computer Science. 

How do you study creative AI, and why?

I am interested in supporting the sustainable development of creative AI. To this end, I investigate how we can build systems that are creative in their own right and in interaction with people, but also how they can benefit society. Therefore, we must study how these systems are used and experienced. Such studies are, as of now, typically done in the lab, with non-experts, on limited parts of the creative process, and with a focus on improving productivity. 

I want to provide richer and more reliable insights by studying professionals' perception, use and adoption of creative AI in their natural workplace, on real-world tasks, and across the whole creative process. This approach would allow us to study biases towards creative AI in realistic situations, and to promote transparent, human-centered design. 

What does the evolution of creative AI look like from a research perspective?

While applied AI research has long focused on automating hard, tedious, and unrewarding work, it has now shifted into the realm of human self-realisation. Research on creative AI has been conducted with varying intensity since the 1950s, but the systems have now reached a level of maturity at which they can affect professional creatives and society at large scale. These advances were made possible by new machine learning architectures such as transformers and diffusion models. Drawing on many smart people and immense computational power, industry research departments have leveraged these architectures to train so-called foundation models on massive amounts of data. As the name suggests, these models, through combination and extension, form the foundation of the most impressive creative AI systems that we see right now.

While we currently only see mastery in selected creative tasks and domains, I expect the same foundation models to be used in other systems and domains, leading to a widening of the creative scope.

How is creative AI affecting professionals?

In a recently submitted study, we investigated this very question for text-to-image generators and professionals in the Finnish game industry. I’m confident that our team (Associate Professor Perttu Hämäläinen, games industry scholar Annakaisa Kultima, and Master’s student Veera Vimpari) managed to pull off the first empirical study of text-to-image generators in a specific industry, and the deepest and most comprehensive study of such systems to date more generally. We asked professionals how they use AI in their work and how the systems could be improved. In contrast to previous work, we also inquired about their attitudes, the roles they assume for themselves and the AI system in the creative process, and how they see the current and future development of AI in their industry.

The results suggest that even in games, where the use of AI has a long tradition, professionals are overwhelmed by the pace of development. Nevertheless, they agree that the most recent creative AI systems will transform their industry and roles, and that there is only one way forward: to learn and adapt. One participant called it an “adapt-or-die type of situation”, which became the title of our paper. While this is arguably a bit too dramatic, it captures the overall sentiment well. Many expressed reluctance to embrace these systems beyond the early, conceptual phases of the creative process, but this seems mainly due to ethical concerns, such as the uncompensated use of fellow artists’ work in training the models.

Guckelsberger is also affiliated with the Finnish Center for Artificial Intelligence FCAI and the Helsinki Institute for Information Technology HIIT. Image: Matti Ahlgren/Aalto University

Where do you see the development of creative AI models heading in terms of the creative industries?

Once policymakers provide the needed – and demanded – regulation on these matters, the latest generation of creative AI will likely become yet another tool in many professionals' creative work. Especially in applied art, it will likely yield considerable cost reductions and an increase in productivity. I expect it to be used beyond the early phases of the creative process all the way to the final product. I also project that future generations of these systems will require even less involvement from artists, which is presently still very much needed.

However, the jury's still out on whether this development can be considered a benefit for everyone: our latest study found that professionals assumed various kinds of roles for themselves when working with AI, from "art director for the AI" to "slave to the AI." Moreover, while these systems can substitute for skills and workforce that might otherwise be unavailable, they might also increase our dependency on technology and on those who provide it – a development that we should be very conscious of.

The take-home message is that the impact of creative AI on professionals is not only positive; the situation is rapidly changing, and the diverse reactions prohibit a one-size-fits-all solution as of now. This puts industry leads, researchers and policymakers in a tricky position. As teachers at Aalto, we must also watch these developments closely to equip our students with new skills that complement their traditional ones in a future-proof way.

How can we make the adoption of generative AI socially and ethically sustainable?

I consider sustainability one of the prime challenges for all of us: balancing the wellbeing of those affected by creative AI with business interests and scientific curiosity. More specifically, at this point we see two pressing questions that put many professionals in inner conflict. First, are artists going to be credited and compensated for the data used in training the models, and if so, how? Second, a major issue for professionals is who owns the copyright to the outputs. I argue that these issues must be resolved first, through quick and transparent legislation, to support the ethical and sustainable use of these systems.

In addition to these questions, we are left with a whole range of issues that are still in flux. For instance, what do professionals find most meaningful about their work, and consequently, which aspects should AI rather not touch? To this end, professional creatives must be involved in the regulation and development of creative AI. Discussions on social media and in the news can be very noisy and too superficial for, say, policymaking. Through scientific studies, we can give professionals a clearer voice. Doing this longitudinally should allow us to track how uses and perceptions change, and to adapt appropriately. Complementing such user studies, we must also become capable of experimenting with changes to the systems themselves, rather than simply taking what industry has to offer. We are now at a point where these types of models have become flexible enough to be trained and investigated at Aalto, an opportunity which my colleagues and I are now actively pursuing.

How should we define creativity and how does machine creativity differ from human creativity?

We can conceive of creativity as the production of artefacts that are both novel and valuable – valuable, for instance, in terms of usefulness or aesthetic pleasure. But this is only one way to see it, and cognitive scientists still struggle to define creativity. In fact, the concept’s meaning is constantly renegotiated by society. Most notably, we have observed a shift in emphasis from the craft within the creative process to the ideas that go into it. This volatility makes research on creative AI a challenging endeavour and requires us to look beyond AI and human-computer interaction into cognitive science, philosophy, the social sciences, and other disciplines.

One way to differentiate human creativity from machine creativity is to think of it in terms of motivation. Much of human creativity, for instance, is driven by intrinsic motivation such as curiosity: we act not for any value outside of the activity itself. This is fundamentally different from most creative AI, which is built to optimise a separate goal, such as producing outputs that people find most appealing by reproducing features of the data that the system was trained on. However, I believe that this not only limits an AI’s creative potential, but also the extent to which it could really complement and augment, rather than merely substitute for, human creativity. My research challenges this divide.

I believe that studying the functional and perceived disparities between human and machine creativity is crucial in that it enables us to ask: how should artificial creativity differ from human creativity? And what biases are at work when we interact with creative AI that keep us from using it in a more fulfilling way? We’re now at a juncture where, instead of asking “Can AI be creative?”, we should be asking “What kind of creative AI is best for us?”

You can follow Christian's work on Aalto's webpage, Mastodon and Twitter.

Originally published on the Aalto University website.