Ethics is not a 15-minute box-ticking exercise

Karoliina Snell, Patrik Floréen, Christian Guckelsberger, Jaana Hallamaa, Jaana Leikas and Béatrice Schütte 

In the evolving landscape of artificial intelligence (AI), the ethical considerations surrounding its development and application have never been more relevant: contemporary geopolitics, warfare, fake news and climate change are all connected to the development of AI. The same applies to questions of safety, privacy, human rights, equality, and anti-discrimination. AI ethics is thus multifaceted, involving considerations about moral principles and values as well as practical choices made by people, and raising many questions about how we should assess the implications of AI and the outcomes of its use. To address these challenges, we require critical ethical thinking.

Many organisations have designed checklists to support ethical evaluation. The idea behind checklists is to offer a fast, standardized, and easy way to assess various ethical issues. One goal of using checklists is to make ethical evaluation accessible to computer scientists and technology professionals, who are not necessarily trained in ethics or in understanding the various social impacts their choices may have. However, checklists have been criticized because they can be too generic and tend to narrow the complexity of moral assessment down to pre-defined ethical issues. Nor can checklists be easily applied across different contexts (countries, user groups, sectors of society, values, emerging phenomena). They may encourage compliance (ticking the boxes) rather than critical and innovative thinking. In addition, the ticking of checklist boxes often remains external to the processes of technology development.

The EU AI Act outlines one way of assessing and containing the impacts of AI on society and its members. While the regulation is, in practice, part of the product safety framework, it also establishes a duty to conduct a fundamental rights impact assessment for those putting high-risk AI systems into service, for example, applications used in law enforcement or job recruitment.

Aside from the obligations related to high-risk AI, the AI Act states that Member States should facilitate the creation of codes of conduct concerning the voluntary application of such requirements to all AI systems, not just those deemed high-risk. In practice, this means that the AI Act supports assessing the impacts of all AI systems. But is the framework of fundamental rights a sufficient, or even the right, way to assess all AI systems? What about environmental sustainability? What other elements of ethics should be discussed? Does the AI Act promote critical reflection or the development of checklists? Can critical ethical thinking be reduced to impact assessments?

We argue that evaluating the responsible development of AI requires more than impact assessments and checklists. Nor is an ethics assessment an easy, standardized or fast process. Assessing the short- and long-term consequences of a particular AI system, understanding the rights and viewpoints of the persons and groups affected by it, and contemplating human autonomy and oversight all require time and contextual sensitivity that checklists cannot convey. Neither is assessing ethical impact a one-time task. Critical ethical reflection should be conducted at all stages of a project, from preliminary planning throughout the entire life cycle of the AI system. As a project develops, so do societies, policies, legislation and technologies, and new ethical issues may arise. Thus, the ethics and impacts of developed or existing AI systems should be evaluated regularly.

Checklists fail to address many important aspects, and they can be used by people with little training in AI ethics who might not recognise the limitations of the method. We therefore need to increase awareness of potential ethical issues among those developing AI systems and tools by encouraging critical ethical thinking. This is all the more important because an external expert in AI ethics typically lacks detailed knowledge of specific projects and thus cannot conduct the ethics evaluation on behalf of those developing AI systems and tools. Understanding AI ethics is therefore a crucial skill for computer scientists to ensure responsible development. Moreover, possessing this skill can provide a competitive advantage in the job market, especially in light of the requirements set forth by the AI Act.

Insights from the FCAI’s Ethics of AI workshop for doctoral students

Acknowledging the complex societal landscape where AI is applied and the need to support critical ethical thinking among computer scientists, the FCAI Ethics Advisory Board (EAB) collaborated with Aalto University and the University of Helsinki departments of Computer Science to organize the "Ethics of AI" workshop in spring 2024. The aim was to foster a meaningful discourse around the ethical dimensions of AI, particularly tailored for doctoral students in the field.

The half-day workshop served as a platform for doctoral students to delve into the complex realm of AI ethics. The participants' research topics spanned a wide spectrum, such as deep learning, differential privacy, regulatory sandboxes, and AI-generated personas, and ranged from basic research to applications in media, health, and energy, among others. Reflecting the diversity of AI research and its applications, the workshop provided a rich tapestry of perspectives to engage with.

The afternoon commenced with introductions by Jaana Hallamaa (Professor of Social Ethics, University of Helsinki), Karoliina Snell (Sociologist, Programme Director, University of Helsinki), and Patrik Floréen (Senior University Lecturer in Computer Science, University of Helsinki) from the EAB. Patrik set the stage by contextualizing the workshop within doctoral studies and research practices. Jaana explored different aspects of research ethics and ethics in research and the roles researchers can take in ethical and societal discussions. 

Privacy emerged as a central theme during the workshop. For instance, Karoliina explored the question “Does AI change ethics?” by eliciting different understandings of privacy. Participants discussed the intricate balance between competing values, such as privacy and transparency. Balancing this tension is an ongoing challenge for researchers navigating the development of AI methods and applications. Additionally, the workshop addressed concerns surrounding the unintended consequences and potential misuses of AI, the role and limits of regulation, the phenomenon of ethics washing, and real-world AI controversies and mishaps. Notably, the rights of individuals, including deceased persons, sparked a vigorous debate, underscoring the ethical complexities inherent in AI development.

Beyond specific research inquiries, the workshop emphasized the importance of cultivating a broader awareness of AI ethics among researchers. Even in the realm of basic research and mathematical modelling, being attuned to ethical considerations is integral to responsible practices in computer science and AI. To this end, the EAB encourages all doctoral students and researchers to actively engage with AI ethics.

Courses and tools for doctoral students

By fostering a culture of ethical reflection and dialogue, initiatives like the "Ethics of AI" workshop play an important role in shaping the responsible development and deployment of AI technologies. Our workshop was excellent, but a one-time workshop is not enough to sustain and grow critical ethical thinking. It did, however, provide valuable insights into how courses could be designed, and we are willing to share our experiences. The EAB urges universities in Finland to incorporate training and discussions in AI ethics into the curricula of AI-related doctoral studies, in addition to courses in research ethics.

While we are critical of checklists, tools and practical guidelines that support critical ethical thinking are nevertheless needed. One tangible resource offered by the EAB is the Ethics Exercise Tool, designed to assist researchers in identifying, articulating, and navigating ethical, societal, and legal issues pertinent to their work. The EAB is eager to improve the Tool on the basis of the feedback it receives.

All in all, ethics is not something for someone else to worry about. It is something everybody should be concerned with.
