Navigating towards desirable AI

“Technologies often do much more and very different things than what they were supposed to achieve.” With this statement, Federica Lucivero (2018) warns us not to take for granted the claims that technology will realize certain values. Instead, we need to analyze the plausibility of such claims, that is, to examine the envisioned technological artifact, its use, its value, and its potential impacts.

Expectations of technology are by nature socially constructed and have a strategic role. They concern hopes of innovative solutions to current problems and promises of how new science will create better futures. They relate to impacts on important human values such as human rights and wellbeing, happiness, abilities, peace, democracy, and justice. To understand the actual and possible impacts of technology, it is necessary to assess the quality of these expectations. Lucivero wants us to ask how likely it is that the suggested technology will promote the expected values, and to what extent those values are desirable.

AI technology raises many questions and visions of possible futures. When examining the potential far-reaching implications of AI for human values, we can anchor the discussion in digital ethics, a branch of applied ethics that studies and evaluates moral problems relating to data and information, algorithms, and the corresponding practices and infrastructures, in order to formulate and support morally good solutions.

Why does ethics matter?

AI technologies are expected to have many groundbreaking (but so far not thoroughly understood) effects on people and societies: they will change how we see the world and how we can act within it. In his book Future Ethics (2018), Cennydd Bowles argues that design turns beliefs about how we should live into objects and environments people will use and inhabit. This is why “ethics is a vital and real topic, nothing less than a pledge to take our choices seriously”. According to Bowles, this commitment is especially important for those of us who develop and design emerging technologies.


Being socially and culturally embedded, AI will give rise to ethical issues related to anything from biased data to the exercise of power.


What do we mean by “ethical issues”?

An ethical issue is a problem or a situation that requires us to choose between alternatives that must be weighed as right (ethical) or wrong (unethical), or as good (enhancing central values) or bad (contrary to central values). An issue arises when there is a dilemma between two or more values (conflicting ethical values, or a conflict between ethical and practical values) and, as is often the case, between the attitudes and views of the people occupying the same context. Therefore, to make the relevant ethical issues identifiable, we need to contextualize AI.


Because of their contextual nature, ethical issues in AI are case-specific, transient – even negotiable. They require discussion.


How can ethical issues be solved?

Ethical issues concerning the development, adoption and use of AI applications and services become evident, and must be solved, in a particular context, be it social, political, legal, economic or informational. Many ethical initiatives in the field of AI, such as the European Commission’s High-Level Expert Group on AI and the IEEE initiative “Ethically Aligned Design”, suggest that reflection on ethics should be an iterative process in which requirements for “trustworthy AI” are systematically discussed and human well-being is prioritized throughout the design phase. Ethical discourse can start from reflection on the ethical values and choices involved in design decisions, and continue all the way to considering the impacts of research outcomes on trust, society, and the environment. Various public, private and civil organizations and expert groups have introduced visions, initiatives and guidelines to support this discourse.

One way to practice this discourse is to rely on the four basic dimensions of responsible research and innovation (RRI), which emphasize the impacts of technology innovations. Anticipation helps to examine both the intended and the possible unintended consequences arising from research and innovation activities. Reflexivity helps to consider the underlying assumptions and commitments driving research and innovation and to ponder them openly together with relevant actors. Inclusion brings relevant societal actors into the ethics discussion from an early stage of R&I and ensures a continuous, open dialogue concerning desirable and undesirable outcomes. Finally, the process needs to be responsive, aligning research and innovation activities with the new perspectives, insights, and values that emerge through anticipatory, reflexive and inclusion-based processes.

Does ethics deliberation hamper innovation? Sometimes it may. Deliberating ethical questions can render some appealing ideas unrealizable because of their potential harmfulness. However, solving ethical issues can and should be seen as a source of innovation. Used properly, ethics can generate new ideas as well as sift out unfit ones. In this sense, FCAI research can boost companies even in their pursuit of competitive AI.


Jaana Leikas
Chair of the FCAI Ethical Advisory Board,
Adjunct Professor and Principal Scientist at VTT
