Ethics in AI research – what and how?
The development of ethical artificial intelligence is emerging as a hot topic. Various ethical, fair, and human-centred principles and norms for AI are being published around the world. [i] They share a common aim: to list the values considered crucial. Compiled into a code of ethics, they set a goal of doing good in many ways. Because the outlined principles are general in nature, it is often hard to make out what carrying them out would require and how they could be put into practice. It is easy to declare a commitment to designing and implementing ethically sustainable AI applications when AI ethics has been compressed into an abstract list of valuable points. A rift can easily form between lofty ideals and practical work, threatening to make the ethical guidelines irrelevant.
One step towards realising the ethics of AI research is to itemise the different responsibilities of researchers and developers. The ethical requirements for high-standard scientific research can serve as an aid. First of all, research is always carried out as part of society. Government holds a central role in funding research, which manifests as a requirement that the results benefit society. Most researchers also want their work to promote good ends: to solve a social problem, to promote human welfare, ecological diversity and sustainability, or to accumulate verified information that increases our understanding of reality. However, with many different social actors participating, the publicly declared goals may clash with each other. It may not be easy to combine the goals of efficiency, the promotion of equality and accessibility, the protection of privacy, and the creation of new innovations that support economic growth.
Healthy interaction between the research community and society is full of tensions. Policy-makers want the research community to produce results that suit their political objectives. Researchers may consent to aiding politicians in order to further their own personal interests. However, science can only carry out its duty – and best serve society – by posing critical views. This is why science has to study subjects that are uncomfortable for government and the populace, and publish results that may be unwelcome, even when they concern important objectives. Naturally, the same conditions apply to other actors in society. AI is developed in cooperation with companies and international partners; they pursue their own interests, which must also be considered critically.
Another, more immediate link of responsibility is formed between the researcher and the object of study. The abuses uncovered in the history of medicine – notably in Nazi Germany – led to the creation of ethical codes to safeguard the status and rights of research subjects. This kind of ethical pre-assessment is an integral part of medical research, and it is gradually spreading into other areas of research on humans and other living beings. In fields where it has traditionally not been necessary to consider the position of the subject of study, researchers may unknowingly bypass issues that are important to the subject. In AI research, the physical integrity of people is seldom at risk, as the research concerns information about people, and the relationship between researcher and subject is distant. The material being studied may consist of, for example, registers or other data on people who may not even be aware that the data has been collected or utilised, or of what the data contains.
Thirdly, researchers are answerable for their work to the scientific community. This responsibility is realised by observing the responsible conduct of research. The main thing is that researchers follow the ethically sustainable modes of operation accepted by the scientific community: in short, honesty and general care and accuracy in conducting research and in recording, presenting, and evaluating the results. [ii] Research-ethical norms safeguard the integrity of both research and the scientific community. Violating them may harm both: results reached by dishonest methods are not reliable, and the work of one dishonest researcher also casts a shadow on the work of other researchers.
These links of responsibility can clarify the various levels of ethics in AI research, but they only scratch the surface of the complex network of agents, objectives, and principles within which AI is studied, developed, and applied. One important question is how far the responsibility of the researcher and developer extends when it comes to AI applications. Do researchers have to foresee future applications and adjust the conduct of research to minimise the chance of applications being used unethically? Where does research end and application begin? To answer these questions, we should perhaps consider whether research projects need an assessment of their social impact.
Many research fields have outlined their own practices for ethical pre-assessment. However, they are not sufficient to cover the whole field of artificial intelligence and its sub-fields. Register studies, for example, are not subjected to pre-assessment, and medical research is more tightly regulated than e.g. AI research into environment or traffic, even though they, too, are burdened with ethical challenges.
It is not easy for individual researchers or research projects to navigate the current environment. The ethical advisory board of FCAI is attempting to ease the situation. The board does not perform ethical pre-assessments of research plans, but offers support to programmes and projects when they encounter ethical problems and helps them recognise problematic issues. Our goal is to raise the subject and increase awareness of ethical questions, as well as to recognise future challenges and find solutions to them together.
Read the blog post in Finnish here.
[i] European Commission’s European Group on Ethics in Science and New Technologies (2019), Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems; Future of Life Institute (2017), Asilomar AI Principles; The Institute of Electrical and Electronics Engineers (2018), Ethically Aligned Design version 2: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead_v2.pdf; Montréal Declaration for Responsible Development of Artificial Intelligence (2018).
[ii] Responsible conduct of research. https://tenk.fi/fi/tiedevilppi/hyva-tieteellinen-kaytanto-htk
Jaana Hallamaa
Professor of Social Ethics, University of Helsinki
Chair of The National Advisory Board on Social Welfare and Health Care Ethics ETENE
Karoliina Snell
University Lecturer in Sociology, University of Helsinki
Deputy Chair of University of Helsinki Ethical Review Board in the Humanities and Social and Behavioural Sciences