Can transparency make automated decision-making understandable?

Algorithmic or automated decision-making (ADM) can make human life easier, but there is growing evidence that algorithms reproduce inequality and challenge ideas of liberalism and free will, basic tenets of Western legal thinking. A research project delves critically into these questions.


Automated decision-making (ADM) has a lot of potential. Decision-making may become more efficient and, at best, free from human error. However, research shows that automated decisions are not always as neutral as one might think.

Automated decisions can covertly favor certain groups of people over others based, for example, on sex, age, or skin color. This has happened in credit scoring, hiring decisions, and the calculation of criminal recidivism scores. The reason is that algorithms make predictions, inferences, scores, and classifications from vast amounts of data that reflect how people have behaved, or have been treated, in the past.

“This can be legally very problematic. When ADM – in particular statistical methods – is brought to a legal context, statistical prognoses can become self-fulfilling prophecies. Law is supposed to be blind to differences between people, whereas statistical methods rely on them”, says Ida Koivisto, Assistant professor in the Faculty of Law, University of Helsinki.

Potential discriminatory effects of algorithmic models may arise for a variety of reasons.

“They can result from poor design of the code, leading to unanticipated results. Training data for machine learning methods can also be biased. In addition, with the use of big data, new sources of disadvantage may emerge that no one intended, such as tastes, habits, places of residence, or lifestyle. Algorithmic knowledge production ultimately relies on human knowledge production in one way or another, and we humans are prone to biases and errors”, explains Koivisto.
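
The point about proxies such as place of residence can be illustrated with a small sketch. The following Python example is not from the AlgoT project; it uses entirely synthetic data, scikit-learn, and made-up variable names such as postal_area, purely to show how historical bias can reappear even when the protected attribute itself is excluded from the training data:

```python
# Synthetic, illustrative example only: bias in historical decisions
# resurfacing through a proxy variable (here, a fictional "postal_area").
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, n)                  # protected attribute (0 or 1)
postal_area = group + rng.normal(0, 0.3, n)    # proxy correlated with the attribute
skill = rng.normal(0, 1, n)

# Historical hiring decisions were biased against group 1.
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

# The model is trained WITHOUT the protected attribute: only skill and the proxy.
X = np.column_stack([skill, postal_area])
model = LogisticRegression().fit(X, hired)

pred = model.predict(X)
print("predicted hiring rate, group 0:", pred[group == 0].mean())
print("predicted hiring rate, group 1:", pred[group == 1].mean())
```

When run, the two printed rates differ: the model never sees the group label directly, yet it recovers the historical disparity from the proxy.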

The black box problem

Some algorithms based on machine learning are not easily translated into language that humans can evaluate.

“In effect, these can be seen as secret rules that impact large numbers of people. It is often unknowable to people how algorithms translate input data into outputs. This is commonly referred to as the black box problem. Transparency is often suggested as a solution, both in proliferating AI ethics codes and, increasingly, in regulation such as the EU’s GDPR and the EU’s AI Act proposal”, says Koivisto.
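
To make the black box point concrete, here is a deliberately simple sketch (again synthetic and illustrative, built with scikit-learn; it is not drawn from the project or from any real ADM system). Even disclosing every learned parameter of a small neural network does not produce a rule a human can evaluate:

```python
# Synthetic, illustrative example only: "seeing inside" a model is not the
# same as understanding it, because what is disclosed is just numbers.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))              # ten input features
y = (X[:, 0] * X[:, 3] > 0).astype(int)     # an arbitrary nonlinear rule

model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X, y)

# "Full transparency": every learned parameter can be printed out...
n_params = sum(w.size for w in model.coefs_) + sum(b.size for b in model.intercepts_)
print(f"The model is fully disclosed as {n_params} floating-point numbers.")
print(model.coefs_[0][:2])                  # ...but the numbers explain very little.
```

Publishing those parameters, roughly 1,400 numbers even for this toy model, would be transparency in a literal sense, yet it would say little about why any individual classification came out the way it did.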

“There is no simple way of overcoming algorithmic bias because its sources can be manifold”, says Ida Koivisto.

We want transparency because we think it is good and because it makes power visible and, as such, controllable. Transparency privileges firsthand knowledge over secondhand knowledge.

“This idea is visible in ADM ethics debates: through transparency, we could see inside the black box and fix what needs to be fixed. In our research project, AlgoT: The Potential and Boundaries of Algorithmic Transparency, we view this promise critically. Is transparency as simple and neutral a solution as is commonly believed?”, says Koivisto.

Overcoming algorithmic bias

There is no simple way of overcoming algorithmic bias because its sources can be manifold.

“It is important to point out that transparency is a construct whose extent and interpretation may be fueled by the impression management logic and reputational concerns of those who are allegedly providing it”, says Koivisto, who also discusses the problematic sides of transparency in her book The Transparency Paradox (OUP, forthcoming 2022).

Transparency may also be blocked by technical obstacles, intellectual property restrictions, or trade secrets.

“It is also important to note that people’s cognitive capacities to understand complex technical information are limited, regardless of how much transparency is provided. This is an issue, for example, in debates on open-source software”, says Koivisto.

Transparency does not necessarily guarantee understanding

Koivisto’s research project is at its midpoint. So far, the researchers have delved critically into the premises, promises, and controversies of transparency in law, ethics, and algorithmic design.

“We combine legal research with a science and technology studies approach. We have, for example, found that the insufficiency of transparency as a legitimacy guarantee is increasingly recognized. Transparency does not necessarily guarantee understanding or lead to action”, says Koivisto.

As a result, new conceptualizations have entered the discourse: explainability, interpretability, intelligibility, explicability, understandability, and comprehensibility. These concepts imply that transparency alone does not suffice to guarantee understanding.

“However, if transparency were replaced with explanations, the core promise of transparency would be turned upside down. ‘Do not believe what I say, see for yourself’ would be transformed into ‘do not believe what you see, let me explain instead’, and secondhand knowledge would become more valuable than firsthand knowledge”, explains Koivisto.

There is a paradox in human understanding in algorithmic knowledge production.

“It seems that in ADM, and perhaps in digitalization more widely, human understanding is something to be superseded but, paradoxically, also something to be kept as a guiding principle of legitimacy”, says Koivisto.


AlgoT: The Potential and Boundaries of Algorithmic Transparency

  • The project, led by Assistant professor Ida Koivisto, aims to produce new knowledge on the ways in which ADM can be legitimized. To that end, the researchers not only look at the problems of the digital environment but also critically review the proposed solutions. This kind of knowledge can be used to reveal underlying power structures in our digitalizing society.

  • The AlgoT team includes:

    • Ida Koivisto, Assistant professor, Faculty of Law, University of Helsinki (PI)

    • Jenni Hakkarainen, Doctoral student, Faculty of Law, University of Helsinki

    • Riikka Koulu, Assistant professor of Law and Digitalisation at the University of Helsinki, and leader of the Legal Tech Lab

    • Marta Maroni, Doctoral student, Faculty of Law, University of Helsinki

    • Beata Mäihäniemi, Postdoctoral researcher, Legal Tech Lab, University of Helsinki

    • Suvi Sankari, Docent, Faculty of Law, University of Helsinki

    • Jaakko Taipale, Postdoctoral researcher, Faculty of Law, University of Helsinki

  • The research relates to FCAI’s research program R7, which tackles questions of societal change brought about by AI development.

Mia Paju