Regulating AI – is current legislation capable of dealing with AI?

How does law regulate Artificial Intelligence (AI)? How do we ensure that AI applications comply with existing legal rules and principles? Is new regulation needed and, if so, what type of regulation? These questions have gained importance as AI deployment has expanded across various sectors of our societies. Adopting new technological solutions has raised legislators’ concerns about the protection of fundamental rights, both nationally in Finland and at the EU level.

However, finding these answers is not easy. And the answers we find may be frustrating, varying from the typical “it depends” to the self-evident “it’s complicated”, followed by the slightly more optimistic “we don’t know yet”. Beyond these standard replies lies a risk of oversimplification that we need to avoid. Hence, an interdisciplinary debate about possible regulatory solutions is needed if we are to ensure that regulation supports sustainable AI development and deployment. Such discussion, in turn, requires translating between different knowledge domains, most importantly between legal scholars and AI researchers. Otherwise regulation may be built on false assumptions about AI applications, and AI development may inadvertently infringe existing legal protections.

In this blog post, we hope to shed light on some of the topical legal issues involved in AI development. First, we briefly discuss the broader context of AI. Here, we emphasise the interrelationship between law, technology and society in order to explain why it is so difficult to find answers. After setting the stage, we provide two concrete examples of how law tries to capture AI. Both examples deal with responsibility and accountability – themes that are also central to debates on trustworthy and ethical AI. The first example comes from the field of tort law and deals with the EU’s emerging regulatory approach to AI liability, i.e. the question of who pays for the damage resulting from AI use. The second example draws attention to the tension that results from AI use in public administration, a field where accountability has historically been tied to the person of the civil servant.

Law, technology and society – regulating the computational turn

The principle of technological neutrality is a broadly accepted regulatory strategy. Technological neutrality requires that legislation is drafted in a manner that is not bound to any specific technological form or method. The objective is two-fold: first, to enable legislation that stands the test of time and does not become outdated as technology develops; second, to treat different technological solutions equally, without inadvertently granting an unfair advantage to certain solutions while discriminating against others.

Technological neutrality is at odds with the idea of regulating AI, as scholars and legislators alike have acknowledged. Regulation aims to facilitate sustainable technology use while mitigating the negative consequences of such use. In order to be effective, regulation needs a precise scope of application and sufficient enforcement mechanisms. However, defining the object of regulation is not straightforward – we hope to regulate AI without regulating specific techniques. On this basis, we can question the feasibility of AI regulation as such and instead ask which AI uses the law should capture. In the EU, one focal regulatory tool that also affects AI development is the General Data Protection Regulation (GDPR). The Regulation builds on the principle of technological neutrality and does not, in fact, take a position on particular techniques; instead, it treats the use of personal data as the object of regulation. Still, some scholars have contested the neutrality of this approach, arguing that implicit assumptions about certain technologies are embedded in the Regulation despite its explicit neutrality.

Of course, technological neutrality is an ideal that is impossible to achieve completely. Hence policy-making strikes a balance between sufficiently clear and predictable regulation and generalized rules that preserve neutrality. When we understand the difficulty of such balancing acts, we can expand the scope of the current policy debate. Is AI regulation really about AI? What is the broader societal context, and what are we trying to achieve by drafting regulation?

Four observations on regulating AI

Four observations in particular help us set the stage. First, we notice that the problems related to AI use that the law aims to address are not necessarily about AI as such. Instead, they are about the increasing datafication, automation and digitalization of society and the implications of these ongoing, long-term developments – in other words, about humans using computers in their everyday activities. Perhaps a better framing of the regulatory challenge can be found in this more in-depth understanding of AI as one example of the computational turn.

Second, there is a common misconception that law lags behind technological progress. Although this holds true at times, it follows from technological neutrality that law also applies in situations where new techniques and methods emerge. At the same time, law steers technology development in many ways by enabling certain solutions while banning others. The relationship between law, technology and society is reciprocal and dynamic.

Third, the debate around AI ethics has demonstrated the difficulties involved in fairness-aware AI and the vagueness of ethical principles such as transparency, explainability, and accountability. However, these ethical principles did not appear out of thin air; they are values that have been developed in socio-political legislative processes and elaborated by the legal system across various contexts. This means that law has much to offer to ethical AI and helps to contextualise how such values should be implemented in different situations.

Finally, there is yet another layer to technological neutrality – humans. Law is not, and has never been, neutral in this respect: it builds on the assumption of humans as the object of regulation. This “bias” towards humans can partly explain the all-too-typical juxtaposition of humans versus machines. As the assumption of human-driven processes is deeply ingrained in legal thinking, humans are often seen as a feasible way to control technology. For example, in the EU, policy actions on AI regulation advocate for ‘human-centric design’ of AI systems and portray human oversight as one of the key requirements for high-risk sectors such as public administration (White Paper on AI, COM (2020) 65). Article 22 of the GDPR prohibits solely automated decision-making unless suitable safeguards, such as the right to contest the decision and the right to human intervention, are in place. In short, the ongoing legislative actions emphasise the importance of humans in mitigating the potentially negative implications of technological systems. Yet it remains open what exactly it means to make technological design human-centric, and what the role of law is in supporting and constraining such design.

This leads us to the following conclusion: there is no shortcut to AI regulation, and it is not clear that regulatory challenges can only be solved by introducing new legal rules. Instead, we need to carefully evaluate the basic assumptions behind our legal thinking before drawing conclusions about the capacity of current legal rules to address different AI uses.

Who pays the bill: liability for harms caused by AI use

Liability for AI-related harm provides a good viewpoint for examining the challenges of AI regulation. For example, the EU’s emerging AI strategy suggests that expanding product liability to AI-based services (as opposed to only products) would fill one major regulatory gap. Within the legal system, liability issues are a question of interpreting tort law, which builds on the assumption that liability should fall on the human who caused the harm.

The question of who should bear liability for damage when an AI application causes harm, such as personal injury, damage to property, or economic loss, is receiving increasing attention. Earlier rules and models of legal reasoning may not be easy to apply to factual settings involving AI, and the very essence of liability and responsibility considerations is potentially affected by the involvement of AI. This applies, in particular, to situations where identifying the humans who 'caused' the harm is challenging. In any event, liability rules are central from the standpoint of novel technologies that are not yet addressed by other (comprehensive) regulation. Therefore, at least the following questions arise: Is current tort law capable of dealing with AI-related accidents? Do AI-related harms require particular rules? What kinds of rules would then be reasonable and easy to apply? Who should regulate liability for AI-related harms, and how? In the EU, it can further be asked what roles national legislation and EU legislation should have, and, moreover, whether even EU legislation is 'global enough'.

Addressing these matters of private law liability supports technological progress and ensures sufficient protection of individuals. Clear rules on who bears responsibility make it easier to evaluate the risks that introducing new AI applications to the general public might involve. For individuals who happen to suffer harm because of novel technologies, it is essential that reparation is easily obtained. In practice, we must accept that developing technologies can cause harm to some while the new applications are in their infancy, and that, at present, this accentuates the need for clear and easily administrable liability rules. The reward for society is that many AI-based technologies are safer than human decision makers in the long run.

Applying the existing, general liability rules to AI-related cases can produce unpredictable, unfair and sub-optimal outcomes. The involvement of AI in the chain of events that leads to harm complicates allocating and apportioning liability among humans (liability of autonomous machines themselves has been suggested but currently appears far-fetched). Difficulties in applying the existing law may arise in identifying the liable party, evaluating legally relevant causal relationships, and assessing the fault of potentially liable humans and organisations.

In developing new rules, central questions include to what extent new regulation should be field-specific and to what extent we could rely on a relatively generally applicable 'AI liability law'. As regards harm caused to individual consumers in particular, broadening the existing product liability regime to better cover harm related to AI applications has been a heated topic in the EU for some time.

In brief, the idea of product liability is that the manufacturer of a product is liable for personal injury or damage to property caused by a defect in the product. The manufacturer escapes liability if they demonstrate that the defect did not exist when the product was released. At a theoretical level, the benefits of product or other manufacturer liability include the easy identification of the liable party and a clear, specific set of rules concerning harm caused by consumer goods.

Nonetheless, a new, broader European product liability regime that would also cover different situations involving AI-related harm is not a panacea for regulating AI-related liability. Product liability was originally developed with rather simple products in mind. Today, we should keep in mind that product liability or other heavy manufacturer liability is a clearly suitable and efficient solution only in settings where the 'manufacturer' is well placed to take precautions to prevent harm. The difficulty of foreseeing accidents that characterises complex AI applications means that the 'manufacturer's' possibilities to avoid harm may be very limited, which can lead to over-cautious behaviour and disincentivise introducing new AI applications to the market. Additionally, easily triggered manufacturer liability could encourage 'manufacturers' to prevent alteration of products after their release, which deters innovation. Furthermore, proving that a harm was caused by a 'product defect' might constitute quite a challenge for a consumer claimant who is not familiar with the details of the technology. The 'manufacturers', in turn, are well placed to defend their technology and to claim that no defect within their responsibility exists. Even the concepts of 'manufacturer', 'product', and 'product defect' are challenging to apply to AI applications.

For these reasons, any product or manufacturer liability that would extensively cover harm caused by AI applications should be planned with care. Additionally, even the broadest product liability regime could not cover all kinds of AI-related harm such as harm caused to companies or professional users of AI applications.

The best way forward appears to be striking a balance between 1) applying the existing general tort law rules, 2) issuing field-specific legislation for particular areas where liability solutions must support field-specific goals in the utilisation of AI, and 3) carefully evaluating what kind of product or manufacturer liability rules are reasonable as part of the broader regulatory landscape. Matters such as the implications of liability rules for market structures and competition must be taken into account in choosing regulatory approaches in the EU. Additionally, it must be evaluated where liability rules are good solutions for achieving compensation and accident deterrence and where other regulation should be the main focus. In developing AI-related damages liability, both detailed liability and fairness considerations as well as overall policy goals concerning AI should be carefully weighed.

National regulatory approach: automated decision-making

Questions about the best regulatory approach to automated decision-making (ADM) have also emerged in the context of public administration, and they are particularly topical nationally. Finland, being at the forefront of digital public services (DESI 2019), forms a unique laboratory for examining AI use in public decision-making processes. Because Finland is among the first movers, there are no global best practices to guide national policy-making, yet the results of this experimentation can inform emerging pan-European and global solutions.

For these reasons, Helsinki Centre for Data Science (HiDATA), University of Helsinki Legal Tech Lab and Finnish Center for Artificial Intelligence FCAI organized an interesting, interdisciplinary event on automated decision-making on October 5, 2020. The recording of the event will be made available here (unfortunately only in Finnish).

As discussed during the event, the use of ADM is already taking place in various organizations in both the public and private sector. Automation has proven useful in, for example, handling applications and other “high-volume” matters that are similar to each other and normally fast and easy to resolve. Even though the use of ADM applications raises demanding questions that concern either the interpretation of current legislation or the need to create brand new rules, we should not perceive ADM as a threat. One of the keynote speakers, Chancellor of Justice Tuomas Pöysti, underlined in his speech that with automation we can improve equality and legal certainty and accelerate processes. This, in turn, enables us to focus on the human dimensions of creative problem solving and social interaction.

Accountability of machine-made decisions: AI use in public administration

The debate on liability for AI-related harm in the private domain is echoed by concerns about the accountability measures necessary for AI deployment in the public domain. Currently, the debate is framed in terms of automated decision-making, which the Ministry of Justice hopes to regulate through a reform of administrative law. In administrative law, the responsibility for public decisions falls on the civil servant who has made the decision, a principle that follows from the Constitution of Finland. This human-centricity of accountability is one of the major questions in drafting new legislation to allow automated decision-making processes in public administration.

As is often the case, the first applications in use are not complicated black-box algorithms but simple rule-based automation models that ease the decision-making process (a minimal sketch of such a model is given below). Yet even the use of simple rule-based automation has raised many legal questions that reflect different aspects of administrative law. For example, the Deputy-Ombudsman, the authority responsible for overseeing that public officials comply with the law, found the automated decision-making used by the Tax Administration problematic. In her decision, the Deputy-Ombudsman considered that there is a clear need to examine whether new legislation is required and to clarify the situation through applicable rules. The Constitutional Law Committee has reached the same conclusion when handling separate bills concerning, among other things, automated decision-making in certain specific fields of administration.
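
To make the distinction between rule-based automation and black-box models concrete for readers less familiar with the technology, the sketch below shows what a simple rule-based decision model might look like. It is a hypothetical illustration only: the application fields, income thresholds and decision categories are invented and do not correspond to any actual authority’s system.

```python
# A hypothetical rule-based decision model: every rule is written out by hand,
# so the outcome of any individual case can be traced back to explicit criteria.
# All field names and thresholds below are invented for illustration only.

def decide_benefit_application(application: dict) -> str:
    """Return 'approve', 'reject' or 'manual_review' for a single application."""
    # Rule 1: incomplete applications always go to a human case handler.
    required_fields = {"applicant_id", "monthly_income", "household_size"}
    if not required_fields.issubset(application):
        return "manual_review"

    # Rule 2: a fixed income limit, scaled by household size (invented figures).
    income_limit = 1200 + 400 * (application["household_size"] - 1)
    if application["monthly_income"] <= income_limit:
        return "approve"

    # Rule 3: borderline cases (within 10 % of the limit) are routed to a human.
    if application["monthly_income"] <= income_limit * 1.10:
        return "manual_review"

    return "reject"


if __name__ == "__main__":
    example = {"applicant_id": "A-123", "monthly_income": 1500, "household_size": 2}
    print(decide_benefit_application(example))  # -> 'approve'
```

Because every rule is written out explicitly, such a model can in principle be audited line by line and each outcome traced back to a specific criterion – which is one reason why simple automation is often seen as easier to reconcile with administrative law requirements than machine-learned systems.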

Of course, there are several ways to use AI and other forms of automation, and both the upsides and the downsides depend on the context. As described, AI applications may be helpful not only in decision-making but also, for example, in directing resources to the most relevant needs or in other supportive functions where the ability to predict is relevant. Even though the context and forms of use vary in the public sector as well, some of the most common legal problems can be listed. As we will see, these problems are tied to the challenging question of how to regulate AI appropriately.

One of the public-sector-specific issues is the question of responsibility for official actions. This debate reflects the concerns described above in relation to liability issues, i.e. the assumption that we need to identify humans to blame. As it is challenging to identify the humans (in this case, civil servants) who 'caused' the harm, it is also hard to tell how the rule of official accountability should be interpreted in the context of AI applications. However, the dimensions of official accountability differ from the general tort law rules. Official accountability may be divided into three parts: liability for damages, criminal liability (offences in office) and possible administrative sanctions. The exercise of power is the pivotal concept behind official accountability. From this point of view, it is understandable that there is a need for public-sector-specific rules on AI-related liability and accountability.

Other questions waiting to be solved concern, inter alia, transparency and publicity, due process and the right to contest machine-made decisions, and the interpretation of good administration. It is easy to state that AI applications need to be transparent, but much harder to define what kind of transparency is needed in general and, more specifically, in matters that concern, for example, official decisions on one’s rights, interests or obligations. Should individuals be able to understand the basic functioning of an algorithm themselves, or should they trust an authority that has the capability and skills to carry out deeper, more technical investigations? How will we incorporate the legal principles of administration (such as the principle of proportionality) into the AI applications used in the public sector? Who makes sure that AI applications are designed to follow the binding rules on administrative procedure?

As we see, the use of AI applications and other technological solutions in the public sector raises many demanding questions that concern either the interpretation of current legislation or the need to create brand new rules. The Ministry of Justice is currently working on issues concerning automated decision-making in the public sector, and new legislation on this theme is expected later. It will be interesting to see what kind of rules the ministry proposes and how much room they leave for the use of actual AI applications. Furthermore, it will be interesting to evaluate how the new regulation and the technologies it governs shape each other.

****

To conclude, we notice that the legal debates around AI regulation are, in fact, about humans and their role in relation to technology. This observation not only explains why AI regulation is such a complex and context-dependent issue but also provides a new starting point for policy dialogue. Instead of focusing on the dystopian fears of machines replacing humans, the interplay of humans and machines takes the centre stage. The question is no longer simply how law should regulate AI. Instead, by drawing attention to the development of meaningful partnerships between humans and AI systems, we are able to ask how the computational turn – supported by effective regulation – can improve our societies as a whole.

Authors: Riikka Koulu with Katri Havu & Hanne Hirvonen

Riikka Koulu is an assistant professor of social and legal implications of AI and the director of the Legal Tech Lab at the University of Helsinki. She is also a member of the FCAI Ethical Advisory Board. Katri Havu is an assistant professor of European private law and examines AI-related liability in her Academy of Finland project at the University of Helsinki Legal Tech Lab. Hanne Hirvonen is a doctoral candidate at the Legal Tech Lab, looking into accountability issues in automated public administration.
