Serious weaknesses in the EU proposal on the regulation of AI systems

The European Union is preparing a legislative proposal to regulate high-risk AI systems. If poorly implemented, the regulation could have negative effects on economic growth in Europe.

Photo: Christian Lue / Unsplash

“It is great that Europe wants to be a forerunner in the regulation of advanced digital systems, since these systems, driven by modern AI technologies, will have a huge impact on our lives. However, if the legislation is finalised without the necessary expertise, it may make Europe appear backward, and the regulation may become bureaucratic and rigid, giving other parts of the world the chance to reap the abundant fruits of AI technologies,” warns Petri Myllymäki, professor of AI and machine learning and vice-director of the Finnish Center for Artificial Intelligence FCAI.

It is vital to ensure that the technology is not regulated merely as technology, without regard to its purpose. Image recognition, for example, can be used for the mass surveillance of ethnic groups in public spaces, which violates human rights. The same technology, however, can also be used for beneficial purposes such as unlocking a phone’s display, identifying terrorists at an airport, or stopping a car from hitting a pedestrian.

What is good in the proposal is that it focuses on high-risk use cases, in which the use of AI technologies can be regulated or banned altogether. The main problem, however, is that the regulation is proposed to cover only AI systems that meet a certain definition, not all digital systems. If two alternative systems are available for the same high-risk purpose, one of which fulfils the legislation’s criteria for AI while the other does not, will the two be regulated differently? For example, does a recruitment system that discriminates against applicants based on their gender fall outside the proposed regulation if it does not match the given definition of AI? Hopefully not.

Another serious problem with building a legal framework on a definition of AI systems is that it offers a direct path for circumventing the legislation: simply build a system that does not match the Commission’s definition of an AI system.

“This is becoming easier all the time, since AI as a field is constantly evolving. New technologies that cannot be covered by any current definition are constantly emerging. It is also worth noting that AI isn’t one single technology, but a large family of varied methods that are developing in different directions and can be adapted to very diverse purposes,” says Myllymäki. “Therefore, the borderline between systems that are currently regarded as AI and those that are not is not only moving all the time, but it will always remain indeterminate and is not suitable for legal purposes at all.”

Tight regulation may also harm economic and technological progress in the EU. If the categorisation of high-risk use cases becomes a complex and bureaucratic process, it may slow the deployment of AI even for beneficial purposes.

“Tight regulation may become a problem for Finnish companies. In Finland, we are often especially careful. Many opportunities may remain unused, just in case”, says Heikki Ailisto, professor at VTT (the Technical Research Centre of Finland) and leader of the Industry and society program at FCAI.

FCAI supports the development of regulation for AI and other digital systems, especially for high-risk purposes, but notes that, if poorly implemented, the regulation may pose an unnecessary hurdle to economic progress in Europe. The upcoming legislation must be carefully planned, proportionate and sensibly implemented, so that it does not hamper our opportunities for positive technological advancement.

A draft of the regulatory proposal was leaked to the media last week. Its official date of publication is 21 April 2021.