“Even the digital world needs rules”
The UN Advisory Body on AI released its final report, a balancing act between governance, human rights, commercial interests and equity.
Since ChatGPT captured the public’s attention two years ago, it has become clear that the accelerating development of artificial intelligence (AI) systems threatens to widen the digital divide. At the recent UN Summit of the Future, “the Global South was very vocally present, saying they do not want an AI divide to emerge,” says FCAI vice-director and University of Helsinki professor Petri Myllymäki. “They represent a majority of the world’s population and rightly want to be key participants and developers of AI products and services, not just producers of data and consumers.” Myllymäki is a member of the UN Secretary-General’s High-Level Advisory Body on AI that has spent the past year weighing how the global community should react to the current AI boom.
Of the UN’s 193 member states, only seven belong to all of the large AI governance initiatives, such as the G7’s Hiroshima Process, GPAI, AI4Good or the series of AI Summits that will continue in Paris in 2025, while 119 countries are not included in any of them. “Building AI tech for everyone has to be an inclusive process, so the UN is the natural choice to make that happen,” says Myllymäki. The report that he and over 30 other international colleagues spent a year working on lays out a plan for global AI governance with seven recommendations, such as the exchange of AI standards and the establishment of an AI fund to further the Sustainable Development Goals. These recommendations were included in the Global Digital Compact, which was adopted by UN member states in September 2024 to protect human rights and safety in the digital space.
Governance, notes Myllymäki, can mean a lot of things, including regulation. “Europe has the AI Act, Data Act, acts on digital services and markets, and this is the right thing to do; even the USA is slowly following,” says Myllymäki. “We have to stop pretending that the normal rules of society don’t apply in the digital world. AI is so disruptive, and it’s time to take that seriously. I hope that in Europe we can do this without harming innovation, but ground rules are needed to create an equal and democratic playing field.”
Much of the discussion in the AI Advisory Body concerned human rights, respect for law and cultural values, says Myllymäki. “I wasn’t aware that the right to culture is a fundamental human right. That means your language, your religion, your value framework. It’s a legitimate concern that Californian companies may not have respect for issues that are societal and political, not just technical,” Myllymäki reflects.
To counter this hegemony, Myllymäki says small countries like Finland need to build local services in their own languages and cultures: ‘culturally diverse AIs’. Singapore, for example, has released a playbook on how small countries can deal with AI. “When everything demands more data, more compute, more resources, small players like Finland naturally need to be clever,” observes Myllymäki. “That’s a cornerstone of FCAI’s research, building AI that requires less data and energy.”
Myllymäki is active in issues of collaboration, governance and public understanding of AI and was recently recognized as ‘thought leader of the year’ by AI Finland. To him, it’s clear that AI should respect human rights. “Do you need to say that? Isn’t it obvious, even in the digital world? We need to keep repeating it in different contexts, then the message becomes stronger.”