What we learned from AuroraAI: the pitfalls of doing ethics around unsettled technologies

According to contemporary viewpoints on government in the 21st century, for example those produced by the OECD and taken up by national administrations, the work of government bureaucracy is increasingly the work of technological innovation. One real-world example of this can be found in the Finnish National Artificial Intelligence Programme, a governmental AI project led by the Finnish Ministry of Finance, which was completed at the beginning of 2023. In November, the FCAI Ethics Advisory Board hosted a panel event titled “AuroraAI – a vision of the everyday”. The panel discussion, dealing with challenges and lessons learned, was to be a kind of resolution and apologia, a settling of accounts, a reaching for the bottom line of a governmental artificial intelligence venture many have described as difficult to grasp. This blog post presents some of the issues in the ethics of unsettled technologies which the panel discussion brought to mind for me as a technology studies researcher. Namely, how can we do ethics when the technological matters-of-fact are unaccounted for?

But first, what was the Finnish National Artificial Intelligence Programme? The project got its start in 2017 under the Sipilä cabinet as a spinoff from Minister of Economic Affairs Lintilä’s national artificial intelligence strategy. The founding idea for the programme was originally conceived and developed by civil servants at the Department of Government ICT in the Ministry of Finance, who took the lead in pushing the project through several cabinet changes. The programme’s primary aim was, paraphrasing early strategy documents, to take Finland into the AI era, specifically in a human-centric manner, by building a technology which came to be called the AuroraAI Network. Presented at the 2019 World Governance Forum through the rhetorical question “What if you could consult your digital twin to steer your life?”, the AI system was to use personal data to advise citizens on possible life-choices and on the relevant public and market-produced services related to those choices. The idea of the citizen’s AI adviser was strongly influenced by ideas of empowerment and life-event thinking in governance, which see the task of public administration as both the co-ordination of cross-sectoral service ecosystems around citizens’ needs (as opposed to simply producing public services) and the nudging of citizens towards beneficial consumer choices within these ecosystems. In 2020, the project was put into higher gear under Marin’s cabinet programme, where it was described as a project of “making life and business convenient by building the AuroraAI Network”. According to project documentation, the AuroraAI Network was ready for use at the end of 2022, and in October 2023 it was announced that the system would no longer be maintained and would be taken offline at the end of the year.

Lessons learned

As a government programme, the AuroraAI project certainly was ambitious and innovative. It looked to new technologies such as consumer recommendation systems (familiar from platforms like Spotify), customer-service chatbots (a ubiquitous part of the contemporary consumer experience) and industrial digital twins (a form of data model used in industrial maintenance management) to reform and refactor certain aspects of the Finnish welfare state. And reforming such a deep-rooted, mundanely impactful and politically contested institution, even in seemingly benign ways, is bound to be entangled with concerns of political, social and ethical import. The question posed for the panel discussion is thus appropriate: what did we learn? According to the panel, the lessons are varied and ambivalent.

Niko Ruostetsaari, Senior Specialist at the Ministry of Finance, finds that the project was experimental in nature, and that the outcome of that experiment was primarily a set of findings regarding the juridical context of governmental data use in the EU. According to Ruostetsaari, the programme approached the problem of customers running “from counter to counter”, causing unnecessary service demand, something which could be streamlined by the AuroraAI Network. In Ruostetsaari’s examples, the police can pre-emptively inform a citizen that their passport is about to expire, or the tax administration can advise a customer that their issues are actually dealt with by the Finnish Patent and Registration Office. Bureaucratic AI is ostensibly the project of smoothing out the customer experience in public administration, creating digital ease and convenience in a world of bureaucratic procedure.

Tommi Mikkonen, professor of software engineering at the University of Jyväskylä, finds that the most laudable facet of AuroraAI was a certain air of innovation: the rolling up of one’s sleeves, fearless trial and error, a kind of “pioneer or start-up spirit”. The Finnish idiom for this is tekemisen meininki, a spirit of doing, if you will, which is especially exciting for engineers and other technical experts, likened by Mikkonen to “children in a candy shop”. For Mikkonen, this kind of atmosphere of innovation, tempered by the counsel to start small and work within resource limitations, is something which could be taken up by Finnish export industries as well.

Nina Wessberg, principal scientist in ethics at VTT, finds that AuroraAI fits squarely into their research programme’s model of social value in AI systems: developing both democratic and administrative processes, as well as both the efficiency and efficacy of services. According to Wessberg, though, the idea of digitally modelling a human life is a complex task in which the question of technology becomes the least of many worries. The task bears on questions of social acceptability, bureaucratic responsibility and self-determination, and is embroiled in questions of tacit knowledge, emotions and goals. The biggest lesson learned from AuroraAI, then, is that processes of responsible design are vital in technology projects.

Aaro Tupasela, university researcher in sociology at the University of Helsinki, finds that the main takeaway from the project is a practice of questioning: do we really need AI for this? Many of the more mundane visions of convenient services can be achieved simply by thinking through how bureaucratic processes and information flows between organisations work. Then again, the visionary goals of predicting citizens’ needs and nudging them towards good choices intrude upon a realm of individual autonomy in which the state should tread very carefully. Questions of making individuals better will always run into the problem of what a better individual really is. The upshot is that projects with such ambitious aims should be placed under scrutiny and evaluation before implementation.

The panel discussion painted a broad if blurry picture of the AI system. No consensus emerged on the bottom line; in fact, the bottom line became a matter of active contestation. For Ruostetsaari, the critical points raised by Wessberg and Tupasela represent “dystopian fantasy” on the part of the ethicists. For him, their desire to imagine AuroraAI as a machine of domination is based on some early plans and visions, but it does not represent the technology which was ultimately developed. Yet amongst these contested matters of concern, it is surprisingly the factual matter of what the technology does which is most ambiguous and open to interpretation. Indeed, the most incisive question of the discussion comes from an audience member, who asks what building the AuroraAI Network cost and what was concretely achieved. The ministerial answer is that the technology cost 10 million euros, and that the results can be looked up online.

Unsettled accounts

In science and technology studies, the indeterminacy related to what a technology does, what it’s supposed to do, and what its value is can be described with the handy concept of “closure”. Much like at the bitter end of a relationship, closure is something we strive for. In the early moments of technology development, it may still be quite unclear what kind of problems the technology at hand should solve. Different individuals and social groupings may also form factions with significantly diverging ideas about what it is that the technology in fact does. Through a social and material process of negotiating different accounts, certain ways of thinking about the technology become dominant, technical features settle to solve certain problems, and people come to use the technology in certain consistent ways. The technology will have reached closure. The rational ideal is of a process which tends towards a stable state of certainty, but some scholars in technology studies argue that closure, if ever reached, is only ever a practical and labile achievement, at risk of being thrust back towards uncertainty at any moment of social undulation. Indeed, it is possible that closure is never achieved, but that people and things nonetheless manage to work around significant technological uncertainties.

The contested ethical concerns regarding the AuroraAI Network are not contested simply because the world of values is rife with differing viewpoints. The concerns are unsettled partly because the question of what the AuroraAI Network is, does, and should do never attained any form of closure. Rather, we have various unsettled accounts. For example, there exist at least three AuroraAI Networks.

AuroraAI 1: This is the AuroraAI of conceivable potentialities. While the AuroraAI Network does not inform citizens of expiring passports or direct tax customers to other agencies, it's conceivable that in some world it could. AuroraAI could conceivably make public services smooth and convenient, reducing forms, clicks and queuing tickets, in a way that is ethically benign and politically uncontested, as presented by Ruostetsaari.

AuroraAI 2: This is the AuroraAI of documented plans and intentions. Development documents like design schematics, technical white papers and project presentations give an account of a technology which gathers personal information from diverse sources to produce a digital twin of individual citizens. The plans describe an AuroraAI meant to empower citizens to make good consumer choices in welfare service markets, and to reflect on their own wellbeing through algorithmic metrics, as analysed in my recently published article. This AuroraAI is entangled in the various matters of concern posed by Tupasela and Wessberg.

AuroraAI 3: This AuroraAI is the technology as it’s encountered in the wild and on the servers, and that which will be shut down at the end of the year. AuroraAI as it appears in silico exists only as a kind of plugin for Zekki, a wellbeing self-evaluation service meant for adolescents and young adults. By answering 10 ordinal questions about their personal situation, the user receives social-worker-compiled wellbeing resources and recommendations. In certain larger municipalities like Oulu or Helsinki, the hand-curated recommendations are enriched with AuroraAI-generated ones. Regardless of how one answers the self-evaluation, though, the AI invariably makes only one recommendation: detached youth services (etsivä nuorisotyö).
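To make concrete how thin this “AI” layer is in practice, here is a minimal sketch in Python of a recommender whose output does not depend on its input. All names here are hypothetical, and the 1–5 answer scale is an assumption for illustration (Zekki’s actual implementation is not public); the point is only that, functionally, ten ordinal answers feeding a constant function is what the encountered system amounts to:

```python
# A toy sketch (not Zekki's actual code) of the degenerate recommender
# described above: whatever the ten ordinal answers are, the output is
# the same single recommendation.

from typing import List

QUESTIONS = 10        # Zekki asks ten ordinal self-evaluation questions
SCALE = range(1, 6)   # a 1-5 ordinal scale, assumed here for illustration

def recommend(answers: List[int]) -> List[str]:
    """Return AuroraAI-style recommendations for a completed self-evaluation."""
    if len(answers) != QUESTIONS or any(a not in SCALE for a in answers):
        raise ValueError("expected ten answers on the ordinal scale")
    # The AI component as encountered in the wild: a constant function.
    return ["etsivä nuorisotyö (detached youth services)"]

# Any two users, however differently they answer, receive the same advice:
assert recommend([1] * 10) == recommend([5] * 10)
```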

It may seem straightforward to privilege that which exists here and now as the AuroraAI. Then again, in a world of continuous iterative development, what we can conceivably see as the next step is also a constitutive part of the phenomenon. In the same vein, even if the plans did not manifest as intended, the politics of design is an inseparable part of how the social significance of technologies is negotiated. Thus, AuroraAI remains unsettled amidst various accounts of what was planned, what is, and what could have been. And this uncertainty is sustained as long as no official and generally accepted account of the matters-of-fact exists.

Ethics in sustained uncertainty

The panel discussion hosted at Tiedekulma in November had a subtle promise of closure. Asking what was learned implies reaching a bottom line, coming to terms with the end of a technological venture and looking forward. Nonetheless, the contested concerns brought to the table at the event reveal that the official account of AuroraAI is yet to be settled. Ultimately, the ethics of a technological venture can be uncertain as a matter of contested values and politics, but they can just as well be contested when the matters-of-fact are thrust into uncertainty. This highlights a difficulty in the project of doing ethics around technologies: how can we make concerns matter when the technology defies closure? It also bodes poorly for the possibility of bureaucratic innovation: while technologies are the products of flexible and amorphous social processes, the ethical comportment of bureaucracy works through structured, hierarchical accountability. As long as official documentation admits no settled account, neither can bureaucratic accountability be settled.

Writer: Santeri Räisänen (PhD researcher, Technology studies, University of Helsinki)

Recording of the "Aurora AI - visio arjesta" discussion below (in Finnish):
