Ten common challenges for AI—should we be worried?

Public discourse often generates abstract AI risk scenarios that are hard to put into concrete form. The long-term consequences are difficult to grasp, and we easily underestimate the changes ahead. Here, VTT Principal Scientist and Associate Professor Jaana Leikas discusses ten concerns related to the effects of artificial intelligence development on humans and society.

Artificial intelligence has the potential to enhance the wellbeing of citizens, but it can also lead to increased mistreatment of individuals and pose a threat to justice and democracy. UNESCO’s Recommendation on the Ethics of Artificial Intelligence states that, alongside their many benefits, AI technologies present risks and challenges stemming from the malicious use of technology and from deepening inequalities and divides. The first concern relates to deliberate malicious activity, the second to our inability to anticipate the far-reaching effects of artificial intelligence on people, interpersonal relations, societal functions and the natural world.


1. Redistribution of power. The AI-based platform economy is growing. At the same time, societal power is progressively shifting to the platform giants and away from democratic institutions. In a 2022 survey of the natural language processing (NLP) research community, respondents stated that private companies have too much influence on the field. Meredith Whittaker, a former Google employee and one of the founders of the AI Now Institute in New York, has repeatedly argued that the debate on AI regulation should focus on supervising business models rather than on legislation alone.

The operating logic of the digital giants is based on collecting as much user data as possible and selling it for various purposes, such as advertising, but also for political leverage. Large corporations have the power to influence what is developed and what kinds of operating models for advancing AI are accepted and introduced internationally. Venture capital investments also allow them to exert influence over the activities of small enterprises.

2. Renewal of work tasks. Work carries meaning both on a personal and social level. For many people, the ability to support themselves and their family is integral to their sense of identity and self-worth. By challenging the capabilities needed for knowledge work, for example, artificial intelligence is questioning the traditional notion of human value based on work. Generative artificial intelligence, in particular, has the potential to revolutionise different sectors. The development of large language models has sparked a debate on the potential job losses within the middle class, which plays a vital role in upholding societal structures.

According to a 2023 survey of 900 people conducted by VTT and TEK (a trade union for academic engineers and architects in Finland), artificial intelligence is seen as a concern but not a major threat. Respondents reported taking a proactive approach to AI-driven change and considered further skills development crucial.

3. Increasing inequality. One of the biggest issues in the development of artificial intelligence and digitalisation is epistemic inequality. Equality and non-discrimination are realised only if everyone has equal access to truthful information and understands the alternatives produced by artificial intelligence and their consequences. Such intellectual rights and capabilities are what make genuinely informed consent possible for citizens. Individuals in vulnerable positions, such as those with reduced cognitive or mental capacity, are particularly exposed to the exclusion and abuse that technology enables.

It is important to keep everyone involved in the development and to ensure a fair and just transition to the AI-supported post-labour society. How can we provide support to individuals who are overwhelmed by digital services and unable to use, for instance, strong electronic identification? Would it be a step towards greater equality, for example, to adopt bots based on language models and allow people to manage matters online by speaking instead of writing?

4. Distorted data. The development of artificial intelligence is driven by humans, and humans are prone to bias and prejudice. Their attitudes may be passed on to algorithms, changing how individuals are assessed, which factors are considered when decisions are made about them, and which services are offered to them.

It is dangerous if the use of AI reinforces the values and beliefs of algorithm developers. In a US study that fed photographs of members of Congress and politicians’ Twitter images to Google’s Cloud Vision image-recognition service, women’s images received three times as many labels referring to physical appearance as men’s. Coded prejudice reinforces harmful gender stereotypes.
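The mechanism is easy to demonstrate. Below is a deliberately simplified sketch with invented numbers (not the data of the study above), showing how a skew in annotators’ training labels survives, untouched, into a model fitted to them:

```python
# A toy illustration of inherited annotator bias: if training labels
# mention appearance far more often for one group, a model fitted to
# those labels reproduces the same skew. All data here is invented.
import random

random.seed(0)

# Hypothetical annotated training set: (group, was an "appearance" label applied?).
# Annotators tagged appearance in 30% of images of women vs 10% of men.
train = [("woman", random.random() < 0.30) for _ in range(1000)] + \
        [("man",   random.random() < 0.10) for _ in range(1000)]

def fit_rate(group: str) -> float:
    """A trivial 'model': the appearance-label rate observed per group."""
    labels = [label for g, label in train if g == group]
    return sum(labels) / len(labels)

model = {g: fit_rate(g) for g in ("woman", "man")}
print(model)  # roughly {'woman': 0.3, 'man': 0.1} -- the bias survives training
```

Nothing in the fitting step questions the labels; the model faithfully learns whatever the annotators did.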


5. Mass surveillance. An abundance of city cameras combined with AI technology helps mitigate risks to our safety as we move around the city. In Shanghai, AI-assisted robot dogs have supervised compliance with pandemic restrictions, reminding citizens of the rules by issuing commands to people nearby.

Algorithm technology can be used to identify individuals from large data sets by combining different databases. If the technology is abused, however, mass surveillance can be used to disrupt and control democratic processes. In the worst case, this leads to societies of all-encompassing surveillance, giving rise to totalitarian structures that influence every aspect of life.
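The identification technique itself requires no sophistication. Here is a minimal sketch, with invented records, of how joining two databases on shared quasi-identifiers (birth date and postcode) puts a name on an “anonymised” record that neither database reveals alone:

```python
# Re-identification by combining databases: the health data carries no
# names, but joining it with a public register on shared quasi-identifiers
# exposes the person. All records are invented for illustration.

voter_roll = [  # public register: identity plus quasi-identifiers
    {"name": "A. Smith", "birth": "1980-04-02", "postcode": "00150"},
    {"name": "B. Jones", "birth": "1975-11-30", "postcode": "00200"},
]

health_data = [  # "anonymised" dataset: same quasi-identifiers, no names
    {"birth": "1980-04-02", "postcode": "00150", "diagnosis": "asthma"},
]

# Index one dataset by the quasi-identifiers, then look up the other.
index = {(p["birth"], p["postcode"]): p["name"] for p in voter_roll}
for record in health_data:
    name = index.get((record["birth"], record["postcode"]))
    if name:
        print(f"{name} -> {record['diagnosis']}")  # A. Smith -> asthma
```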

In future virtual environments, artificial intelligence will enable not only the identification of individuals but also the detection of emotions. This will provide means for investigating individuals’ personality types and psychological attitudes, including their criminal tendencies. The same data could also be used to expose further details, such as insights into individuals’ mental health, sexuality and political opinions.

6. Profiling and automated decision-making. Big data is used to profile individuals into specific types or groups based on their data, which rarely contains information about the social context in which it was produced. In other words, profiling readily dismisses the underlying assumptions and life contexts linked to data collection. A fear associated with artificial intelligence is therefore that decisions about individuals will be made in a narrow, automated way, reducing people to objects governed by mechanical interpretations of regulations. Because the machine learning algorithms used to process big data lack transparency and involve complex chains of reasoning, citizens struggle to uncover the grounds for decisions that affect them, which makes it even harder to defend their rights.
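To make the concern concrete, the sketch below (entirely invented rules and data) shows what narrow automated decision-making looks like: a fixed scoring formula applied to profile features, with no field for the applicant’s life context and no explanation offered beyond the outcome:

```python
# A toy automated decision: a rigid score over profile features.
# The rules, weights and data are invented for illustration.

HIGH_INCOME_AREAS = {"00150", "02100"}

def automated_decision(profile: dict) -> str:
    score = (
        2 * profile["years_employed"]
        - 3 * profile["missed_payments"]
        + (1 if profile["postcode"] in HIGH_INCOME_AREAS else 0)
    )
    # Only the verdict leaves the system -- not the reasoning behind it.
    return "approve" if score >= 4 else "reject"

applicant = {
    "years_employed": 1,   # e.g. just returned from parental leave --
    "missed_payments": 0,  # context the formula has no field for
    "postcode": "00550",
}
print(automated_decision(applicant))  # "reject", with no stated grounds
```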

The upcoming European Artificial Intelligence Act will prioritise the evaluation of AI systems that pose a high risk to the health, safety or fundamental rights of natural persons. In Finland, cases that require special consideration are thankfully exempt from automated decision-making under the Administrative Procedure Act: authorities are not allowed to make automated AI-assisted decisions about citizens when a case-by-case evaluation is necessary.

7. Toxic information environment, disinformation and AI hallucinations. Machines are unable to distinguish between right and wrong, or true and false. They can only statistically deduce the next word from their training material. ChatGPT apologises if the user does not agree with the result it has generated.
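The point about statistical deduction can be made concrete with a toy model. Real large language models use neural networks over tokens rather than word counts, but the sketch below captures the principle: the next word is simply the most likely continuation seen in the training material, with no notion of truth attached:

```python
# A toy bigram "language model": predict the next word purely from
# how often it followed the previous word in the training text.
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug"
).split()

# Count how often each word follows each preceding word.
follows = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most frequent continuation."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))  # "cat" -- the most frequent follower, true or not
print(predict_next("sat"))  # "on"
```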

Answers are abundant, yet control over the accuracy of information is diminishing. Disinformation can affect democracy, freedom and societal stability. Artificial intelligence can be used to run extensive disinformation campaigns on social media, initiated by private individuals or even state actors. Social media bots mimic human users by posting, retweeting and engaging with content on a massive scale, creating the illusion of broad consensus. You cannot trust your own eyes: deepfakes manipulate audio and video to produce realistic but fabricated content, eroding trust in verifiable facts. A video shows a well-known head of state articulating fabricated statements in their own voice. In the United States, the decisions made by leaders, health authorities, courts, police and universities, and even the integrity of elections, are being called into question. The erosion of trust in common rules and institutions is also taking place in Finland.

If individuals are unable to differentiate between what is true and what is false, or to act responsibly as creators of data, the effects on their lives can be significant. The FBI warns against uploading photos online: in one case, an individual’s face was digitally imposed onto a fake video using a single photo sourced from the Internet, after which the victim was blackmailed for money. A newer phenomenon is the hoax call, in which the criminal speaks to the recipient in the voice of someone close to them, such as their child. Only a small snippet of audio, easily found on social media, is needed to produce the AI fake.

8. Manipulation. In the Finnish city of Espoo, young people are suspected of setting buildings on fire, inspired by a social media challenge. Social media algorithms can drive both intentional misconduct and unintended negative outcomes born of commercial greed: algorithms favour extreme positions because they increase activity on the platform.

Loneliness and mental health issues are among the global megatrends. AI applications are becoming ever more proficient at simulating human conversation and interaction, and they can devote boundless resources to building relationships. The forthcoming European Artificial Intelligence Act seeks to regulate systems that exert subtle influence by deploying “subliminal techniques”. Artificial intelligence can be used to enhance romance scams, in which someone behind a fake profile establishes a close connection with a chosen victim for deceptive purposes. The victim is lured into a harmful web of deceit that can ultimately result in mental health problems and financial ruin.

9. Bubbling. On social media platforms, algorithms offer differing explanations of the world, exposing people only to certain perspectives or content and reinforcing assumptions based on their previous preferences. In the worst case, individuals isolate themselves from views that differ from their own until their personal data bubble becomes their only reality, narrowing their understanding and fuelling confrontation. Being unable or unwilling to consider perspectives beyond your own bubble can be dangerous.

In a way, this is a form of hidden censorship: alternative perspectives are omitted or rendered hard to access, and thereby excluded from consideration. TikTok is one example of the phenomenon, as its fragmented presentation of information weakens people’s ability to piece together and understand larger bodies of information.
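The self-reinforcing dynamic behind such bubbles can be sketched in a few lines. The simulation below is purely illustrative, with invented topics and an invented click model, but it shows how a recommender that rewards engagement narrows a feed on its own:

```python
# A toy engagement-driven recommender: every click raises the weight of
# that topic, so the user's history gradually crowds out everything else.
import random

random.seed(1)
topics = ["politics", "sports", "science", "culture"]
weights = {t: 1.0 for t in topics}  # start with a balanced feed

def recommend() -> str:
    # Sample a topic in proportion to accumulated engagement.
    return random.choices(topics, [weights[t] for t in topics])[0]

# A user who clicks politics items 80% of the time, everything else 20%.
for _ in range(500):
    shown = recommend()
    if random.random() < (0.8 if shown == "politics" else 0.2):
        weights[shown] += 1.0  # reinforce whatever got clicked

total = sum(weights.values())
print({t: round(w / total, 2) for t, w in weights.items()})
# politics ends up dominating the feed -- the bubble forms by itself
```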

10. Humanity 2.0. Socio-technical change could bring about a shift towards a data-centric view of human beings, altering our understanding of humanity. UNESCO’s recommendation discusses, among other things, the threat of objectification: when a person is viewed as a mere object, their human value is tainted and damaged. Technological advancements and societal changes have already dramatically reshaped the significance of privacy. With the progress of artificial intelligence, our understanding of human autonomy may change as well.

How can we cope with the challenges posed by artificial intelligence?

  • Conducting a proactive impact assessment that is sensitive to values and considers multiple perspectives.

  • Increasing common understanding of the responsible use of artificial intelligence.

  • Conducting research that is both empirical and explanatory.

  • Collecting feedback in an agile manner and implementing changes.

  • Increasing dialogue between research institutes, companies and institutions.

  • Investing in education.

  • Increasing citizens’ understanding of artificial intelligence and improving media literacy.

The Ethics Advisory Board (EAB) of the Finnish Center for Artificial Intelligence FCAI strives to foster researchers’ sensitivity to ethical reflection and has developed a tool for reflecting on the implications of AI applications.