Data spaces and compute power advance language technology and speech recognition at FCAI

Having a typed conversation with an AI bot like ChatGPT may now seem commonplace, but what about talking with an AI system? With the widespread deployment of large language models, this may seem easy and straightforward, but according to University of Helsinki research director Krister Lindén, the bottleneck is data: for many languages, a spoken conversation with an AI simply isn't possible because there is not enough speech data.

Image: vectorjuice / Freepik

“Current machine learning models in speech need big transcribed data resources. For a basic speech recognizer, a ballpark of 100 hours of transcribed speech is needed, and that will only recognize well-formed speech, not varieties or dialects,” explains Lindén. Two parallel large-scale projects in the language technology space are working on developing speech models for special purposes and for languages with fewer speakers.

The goals of the LAREINA project can be understood in the context of the generative AI explosion, says Lindén: creating a speech interface for applications like ChatGPT by training the most powerful speech models for languages spoken in Finland. “We’re trying to leverage multilingual speech models and sources of untranscribed speech data to create speech models for under-resourced languages like Swedish as spoken in Finland, or Sámi, three variants of which are spoken in the country,” says Lindén.

The data bottleneck was solved by using thousands of hours of radio and television archives from KAVI, Finland’s National Audiovisual Institute. The resulting large speech model was trained on LUMI, Europe’s fastest supercomputer, but more importantly, it is completely open, meaning anyone can use it or fine-tune it for a particular business case. Aalto University and FCAI professor Mikko Kurimo will present LAREINA’s progress in automated speech recognition from raw audiovisual data at FCAI’s AI Day on October 21.

As for how these data and models can be shared across academia and business, Lindén points to ALT-EDIC, a Europe-wide data infrastructure for language technology started in 2024. This consortium brings together all the data resources that publishers, media companies, businesses and the public sector can harness to make their own large language models (LLMs), for applications from customer service to delivering the news. Finland is currently an observer in ALT-EDIC and aims for full membership. Lindén emphasizes the potential of LLMs across sectors: “Through ALT-EDIC, companies can find suppliers, buy data or outsource and buy ready-made or specialized models. This data and resource infrastructure is an opportunity for diversity in the language space, because you can’t rely on one big tech company to provide everything, especially for small languages.”

As Lindén and Kurimo continue their collaborations to advance speech recognition, one fruitful path they have observed is to combine untranscribed speech data and small amounts of transcribed data from under-resourced languages with existing LLMs. Starting from a big multilingual speech model yields good results more quickly, says Lindén. The backing of public and private institutions in Finland through the LAREINA project, combined with the high-performance computing of LUMI, means that researchers can focus on AI tasks like speech-to-text, intent, emotion and privacy in speech, and text-to-speech. Some current results of the LAREINA project and the role of Finland in ALT-EDIC will be explored in the workshop on Large Language Models and Speech-Centric AI on October 9.