AI is transforming healthcare: 5 things to know

From automated diagnoses to smart devices for monitoring health, artificial intelligence (AI) technologies are rapidly changing how healthcare is delivered. We asked two Aalto University experts how AI is affecting healthcare services, how reliable automated medicine is and how it works under the hood, and why appropriate regulation and continuous validation of these technologies are vital.


1. AI is already used in diagnosis, monitoring and healthcare resource planning

AI image analysis can be used to diagnose growth abnormalities from an X-ray of a child’s hand, track subtle changes in the brain during clinical drug trials or monitor the progression of diseases like brain cancer. With the help of AI, scans become faster to acquire, which reduces motion artifacts and the amount of radiation received by the patient, says Professor Koen Van Leemput. “We can also reduce the time it takes doctors to read the images, which in turn allows us to capture higher-resolution images showing more details,” says Van Leemput, who develops machine learning methods for interpreting medical images.

In the area of smart devices, doctoral researcher Tommi Gröhn has gathered digital sensor data from chronic pain patients. These measurements can be used to gamify a treatment plan in virtual reality, and mathematical modeling of movement data can even distinguish patients from control subjects. Automated measurements and monitoring are becoming commonplace. It’s also becoming possible to track the carbon footprint of various treatment options.
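To make the idea of distinguishing patients from controls with movement modeling more concrete, here is a minimal sketch using entirely synthetic data. Everything in it is a hypothetical assumption for illustration, not the project’s actual method: it pretends that chronic-pain patients show more variable gait timing, simulates step-interval recordings for both groups, and separates them with a single learned threshold on one feature.

```python
import random
import statistics

random.seed(0)

def simulate_gait(n_steps, mean_interval, jitter):
    """Synthetic sensor recording: seconds between consecutive steps."""
    return [random.gauss(mean_interval, jitter) for _ in range(n_steps)]

# Assumed for illustration: patients' step timing is noisier than controls'.
controls = [simulate_gait(200, 1.0, 0.05) for _ in range(20)]
patients = [simulate_gait(200, 1.0, 0.15) for _ in range(20)]

def variability(recording):
    """Single feature: standard deviation of the step intervals."""
    return statistics.stdev(recording)

ctrl_feats = [variability(r) for r in controls]
pat_feats = [variability(r) for r in patients]

# A one-feature "model": threshold halfway between the group means.
threshold = (statistics.mean(ctrl_feats) + statistics.mean(pat_feats)) / 2

def predict(recording):
    return "patient" if variability(recording) > threshold else "control"

correct = sum(predict(r) == "control" for r in controls) + \
          sum(predict(r) == "patient" for r in patients)
accuracy = correct / (len(controls) + len(patients))
```

Real systems use far richer features and models, but the principle is the same: a measurable statistic of the sensor stream carries group-level signal.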

Economists have long allocated healthcare resources across regions, for example among Finland’s wellbeing services counties. Lately, AI tools have allowed health economists to adaptively learn the relationships between variables, such as age and different diseases. These tools help decision makers identify underserved groups in the healthcare system, for example patients with multiple chronic diseases.

2. Lack of data and validation are challenges for expanded use of AI in healthcare

People only get scanned when they have a disease or are injured, and the data are very heterogeneous, acquired with many different machines and settings across the world. This severely limits the data available for training and testing AI algorithms in practice. Furthermore, electronic health records often lack information about whether a patient was satisfied with the treatment they received. If they don’t return, is it because they got better or because they went somewhere else? The missing reporting of outcomes along treatment pathways is a big gap in making healthcare more efficient.

A better bicycle does not replace the cyclist, and AI won’t replace doctors
— Koen Van Leemput

Another issue in commercializing AI healthtech is the data shift problem, says Van Leemput. “You have training data and an algorithm that work for one type of scanner or for one study, and you sell a product based on that, which then fails in a different context or with different hardware. Studies have shown that many commercially available systems have not been validated thoroughly in this respect.” To counter this, Van Leemput is developing methods that can automatically adapt to changes in the images and handle artifacts and limitations more robustly.
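The data-shift failure Van Leemput describes can be demonstrated in a few lines. This is a toy sketch with made-up numbers: a “model” is just a decision threshold on a synthetic image-intensity reading, trained on one scanner; a second scanner whose calibration shifts every reading upward breaks it completely.

```python
import random
import statistics

random.seed(1)

def scan(disease, offset):
    """Synthetic 'image intensity'; offset mimics scanner calibration."""
    base = 2.0 if disease else 1.0
    return random.gauss(base + offset, 0.2)

# Train on scanner A (offset 0.0): learn a midpoint decision threshold.
train = [(scan(d, 0.0), d) for d in [False, True] * 100]
healthy_mean = statistics.mean(x for x, d in train if not d)
disease_mean = statistics.mean(x for x, d in train if d)
threshold = (healthy_mean + disease_mean) / 2

def accuracy(data):
    """Fraction of cases where 'intensity above threshold' matches truth."""
    return sum((x > threshold) == d for x, d in data) / len(data)

# Same scanner: the model generalizes fine.
test_a = [(scan(d, 0.0), d) for d in [False, True] * 100]
# Scanner B: a calibration difference shifts all readings up by 1.0,
# so nearly every case lands above the learned threshold.
test_b = [(scan(d, 1.0), d) for d in [False, True] * 100]

acc_same = accuracy(test_a)     # high on the training scanner
acc_shifted = accuracy(test_b)  # collapses toward chance on scanner B
```

Nothing about the model changed; only the input distribution did, which is exactly why validation on one dataset says little about performance on different hardware.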

3. The decisions of black box AI models must be understandable to be trusted

The trustworthiness of algorithms needs to be addressed more widely, says Gröhn. “An emphasis on predictive performance often shifts attention away from understanding the model’s logic. There needs to be trust in a reasonable logic behind a model, and an understanding of its limitations and the uncertainty of the prediction, because no model is perfect.” Data scientists, doctors, and social scientists are actively working together to make models more transparent and acceptable.

“It’s hard for a person to understand how a neural network estimates your disease status from a brain scan, for example,” adds Van Leemput. Creating AI that can explain its decisions in ways humans find easy to understand requires developing different types of algorithms. When doctors explain their decisions, they often emphasize how specific conditions cause specific changes in anatomy, and how a different diagnosis would have required the images to look different. This is fundamentally distinct from simply assigning a diagnosis based on how similar an image looks to other images in a training dataset, which is the prevailing approach in AI today.

4. AI will not replace doctors

But it will enable doctors to be more efficient with their time. “AI can do boring tasks faster and can help detect subtle changes more accurately. If you make a better bicycle, it does not replace the cyclist, or in the case of image analysis, the radiologist,” says Van Leemput.

Gröhn agrees: “Machines can see more pixels, so it’s smart to integrate their capabilities into detection; then doctors can concentrate on difficult cases. If a machine can diagnose 90 percent of patients very reliably, it makes sense to let the machine do that. On the other hand, it is very important that the machine clearly communicates its limitations for the remaining 10 percent.” Tacit knowledge and lived experience will remain human assets.
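The triage Gröhn describes, in which the machine handles the confident cases and flags its own limitations on the rest, can be sketched as a simple confidence gate. The thresholds and labels here are illustrative assumptions, not from any deployed system.

```python
def triage(probability, confident=0.9):
    """Route a case based on the model's predicted disease probability.

    The model decides on its own only when it is confident either way;
    everything in between is deferred to a clinician. The 0.9 cutoff is
    a hypothetical choice for illustration.
    """
    if probability >= confident:
        return "automated: positive"
    if probability <= 1 - confident:
        return "automated: negative"
    return "refer to doctor"

triage(0.97)  # -> "automated: positive"
triage(0.02)  # -> "automated: negative"
triage(0.60)  # -> "refer to doctor"
```

The design choice is that abstention is a first-class output: the system’s value comes as much from knowing when not to answer as from the answers themselves.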

As the cost of healthcare rises, AI may be seen as a way to replace expensive services, but this should be approached carefully, says Gröhn. “Computer science may provide algorithmic answers to how resources should be allocated, but we can’t ignore societal concerns or reduce the quality of care based solely on these predictions.”

5. Regulation and interdisciplinary expertise are vital for developing AI that is both useful and fair

There is plenty of hype in the healthtech industry. “Someone can raise money and sell their model or product to a hospital, and it can fail without warning because it was trained on data from somewhere else. We absolutely need regulation to ensure that AI methods are tested and validated in the real world and supported by evidence,” says Van Leemput.

Methods that work well at some point can also become outdated, with performance degrading as time passes. “Little things, like changing the order of various options in a form at a hospital, can have dramatic effects, where your software suddenly stops predicting well. Continuous testing and validation of AI in healthcare is needed,” emphasizes Van Leemput.
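The continuous testing Van Leemput calls for often boils down to monitoring deployed performance against the accuracy measured at approval time. A minimal sketch, with a hypothetical tolerance value:

```python
def drift_alert(baseline_accuracy, recent_correct, recent_total,
                tolerance=0.05):
    """Flag the model for revalidation when accuracy over a recent
    window of cases falls more than `tolerance` below the accuracy
    measured at deployment. The 0.05 tolerance is an assumed value."""
    recent_accuracy = recent_correct / recent_total
    return recent_accuracy < baseline_accuracy - tolerance

# Deployed at 92% accuracy; over the last 200 cases, 168 were correct
# (84%), so the monitor triggers a revalidation.
drift_alert(0.92, 168, 200)  # -> True
```

In practice such monitors track many metrics and subgroups, but even this simple check would catch the kind of silent degradation that a form-layout change can cause.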

Gröhn says that, ideally, AI will integrate background knowledge from other domains, such as biology and economics, always depending on the use case for that AI system. “AI or data science by itself isn’t enough, because one size does not fit all when it comes to health technologies. Domain experts can inform our understanding of what good quality data is,” says Gröhn, citing the goals of the Datalit project where he is conducting his PhD research.


More on health-related research at FCAI: fcai.fi/health