FCAI’s feedback on “AI made in Europe”
The European Commission's new White Paper on AI is highly ambitious: it aims to foster a European ecosystem of excellence and trust in AI.
FCAI was happy to see the central role that AI and data-driven technologies will play in supporting economic growth and societal well-being in Europe, but attention should also be directed to some risks.
Much of the development of AI has been led by gigantic organizations with access to the largest data sets in the world. Europe should develop technologies that work with smaller amounts of data, or with scattered data sets, leaving room for smaller players to flourish as well. It is also important to support data-sharing ecosystems and platforms.
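One family of techniques in this spirit is federated learning, where models are trained across scattered data sets without pooling the raw data. The sketch below is purely illustrative and not part of FCAI's commentary: it assumes simple linear models, synthetic local data, and hypothetical names throughout.

```python
# Minimal sketch of federated averaging over scattered data sets.
# Each "site" keeps its data locally and shares only model weights.
# The linear-regression setup and all names are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def make_local_data(n, true_w):
    """Synthetic local data set: y = X @ true_w + noise."""
    X = rng.normal(size=(n, len(true_w)))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    return X, y

def local_update(w, X, y, lr=0.05, epochs=5):
    """A few steps of gradient descent on one site's private data."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

true_w = np.array([1.0, -2.0, 0.5])
sites = [make_local_data(n, true_w) for n in (30, 50, 20)]  # small, uneven data sets

w_global = np.zeros(3)
for _ in range(20):
    # Each site trains locally; only the weights are averaged centrally,
    # weighted by local data set size. The raw data never leaves the site.
    local_ws = [local_update(w_global, X, y) for X, y in sites]
    sizes = np.array([len(y) for _, y in sites])
    w_global = np.average(local_ws, axis=0, weights=sizes)

print("recovered weights:", np.round(w_global, 2))
```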
FCAI calls for a rational approach to regulation: the EU should regulate the use of AI, not the technology itself. Current legislation should be reviewed and, where necessary, updated in the light of recent developments in AI and digital technologies. A rational approach to regulation is based on the use of the technology, on what the technology actually does, and not on the technology itself or how it works internally.
The commendable goals of Trustworthy AI cannot be reached merely by imposing requirements on the training data of machine learning or on the learning algorithms. What we can do is monitor and verify how the resulting AI system behaves in practice. A high-quality data set and a good learning algorithm are a good starting point for machine learning, but they do not guarantee the quality of the learned AI system. The only way to verify the quality of most AI systems is to test them.
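As a purely illustrative sketch of this point, the example below verifies a learned system by testing it on held-out data, rather than by inspecting its training data or learning algorithm. The model, the data split, and the acceptance thresholds are hypothetical assumptions, not a proposed standard.

```python
# Illustrative sketch: quality is checked by testing the resulting system,
# not by certifying the data set or the learning algorithm in advance.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: two features, a binary "group" attribute, and a label.
n = 2000
X = rng.normal(size=(n, 2))
group = rng.integers(0, 2, size=n)
y = (X[:, 0] + 0.5 * X[:, 1] + 0.2 * rng.normal(size=n) > 0).astype(int)

# Train on one half; keep the other half strictly for testing the learned system.
train, test = np.arange(n) < n // 2, np.arange(n) >= n // 2

# A deliberately simple learned system: logistic regression fit by gradient descent.
w = np.zeros(2)
for _ in range(500):
    p = 1 / (1 + np.exp(-X[train] @ w))
    w -= 0.1 * X[train].T @ (p - y[train]) / train.sum()

def predict(X):
    return (X @ w > 0).astype(int)

# Verification happens on held-out data: overall and per-group accuracy.
pred = predict(X[test])
overall = (pred == y[test]).mean()
per_group = {g: (pred[group[test] == g] == y[test][group[test] == g]).mean()
             for g in (0, 1)}

# Hypothetical acceptance thresholds for this toy system.
assert overall > 0.9, f"overall accuracy too low: {overall:.2f}"
assert min(per_group.values()) > 0.85, f"a subgroup is underserved: {per_group}"
print(f"overall accuracy {overall:.2f}, per-group {per_group}")
```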
More dialogue is needed between decision makers, AI experts, and the general public. The AI experts at FCAI are always ready to participate in this dialogue.
You can read FCAI’s entire commentary here.