
Towards European AI: Part III – The AI Single Market, goals, hopes and dreams

A Single Market for Artificial Intelligence (AI), an “institutional structure” for Trustworthy AI, up-skilling and re-skilling the current workforce: these are just a few of the recommendations that the High-Level Expert Group on AI (AI-HLEG), assembled by the European Commission (the Commission), puts forward in its second deliverable, “Policy and Investment Recommendations for Trustworthy AI” (the Recommendations), released after the Ethics Guidelines discussed in Part II of this series.



Alexandru Circiumaru
Research Assistant, Jean Monnet Network on ‘EU-China Legal and Judicial Cooperation’ (EUPLANT), Queen Mary University of London

Saying that AI will change the world is quickly becoming less of a revolutionary statement and more of a cliché, as mass media embraces the idea more widely and public interest in the topic grows rapidly. Yet it remains hard to imagine exactly how AI will change the daily life of EU citizens, and within what timeframe. This second paper released by the AI-HLEG provides a fascinating insight into how wide-ranging the effects of AI will be on society as a whole, putting forward 33 broad recommendations, each accompanied by a series of more specific sub-recommendations, addressing issues as diverse as education, cybersecurity, public procurement and the profitability of start-ups and small and medium-sized enterprises (SMEs).

It would be a fool’s errand to try to guess how many of these recommendations and sub-recommendations will be given serious consideration by the Commission, or how many will ever be implemented. Regardless, this deliverable is and will remain valuable because: (1) it provides a snapshot of how wide-ranging the impact of AI will be and of the work needed to prepare for it; (2) it is a wake-up call for policy makers at both EU and Member State level; and (3) it provides a goal to strive towards.

Ultimately, through the ambition of its vision, the AI-HLEG has put together a rough sketch of a roadmap towards a gold standard of Trustworthy AI, complementing the Ethics Guidelines to which the Recommendations often refer. This should prove relevant not only for policy makers but also for stakeholders such as companies developing and/or deploying AI, and for civil society, which should work to keep the EU and individual Member States on track to reach this standard.

The Recommendations are divided into two chapters: 1. Using Trustworthy AI to Build a Positive Impact in Europe and 2. Leveraging Europe's Enablers for Trustworthy AI. An in-depth analysis of each recommendation is beyond the scope of this post, which aims instead to single out some of the most interesting ones. This exercise will be helpful in keeping track of what happens to these ideas, for example whether they will appear in the White Paper on AI (the White Paper), due to be released by the end of February 2020, and, if so, in what form.

Chapter 1 - Using Trustworthy AI to Build a Positive Impact in Europe

The first area of impact that the AI-HLEG focuses on is (A) Humans and Society. Some of the most interesting recommendations here are refraining from disproportionate use of mass surveillance of individuals, using AI solutions to address sustainability challenges, creating an EU transition fund, and establishing monitoring mechanisms at both national and EU level to continuously analyse, measure and score the societal impact of AI.

Leaks suggest that the idea of banning mass surveillance will appear, in some form, in the White Paper. It remains to be seen what will happen with the other recommendations and whether they will ever be acted on, in particular the monitoring mechanisms, which seem crucial given the speed at which AI evolves and its wide-ranging impact.

The second area of impact is (B) the private sector. Here the attention is focused on giving start-ups and SMEs the tools they need to use AI in a way that is compliant with the Ethics Guidelines, such as creating easy avenues to funding and advice, fostering the availability of legal and technical support to implement Trustworthy AI solutions that comply with the Guidelines, creating an EU-wide network of AI business incubators connecting academia and industry, and boosting the development and growth of AI technology firms in Europe through the InvestEU programme.

One of the main arguments against regulating AI is that doing so might prevent start-ups and SMEs from working in this area, thereby hindering innovation. Finding ways to provide them with legal and technical support to implement Trustworthy AI solutions might prove an effective answer to this potential problem.

The Recommendations then move from the private to the (C) public sector to put forward, in broad terms, three ideas: (1) public institutions and bodies in all Member States should ensure that their use of AI is compliant with the Ethics Guidelines; (2) public procurement procedures should be reformed so as to encourage investment in Trustworthy AI; and (3) AI-enabled mass-scale scoring of individuals should be banned. The AI-HLEG is thus unequivocal on the matter of using AI to score citizens as part of a social credit system. Whether the Commission, and ultimately the EU, will decide to follow this position remains to be seen.

The position the EU takes on this matter may put it directly at odds with China, which already has such a system in place. Any possible cooperation between the EU and China on AI-related matters might be heavily influenced by the stance the EU decides to adopt.

The fourth and last area of impact is (D) research. The Recommendations focus on developing a research roadmap for AI and ways in which it can subsequently be implemented, such as creating a stimulating and empowering research environment which attracts and retains researchers specialising in AI.

Chapter 2 - Leveraging Europe's Enablers for Trustworthy AI

The second chapter contains 17 recommendations, again spread across four areas, which this time are not areas of impact but ‘enablers’. The AI-HLEG explains that, in order to achieve what was described in the first chapter, a strong foundation is needed; the four enablers are the elements necessary for creating that foundation.

The first enabler is (E) Building Data and Infrastructure for AI. Those who have followed this series from its beginning might remember a quote from Commissioner Vestager, who said, not too long ago, that “some say that China has all the data and America has all the money […] but when I see what we have going for us in Europe, it’s that we have purpose”. Indeed, many, such as Kai-Fu Lee, do argue that China’s access to data will see it leading the field by 2030. How the EU responds is therefore crucial.

The AI-HLEG suggests, among other things, that the EU should consider creating a network of testing facilities and sandboxes, interconnected by high-speed networks and governed by appropriate mechanisms to set legal and ethical standards; setting up national and European data platforms for AI that include all necessary tools for data governance; creating a data donor scheme; and considering the introduction of a data access regime on FRAND (fair, reasonable and non-discriminatory) terms.

The Recommendations then move on to the next enabler, (F) Generating appropriate Skills and Education for AI. Here a reform of educational systems across Europe is suggested, so as to adapt them to focus on strengthening “human-centric key skills”.

It is also in this part of the Recommendations that one of the most sensitive issues related to AI is approached, albeit indirectly: its impact on the job market. The AI-HLEG, without going into detail about the number of jobs that could disappear, stresses the importance of continuous learning and training, recommending the creation of a right to continuous training, to be implemented by law or collective agreement. It also suggests developing employment policies that support and reward companies that set up strategic reskilling plans for the development of new data and AI-related applications.

The next enabler, (G) establishing an appropriate governance and regulatory framework, deserves particular attention, especially in light of the forthcoming legislation on AI promised by the President of the Commission in her agenda for Europe. These recommendations should, in principle, form the very basis on which the promised legislation will be drafted. Whether or not this will be the case will become apparent once the White Paper on AI is released.

In broad terms, the AI-HLEG advocates a risk-based approach to regulation generally, and a precautionary-principle-based approach for specific AI applications that generate “unacceptable” risks or pose substantial threats. It also suggests that a systematic mapping and evaluation of all existing EU law relevant to AI systems should be carried out, hinting at how civil liability, criminal law provisions, consumer protection, data protection and competition rules, as well as non-discrimination provisions, might have to be changed.

The AI-HLEG also urges policymakers to refrain from establishing legal personality for AI systems or robots, deeming this to be fundamentally inconsistent with the principles of human agency, accountability and responsibility, and to pose a significant moral hazard.

It is also here that the AI-HLEG puts forward two of its most daring recommendations: establishing governance mechanisms for a single market for Trustworthy AI in Europe, and setting up an institutional structure for Trustworthy AI to fill the existing governance gap. The AI-HLEG also provides a number of steps to be taken for the AI Single Market to be achieved, such as establishing a comprehensive strategy for Member State cooperation on the enforcement of AI-relevant regulation, and lays out the mission it believes the new “institutional structure” should fulfil.

The fourth and last enabler discussed is (H) raising funding and investment. The need for a European transition fund to help manage the “transitions taking place in the job market” is once again mentioned, together with suggestions that a European Coalition of AI Investors be set up and that the Commission, working with the European Investment Bank, draw up investment guidelines that take the Ethics Guidelines into account.

Conclusions

Some of these recommendations, such as the AI Single Market or the creation of an institutional structure for Trustworthy AI, would bring substantial changes not only to EU law but also to the way the European Union functions, affecting not only its competences but also the way in which its budget is spent. The first test for these recommendations is the White Paper on AI. Once published, it will show which ideas put forward by the AI-HLEG have received the endorsement of the Commission, therefore moving closer to becoming a reality, and which remain, at least for the time being, goals, hopes and dreams.
