
Towards European AI: Part II – The ethics of AI as a global issue


Read part I of the blog here

Alexandru Circiumaru
Research Assistant, Jean Monnet Network on ‘EU-China Legal and Judicial Cooperation’ (EUPLANT), Queen Mary University of London

In 2019, numerous states, international organisations and tech giants showed enthusiasm for redefining their normative identities by putting together principle-based approaches to Artificial Intelligence (AI). From Google and Microsoft, to China and Australia, to the European Union (EU) and the Organisation for Economic Co-operation and Development (OECD), a number of important stakeholders have, over the course of the last year, sought to define their positions in relation to AI by publishing lists of - usually non-binding - principles which they intend to observe when developing or deploying AI.

By taking this action, these stakeholders showed, among other things, that they are paying close attention to the development of AI and that they intend to participate in it actively and in a socially conscious way, with the creation of “AI that works for good” as an overarching goal.

While one must appreciate the existence of such lists - if not necessarily for their content, then at least for pushing AI further into the public eye and underlining the importance of creating AI that works for good - an in-depth analysis of each is beyond the scope of this post. Instead, as mentioned in part I of this series (available here), this post will focus on the Ethics Guidelines for Trustworthy AI published by the European Commission (the Commission) in April 2019 and on the Beijing Principles, released one month later. Consideration will also be given to the steps taken by the United States (US) in this field, with references made to the OECD AI Principles, also released in May 2019.

Ethics Guidelines for Trustworthy AI

An important observation must be made from the outset. Although published by the Commission, this document was put together not by the Commission itself but by the High-Level Expert Group on AI (AI-HLEG), a group assembled by the Commission and comprising representatives from academia, civil society and industry. This is an important distinction: in putting together the EU’s first legislative proposal on AI, the Commission is not bound by the views and ideas expressed by the AI-HLEG, although it will certainly draw inspiration from them.

Through the publication of these Guidelines, the EU has acted as the normative power Ian Manners once described it to be, attempting to take the lead in regulating AI - as discussed in the previous post of this series - so as to take advantage of its “first-mover” position and pre-empt this field with its own values and ideas.

It is, however, the legislative proposal on AI which the new Commission President has promised to deliver early in her mandate that will determine the EU’s level of ambition and stance on AI and, in consequence, shape the way in which it can exercise its normative power on this matter.

The Guidelines lay some of the groundwork for the said legislative proposal, providing a number of guiding principles and a goal to work towards – “Trustworthy AI”. Achieving this goal lies at the very core of the Guidelines, which first define it and then aim to articulate a framework through which it can be achieved. Notably, the Guidelines make it clear that this framework must be based on EU fundamental rights. Fundamental rights therefore take centre stage from the very beginning - a clear sign of the key role they play in the European approach to AI.

In order for AI to be deemed “trustworthy”, the Guidelines explain, it must meet three requirements: it must be lawful, ethical and robust. The Guidelines’ three chapters then deal in turn with the foundations of trustworthy AI (Chapter I), its realisation (Chapter II) and its assessment (Chapter III).

The concept of lawful AI is dealt with partly in the Ethics Guidelines, which explain it in broad terms, and partly in the Policy and Investment Recommendations for Trustworthy AI, where legislative changes are suggested to ensure that the EU legal order appropriately covers the subject of AI. In broad terms, the AI-HLEG explains, lawful AI means AI that complies with existing legislation at the national, EU and international levels. It is here that the EU fundamental rights framework takes centre stage once again, its importance being restated through direct references to the EU Charter of Fundamental Rights (the Charter).

Making reference to the Policy and Investment Recommendations, and thereby highlighting the need for legislative reform, the AI-HLEG moves on to describe the other two components - ethical AI and robust AI - as closely intertwined and complementary. It is for this reason that Chapters I and II address the two components together.

Chapter I singles out four ethical principles that must be respected in order to ensure that AI systems are developed, deployed and used in a trustworthy manner: (i) respect for human autonomy; (ii) prevention of harm; (iii) fairness; and (iv) explicability. Chapter II then attempts to translate these ethical principles into tangible, workable requirements for developers, deployers and end-users to take into account, the result being a non-exhaustive list of seven key requirements for Trustworthy AI.

The seven key requirements are: (1) human agency and oversight; (2) technical robustness and safety; (3) privacy and data governance; (4) transparency; (5) diversity, non-discrimination and fairness; (6) societal and environmental wellbeing; and (7) accountability.

An in-depth analysis of all seven would be beyond the scope of this post, which simply aims to make a broad comparison between the principles put forward by these three global players, taking them at face value. The Guidelines do, however, provide short descriptions of each of the seven key requirements.

The Beijing AI Principles

The Beijing AI Principles are the result of work done by the Beijing Academy of Artificial Intelligence (BAAI), an organisation backed by the Chinese Ministry of Science and Technology and the Beijing municipal government, in collaboration with prominent research organisations and tech companies.

The Beijing Principles - 15 in total - are divided into three parts according to the subject matter they address: research and development, use, and governance. Before the principles are presented, a short introduction restates that the development of AI concerns the future of society as a whole, of all humankind and of the environment - a view shared not only by the EU but also by the US and all other member states of the OECD.

Looking at the eight principles listed for research and development, one finds many similarities with the Ethics Guidelines. There is consensus that AI should do good, work in the service of humanity and be ethical. The Beijing principles for research and development also state that AI should be open, responsible, diverse and inclusive, and that its risks should be controlled. All of these principles appear, sometimes under slightly different terms, in the seven key requirements set out in the Ethics Guidelines.

On the face of it, the EU and China would appear to share an almost identical vision of the principles that should be taken into account when developing and deploying AI. While there are indeed many similarities in their approaches, especially as regards the big, overarching goals, such as ensuring that AI works for the wellbeing of society, a careful look reveals important differences.

One such difference is the importance given to human rights. While the Ethics Guidelines have the EU fundamental rights framework at their core, repeatedly restating its importance, the Beijing AI Principles make no reference to human rights whatsoever. This also sets them apart from the approach taken by the US, described below.

Differences are also bound to appear in the interpretation of vague terms such as “overall wellbeing of the society”, “robustness” or “transparency”. While it is clear that both the EU and China agree on the importance of these notions, it seems likely that differences will arise in the way they are defined. This can already be seen in the different approaches they take to AI-based credit scoring.[1] While such scoring is already being implemented in China, the AI-HLEG has called for a complete ban on AI-enabled mass-scale scoring of individuals and for the introduction of very clear and strict rules on surveillance for national security and other purposes claimed to be in the public or national interest, in line with EU regulation and case law.

The approach of the United States and the OECD AI principles

The US’s strategy on AI was released in February 2019 in the form of an executive order signed by the President and titled “Maintaining American Leadership in Artificial Intelligence”. The document covers the objectives the US has in this area, gives some details about its approach to regulation and includes a discussion of AI and the American workforce. It does not, however, list any ethical principles similar to the ones described above.

To find those, one must look at the website launched by the US government in March 2019, AI.gov. One of the five sections of the website, titled “AI with American Values”, lists, inter alia, freedom, guarantees of human rights and the rule of law as fundamental American values that American AI must reflect. More specifically, the same section calls for AI that is understandable and trustworthy, robust and safe, and which takes into account workforce impacts.

The notion of trustworthy AI - a key concept in the EU’s approach - therefore makes an appearance in the American framework as well. The same applies to understandability, robustness and safety, three characteristics that the US, China and the EU all agree AI should have.

The devil will, as they say, once again be in the details. Despite the agreement on these key terms, differences are bound to appear when discussing what they actually mean in this context and how they are to be implemented.

The US has also adopted and enthusiastically praised the AI principles released by the OECD, noting that they cover many of the issues addressed by the American AI strategy and therefore sound ‘familiar’. A similar feeling of familiarity will probably arise when comparing the Ethics Guidelines and the Beijing Principles, given the similarities already discussed.

The OECD Recommendation identifies five principles “for the responsible stewardship” of AI. In the view of the 42 countries which adopted the Recommendation, AI should benefit people and the planet, be designed in a way that respects the rule of law and human rights, and be transparent, understandable and robust. Further, those who develop, deploy and operate AI-based systems should be held accountable for their observance of these principles.

Perhaps unsurprisingly, given the said similarities and the number of Member States which have adopted them, these principles also have the backing of the European Commission, according to the OECD. The fact that they have been so widely adopted will no doubt be good news for those who want to see a global consensus on AI and who want to ensure that AI’s power can be harnessed for good. The same people will, however, be left wondering whether this consensus and apparent cohesiveness will finally lead to a unified international approach to AI and, if so, on what terms.

Conclusions

While it is encouraging and commendable that so many states embrace the idea of developing and deploying only AI that is safe and works for the benefit of society as a whole, this remains a rather vague and overly general goal. Further, reading all these various lists of principles, one may be left wondering whether there can be too much of a good thing.

Despite sending positive messages and foreshadowing important developments, these numerous lists of principles, however similar they may seem, cannot replace a single universal list to serve as a baseline on which states and other stakeholders could build. Such a list could take, for example, the form of a United Nations (UN) convention, or could draw inspiration from the UN Guiding Principles on Business and Human Rights.

Having defined and put forward their own sets of principles, these global players should now come together to analyse with a critical eye the similarities and differences between their approaches and attempt to put forward a common set of principles. This set should include clear and common definitions of key concepts such as “robust AI”, “transparent AI” and “understandable AI”. Given the similarities presented so far and the expressed desire to cooperate, this should not be a “mission impossible”. What will no doubt be challenging, however, is the process of moving from agreement on hopeful and catchy but very broad terms to specific, narrow, binding rules.

The international community would also do well to remember that the challenge of harnessing AI for good is a pressing one and that the time available to address it is limited. All things considered, it is therefore all the more encouraging to see different states agreeing on certain principles, however broad, and expressing their interest in cooperating and working together.

The EU will certainly play an important role in the process of putting together an international approach to AI, all the more so because of the Ethics Guidelines and the legislation it is preparing on the matter. In its bid to create “European AI”, the EU should not miss the opportunity to study and learn from what others are doing and to try to build bridges where possible.

 

[1] For a more detailed discussion of China’s social credit system, see Rogier Creemers, ‘China’s Social Credit System: An Evolving Practice of Control’ (9 May 2018), available at SSRN: https://ssrn.com/abstract=3175792 or http://dx.doi.org/10.2139/ssrn.3175792; and Yu-Jie Chen, Ching-Fu Lin and Han-Wei Liu, ‘“Rule of Trust”: The Power and Perils of China’s Social Credit Megaproject’ (30 April 2018), Columbia Journal of Asian Law, Vol. 32, No. 1, 2018, pp. 1-36, available at SSRN: https://ssrn.com/abstract=3294776.

 

 

 
