Europe's Quest For Ethics In Artificial Intelligence


This week a group of 52 experts appointed by the European Commission published extensive Ethics Guidelines for Artificial Intelligence (AI), which seek to promote the development of “Trustworthy AI” (full disclosure: I am one of the 52 experts). This is an extremely ambitious document. For the first time, ethical principles will not simply be listed, but will be put to the test in a large-scale piloting exercise. The pilot is fully supported by the EC, which endorsed the Guidelines and called on the private sector to start using them, with the hope of making them a global standard.

Europe is not alone in the quest for ethics in AI. Over the past few years, countries like Canada and Japan have published AI strategies that contain ethical principles, and the OECD is adopting a recommendation in this domain. Private initiatives such as the Partnership on AI, which groups more than 80 corporations and civil society organizations, have developed ethical principles. AI developers agreed on the Asilomar Principles, and the Institute of Electrical and Electronics Engineers (IEEE) has worked hard on an ethics framework. Most high-tech giants already have their own principles, and civil society has worked on documents, including the Toronto Declaration, which focuses on human rights. A study led by Oxford Professor Luciano Floridi found significant alignment between many of the existing declarations, despite varying terminologies. They also share a distinctive feature: they are not binding, and not meant to be enforced.

The European Guidelines are also not directly enforceable, but they go further than these previous attempts in many respects. They focus on four ethical principles (respect for human autonomy, prevention of harm, fairness, and explainability) and go beyond them, specifying that Trustworthy AI also implies compliance with EU law and fundamental rights (including privacy), as well as a high level of socio-technical robustness. Anyone who wishes to design, train and market a Trustworthy AI system will be asked to carefully consider the risks that the system will generate, and to be accountable for the measures taken to mitigate them. The Guidelines offer a detailed framework to guide such an assessment.

For those looking for strong statements, the Guidelines may not be a great read. You will find no mention of Frankenstein, no fear of singularity, no resounding provisions such as “AI should always be explainable”, “AI should never interfere with humans”, “there should always be a human in the loop”, or “AI should never discriminate”. These statements are intuitively attractive, but are very far from the reality of AI deployment and likely to prove disproportionate when converted into a policy framework.

Users do not need a detailed explanation and understanding of how an AI-enabled refrigerator works, or even how an autonomous vehicle takes ordinary decisions. They need to trust the process that brought them to the market, and to be able to rely on experts who can intervene whenever things go wrong. But users should be entitled to know why they were refused access to a government file, or why someone cut the line ahead of them as the recipient of a subsidy, or a kidney. Likewise, a human in the loop will make no sense in some cases (think of humans sitting at the steering wheel of autonomous cars); yet a human “on the loop”, or a “human in command”, may be required. And while discrimination will often be inevitable because our society is already biased, excessive, unjustified and unlawful discrimination should be outlawed, and prompt redress should be given to the individuals harmed. Importantly, the Guidelines also include examples of “areas of critical concern”, which are most likely to fall short of the requirements of Trustworthy AI: identifying and tracking individuals with AI, deploying covert AI systems, developing AI-enabled citizen scoring in violation of fundamental rights, and using AI to develop Lethal Autonomous Weapons (LAWs).

The concept of Trustworthy AI is still only an “aspirational goal”, in the wording of the High-Level Expert Group. It will be up to the EU institutions to decide in the coming months whether to make it a binding framework, and for which use cases. This may entail the use of hard law (such as amended rules on torts, sector-specific legislation making Trustworthy AI binding in certain contexts, or ad hoc competition rules) as well as softer instruments. Among other initiatives, the EU could decide that all public procurement be limited to Trustworthy AI, or mandate that AI applications in healthcare be trustworthy. There may also be a need for some form of certification to ensure that the new system is correctly implemented, and that information is correctly presented to users.

A different issue is whether this system will help Europe set global AI standards and thereby relaunch its competitiveness. IBM has declared that it will apply the framework across the globe. But given that (1) the United States is considered to provide inadequate privacy protection for end users, and (2) U.S.-based platforms are regularly accused of excessive interference with users’ autonomy and self-determination, Trustworthy AI could also be used to shut the door on non-compliant (or non-European) players in the near future. The expert group that drafted the Guidelines did not discuss any such industrial and trade policy scenarios. But the EC hinted at this possibility by advocating, in a recent official document, the development of ethical, secure and cutting-edge AI “made in Europe”.
