How far should artificial intelligence be allowed to go?
Hardly any other technological development raises as many ethical questions as artificial intelligence (AI). And the closer it gets to people – in customer dialogue, for instance – the more urgent the calls become for clear rules governing its use. The European Commission responded to these calls in December 2018 with its draft “Ethics Guidelines for Trustworthy AI.”
In his short story “Runaround,” first published in 1942, Russian-American science fiction author Isaac Asimov developed three fictitious laws for how robots should interact with people. In his stories, these three laws set the foundation for the coexistence of machines and their human masters in a vision of the future – one in which artificial intelligence has long been thinking and acting independently. Today, algorithm-based intelligent systems are not nearly that far advanced. And yet, in a certain sense, they seem to be growing more and more human.
As useful and advanced as these systems may be, the necessity of rules for their use is no longer a matter of science fiction. AI not only has the potential to lastingly improve the lives of all of us; it also has the potential to endanger society if it falls into the wrong hands. The ethics debate involves numerous questions: May an intelligent machine (e.g. in the form of a chatbot) deceive people about its nature? To what extent may an algorithm use collected data to assess a person’s creditworthiness? Should an AI system be able to perform a medical procedure on its own authority, without involving a human physician?
To give such questions lasting answers and to develop regulations for the responsible use of AI, an expert group appointed by the European Commission proposed a set of guidelines for the future ethics of artificial intelligence. The proposal was published on December 18, 2018, and public debate on it was expressly encouraged, with the European Commission’s website stating: “Tell us what you think. The European expert group requests your feedback.”
For the good of the people
When it comes to the good of the people (“Ensure that AI is human-centric”), these guidelines are not unlike Asimov’s robot laws from nearly 80 years ago. Aspects such as accountability, data governance, non-discrimination, and individual freedom are examined and – in view of their relationship to artificial intelligence – placed in a broader social context intended to inspire dialogue. Indeed, a lively discussion has developed on the subject since December of last year.
This is an important step toward the future, as confirmed by Sara Stalder, managing director of Konsumentenschutz, the Swiss foundation for consumer protection: “Information ethics – and ethics in general – is ultimately an entrepreneurial task. I’m convinced that information ethics will be an important issue in the coming years. A company’s credibility and attractiveness will be measured in part by its approach to information ethics, depending on the extent to which it is applied and the way data is used.” And this perspective can be extended to Europe as a whole, lending weight to the European Commission’s push for more binding corporate responsibility.
Europe and the United States
But not everyone is convinced by the proposed guidelines. Writing in the Frankfurter Allgemeine Zeitung, science journalist Ranga Yogeshwar criticizes the composition of the expert group: alongside representatives of European companies such as Airbus, Orange, Bayer, and Zalando, it also includes delegates of US corporations such as Google and Amazon. Yogeshwar sees this, at least in part, as an attempt to create a climate that would allow major US corporations to cement their position in Europe – under the cover of a general discussion on ethics: “At least since the implementation of the European General Data Protection Regulation (GDPR) in May 2018, major players such as Google, Facebook and others have had a much harder time doing business on the European market. Maybe Silicon Valley has learned something from this and is now trying to break down a possible European AI barrier ahead of time.”
But whether it involves visionary debates or exerting influence for one’s own interests, one thing is clear: ethical approaches to increasingly intelligent systems will remain a matter of public discussion for some time to come. Today’s examples of AI are still like “savants,” as German computer scientist Jürgen Schmidhuber calls them, but they are constantly increasing their knowledge. And recommendations such as the guidelines proposed by the European Commission are a step in the right direction – even if they won’t serve as the basis for any new robot laws anytime soon.
Author: Editorial team Future. Customer.
Image: © sdecoret – AdobeStock