Ethical guidelines for the interaction between people and machines: Can autonomous robots be held legally responsible?

Robots and artificial intelligence (AI) have long been a familiar part of our everyday lives. The interaction between people and machines therefore raises many new ethical questions: what should robots and AI be allowed to do – and what not? To what extent should they be permitted to operate autonomously? A commission working for the German government recently addressed what this means for autonomous driving.
More than 70 years ago, the Russian-American author Isaac Asimov formulated three laws of robotics that have regularly found their way into novels and movies ever since:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Although these rules appear plausible at first glance, they turn out to be problematic on closer inspection. For one thing, the First Law is relatively vague. What exactly does “harm” mean – only physical harm, or psychological and economic harm as well? For another, the laws provide no guidance for situations in which conflicts arise: suppose, for example, that a robot can prevent one person’s death only by accepting the risk that another person will be injured.
A purely hypothetical scenario? Definitely not, in view of current research on self-driving cars. After all, as science journalist Ulrich Eberl puts it, these are nothing more than autonomous robots on wheels. “The interaction between people and machines raises new ethical questions in the age of digitization and self-learning systems,” says German Federal Minister of Transport and Digital Infrastructure Alexander Dobrindt. “Automated driving with vehicle-to-vehicle communication is the current innovation in which the full breadth of that interaction is taking place.” For that reason, an ethics commission led by former Federal Constitutional Court Justice Udo Di Fabio set about finding answers to these questions. In the summer, the commission presented the world’s first ethical guidelines for automated driving.
Key ethical issues of automated driving
The ethics commission formulated 20 propositions in all. The key points are as follows:
- Automated driving with vehicle-to-vehicle communication is ethically advisable if the systems cause fewer accidents than human drivers do.
- In dangerous situations, the protection of human life is always the highest priority.
- In unavoidable accident situations, any classification of people based on personal characteristics is impermissible.
- In every situation, it must be clearly regulated, and recognizable at all times, whether a human being or the computer is driving.
- Who is driving must be documented and recorded, in part to clarify possible questions of liability (see the logging sketch after this list).
- Drivers must always be able to decide for themselves whether to share their vehicle data and how it will be used.
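One of these requirements, documenting who is in control at any given time, lends itself to a concrete illustration. The following is a minimal sketch of how a vehicle might log each handover of control; the field names, file format and trigger values are hypothetical assumptions, not specifications from the commission’s report.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

# Hypothetical record of a single control handover; field names and format
# are illustrative assumptions, not taken from the commission's report.
@dataclass
class ControlHandover:
    timestamp: str   # UTC time of the handover
    controller: str  # "human" or "system" after the handover
    trigger: str     # e.g. "driver_request" or "system_takeover_prompt"

def log_handover(logfile: str, controller: str, trigger: str) -> None:
    """Append one human-readable handover entry to a JSON-lines log file."""
    entry = ControlHandover(
        timestamp=datetime.now(timezone.utc).isoformat(),
        controller=controller,
        trigger=trigger,
    )
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(entry)) + "\n")

# Example: the driver hands control over to the automated system.
log_handover("handover_log.jsonl", controller="system", trigger="driver_request")
```

A persistent, append-only record of this kind is what would later allow liability questions to be answered: for any moment of a trip, it shows whether a person or the computer was driving.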
The commission also reflected on situations that give rise to ethical dilemmas – without really finding a solution: “The technology must be designed in accordance with the state of the art in such a way that critical situations do not arise in the first place, including dilemma situations.” The latter are cases in which an automated vehicle must “decide” which of two incommensurable evils to accept.
The commission did not formulate any more far-reaching rules in this regard. For the time being, the goal is therefore to avoid dangerous situations wherever possible, which automatically avoids the ethical conflicts as well. If an accident can no longer be prevented, carmakers’ priority is to minimize harm through safe braking or evasive maneuvers.
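To make this principle slightly more concrete: a system that must choose among its remaining options could rank them purely by expected physical harm, never by who the affected people are. The following is a purely hypothetical sketch of such a choice; the maneuver names, harm estimates and weights are invented for illustration and are not taken from the commission’s report or any real vehicle software.

```python
# Hypothetical illustration of harm minimization in an unavoidable situation:
# each candidate maneuver carries an estimated physical-harm outcome that is
# deliberately independent of who the affected people are.
candidate_maneuvers = {
    "full_braking": {"estimated_injuries": 2, "estimated_fatalities": 0},
    "evade_left":   {"estimated_injuries": 1, "estimated_fatalities": 1},
    "evade_right":  {"estimated_injuries": 3, "estimated_fatalities": 0},
}

def expected_harm(outcome: dict) -> float:
    # Weight fatalities far more heavily than injuries; the weights are
    # arbitrary placeholders, not values from the commission's report.
    return 100.0 * outcome["estimated_fatalities"] + outcome["estimated_injuries"]

best = min(candidate_maneuvers, key=lambda m: expected_harm(candidate_maneuvers[m]))
print(best)  # -> "full_braking"
```

The point of the sketch is what it leaves out: no attribute of the people involved – age, gender or anything else – enters the comparison, in line with the commission’s ban on classifying people by personal characteristics.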
Driverless cars as part of everyday life
It will likely take some time, however, before driverless cars become a routine part of daily life. The German Association of the Automotive Industry (VDA) has defined six levels of automated driving, from Level 0, in which the driver drives and no assistance system intervenes, to Level 5, in which a driver is no longer needed. A Tesla currently allows driving at Level 3: drivers do not have to monitor the system at all times, but they must remain ready to take over control. A law permitting this in Germany was adopted by the upper and lower houses of the German parliament in the spring.
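The six VDA levels amount to a simple classification, which the sketch below paraphrases in code form. The level names and one-line descriptions are summaries for illustration, not official VDA wording.

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """Six levels of automated driving (descriptions paraphrased)."""
    DRIVER_ONLY = 0          # the driver drives; no assistance system intervenes
    ASSISTED = 1             # individual assistance systems support the driver
    PARTIALLY_AUTOMATED = 2  # the system steers and brakes; the driver monitors constantly
    HIGHLY_AUTOMATED = 3     # no constant monitoring, but the driver must be ready to take over
    FULLY_AUTOMATED = 4      # the system takes full control in specific use cases
    DRIVERLESS = 5           # a driver is no longer needed at all

print(AutomationLevel(3).name)  # -> "HIGHLY_AUTOMATED"
```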
Tests and pilot projects involving the first driverless vehicles are already underway, such as the electric shuttle Olli, which travels on the campus of the Innovation Center for Mobility and Societal Change (InnoZ) in Berlin. The American automotive supplier Delphi wants to sell the first Level 4 production vehicles as early as 2019; at this level, the technology can take full control in specific use cases. From there, it is only a small step to completely autonomous driving.
According to estimates by Deutsche Bank Research, however, fully autonomous driving will not become widely established until 2040 at the earliest. Its analysts cite a number of reasons, including the industry’s long development cycles, the long service life of cars and consumer preferences that have formed over several decades. There are also considerable technical hurdles to overcome, they say, before a highly complex system like road traffic can be automated.
Naturally, the ethics commission’s guidelines cannot be applied one-to-one to other fields that are increasingly being automated, if only because, in many cases, dilemmas involving life and death are unlikely to arise. But they do provide valuable food for thought – about the role of artificial beings in our society, about legal liability for systems that reason on their own, about who owns and controls data, and about living together with other people in general. What if cars one day offer a configuration menu with an option to select which passenger should be given priority for protection in the event of an unavoidable accident?
The report of the ethics commission is available here: https://www.bmvi.de/SharedDocs/DE/Anlage/Presse/084-dobrindt-bericht-der-ethik-kommission.pdf?__blob=publicationFile
Author: Editorial team Future. Customer.
Image: Sergey – Fotolia/Adobe Stock