A humanoid robot. Photograph: Siu Chiu/Reuters

Do no harm, don't discriminate: official guidance issued on robot ethics


Robot deception, addiction and possibility of AIs exceeding their remits noted as hazards that manufacturers should consider

Isaac Asimov gave us the basic rules of good robot behaviour: don’t harm humans, obey orders and protect yourself. Now the British Standards Institution has issued a more official version aimed at helping designers create ethically sound robots.

The document, BS 8611 Robots and robotic devices, is written in the dry language of a health and safety manual, but the undesirable scenarios it highlights could be taken directly from fiction. Robot deception, robot addiction and the possibility of self-learning systems exceeding their remits are all noted as hazards that manufacturers should consider.

Welcoming the guidelines at the Social Robotics and AI conference in Oxford, Alan Winfield, a professor of robotics at the University of the West of England, said they represented “the first step towards embedding ethical values into robotics and AI”.

“As far as I know this is the first published standard for the ethical design of robots,” Winfield said after the event. “It’s a bit more sophisticated than Asimov’s laws – it basically sets out how to do an ethical risk assessment of a robot.”

The BSI document begins with some broad ethical principles: “Robots should not be designed solely or primarily to kill or harm humans; humans, not robots, are the responsible agents; it should be possible to find out who is responsible for any robot and its behaviour.”

It goes on to highlight a range of more contentious issues, such as whether an emotional bond with a robot is desirable, particularly when the robot is designed to interact with children or the elderly.

Noel Sharkey, emeritus professor of robotics and AI at the University of Sheffield, said this was an example of where robots could unintentionally deceive us. “There was a recent study where little robots were embedded in a nursery school,” he said. “The children loved it and actually bonded with the robots. But when asked afterwards, the children clearly thought the robots were more cognitive than their family pet.”

The code suggests designers should aim for transparency, but scientists say this could prove tricky in practice. “The problem with AI systems right now, especially these deep learning systems, is that it’s impossible to know why they make the decisions they do,” said Winfield.

Deep learning agents, for instance, are not programmed to perform a specific task in a set way. Instead, they learn the task by attempting it millions of times until they evolve a successful strategy – sometimes one their human creators had not anticipated and do not understand.
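That trial-and-error process can be illustrated with a toy sketch. The short Python program below is not taken from the BSI document or from any real robot; it uses simple tabular Q-learning as a stand-in for the far larger deep learning systems the article describes, and the corridor task, reward values and learning parameters are all invented for the example. The point is only that the agent's strategy emerges from repeated attempts and accumulated numbers, not from an explicit instruction.

# A minimal, illustrative sketch of trial-and-error learning (tabular
# Q-learning) on a toy task: an agent on a five-cell corridor must learn
# to walk right to reach the goal. All values here are assumptions made
# for the example.
import random

N_CELLS = 5          # corridor cells 0..4, goal at cell 4
ACTIONS = (-1, 1)    # step left or step right
q = {(s, a): 0.0 for s in range(N_CELLS) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

for episode in range(1000):             # attempt the task many times
    state = 0
    while state != N_CELLS - 1:
        # Explore occasionally, otherwise exploit the best-known action.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_CELLS - 1)
        reward = 1.0 if next_state == N_CELLS - 1 else -0.01
        # Nudge the estimate of how good this action was in this state.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# The learned "strategy": the action the agent now prefers in each cell.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_CELLS - 1)}
print(policy)   # expected: every cell prefers +1, i.e. move right

Running the sketch prints a preference for "move right" in every cell, yet nowhere in the code is that answer spelled out; it is discovered through repetition. Scaled up to millions of parameters and real-world data, the same property is what makes it hard to say why a trained system decided what it did.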

The guidance even hints at the prospect of sexist or racist robots, warning against “lack of respect for cultural diversity or pluralism”.

“This is already showing up in police technologies,” said Sharkey, adding that technologies designed to flag up suspicious people to be stopped at airports had already proved to be a form of racial profiling.

Winfield said: “Deep learning systems are quite literally using the whole of the data on the internet to train on, and the problem is that that data is biased. These systems tend to favour white middle-aged men, which is clearly a disaster. All the human prejudices tend to be absorbed, or there’s a danger of that.”

In future medical applications, there is a risk that systems might be less adept when diagnosing women or ethnic minorities. There have already been examples of voice recognition software being worse at understanding women, or facial recognition programmes not identifying black faces as easily as white ones.

“We need a black box on robots that can be opened and examined,” said Sharkey. “If a robot is being racist, unlike a police officer, we can switch it off and take it off the street.”

The document also flags up broader societal concerns, such as “over-dependence on robots”, without giving designers a definitive steer on what to do about these issues.

“One form of this is automation bias: when you work with a machine for a certain length of time and it gives you the right answers, you come to trust it and become lazy. And then it gives you something really stupid,” said Sharkey.

Perhaps with an eye on the more distant future, the BSI also alerts us to the danger of rogue machines that “might develop new or amended action plans … that could have unforeseen consequences” and the potential “approbation of legal responsibility” by robots.

Dan Palmer, head of manufacturing at BSI, said: “Using robots and automation techniques to make processes more efficient, flexible and adaptable is an essential part of manufacturing growth. For this to be acceptable, it is essential that ethical issues and hazards, such as dehumanisation of humans or over-dependence on robots, are identified and addressed.

“This new guidance on how to deal with various robot applications will help designers and users of robots and autonomous systems to establish this new area of work.”
