Document Type

Book Chapter

Publication Date



Oxford University Press


The convergence of robotics technology with the science of artificial intelligence (or AI) is rapidly enabling the development of robots that emulate a wide range of intelligent human behaviors.1 Recent advances in machine learning techniques have produced significant gains in the ability of artificial agents to perform or even excel in activities formerly thought to be the exclusive province of human intelligence, including abstract problem-solving, perceptual recognition, social interaction, and natural language use. These developments raise a host of new ethical concerns about the responsible design, manufacture, and use of robots enabled with artificial intelligence, particularly those equipped with self-learning capacities.

The potential public benefits of self-learning robots are immense. Driverless cars promise to vastly reduce human fatalities on the road while boosting transportation efficiency and reducing energy use. Robot medics with access to a virtual ocean of medical case data might one day be able to diagnose patients with far greater speed and reliability than even the best-trained human counterparts. Robots tasked with crowd control could predict the actions of a dangerous mob well before the signs are recognizable to law enforcement officers. Such applications, and many more that will emerge, have the potential to serve vital moral interests in protecting human life, health, and well-being.

Yet as this chapter will show, the ethical risks posed by AI-enabled robots are equally serious, especially since self-learning systems behave in ways that cannot always be anticipated or fully understood, even by their programmers. Some warn of a future where AI escapes our control, or even turns against humanity (Standage 2016); but other, far less cinematic dangers are much nearer to hand and are virtually certain to cause great harms if not promptly addressed by technologists, lawmakers, and other stakeholders. The task of ensuring the ethical design, manufacture, use, and governance of AI-enabled robots and other artificial agents is thus as critically important as it is vast.

Chapter in

Robot Ethics 2.0


Patrick Lin
Keith Abney
Ryan Jenkins


This material was originally published in Robot Ethics 2.0 edited by Patrick Lin, Keith Abney, and Ryan Jenkins, and has been reproduced by permission of Oxford University Press. For permission to reuse this material, please visit

Included in

Philosophy Commons
