Artificial Intelligence and Autonomous Systems: Why Principles Matter

by Alan Winfield and Mark Halverson

September 2017

In this article, we trace the development of general principles for artificial intelligence (AI) and autonomous systems, from Asimov to the present day, and the standards that are now emerging from those principles. In the context of this article, a general principle is defined simply as a rule, or guide to action, that embodies an underlying ethical principle.

Asimov’s laws of robotics

Without doubt, the first fully articulated principles of robotics are Isaac Asimov’s now-famous three laws of robotics, which first appeared in his short story Runaround [1]. Asimov’s laws were remarkably prescient; implicit in the three laws is an assumption that robots are autonomous, self-aware and (although the term artificial intelligence didn’t exist at the time) intelligent. Indeed, Asimov himself came to regard his laws as a basis for governing real robot behaviour, writing in 1981[2]:

“I have my answer ready whenever someone asks me if I think that my Three Laws of Robotics will actually be used to govern the behavior of robots, once they become versatile and flexible enough to be able to choose among different courses of behavior. My answer is, ‘Yes, the Three Laws are the only way in which rational human beings can deal with robots — or with anything else.’”

It is perhaps a testament to the enduring influence of Asimov’s laws of robotics that a number of scholars have felt the need to argue that they are unsuitable as a basis for governing real-world robots[3,4,5], although others have argued that – in the absence of any agreed framework of computational ethics – they can at least serve as a starting point for researching ethical robots[6,7]. What is incontrovertible, however, is that Asimov’s laws established the principle that robotics (and by extension intelligent systems) should be governed by principles.
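
Although the laws were conceived as fiction, it is easy to see their appeal as a starting point for computational ethics: they define a strict priority ordering that a machine could, in principle, apply when choosing between candidate actions. The sketch below is ours, not Asimov’s; the Action type and its predicted-outcome fields stand in for a hypothetical outcome-prediction model of the kind explored in [7].

    # A minimal sketch of Asimov's three laws as a strict priority ordering
    # over candidate actions. The outcome fields are hypothetical: a real
    # robot would need a predictive model to estimate them (see [7]).
    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        harms_human: bool      # Law 1: a robot may not injure a human
        disobeys_order: bool   # Law 2: obey orders, unless Law 1 conflicts
        damages_robot: bool    # Law 3: protect itself, unless 1 or 2 conflict

    def choose_action(candidates):
        # Sorting by a tuple of violations is lexicographic, so any Law 1
        # violation outweighs every combination of lower-law violations.
        return min(candidates,
                   key=lambda a: (a.harms_human, a.disobeys_order, a.damages_robot))

    options = [
        Action("carry out order, entering the crowd", True, False, False),
        Action("refuse the order", False, True, False),
    ]
    print(choose_action(options).name)  # -> "refuse the order"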

The Engineering & Physical Sciences Research Council (EPSRC) Principles of Robotics

Although Asimov’s laws of robotics are designed to govern the behaviour of robots, not roboticists, Clarke[3] observed that while “many facets of Asimov’s fiction are clearly inapplicable … the substantive content of the laws could be used as a set of guidelines to be applied during the conception, design, development, testing, implementation, use, and maintenance of robotic systems.”

First published online in 2011, the EPSRC principles of robotics were not just inspired by Asimov’s laws of robotics but deliberately revised them, refocusing the three laws from robots onto roboticists. That revision resulted in not three but five general ethical principles for roboticists[8,9]. They are:

  1. Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.
  2. Humans, not robots, are responsible agents. Robots should be designed and operated as far as is practicable to comply with existing laws and fundamental rights & freedoms, including privacy.
  3. Robots are products. They should be designed using processes which assure their safety and security.
  4. Robots are manufactured artifacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.
  5. The person with legal responsibility for a robot should be attributed.

Importantly, these principles downplay the specialness of robots, treating them as tools and products to be designed and operated within legal and technical standards. Or, as Bryson writes[10], “… that robots are not responsible parties under the law, and that users should not be deceived about their capacities.” In the same essay Bryson argues that the principles are de facto policy: “… the EPSRC principles are of value because they represent a policy … their purpose is to provide consumer and citizen confidence in robotics as a trustworthy technology fit to become pervasive in our society”.

Current Work in General Principles for Framing Ethical Guidelines for Artificial Intelligence and Autonomous Systems

Today, general principles are concerned with all types of artificial intelligence and autonomous systems (AI/AS) and domains of application regardless of whether they are physical robots (such as care robots or driverless cars) or software AIs (such as medical diagnosis systems, intelligent personal assistants, or algorithmic chat bots).

Such principles may be expressed not as single statements of principle, but instead as high-level ethical concerns or questions, together with a set of recommendations for measures to address or ameliorate each of these concerns[11,12]:

  1. Human Rights: How can we ensure that AI/AS do not infringe human rights?
  2. Well-being: How can we ensure that AI/AS prioritise human well-being, given that traditional metrics of prosperity do not take into account the full effect of AI/AS technologies?
  3. Accountability: How can we assure that designers, manufacturers, owners and operators of AI/AS are responsible and accountable?
  4. Transparency: How can we ensure that AI/AS are transparent?
  5. AI/AS Technology Misuse and Awareness of It: How can we extend the benefits and minimize the risks of AI/AS technology being misused?

These general principles reflect a different set of priorities from the EPSRC principles (especially the call for AI/AS to prioritise human well-being), while importantly retaining the human as the locus of responsibility (in accountability and transparency). They also reflect a widely held concern about the potential for misuse of AI/AS, and the need to actively guard against that misuse while extending the benefits.

More recently, the Future of Life Institute published the Asilomar AI Principles[13]: 23 high-level principles organized under the headings Research Issues, Ethics and Values, and Longer-term Issues. In March 2017 the Japanese Society for Artificial Intelligence published a set of nine ethical guidelines[14]; notably – in a nod to Asimov – guideline 9 suggests that AIs must themselves abide by guidelines 1 to 8.

From Principles to Standards

Standards are vital to the modern world; they are no less a part of the infrastructure of human civilisation in the 21st century than roads or airports. Standards formalise ethical principles into a structure that can be used to evaluate levels of compliance with those principles. Ethics therefore underpin standards. But standards sometimes also need teeth, i.e. regulation that mandates that systems are certified as compliant with standards, or with parts of standards. Thus ethics (or ethical principles) are linked to standards, which are in turn linked to regulation[15].

Arguably the first published ethical standard in robotics is BS8611:2016, Guide to the ethical design and application of robots and robotic systems[16]. BS8611 incorporates the EPSRC principles of robotics; it is not a code of practice, but a toolkit with which designers can undertake an ethical risk assessment of their robot or system and mitigate any ethical risks so identified.
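
BS8611’s templates are not reproduced here, but the shape of such an assessment will be familiar from safety engineering: identify an ethical hazard, rate it, and record a mitigation. The sketch below is only an illustration of the kind of register a designer might keep; the hazards, rating scales and likelihood-times-severity scoring are our assumptions, not the standard’s actual method.

    # An illustrative ethical risk register in the spirit of BS8611.
    # Hazards, rating scales and the scoring rule are assumed for the example.
    from dataclasses import dataclass

    @dataclass
    class EthicalHazard:
        description: str
        likelihood: int   # assumed scale: 1 (rare) .. 5 (frequent)
        severity: int     # assumed scale: 1 (negligible) .. 5 (severe)
        mitigation: str

        @property
        def risk(self):
            return self.likelihood * self.severity

    register = [
        EthicalHazard("Care robot fosters over-reliance in its user", 3, 4,
                      "Schedule human contact; limit session length"),
        EthicalHazard("User mistakes the robot for a sentient companion", 4, 2,
                      "Make its machine nature explicit, per EPSRC principle 4"),
    ]

    # Review the highest-rated ethical risks first.
    for h in sorted(register, key=lambda h: h.risk, reverse=True):
        print(f"{h.risk:>2}  {h.description} -> {h.mitigation}")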

New, so-called ‘human’ standards are in development within the IEEE Standards Association. Presently, 11 standards working groups are drafting candidate standards, each formalising one or more ethical principles. To give just one example, IEEE P7001: Transparency in Autonomous Systems is defining a set of measurable, testable levels of transparency for each of several stakeholder groups, including users, certification agencies and accident investigators[17].
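
P7001 was still in draft at the time of writing, so the following sketch should be read only as an illustration of the idea: transparency requirements that are explicit, differ by stakeholder group, and can be checked mechanically. The numeric levels and example requirements are hypothetical, not the working group’s.

    # An illustrative check of per-stakeholder transparency levels, in the
    # spirit of IEEE P7001. Level numbers and descriptions are hypothetical.
    REQUIRED_LEVEL = {
        "user": 2,                   # e.g. plain-language account of behaviour
        "certification agency": 4,   # e.g. access to verification evidence
        "accident investigator": 5,  # e.g. a secure 'black box' data recorder
    }

    def compliant(declared, required=REQUIRED_LEVEL):
        # A system complies if it meets or exceeds each group's required level.
        return all(declared.get(group, 0) >= level
                   for group, level in required.items())

    delivery_robot = {"user": 3, "certification agency": 4,
                      "accident investigator": 3}
    print(compliant(delivery_robot))  # False: short for accident investigators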

It is through such human standards that ethical principles will be embedded in the design of AI and autonomous systems and into the infrastructure of modern life.

References:

  1. Asimov, I., Runaround, Astounding Science Fiction, 1942. Reprinted in I, Robot, Gnome Press, 1950.
  2. Asimov, I., Guest commentary: The Three Laws, Compute! Magazine, Vol 18 No 3, p 18, November 1981. https://archive.org/stream/1981-11-compute-magazine/Compute_Issue_018_1981_Nov
  3. Clarke, R., Asimov’s Laws of Robotics: Implications for Information Technology, IEEE Computer Vol 26 No 12 (December 1993) pp 53-61 and Vol 27 No 1 (January 1994), pp 57-66. http://www.rogerclarke.com/SOS/Asimov.html
  4. Anderson, S., Asimov’s “three laws of robotics” and machine metaethics, AI and Society, Vol 22 No 4, pp 477–493, 2008.
  5. Murphy, R. and Woods, D., Beyond Asimov: The Three Laws of Responsible Robotics, IEEE Intelligent Systems, Vol 24 No 4, 2009.
  6. Etzioni, O., and Weld, D., The First Law of Robotics (a call to arms), AAAI Tech. Rep. SS-94-03, pp 17-23, 1994. http://www.aaai.org/Papers/Symposia/Spring/1994/SS-94-03/SS94-03-003.pdf
  7. Vanderelst, D. and Winfield, A., An architecture for ethical robots inspired by the simulation theory of cognition. Cognitive Systems Research, 2017. https://doi.org/10.1016/j.cogsys.2017.04.002
  8. Winfield, A., Roboethics for Humans, New Scientist, pp 32-33, May 2011.
  9. Boden, M., et al, Principles of robotics: regulating robots in the real world, Connection Science, Vol 29 No 2, pp 124-129, 2017. http://dx.doi.org/10.1080/09540091.2016.1271400
  10. Bryson, J., The Meaning of the EPSRC Principles of Robotics, Connection Science, Vol 29 No 2, pp 130-136, 2017.
  11. IEEE Standards Association, The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, http://standards.ieee.org/develop/indconn/ec/autonomous_systems.html
  12. IEEE, Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems, version 1, IEEE Standards Assoc., 2016. http://standards.ieee.org/develop/indconn/ec/auto_sys_form.html   
  13. Future of Life Institute, Asilomar AI Principles, 2017. https://futureoflife.org/ai-principles/
  14. Japanese Society for Artificial Intelligence, About the Japanese Society for Artificial Intelligence Ethical Guidelines (2017), http://ai-elsi.org/archives/514
  15. Winfield, A., Written evidence submitted to the UK Parliamentary Select Committee on Science and Technology Inquiry on Robotics and Artificial Intelligence, Discussion Paper, Science and Technology Committee (Commons), Website, 2016. http://eprints.uwe.ac.uk/29428/
  16. British Standards Institute, BS8611:2016 Robots and robotic devices: guide to the ethical design and application of robots and robotic systems, ISBN 9780580895302, BSI, London, 2016.
  17. Bryson, J. and Winfield, A., Standardizing Ethical Design for Artificial Intelligence and Autonomous Systems, IEEE Computer Vol 50 No 5, pp 116-119, 2017.

 Alan Winfield is co-chair of the General Principles committee of the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. The committee is working on the overarching principles of the ethical design and use of autonomous and intelligent systems, focusing on the broader issues related to the acceptance of these systems by the general public.

Alan is also Professor of Robot Ethics at the University of the West of England (UWE), Bristol, UK, and Visiting Professor at the University of York. He received his PhD in Digital Communications from the University of Hull in 1984, then co-founded and led APD Communications Ltd until taking up an appointment at UWE, Bristol in 1992. He co-founded the Bristol Robotics Laboratory, where his research is focused on the engineering and science of cognitive robotics; current projects include robots with simulation-based internal models and multi-robot systems in critical environments.

Alan is passionate about communicating research and ideas in science, engineering and technology; he led the UK-wide public engagement project Walking with Robots, which was awarded the 2010 Royal Academy of Engineering Rooke Medal for the public promotion of engineering. Until recently he was director of UWE’s Science Communication Unit. Alan is frequently called upon by the press and media to comment on developments in AI and robotics; recently he was a guest on the BBC national radio series The Life Scientific.

Mark Halverson is the CEO of Precision Autonomy, whose mission is to make unmanned and autonomous vehicles a safe reality. Precision Autonomy operates at the intersection of artificial intelligence and robotics, employing crowdsourcing and three-dimensional augmented reality to allow UAVs and other unmanned vehicles to operate more autonomously. Precision Autonomy has developed an ‘On Purpose’ infrastructure ensuring machines operate in a transparent, predictable, and auditable way, always keeping human needs at the center. Mark has over 25 years of global consulting experience, working with the world’s largest corporations to shape strategies that embrace innovation and disruption. Mark is a regular speaker and sits on multiple AI and robotics ethics panels. He is passionate about embracing innovation to put humans back at the center.

Editor: 

Dr. Steve Jones joined the Center for Information and Communication Sciences faculty in August of 1998. He came to Ball State University (BSU) after completing his doctoral studies at Bowling Green State University, where he served the Dean of Continuing Education, developing a distance-learning program for the College of Technology’s undergraduate Technology Education program. Dr. Jones was instrumental in bringing the new program on board because of his technical background and extensive research in the distance-learning field.

Prior to coming to higher education, Dr. Jones spent over sixteen and a half years in the communication technology industry. He owned his own teleconnect company, providing high-end commercial voice and data networks to a broad range of end users. Dr. Jones provided all the engineering and technical support for his organization, which grew to over twenty employees and two and a half million dollars per year in revenue. After selling his portion of the organization in December 1994, Dr. Jones worked briefly for Panasonic Communications and Systems Company as a district sales manager, providing application engineering and product support to distributors in a five-state area, before beginning his doctoral studies.