Ethics, Policymakers, and Technology Development

Recent events illustrate the societal implications of ethical lapses

by Greg Adamson

November 2017

Technology’s impact on society has likely been fodder for thought since humans began making stone tools. But in the 21st century, technology development is taking place at greater speeds and with greater potential impacts on society than ever before.

At this point, we have enough experience and historical precedent to enable us to not only consider technology’s potential impacts, but to shape those impacts via public policy.

In fact, specific technologies – artificial intelligence (AI) comes to mind – are being developed faster than we can understand their implications and shape their future. Not even the technologists involved know where AI is heading or can forecast its societal implications.

Honest acknowledgement of the limits of our understanding should be central to public policy discussions of emerging technologies. And wise public policy development will be facilitated by a well-informed public.

Unfortunately, recent examples abound in which ethical approaches to existing or emerging technologies were absent or consciously abandoned, with significant societal impacts.

The urgency of addressing known risks

Technology failure and the way technology was used – or misused – were contributing factors in the BP Deepwater Horizon oil rig disaster in 2010 [1]. That event killed 11 people, spewed more than 200 million gallons of oil into the Gulf of Mexico – the largest such spill in history – and cost BP more than $44 billion in fines and reparations, not to mention the decimation of the Gulf’s ecosystem and the livelihoods of the coastal communities that depend on it.

In 2014 a U.S. District Court found BP guilty of gross negligence and reckless conduct, handing down the largest fine in U.S. corporate history. Yet a 2011 U.S. government report had already warned that “absent significant reform in both industry practices and government policies, [such an incident] might well recur.”

Would it not be in everyone’s best interest to ensure that policies governing industry practices change so as to preclude another such devastating spill?

In a different but related ethical lapse, Volkswagen (VW) was found in 2015 to have installed software – a so-called defeat device – in 11 million of its diesel cars built from 2009 to 2015. The software ensured the cars performed as advertised during government emissions testing, while under actual road conditions they emitted up to 40 times the legal limit of nitrogen oxides [2]. VW has spent more than $18 billion in the U.S. alone on fines, settlements and recalls to make amends.

Unfortunately, the attention to VW’s misdeeds revealed discrepancies between actual emissions and claimed compliance with emissions standards among many other leading vehicle makers around the world. VW was just the tip of an iceberg of ethical lapses of global significance. Advocates of open source software point out that making open source a standard industry practice would allow third parties to audit companies’ compliance with environmental regulations – the kind of logic such an audit would look for is sketched below.
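To make the mechanism concrete, the following is a minimal, purely hypothetical sketch (in Python) of the kind of conditional logic a defeat device relies on. The function names, sensor inputs and thresholds are invented for illustration and are not Volkswagen’s actual code; the point is that such logic is simple to write and, if the source were open to third-party auditors, comparatively easy to spot.

```python
# Hypothetical illustration only -- not Volkswagen's actual code.
# A "defeat device" infers from sensor readings that the car is running an
# official test cycle and enables full emissions controls only then.

def looks_like_test_cycle(steering_angle_deg: float,
                          wheel_speed_kmh: float,
                          minutes_since_start: float) -> bool:
    """Heuristic: on a dynamometer the wheels turn at speed while the
    steering wheel stays centred -- a pattern rarely seen in real driving."""
    return (steering_angle_deg < 1.0
            and wheel_speed_kmh > 20.0
            and minutes_since_start < 30.0)

def emissions_control_mode(steering_angle_deg: float,
                           wheel_speed_kmh: float,
                           minutes_since_start: float) -> str:
    if looks_like_test_cycle(steering_angle_deg, wheel_speed_kmh, minutes_since_start):
        return "full_nox_treatment"    # meets the advertised limits
    return "reduced_nox_treatment"     # better performance, far higher emissions
```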

In a way, these two scandals were 20th-century in nature: we understood the technologies involved and had agreed on how to limit their deleterious environmental impacts, yet ethical lapses made a mockery of society’s best efforts. Nevertheless, that awareness had already created policy and legal obligations, with resulting penalties – which underscores how important awareness campaigns are.

Beyond these examples of known technologies being misused or lacking fail-safe measures, the case of AI is illustrative of circumstances in which we truly do not fully understand where the technology is heading and, therefore, cannot predict its benefits, costs and risks.

Some suggest we should throw caution to the wind and just see what happens. That laissez-faire approach seems foolhardy, given recent catastrophic events such as the BP and VW cases cited above. Perhaps it would be wiser to simply acknowledge the obvious: in many cases – AI included – we do not know where the technology is heading or what its second- and third-order impacts will be.

You may recall the case of “Tay,” the Microsoft-created AI “chatterbot” that used a self-learning algorithm to respond to Twitter users [3]. Within 16 hours of its March 2016 launch it had to be taken down: malicious users had bombarded it with offensive, inflammatory language, leading Tay to spew similarly atrocious tweets.
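The general failure mode is easy to illustrate. The toy bot below – an assumption about the mechanism, not Microsoft’s actual design – “learns” by storing whatever users say and replaying it later; with no filter between input and training data, a coordinated group can poison its output within hours.

```python
import random
from collections import defaultdict

class NaiveChatBot:
    """A toy bot that learns replies solely by remembering what users said."""

    def __init__(self):
        self.learned_replies = defaultdict(list)

    def learn(self, prompt: str, user_reply: str) -> None:
        # No content filter: whatever users type becomes training data.
        self.learned_replies[prompt.lower()].append(user_reply)

    def respond(self, prompt: str) -> str:
        candidates = self.learned_replies.get(prompt.lower())
        return random.choice(candidates) if candidates else "Tell me more!"

# If trolls flood learn() with abusive text, respond() will echo it back.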

Of greater seriousness is the controversy over COMPAS, an AI-supported, automated risk assessment tool used by judges when sentencing convicted criminals [4]. Backers of COMPAS claim that its algorithms can predict the future risk associated with a specific individual, though they caution that it should not determine specific sentences. Upon closer examination, however, it turns out that the software bases its risk assessments on group data and therefore cannot predict a specific individual’s behavior. A convict who received an unusually harsh sentence from a judge guided by COMPAS has challenged its use, and the case may reach the U.S. Supreme Court.
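A minimal sketch may help clarify the group-data point. The categories and rates below are invented for illustration and bear no relation to the real COMPAS model; they simply show that a score looked up from group statistics assigns identical risk to every member of a group, and so carries no information about any one individual.

```python
# Made-up base rates for illustration only -- not real data and not COMPAS.
reoffence_rate_by_group = {
    ("age_18_25", "two_or_more_priors"): 0.42,
    ("age_40_plus", "no_priors"): 0.08,
}

def group_risk_score(age_band: str, priors_band: str) -> float:
    """Return the (assumed) historical reoffence rate for the person's group."""
    return reoffence_rate_by_group.get((age_band, priors_band), 0.20)

# Two different defendants with the same group attributes get the same score,
# whatever their individual circumstances may be.
score_a = group_risk_score("age_18_25", "two_or_more_priors")
score_b = group_risk_score("age_18_25", "two_or_more_priors")
assert score_a == score_b  # the score contains no individual-level information
```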

As we approach these unknowns, the policy implications are challenging. It is unclear whether regulators will be able to keep up with rapidly changing threats:

  • Where does responsibility reside?
  • What constitutes reasonable knowledge, of the kind that creates a responsibility to act?

An important approach to these questions is the “precautionary principle”, which requires proportionality between risk and benefit. For example, a technology that promised to increase global food production by 10% but could, in the worst case, destroy all food production would be avoided – see the sketch below.
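The following toy decision rule is one possible way to express that proportionality test; the thresholds and the formulation are assumptions made for illustration, not an established standard.

```python
def precautionary_check(expected_benefit: float,
                        worst_case_loss: float,
                        catastrophic_threshold: float = 0.9) -> str:
    """Toy proportionality test: a bounded benefit cannot justify a
    catastrophic worst case, however unlikely that worst case seems."""
    if worst_case_loss >= catastrophic_threshold:
        return "avoid or redesign"
    return "proceed with monitoring" if expected_benefit > worst_case_loss else "avoid"

# The example from the text: a 10% gain in food production weighed against the
# possible loss of all food production (values normalised to the range 0..1).
print(precautionary_check(expected_benefit=0.10, worst_case_loss=1.0))
# -> "avoid or redesign"
```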

More generally, we see the importance of feedback loops, both technical and social (e.g., monitoring and whistleblowing), so that we learn as quickly as possible when new threats are emerging.

Forward-looking efforts

Three factors are likely to drive consensus and a sense of urgency that the societal implications of existing and emerging technologies must be addressed.

First, the examples just cited – and, unfortunately, many others – make quite clear that failure to meet this challenge carries significant impacts and costs for society. Second, those impacts and costs have become more pervasive and global in nature. Third, policymakers have begun to impose staggering penalties on errant parties, who in turn pass the cost along to consumers.

Failing to acknowledge how little we know about the implications of emerging technologies that can have pervasive, real-world consequences is therefore irresponsible. Acknowledging those limits is simply imperative.

We are fortunate that the IEEE, the world’s largest professional society, together with its many initiatives and technical societies, has a heritage of technology-centric ethics and a new priority: addressing the ethics of emerging technologies in order to shape public policy for the benefit of humanity.

References 

  1. Charles K. Ebinger, “6 years from the BP Deepwater Horizon oil spill: What we’ve learned, and what we shouldn’t misunderstand,” Brookings Institution, 4/20/16, https://www.brookings.edu/blog/planetpolicy/2016/04/20/6-years-from-the-bp-deepwater-horizon-oil-spill-what-weve-learned-and-what-we-shouldnt-misunderstand/
  2. Jennifer Chu, “Study: Volkswagen’s excess emissions will lead to 1,200 premature deaths in Europe,” MIT News, 3/3/17, http://news.mit.edu/2017/volkswagen-emissions-premature-deaths-europe-0303
  3. John West, “Microsoft’s disastrous Tay experiment shows the hidden dangers of AI,” Quartz, 4/2/16, https://qz.com/653084/microsofts-disastrous-tay-experiment-shows-the-hidden-dangers-of-ai/
  4. Melissa Hamilton, “We use big data to sentence criminals. But can the algorithms really tell us what we need to know?” The Conversation, https://theconversation.com/we-use-big-data-to-sentence-criminals-but-can-the-algorithms-really-tell-us-what-we-need-to-know-77931

 

Greg Adamson is an associate professor at the Melbourne School of Engineering, University of Melbourne, Australia. He is also an IEEE senior member, an executive committee member of The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, and past president of the IEEE Society on Social Implications of Technology.

 

Editor: 

Dr. Mohammad Saud Khan is a Lecturer (Assistant Professor) in Strategic Innovation and Entrepreneurship at Victoria University of Wellington, New Zealand. Before taking up this role, he was a postdoctoral researcher at the University of Southern Denmark. With a background in Mechatronics (Robotics & Automation) Engineering, he worked as a field engineer in the oil and gas industry with Schlumberger Oilfield Services in Bahrain, Saudi Arabia and the United Kingdom. In addition to several consulting assignments, his corporate experience includes a project on “Open Innovation” with Agfa-Gevaert, Belgium. Saud’s research has largely focused on entrepreneurial teams within high-tech business incubators. His work has appeared at several reputed conferences (such as the Academy of Management Annual Meeting and the Babson College Entrepreneurship Research Conference) and in journals (such as Creativity and Innovation Management and Management Decision). His current research interests include innovation management (especially the managerial implications of novel technological paradigms such as big data, IoT and 3D printing) and technology and digital (social media) entrepreneurship.