Big Data, Analytics, and Technology’s Impact on Society

by Lyria Bennett Moses and Greg Adamson

March 2017

The social implications of technology have been with us for as long as humans have created technology, which is to say as long as we’ve been human.

In Paleolithic times, stone tools could be used to kill game or fellow humans. In Greek mythology, Icarus’ hubris was enabled by technology. In our time, headline revelations about National Security Agency spying, Anonymous’ hacking, and security breaches at Sony, at Target – you name it – no longer shock us.

And now the Internet of Things (IoT) is appearing on the horizon, with the “promise” of ubiquitous sensors and big data analytics to improve our lives. Yet along with it may come opaque algorithms and a growing sense that, perhaps, George Orwell will be proven prescient [1].

It’s been said that technology is neither good nor bad, but neither is it neutral. Technology does indeed have major, often unforeseen or poorly understood implications for society. Granted, this is the stuff of daily conversation – How secure is our data? How private are our conversations? How long before a trove of data defines our lives in the eyes of others using an opaque algorithm?

We would argue that the dynamics of the market may blind some technologists to the implications of their work, while for others, creativity is the driver and reflection is an afterthought. Conversely, policymakers too often do not fully grasp the implications of technological developments, and how these interact with existing laws and policies. Where policymakers make mistakes, there can be a significant impact on the community.

We come to the social implications of technology from two different backgrounds, but our interests intersect where automation and the use of algorithms can produce – or reduce – social value.

Our challenge is to grasp the ethical and legal implications of such tools in potentially sensitive contexts. For instance, governments and agencies are accumulating data on everyone: should algorithms be applied to tease out insights, particularly in the name of preventing crime and terrorism?

To ensure that the use of these tools does not spin out of the control of the democratic society that applies them, we need to ask questions: “What do agencies want from such data?” “What biases are inherent in the algorithms that produce results?” And, perhaps most importantly: “What legal frameworks should society impose to achieve positive, just outcomes?”

At first blush, algorithms just perform automated analysis at high speed, right? But it’s more complicated than that. Not to put too fine a point on it, but a recent op-ed in The New York Times – “Artificial Intelligence’s White Guy Problem” – points out that cultural biases seep into algorithms. To quote briefly from the article:

“Like all technologies before it, artificial intelligence will reflect the values of its creators. So inclusivity matters … Otherwise, we risk constructing machine intelligence that mirrors a narrow and privileged vision of society, with its old, familiar biases and stereotypes … [and] we will see ingrained forms of bias built into the artificial intelligence of the future.”

Evaluation, review, oversight, accountability, and legal frameworks all seem appropriate if, for instance, the use of big data and analytics for profiling terrorism suspects has undesirable impacts on some communities. Another important instance is the use of analytics to calculate “risk assessment scores” for criminal defendants, scores that can inform decisions about bail, parole, and sentencing. As a ProPublica investigation revealed, these tools risk introducing bias against racial minorities [2].

The solution cannot be for those responsible for national security, law enforcement, and criminal justice to ignore tools that may offer useful insights. The problem is not the concept of data analytics but how it is developed, used, understood, and evaluated. We need to make sure that tools are rigorously evaluated against metrics that test not only accuracy and effectiveness but also disparity of impact, alongside the broader moral questions [3]. For example, some variables should be irrelevant to a decision about sentencing even if they correlate with high reoffending rates.
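To make “disparity of impact” concrete, here is a minimal sketch, in Python, of the kind of audit such an evaluation might include. The data, the group labels, and the choice of false positive rate as the fairness metric are our illustrative assumptions (echoing the ProPublica analysis [2]), not a description of any deployed tool:

    # A minimal sketch of a disparate-impact audit for a binary risk tool.
    # All data here are hypothetical; a real evaluation would use validated
    # outcome data and a richer set of metrics.

    def rates(flagged, reoffended):
        """Return (accuracy, false positive rate) for one group."""
        tp = sum(1 for f, r in zip(flagged, reoffended) if f and r)
        tn = sum(1 for f, r in zip(flagged, reoffended) if not f and not r)
        fp = sum(1 for f, r in zip(flagged, reoffended) if f and not r)
        accuracy = (tp + tn) / len(flagged)
        fpr = fp / (fp + tn) if (fp + tn) else 0.0
        return accuracy, fpr

    # Hypothetical audit data per group: was the person flagged "high risk",
    # and did they in fact reoffend?
    groups = {
        "group_a": ([True, True, False, True, False, False],
                    [True, False, False, True, False, False]),
        "group_b": ([True, True, True, False, True, False],
                    [True, False, False, False, True, False]),
    }

    for name, (flagged, reoffended) in groups.items():
        acc, fpr = rates(flagged, reoffended)
        print(f"{name}: accuracy={acc:.2f}, false positive rate={fpr:.2f}")

Similar overall accuracy can mask very different false positive rates between groups, which is one way a tool that looks “effective” on average can still impose a disparate burden.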

So what can we do about the non-neutral societal implications of technology? The first step is to prioritize technology for the benefit of humanity. As well as providing an environment for examining and, where appropriate, promoting technology, technologists have in recent years begun to focus on public policy and the importance of ethics, both professional ethics and ethics in the design and implementation of new technologies.

Since the early 1970s, technology professional organizations have focused on the context around technology. This includes key aspects of the social implications of technology: sustainable development and humanitarian technology; ethics, human values, and technology; technology benefits for all; protecting the planet through sustainable technology; and the future societal impact of technology advances. It also includes organizations actively reaching out to the Science, Technology, and Society community, which comprises researchers from the humanities and social sciences interested in technology and society.

Externally, we can apply our experience in humanitarian and development technology to examine, for instance, the 17 U.N. Sustainable Development Goals and to ask thought-provoking questions: Does a particular technology further a specific goal? Given various technology choices, what are the potential outcomes?

The societal implications of technology are a sprawling, pervasive topic, and a few very large issues – climate change and nuclear weapons, for instance – may never be put back in the bottle. But an effort is underway to revive critical thinking on a pragmatic level, where it could make a difference going forward.

References: 

  1. Mireille Hildebrandt, ‘Smart Technologies and the End(s) of Law’ (Elgar, 2015); Trevor Timm, ‘The government just admitted it will use smart home devices for spying’, The Guardian (9 February 2016), https://www.theguardian.com/commentisfree/2016/feb/09/internet-of-things-smart-devices-spying-surveillance-us-government.
  2. Julia Angwin et al., ‘Machine Bias’, ProPublica (23 May 2016), https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
  3. Lyria Bennett Moses and Janet Chan, ‘Algorithmic prediction in policing: assumptions, evaluation and accountability’, Policing and Society (2016), http://www.tandfonline.com/doi/full/10.1080/10439463.2016.1253695.


Lyria Bennett Moses is an Associate Professor in the Faculty of Law at UNSW Sydney. Lyria’s research explores issues around the relationship between technology and law, including the types of legal issues that arise as technology changes, how these issues are addressed in Australia and other jurisdictions, the application of standard legal categories such as property in new socio-technical contexts, the use of technologically specific and sui generis legal rules, and the problems of treating “technology” as an object of regulation. Lyria is currently a Key Researcher and Project Leader on the Data to Decisions CRC, exploring legal and policy issues surrounding the use of data and data analytics for law enforcement and national security. Lyria is also Chair of the Australia Chapter of the IEEE Society on Social Implications of Technology, Academic Co-Director of the Cyberspace Law and Policy Community, Leader of the Law, Technology and Innovation Research Network at UNSW Law, and a PLuS Alliance Fellow.

Dr. Greg Adamson is an Associate Professor (honorary) at the University of Melbourne and chairs the IEEE Board of Directors ad hoc committee on ethics. He is a past president of the IEEE Society on Social Implications of Technology and consults in cybercrime, blockchain, and digital risk. His research interests include Norbert Wiener and barriers to the implementation of socially beneficial technology.
Editor:

Syed Hassan Ahmed (S’13, M’17) received his B.S. in Computer Science from Kohat University of Science and Technology, Pakistan. He then completed his combined Master’s and Ph.D. in Computer Science and Engineering at the School of Computer Science and Engineering, Kyungpook National University, Korea, in 2014 and 2017, respectively. In 2015, he was a visiting researcher at the Georgia Institute of Technology, Atlanta, USA. Since 2012, he has published over 70 international journal and conference articles, book chapters, and two Springer briefs. From 2014 to 2016, he won the Best Research Contributor award at the workshop on Future Researches of Computer Science and Engineering, KNU, Korea. In 2016, he also won the Qualcomm Innovation Award at KNU. His research interests include sensor and ad hoc networks, cyber-physical systems, vehicular communications, and the Future Internet.

Dr. Hassan has served as a TPC member or reviewer for 50+ international conferences and workshops, including IEEE GLOBECOM, IEEE ICC, IEEE CCNC, IEEE ICNC, IEEE VTC, IEEE INFOCOM workshops, ACM CoNEXT, ACM RACS, and ACM SAC. Furthermore, he has reviewed papers for 25+ international journals, including IEEE Wireless Communications Magazine, IEEE Transactions on Industrial Informatics, IEEE Transactions on Vehicular Technology, IEEE Transactions on Big Data, IEEE Communications Letters, Elsevier Computer Communications, and Computer Networks, among others. He has also gained editorial experience with the Journal of Internet Technology (JIT), IEEE Access, Elsevier FGCS, the Hindawi WCMC journal, IEEE Communications Magazine, and the IEEE Internet Initiative and Future Directions newsletters.