Enslaving the Algorithm: Avoiding the Transparency Fallacy while moving to meaningful Control
6:00 pm to 7:00 pm, 27 June 2018
UCL Laws, Gideon Schreier LT, Bentham House, 4-8 Endsleigh Gardens, London WC1H 0EG
About the event:
Machine learning algorithms are increasingly important to individuals’ lives, but have raised concerns around unfairness, discrimination and opacity. As they are often alleged to be “black boxes” — inscrutable to human analysis and thus defying legal and social challenge — transparency in the form of a legal “right to an explanation” has emerged across jurisdictions as a compelling remedy to shed light on them. We outline the evolution and practical shape of recent explanation rights, including the provisions in the General Data Protection Regulation (GDPR), French administrative law and the draft modernised Council of Europe Convention 108. Individual rights of this sort can be useful, but they are no panacea. The history of privacy law illustrates the unreasonableness and ineffectiveness of placing the burden of investigation and challenge on data subjects. Furthermore, despite emerging techniques, it is still difficult to produce “meaningful information” about how algorithmic decision making operates and might harm users. Transparency may thus produce an illusion of remedy rather than anything substantively helpful to society as a whole: a “transparency fallacy”.
We explore alternative modes of explaining, challenging or preventing bad algorithmic decisions, drawing on a range of different types of governance including impact assessment, “soft law” and judicial review. We suggest that to manage the complex socio-technical challenge of bad algorithmic decision making, we may need to spend less energy expecting individual rights to turn into remedies, and focus more on building and facilitating algorithmic systems that are developed reflexively, overseen in multiple ways, and themselves fairness- and discrimination-aware, while also shifting to more privacy-protective paradigms than the centralised business models common to tech platforms today. Data protection is a strong base to build on, but it is only one of many data-relevant regimes. Avoiding the automation of injustice will require casting a considerably wider disciplinary and societal net.
About the speakers:
Lilian Edwards is a leading academic in the field of Internet law. She has taught information technology law, e-commerce law, and Internet law at undergraduate and postgraduate level since 1996 and has been involved with law and artificial intelligence since 1985. Her current research interests, while broad, revolve around the topics of online privacy and data protection, intermediary liability, digital assets and digital copyright enforcement.
She worked at Strathclyde University from 1986 to 1988 and Edinburgh University from 1989 to 2006. She was Chair of Internet Law at the University of Southampton from 2006 to 2008, and then Professor of Internet Law at the University of Sheffield until late 2010, when she returned to Scotland to become Professor of E-Governance at Strathclyde University, while retaining close links with the renamed SCRIPT (AHRC Centre) at Edinburgh. Since 2011, she has been Chair of E-Governance at Strathclyde University.
She has co-edited (with Charlotte Waelde) three editions of a textbook, Law and the Internet; the third edition appeared in 2009, and a new sole-edited title, Law, Policy and the Internet, will appear in autumn 2018. She won the Barbara Wellberry Memorial Prize in 2004 for work on online privacy. A sole-edited collection of essays, The New Legal Framework for E-Commerce in Europe, was published in 2005. She is Associate Director, and was co-founder, of the Arts and Humanities Research Council (AHRC) Centre for IP and Technology Law (now SCRIPT). Edwards has consulted inter alia for Google, Symantec, McAfee, the EU Commission, the OECD, and WIPO. Edwards co-chairs GikII, an annual series of international workshops on the intersections between law, technology and popular culture.
Since 2012, Edwards has been Deputy Director of CREATe, the Centre for Creativity, Regulation, Enterprise and Technology, a £5m Research Councils UK research centre about copyright and business models. She is also a frequent speaker in the media and has been invited to lecture in many universities in Europe, Asia, America, Australasia and Africa.
Michael Veale is a PhD researcher in responsible public sector machine learning at University College London, specialising in the fairness and accountability of data-driven tools in the public sector, as well as the interplay between data-centric technologies and data protection law. His research has been cited by international bodies and regulators and in the media, as well as debated in Parliament. Michael sits on the Advisory Council of the Open Rights Group, has acted as a consultant on machine learning and society for the World Bank, the Royal Society and the British Academy, and previously worked on IoT, health and ageing at the European Commission. He tweets at @mikarv.