Project description

This Major Project addresses a number of legal and ethical issues raised by reliance upon deep neural networks (DNNs) and cognate technologies as decision-making tools in many aspects of society. It aims to foster a community of interest in these and related issues locally, nationally and globally.

Primary participants

Principal Investigators:

Dr Noura Al-Moubayed, Department of Computer Science, noura.al-moubayed@durham.ac.uk

Professor William Lucy, Durham Law School, w.n.lucy@durham.ac.uk 

 

Visiting IAS Fellows: 

Professor Mireille Hildebrandt, Vrije Universiteit Brussel

Professor Joe Tomlinson, University of York

 

Justice and Artificial Intelligence (Epiphany Term 2024)


This project is, first, an interdisciplinary examination of potential and actual sources of injustice within surveillance-cum-recognition and automated decision-making technologies and the DNNs that are their foundation. It is, second, an attempt to eradicate or redress these forms of injustice by both technical and other means. The technical means of redress that we examine draw upon and test recent developments in explainable machine learning, evaluating whether the algorithms by which DNNs make decisions can be developed to explain how those decisions were made. The team will also investigate how the explainable outputs of DNNs could be utilised by domain experts (those with expertise in the regulated field) so as to improve final outcomes. Alongside this technical work, the project examines the possibility of ‘justice audits’ being conducted upon the outcomes of any surveillance-cum-recognition network and of any automated decision process.


The project also situates these particular technological developments within their wider regulatory and socio-cultural context. To that end, it examines the impact such developments might have, or are already having, upon the way regulators conceive of their task, the way agents conceive of themselves in data-driven, quantification-rich contexts, and the way agents interact with the algorithms and DNNs that shape those contexts.


The project has three principal, complementary components. The first deals with context and problems. By the former, the team means to chart the extent of automated decision-making, on the one hand, and of surveillance and facial recognition technologies, on the other, in the UK and related jurisdictions (see Tomlinson 2020 (a); Choudhury 2021). At the same time, the team aims to identify and evaluate some of the challenges that recourse to these technologies presents. The project team proposes to draw on the expertise of a range of participants to this end, mainly from the disciplines of Computer Science, Geography and Law. One research strand will examine the nature of the justice concerns raised by some of these developments. The PIs’ starting point is the truism that the notion of justice is complex, with numerous facets (including distributive, procedural and corrective aspects, among others, and a commitment to equal treatment/equality of standing). Since not every one of these facets need be in play in each area of technological concern, the aim is to determine which aspects of justice are in play when; the hope is to match areas of concern with particular facets of (in)justice.
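By way of illustration only (this is not the project’s own methodology), one facet of distributive (in)justice, unequal rates of favourable outcomes across groups, is often proxied in the fairness literature by a ‘demographic parity’ gap. The short Python sketch below computes that gap on synthetic decision data; all data and group labels in it are assumptions invented for the example.

```python
# A minimal, illustrative sketch of one quantitative proxy for unequal
# treatment: the demographic parity gap of an automated decision system.
# The decisions and group labels below are synthetic assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic decisions (1 = favourable outcome) for two groups.
group = rng.integers(0, 2, size=1000)                  # 0 = group A, 1 = group B
decision = rng.binomial(1, np.where(group == 0, 0.7, 0.5))

# Demographic parity difference: the gap in favourable-outcome rates.
rate_a = decision[group == 0].mean()
rate_b = decision[group == 1].mean()
print(f"Favourable rate, group A: {rate_a:.2f}")
print(f"Favourable rate, group B: {rate_b:.2f}")
print(f"Demographic parity difference: {abs(rate_a - rate_b):.2f}")
```

A metric of this kind captures only one facet of justice; procedural and corrective concerns, for instance, are not reducible to outcome rates, which is precisely why matching areas of concern to particular facets matters.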


The issues constitutive of this first component will be examined in the first workshop, in which the expertise of project members and Fellows will illuminate both the specific issues and some of the broader concerns animating the project. It is anticipated that this workshop will occur early in month 1.


The second component is the broadest of the three and overarches the whole project. Dubbed ‘implications’, it will examine the broad spectrum of changes in personal, social and institutional life that the technologies underpinning automated decision-making, facial recognition, surveillance and the like – what some have called ubiquitous computing or ‘everyware’ – both portend and have already brought about. The team will focus in particular on three apparent changes: first, the ongoing shift in legal-regulatory mindset – from rule-based regulation to technological management – and its implications for human agency (Lucy 2022; Brownsword 2019); second, how the presentation and understanding of self are mediated by these technologies and how, if at all, those processes connect with the quantification and management of data by agents and others; and, third, the ethical and political implications of the continuing entanglement of algorithms and agents’ data attributes (Amoore 2020).


The workshop on the second component will take place in month 2 and will be led by Professor Louise Amoore (on cloud ethics, the self and algorithms), Professor Lucy (on the rise of technological management and the death of rules), Dr Mariann Hardey (on technology dependence and self-tracking) and potential IAS Fellows (on interpreting the outputs of data-driven models).


The project’s third component concerns potential solutions: it examines technical, legal and related ways of ameliorating the justice concerns identified in the first component (see Tomlinson 2020 (b) for some difficulties). The team will explore recent advances in explainable machine learning (Al-Moubayed 2022 (a); Al-Moubayed 2022 (b); Al-Moubayed 2021) and examine whether they provide technical solutions that allow decision-making, surveillance and recognition algorithms to present the rationales behind their decisions. Realising this possibility would enable the quality and fairness of decisions to be challenged by, in the first instance, domain experts interpreting algorithmic outcomes. Furthermore, domain experts’ understanding of the inherently sub-optimal decisions machines make will help legal experts interpret the outputs of explainable technology, thus providing a basis for justice audits of particular decisions, or patterns of decisions, yielded by surveillance and decision-making systems. The workshop on this component will take place towards the end of month 3 and will be led by Professor Toby Breckon (on facial recognition and computer vision bias), Dr Suncica Hadzidedic (on race and ethnicity biases) and Professor William Lucy (on justice audits).
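As a concrete, hedged illustration of the kind of ‘explainable output’ at issue (a minimal sketch, not the team’s own tooling or the methods of the cited papers), the Python fragment below uses scikit-learn’s permutation importance to surface which input features drive a simple classifier’s decisions – the sort of rationale a domain expert could then scrutinise. The model and dataset are stand-ins chosen for brevity.

```python
# A minimal sketch of surfacing a model's decision rationale for a
# domain expert: feature-attribution via permutation importance.
# The classifier and dataset here are illustrative assumptions.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance estimates how much each input feature drives
# the model's predictions -- one simple form of explainable output.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five most influential features for expert scrutiny.
ranked = sorted(zip(data.feature_names, result.importances_mean),
                key=lambda pair: -pair[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Attributions of this kind do not by themselves settle whether a decision was just; they supply the raw material that domain and legal experts would interpret in any justice audit.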


Besides the three workshops, the project hopes to produce other tangible outcomes, including an edited collection of essays, a research grant application and the development of a Knowledge Transfer Partnership (KTP) project.