Cloud Ethics
The humanities are under assault from all sides, and yet machine learning is apparently the future of our economy.
Algorithms are among the most useful tools on the planet right now. All online searches use them, they basically run our smartphones, and emails wouldn’t know how to get to their destinations without them; neither would our delivery drivers.
But despite their elegance, usefulness and almost always good intentions, an avalanche of unintended consequences has resulted in some seriously bad press, and we need to have a discussion about algorithmic ethics. Those consequences include recommender algorithms pushing ever more extreme content to social media users, the promulgation of fake news online and the influencing of national elections, to name a few.
With their powerful ability to transform contemporary society and make decisions that affect our lives, how do we manage what seem like the actions of invisible puppet masters but really have no human author? And what can we do to make sure these systems are fair, while acknowledging the uncertainty that will always be part of our relationship with algorithms?
Hear political geographer Louise Amoore in conversation with UNSW law and technology expert Fleur Johns as they explore the ethics and politics of algorithmic systems.
The Centre for Ideas’ new series of international conversations brings the world to Sydney. Each digital event brings a leading UNSW thinker together with their international peer or hero to explore inspiration, new ideas and discoveries.
Transcript
Ann Mossop: Good evening. I'm Ann Mossop, from the UNSW Centre for Ideas. And I'm very happy to welcome you to this event in our International Conversation series. Writers and thinkers from around the world join leading UNSW researchers to talk about inspiration, new ideas and discoveries. We're coming to you from our homes in Sydney, and I'd like to acknowledge the Gadigal people of the Eora nation, the traditional custodians of the land from which I'm speaking, and pay my respects to their elders past and present, and to Aboriginal and Torres Strait Islander people who are with us tonight. Our conversation today is titled Cloud Ethics. It brings together political geographer Louise Amoore, from Durham University, and Fleur Johns from UNSW Sydney, for a conversation about the ethics of machine learning algorithms. How will we manage the powerful ability of algorithms to transform our lives? Can we manage them at all? Or do we need to acknowledge the uncertainty that will always be part of our relationship with algorithms? Our host tonight, Professor Fleur Johns is from the Faculty of Law and Justice at UNSW Sydney and works in the areas of public international law, legal theory, and law and technology.
Fleur Johns: Good evening, everyone, and welcome to tonight's event, Cloud Ethics. My name is Fleur Johns and I'm a Professor of Law at UNSW Sydney. It's my great pleasure to welcome Louise Amoore, professor of political geography from the University of Durham in the UK. Louise's research and teaching focus on geopolitics, technology and security. And she is particularly interested in how contemporary forms of data and algorithmic analysis are changing the pursuit of state security, possibilities for politics and ideas of society. Louise's work has been supported by a series of very prestigious grants. Most recently, Louise was awarded a five-year Advanced Grant by the European Research Council for a project entitled Algorithmic Societies: Ethical Life in the Machine Learning Age. Louise is the author and editor of many celebrated books that I recommend to you, most recently Cloud Ethics: Algorithms and the Attributes of Ourselves and Others, of which I have a well-loved copy here. That has been very widely and positively reviewed, including having been described as essential reading for anyone interested in the ethical and political analysis of our digital condition. And it is this book that will be the focus of our conversation tonight. So welcome, Louise, thank you so much for joining me.
Louise Amoore: Thank you so much Fleur, and good evening, everyone. It's great to join you all, even virtually, thank you.
Fleur Johns: So I wanted to start with a few questions that map out the terrain on which you're intervening. Some of our listeners will be very familiar with these debates and this terrain, but others, perhaps less so. So I wanted to be sure that we cover some basics at the start. Cloud Ethics focuses on machine learning algorithms and deep neural networks in particular, and how these are implemented in a very wide range of practices from policing to surgery. So could you briefly explain, to a lay audience, what a machine learning algorithm is and what a deep neural network is, and how they work?
Louise Amoore: Thanks so much Fleur. You know, in so many ways, this is much more than a definitional question that you're asking, actually. I think when I was writing the book, I was very concerned to try to also historically situate why it is that now, in the 21st century, we are seeing so much use of deep neural networks around us, and machine learning becoming very much part of public debate. So what I was also concerned to do was to trace part of the history of that debate. And I focus very much in the archives on the 20th century, when there were initially discussions around, well, what about machines? What's the role of the machine in terms of thinking and in terms of learning? And there's a particular moment that I describe in the book, a BBC radio show in 1952 in which Alan Turing participated, alongside a philosopher and a neurosurgeon, and they were debating, what would a machine need to do to be able to think? And what's so wonderful about that radio debate is that it shows, actually, on this question of machine learning, that they concluded that what would need to happen is that the machine would need to learn something beyond what it had been taught by the human. And this is actually something that they could agree about, you know, across the mathematics, the philosophy and the, kind of, neuroscience: that really, for machine learning to take place, the machine would need to learn something beyond what it had been taught. So something beyond the rules. And they spent quite a lot of time talking about how the algorithms would be seizing from their environment something that was useful. And I think that, for me, is the moment where we can also see what the difference is for the deep neural network. Because a deep neural network, which is a subset of machine learning, has a very particular way of seizing from its environment. And of course, in our times, that's an environment that's very much about data. And we often talk about that in terms of big data. So a deep neural network is learning from its exposure to data. It's mapping a function or a series of functions between inputs and outputs. And through its neurons in the neural network, it's, if you like, subdividing the complex task of recognising an image or recognising a word in a sentence, and it is breaking that down into a series of processes, so each of the layers in the neural network is tackling part of that function, a series of sub-functions. So when we hear deep, in deep learning, or deep, in deep neural networks, the imaginary of the depth is actually the additional layers that exist within the model, in the neural network itself. And so you'll hear the computer scientists often talking about the difference between shallow and deep networks being that the deep networks are able to deal with these more complex problems. So for me, that's one of the reasons why we're seeing so much attention to the public debates about this in the 21st century: that some of the most deep and intractable social and political problems, from diagnosis and medicine, to managing the pandemic, to thinking about national borders and security, that those deep political problems are beginning to be addressed through deep learning. So you begin to see a kind of alliance between the depth in the neural network and depth in terms of society. So that's also one of the things I'm interested in probing a little bit more in my current project, you know: what is it about?
What is it about the deep neural network that's become such an attractive solution for tackling difficult problems?
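To make the layered picture described above concrete, here is a minimal illustrative sketch in Python using NumPy. It is not code from the book or the research project: the weights are random placeholders rather than anything learned from data, and the point is simply that each layer computes one sub-function, with the network's single numeric output being the composition of those sub-functions.

```python
# A minimal sketch of the layered structure of a deep neural network:
# each layer computes a sub-function, and the output is their composition.
# Weights here are random placeholders; a real model would learn them from data.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights, bias):
    """One layer: a weighted sum of inputs plus a bias, passed through a non-linearity (ReLU)."""
    return np.maximum(0.0, x @ weights + bias)

x = rng.normal(size=(1, 8))                   # an input: e.g. 8 features "seized" from the environment
w1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
w2, b2 = rng.normal(size=(16, 16)), np.zeros(16)
w3, b3 = rng.normal(size=(16, 1)), np.zeros(1)

h1 = layer(x, w1, b1)                         # sub-function 1
h2 = layer(h1, w2, b2)                        # sub-function 2
score = 1 / (1 + np.exp(-(h2 @ w3 + b3)))     # final sub-function, squashed to a value in (0, 1)

print(score.item())                           # a single numeric output
```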
Fleur Johns: That's a great way of introducing the book, because I think in many ways the book is trying to dwell in that beyond that you were alluding to, from the BBC debates. That sense in which the machine always exceeds, that despite the reduction to a single output signal, there's always the sense in which there was more there. So on that front, you talk about the relation between machines and society, and the sense in which the machine learning algorithm exceeds the analogy to the human from which it gets started. You talk in the book about machine learning algorithms proposing things in the world, and transforming perception and apertures of perception; the aperture is a key motif in the book. So could you perhaps offer listeners an example, a story, a short story, that shows how algorithms do this work? And something of what is at stake in it?
Louise Amoore: So I suppose it is my definition of the algorithm in the book, actually, thank you for, sort of, drawing that out. To think about the algorithm as an arrangement of propositions, and how might seeing the algorithm as an entity that actually proposes things in the world, how might that shift our ethical debates and our public debates about what algorithms do? So in a sense, I am also responding to common-sense understandings of algorithms, which are often described as recipes or, you know, series of steps, which can give the impression of a kind of linear notion of the algorithm. So, in a sense, I'm understanding this in a broader context, and actually I take some of this work on propositions from the historian of mathematics Keith Devlin, who says we should always think in mathematics about what he calls combinatorial possibilities. So what are the combinatorial possibilities? So what I'm trying to do there is say, right, let's think about algorithms not simply as a sequence of steps, but actually think about how that output is contingent on these combinatorial possibilities, which are proposing things about the world. And let's take an example, a concrete example, to make that a bit more real in terms of what might be happening. Let's imagine a facial recognition algorithm in which a police force is monitoring a crowd who are gathering to protest in London or in Sydney. How does a person become a target of interest in the face of the kinds of deep neural networks and convolutional neural networks that are running in that system? Because the output of the algorithm as a sequence may be a simple numeric output of, you know, something like point six seven, for example. But a debate has to have taken place about what the threshold is for the identification of a person of interest. The datasets that are being analysed will exceed just the biometric images. And they might include people's social media feeds, for example. So to think about the algorithm as an arrangement of propositions encourages us to ask questions about how that output was arrived at, and what the other alternatives could have been. How did the person or the object become seized from its environment, to use, kind of, Bergson's notion of perception, that to perceive something, we have to extract it from the environment? So there's very often in our debates about the use of algorithms in public space, especially, there's very often an emphasis on correcting the algorithm, or ensuring that it's just and that it's fair. But my point, in this sense of an arrangement of propositions, is, where would we begin with that correction? Since it's a continual process of iteration that's learning with every engagement with the public, with crowds of people, analysing all of those data points, and adjusting and modifying its performance, even, based on those things. It's a much more lively and iterative process than the sequence would lead us to believe. So I guess, in doing that, I'm trying to multiply the possible points, in terms of the algorithm's process of learning, in which we might intervene and say, these are also ethical political moments, that it's not only a matter of how was the algorithm trained, or what is the output?
But also its entire arrangement, and the way that the parameters are adjusted as the algorithm learns; these moments are also important moments for us to think about, in terms of a broader public and social question about the place of these sorts of machine learning algorithms in our world, really, in our society.
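A minimal sketch of the thresholding step described in this example, with an invented match score and an invented operational threshold. The point is simply that the "person of interest" flag depends on a human-chosen cut-off as much as on the model's numeric output.

```python
# A minimal sketch: the model outputs a match score, but whether a person becomes a
# "person of interest" turns on a threshold set by policy, not by the algorithm itself.
# The threshold and score below are invented for illustration.
MATCH_THRESHOLD = 0.65   # a policy choice, debated and set by humans

def flag_person_of_interest(match_score: float, threshold: float = MATCH_THRESHOLD) -> bool:
    """Return True if the score crosses the operational threshold."""
    return match_score >= threshold

score = 0.67  # the kind of single numeric output the network produces
print(flag_person_of_interest(score))         # True at a 0.65 threshold
print(flag_person_of_interest(score, 0.80))   # False if the threshold had been set higher
```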
Fleur Johns: And I want to come back to that multiplicity, because one of the many strengths of the book, I think, is that you direct our attention gently away from questions like whether the person of interest that you were just talking about is the right or the wrong person, and direct our attention to the way in which, irrespective of getting it right in that sense, our perception, our vision into the future, the way we relate to and understand and think about acting together, is fundamentally transformed. And the way you bring us to that recognition and think through the possibilities of how we could work and act in that is, I think, a beautiful aspect of the book. So I want to come back to the multiplicity, but I feel that, coming to an event like this, some listeners may expect us to talk about certain things. So I want to walk through a couple of ways in which your approach to the ethical politics of machine learning algorithms is distinctive from some positions that are being quite widely taken in public debate. And so I just want to set a few alternatives aside before we come back to Cloud Ethics itself. So one familiar approach that many listeners will probably be familiar with, to the ethics of machine learning algorithms, is to try to keep a so-called human-in-the-loop in automated systems or processes. So for example, in talk of automated weapons systems, it's often said that the most important thing is to ensure that a human, not a machine, is ultimately calling the shots, so to speak. Now, this is an approach with which your book explicitly takes issue. So can you tell our listeners a little bit about what you see is wrong with this approach, or why it doesn't work?
Louise Amoore: Thank you, you know, this provocation to me to think about what's wrong is so interesting, because I feel like part of the intervention of this book was to try to, if possible, shift beyond this sense of right and wrong, and to try to somehow think about, well, actually, are there different ways we can approach this question, in which, instead of adjudicating between the, kind of, rights and wrongs of this, we actually reimagine how we're living with algorithms, in such a way that we can understand that they might actually also be adjusting somehow our sense of how we decide what is right and what is wrong. I think we might come to some examples of that later on. So I want to begin by saying that I completely understand the desire to appeal to the human as the, kind of, locus of responsibility or control. So I think that that is a completely understandable position. But nonetheless, I do think that it potentially might limit the space that we have available to us to raise some questions. And it happens in a number of ways. So maybe I might talk about just one or two of these. The first is one that's been pointed out for a very long time, particularly by, sort of, feminist historians of science and STS scholars, who've reminded us that when we talk about ‘the human’, in the human-in-the-loop, we're actually talking about a very small fraction of those that we consider to have human agency. So in some ways, I think the notion of human-in-the-loop always left me uncomfortable about which humans we're talking about. And you know, Donna Haraway, in work that was done 20, 30 years ago, points this out; she calls it a very tiny fraction of humanity. We're talking about white men and their capacity to control these kinds of outcomes. So that's already, for me, a question: who is in the category of the human? I think that was already a starting point, long before I was thinking specifically about machine learning algorithms. So then my feeling was, well, actually, we would need to shift the ethical focus, so that it's not limited to the moments where it seems to have evaded human control, these kinds of moral panic moments we might think of in relation to, in the case of autonomous weapons, or actually even vehicles, you know, this question of who's the driver and who is in control? And what would it mean for an algorithm to make a mistake in that context? But throughout the book, I was very interested in how the kinds of collaborations between humans making decisions, and algorithms, seem to be reshaping this question of who or what the human is. And some of that was surprising to me. So when I was interviewing the surgeons who were learning with their surgical robots about new forms of cancer surgery, they began to use this notion of we. And, you know, at one point, in one of my interviews, I said, do you mean we, you the team, and they said, well, and the robot, right? So there's this sense of their experience of themselves as a human, and even their corporeal, bodily experience of themselves, changing, so that the 365-degree turn of the wrist of the robot made possible a reach into the patient that hadn't been possible before.
So I see this in so many different examples: the border guard understands their position differently in relation to the risk-scoring algorithms that they are working with; the radiologists whose image recognition algorithms are kind of changing that sense of how they would detect an anomaly in the context of their images. So by seeing these as, kind of, collaborations between humans and algorithms that are changing how the human experiences their own agency, I feel like this is helping to disrupt the sense that we already know which humans, and who, we're talking about when we refer to the human-in-the-loop. So that's part of it. Who do we mean, and what do we mean by the human? There's also, for me, a more intensely ethical point going on here as well, which is about how, in a sense, these systems already have multiple humans, sort of, all the way down, kind of, inside their actual arrangements. So an example of that could be a police force in the UK, for example, using a machine learning algorithm to give a likelihood for a particular person who's been arrested: you know, should somebody be detained in custody, can they be released? What is the risk? And what's to me very interesting about that decision, in terms of a human decision, is that actually lodged in that algorithm are the very many tens of thousands, hundreds of thousands of past decisions that were made, and their outcomes. So this is what I mean by, you know, the attributes of ourselves and others: is it about the person who is the subject of the decision? Or is it about the very many, sort of, residual elements of multiple other people's data, people who encountered that system in the past, that then have an onward effect on the next person about whom a decision is made? So it's partly about extending the sense of who or what counts as human, but it's also about thinking around the notion that these so-called automated decision-making systems actually have humans, in the residue of human interactions and past decisions, all the way down, and trying to recover those moments and to say, actually, that brings new kinds of ethical relations. It means that one person's decision to have their visa granted has a very close machine learning relationship to somebody whose visa in the future will be refused, because of the way that things like clustering algorithms will group people and experiment with degrees of riskiness of particular kinds of applicants. So, to me, this notion of the human actually extends and expands what we could then think of in terms of ethical sites; rather than reducing human responsibility, it ought to give us some opportunities to extend the points where we can think about human responsibility.
Fleur Johns: It adds an interesting dimension to decisional politics as well. Because when you talk about decisions, and there is a lot of talk of responsible decision in the book, it's so clear that this is not decisionism as we've seen it before, because of the way in which the decision maker is always multiple, and is frequently acting in a posture that doesn't presuppose mastery at all. So you talk about decision in non-knowledge. But I think the version of decisional politics which comes out through the book is really intriguing because of the way it fractures, and makes fractal, the figures that have typically been central in the way in which we've thought about a politics of responsible decision. But let's come back to that, because I do just want to do another one of these sidesteps around contemporary debate, and then I think we'll go to cloud ethics itself. One of the standard moves, and I don't mean that in a dismissive way, that surrounds the ethics of algorithms is to try to unveil and correct bias in algorithms' source code or training data. And you contend in Cloud Ethics that it can be a mistake, or misguided, or perhaps less promising, to try and purge algorithms of bias. Because, as you say, the algorithm is always already a political being. So can you talk more about that, and explain why, and how, you direct readers' attention away from a pathway of algorithmic bias detection and mitigation?
Louise Amoore: Thank you Fleur, for this question about bias. I think it is the thing that's haunted me for the longest, that I still feel that I'm grappling with now, because it's a difficult thing to say, actually, that what I'm not seeking here is an unbiased algorithm, because it seems like such an obvious move to make, to say, well, of course, it shouldn't be biased. And so it's really got me thinking about my own background. So my background is in political science, in international studies and in modern languages. And so I think I brought with me to this debate already a discomfort, somehow, with the idea that anything that could be given to us in the world could be given to us in a way that was somehow neutral, or unbiased, or perhaps most importantly, outside of relations of power. You know, what would it look like to imagine anything, whether it's a technology or whether it's a political idea, to imagine anything somehow existing outside of relations of power? So I think I already brought with me to the discussion, in a way, a sense that I'm uncomfortable with this notion that we're seeking something that's completely available to us as transparent, and that there's a possibility for neutrality and objectivity here. Like, in whose name are we also asking for that kind of objectivity? So I think I already had that, sort of, feeling as I went into fieldwork for the book. And you know how, when you look back on fieldwork, you have these moments that you will always remember, and I'll never forget one moment with the computer scientists who were working with a borders and immigration team. They were an in-house group of computer scientists, and they were looking at the data that they had, and they were playing with the model. It's the bit in the book where I talk about the space of play. It began to occur to me that when they were talking about bias, they meant something else. They meant bias in terms of the weights in the model. So they were talking about bias in quite a positive way, as a productive thing. Which then led me to, kind of, go into some of the computer science textbooks and journals to trace through this notion that actually bias in the building of a machine learning model is a productive thing, because it has to weight some signal more heavily than another, otherwise it can't possibly know what to output to the next neuron. So in a sense, there has to be a kind of bias, in the sense of a weighting of something as more important than something else. For me this always already had a philosophical element, which we see in the work of people like Karen Barad, where she says we should think of these instruments as mattering and making matter. So the bias is not something that can be excised from the algorithm, because I think it's absolutely intrinsic to the building of a machine learning model. And it's not about reducing the bias, it's using the signal of the bias to learn, in terms of the process of machine learning itself. So in that sense, to rely on being able to excise the bias, to say, well, actually, it can stay if it behaves itself, you know, if it's neutral and objective, then it can stay, is problematic. Could we ever say that an algorithm is completely fair or non-discriminatory? I want to say no, we will never be able to say that; it must always, by necessity, discriminate between signals. It has to do that, it has to have assumptions, and it has to learn those assumptions.
And so an acknowledgement that it's always discriminating, in various ways, is a way into also thinking about the politics of the algorithm and how it's doing that. What is it weighting? And what does that do in terms of its world-making capacities? What things is it making in the world through that process of weighting? So when we think about important outcomes, who gets what cancer treatment in a, kind of, oncology algorithm? Or will this visa application in the immigration system be accepted or not? I think we should be beginning instead with the notion that there will always already be bias, beginning from there and not beginning from this notion that we can excise it. It's to stay with that problem, the difficulty, really, of the kind of logic that machine learning has, that it needs to be able to do this weighting, and setting of parameters, and a continual process of the adjustment of parameters. So yeah, I hope that captures it. I mean, that's, I suppose, part of what's driven me to conduct the fieldwork, really: to, kind of, look at the various moments where the bias is being actually harnessed, thinking also about how, in some ways, deep neural networks learn best in very uncertain and volatile environments, because that's giving them the most in terms of variability, so that they're exposed to data beyond the expected distribution in the training data. So in some ways, volatility and surprise and turbulence are a, kind of, positive force for deep neural networks. So I think in the work that I'm doing at the moment, this is taking me a step on from the focus on bias, to think about, actually, is there even a sense in which unstable and even violent political and social situations are proving to be some of the most productive testing grounds for these kinds of systems? And there's lots of work happening at the moment, you probably already know this, you know, Claudia Aradau talking about experiments without protocols in the Greek islands, with migrant arrivals in the camps. You know, hardly any infrastructure in terms of, you know, sanitation and water and so on, but an infrastructure of machine learning absolutely in place, as she documents very carefully. So I think that I'm already now starting to think beyond the productivity of bias, to also thinking about the productivity of social turbulence, in the broadest sense, for machine learning algorithms.
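For readers unfamiliar with the computer-science sense of "bias" discussed here, this is a minimal illustrative sketch of a single neuron: a weighted sum of input signals plus a bias term. The numbers are invented; the point is simply that the model cannot produce an output without weighting some signals more heavily than others.

```python
# A minimal sketch of "bias" and weights inside a model: every neuron carries weights
# and a bias term, and its output is inseparable from that weighting. Values are
# illustrative only, not taken from any real system.
import numpy as np

features = np.array([0.2, 0.9, 0.1])      # e.g. three input signals about an applicant
weights  = np.array([0.1, 1.5, -0.3])     # the model privileges the second signal
bias     = -0.4                           # the bias term shifts the decision boundary

activation = features @ weights + bias    # weighted sum: some signals matter more than others
probability = 1 / (1 + np.exp(-activation))
print(probability)                        # the output only exists because of the weighting
```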
Fleur Johns: Yeah, that's, that's interesting. I mean, the challenge, I guess, is to show and work with an idea of turbulence that isn't flattening, that doesn't presuppose that everyone just gets tumbled around in the same way, that there's something sort of agnostic about it. So that really takes us into cloud ethics as an ethico-politics, which is the heart of the book, and what we do with these kinds of encounters with turbulence, or encounters with bias, collectively. You talk about the ethico-politics that you're interested in as one always already present within the algorithm. So, present in this machine learning practice. And you emphasise the many touch points and data fragments that shape relations between subjects and objects, various kinds of points of reference within the algorithm. And you focus especially on prospects for reopening alternative pathways, what you describe as forks in the road, forms of difference and multiplicity within the writing of algorithms. So, could you explain some of how and where these forks in the road emerge, and how we could think of them as open to a collective politics?
Louise Amoore: The forks in the road is something where I was just so caught by the similarities. And I think throughout, in terms of my research practice, it's always about resonance. I'm always really interested when I see a resonance across two spheres of social life, or political life, that seem to be entirely separate. And so I could see, of course, the forks in the road in terms of how computer scientists and data scientists were approaching some of their problems. And sometimes it's very obvious, like in a sort of tree-type algorithm, so even in random forests, for example, where we might meaningfully talk about forks in the road happening in terms of the weighting of particular pathways over others. But this combined and resonated with my interest, in terms of literature, in the very many moments when authors reflect on the difficulty of knowing in advance, when they're crafting a story or a narrative, which direction the story might take. And specifically in the book, I engage in a bit more depth with an author whose work I've read for a very long time, John Fowles, because he actually, in his memoirs, is really reflecting on this idea of what he calls the trace of the rejected alternative: that as he's crafting his stories and his narratives, he's always aware of this, kind of, ghostly other alternative pathway that he could have taken in the writing of his story. So perhaps in quite a, kind of, curious way, I found that resonance across those different forms of writing, the writing involved in code, but also the writing involved in terms of the building of a model. And in this sense of working one's way towards a story, I found lots of productive, kind of, intersections between the different notions of the traces of the rejected pathway. And that might sound, kind of, quite, you know, at a sort of conceptual, theoretical level, but actually, I'm seeing it all the time in the way that governments, in particular, are embracing the use of deep learning at the heart of government. And, you know, in the pandemic, in the UK, there has been an intense acceleration in the inviting into government, of course, and everywhere else too, you know, of Palantir and Amazon Web Services and so on. And the promise that these companies are offering is, again, this kind of question of being able to resolve the optimal policy choice or the optimal output. So for me, there is also a, kind of, political significance in signalling the trace of the rejected alternative, precisely because of the way it runs against the grain of the idea that one could have, you know, a completely optimal function, that the deep neural net will map all the pandemic data. And then you can adjust parameters of various aspects of opening and closing schools, or changing different aspects of how your society is governed in terms of lockdown, and governing your population almost becomes like managing the model. You know, the space of play involved in adjusting those parameters is a space of play that's actually directly about questions about people's lives. You might have seen, I watched it almost obsessively, actually, when Dominic Cummings was interrogated by the MPs in the Select Committee, and one of the images that was shown was a whiteboard from when they were planning their collaborations with data science teams in UK universities. There was a note on the whiteboard which said, who do we not save?
And that struck me as a kind of parameter, you know. So the decision making around, you know, crucial aspects of public health, to me, is being changed and refracted through this lens of machine learning. So for me, to recapture the trace of the rejected alternative is not just a, kind of, conceptual point about the writing of the algorithm that I talk about in the book; it really does have important political consequences, because that rejected pathway could have meant not having 180,000 deaths, in the context of machine learning at the heart of government during the pandemic. So sorry, I've strayed slightly from your question. I wanted to, sort of, show how actually the interventions we need to be making are real, you know, they are real and kind of political interventions. And for me, you know, reminding us that this is never a single output, that it always rested upon a multiplicity, and it always involves all kinds of different decisions about which parameters should be included and which should not, and discarded clusters in clustering, for example. And so, though we can never fully explain it, we have to find the way to prise open that aperture, and make it political again, you know, make it challengeable and contestable.
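A minimal sketch of a "fork in the road" in a tree-type model of the kind mentioned above. The features, thresholds and labels are invented for illustration; every branch not taken is, in Fowles's phrase, a trace of the rejected alternative.

```python
# A minimal sketch of forks in the road in a hand-written decision tree: at each split,
# one branch is taken and the other becomes the rejected alternative. The feature names
# and thresholds are invented placeholders, not any real system's rules.
def risk_branch(record: dict) -> str:
    """Walk one path through the tree; every untaken branch is a pathway
    the model could have followed but did not."""
    if record["prior_flags"] > 2:          # fork 1
        if record["travel_score"] > 0.5:   # fork 2
            return "high risk"
        return "medium risk"               # rejected alternative at fork 2
    return "low risk"                      # rejected alternative at fork 1

print(risk_branch({"prior_flags": 3, "travel_score": 0.7}))  # "high risk"
```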
Fleur Johns: I know that you've worked a lot on migration techniques and professionals, and that's been a key site of fieldwork for you in this work, but also in prior work. So I imagine that this talk of the trace of the rejected alternative would resonate with many political constituencies who experience this, who recognise that there were multiple pathways, or that they are on the wrong side of the pathway that was made actionable. But how then do you think about keeping that alive, other than on the whiteboard? So one sort of entry point is the planning and design moment, but if we think of these pathways as things that are often received in the aftermath, or experienced in the aftermath, when a particular alternative has been made operational, how then might it be possible to keep alive the sense that there were other possibilities, that would have led to different effects and would have perhaps made vibrant a political claim that has been relegated to the junk heap?
Louise Amoore: Yeah, that's a really good way to put it, to say relegated to the junk heap. I mean, you know, seriously, if there is a political project, I think explicitly in the book I did not map out a political project; it was not my intention to say, and this is what we should do. But what I really hoped might happen was that people would see, in this notion in particular of thinking about the attributable, in the way that machine learning logics are so concerned with attributes, you know, not just attributes of people, but of public spaces, countries, regions, cities, and everything. You know, to really try to think through, well, what could be done by people with this? And I have been really pleased about that. There have been artists who've emailed me and said, well, actually, I'm trying to make something, build something which shows the, kind of, absolute fragility of the output, that it's an entirely malleable thing, that it should never bear the full weight of, kind of, political expectation, that it will always fall short. So in some ways, that's what I wanted. I wanted people to, kind of, see some possibility, to be able to do something with that idea that we need to reopen the branching pathways. But having said that, in the year and a half since the book came out, I'm also seeing really interesting moments where an intervention happens that is taken in the name of a particular constituency, but where I think that the cloud ethics would, again, reopen the possibilities. I'll give an example of that. So, Foxglove Legal in the UK has become a law firm that has, you know, taken on cases that are absolutely about directly challenging the effects of machine learning algorithms in society. And one of their successes last year was that they looked at the so-called visa streaming algorithm that the Home Office was using, and they identified that it was in breach of the 2010 Equality Act, because it used nationality data, among a huge array, actually, of input data, and, of course, nationality is a proxy for race, which is unlawful in terms of the Equality Act. So they succeeded in having some of the input data removed from the algorithm. Now, the difference with cloud ethics is that it would say, well, actually, what's happening in that model? What are the various ways in which other forms of, kind of, racial discrimination and outright racism are possible in the way that the model itself is learning? So one could successfully excise a piece of input data, but once we see this as a multiplicity, we can see very many ways in which the parameters and weights of the algorithm are, you know, thinking back to, sort of, Stuart Hall's notion of race as a floating signifier, signifying racism in completely new ways, that are already outputting refusals of visas that are entirely related to assumptions being made about a person. And some of them are much more subtle, like, is the marriage that they're claiming to be a marriage real or not? I mean, it is becoming of intense interest to governments to be able to use deep learning algorithms to determine, is this child likely to belong to this person? Is this person's relationship real? You know, you can see this in so many government documents around the world now, you know, using machine learning to detect sham marriages.
So for me, you know, there is a huge job of, kind of, resistance that needs to take place here, which also acknowledges that for those people not targeted by the algorithm, the traces of their data nonetheless help to build it. Which, I mean, you know, some of your question about resistance also got me thinking about the very many futile attempts that I make to try to stop some of my data entering that model. So I really embarrassed my 13 year old son a few months ago by making a big fuss in a supermarket because they were using facial recognition at the till, and then trying to explain to the manager how it didn't matter that they were deleting my face, because I knew that how they were going to be using that to adjust the match scores, to adjust the performance of the algorithm, would have an onward effect on someone else. So I guess what I'm saying there is, when Foxglove Legal make an intervention, it's necessary, but it's insufficient. We try to resist the use of our data for training and modifying algorithms. It's probably futile, but it's still necessary. And part of that needs to be about recognising, back to this question of the attributes of ourselves and others, that our freedoms to move and our capacity to claim rights in the world are more than ever intimately related to those people whose claims will be refused, because that's exactly what, for example, clustering algorithms need. You know, they need to be able to adjust and identify the different boundaries around clusters and say, here are the risky people, and here are the safe people. So it's not a new politics. It's not a new politics. It's a deeply continuous colonial politics, in a sense, about who and what can be recognised, and on what terms.
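A minimal sketch of the clustering logic referred to here, using scikit-learn's KMeans on invented data: adding one more person's data points shifts the cluster centres, and with them the boundary against which the next applicant will be judged.

```python
# A minimal sketch: grouping people by their data into clusters that might later be
# labelled "risky" or "safe". One person's data, added later, moves the cluster centres
# and so shifts the boundary that applies to everyone else. Data are invented.
import numpy as np
from sklearn.cluster import KMeans

applicants = np.array([[0.1, 0.2], [0.2, 0.1], [0.8, 0.9], [0.9, 0.8]])
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(applicants)
print(model.cluster_centers_)             # two centres: the seeds of "safe" and "risky" groups

# One more person's data shifts the centres, and hence the boundary
# against which the next applicant will be judged.
more_data = np.vstack([applicants, [[0.5, 0.6]]])
model2 = KMeans(n_clusters=2, n_init=10, random_state=0).fit(more_data)
print(model2.cluster_centers_)
```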
Fleur Johns: And I guess that resonates with your earlier point about the iterative nature, that in a sense, the danger with juridical political approaches is often that they can create the impression that one is done and dusted: you've established the cause of action, you've got some relief, you know, oh, justice is done. Whereas with any political movement, if we think through the history of feminism, or First Nations political movements, it's this endless, iterative process of trying all these different entry points, and doing it again, and again and again. And so I think you're right to connect back to a long political history that we could trace through a range of different movements, including anti-colonial movements.
Louise Amoore: Absolutely. But don't you think it's interesting when it does break the surface, though, Fleur? I mean, with the exams algorithm that happened in the summer of 2020, it was so interesting to me that people were so shocked that an algorithm was being used in public life to determine the exam grades of the students who had been unable to sit their GCSEs and A levels in the pandemic. And it did cause, you know, protests in the streets, with teenagers carrying placards that said, Fuck the Algorithm. Which, in a sense, sort of, broke the surface. But still, my first thought was, there are very many groups in this country who, for a very long time, have been suffering in full the consequences of the kinds of injustice of this system, and now it's affecting a particular group, and it's there, you know, it's kind of in the public domain. Yeah.
Fleur Johns: So let's talk about that. Because you have written about that in the press as a turning point in algorithmic politics. You talk about the way in which young people targeted the algorithm; rather than trying to make it better, they just tried to block it, to block its ability to write their future. So can you talk more about that, and why you described it as such a turning point, and how this might be understood as an instance of cloud ethics, an instance of staying with the difficulties and undecidability of the algorithm?
Louise Amoore: What's interesting, in a sense, about the exams algorithm is that it was not a machine learning algorithm. You know, it was a really very old-fashioned, rules-based algorithm, and that was in itself quite interesting, because it meant that people, you know, computer scientists, but also educational professionals, teachers, were able to actually, to an extent, try to unpick the pathways. So that was interesting in itself, because, of course, the discrimination became fairly clear, which was that those areas of the country that were most deprived, you know, parts of our inner cities and so on, those schools were most hard hit by the application of the algorithm, because in effect, it was just fitting the data to a bell curve. So any particular student could only be awarded a mark already available on a, kind of, distribution that their particular individual school had achieved in the past. So if you had a particular child in a certain part of the UK, they would have been limited in what they could achieve by the historical data. So in some ways, it was not a machine learning algorithm, it's a very, kind of, old-fashioned algorithm, but it showed and dramatised the consequences. So for the private independent schools with very small class sizes, the rule said that actually, if there are fewer than 20 people, you can't use the algorithm and it should go on teacher grades. So it led to huge grade inflation for the independent, fee-paying schools. So in some ways, the discrimination was very overt, and it was not a matter of machine learning. So what was of interest to me was more the response of the teenagers, that they were not saying, fix this algorithm, make it fairer, please adjust it, it's discriminatory. They were directly targeting this, kind of, notion that their potential, their absolute potential in relation to a future that could not be fully known, was being rendered calculable. So my feeling was that this was absolutely an example of the harm not being quite the way we understood it, that the harm was not just the discrimination, injustice and unfairness of the algorithm; the harm was actually to the idea of an uncertain future full of potential, that someone ought to be able to look ahead and see multiple possibilities and not have that potentiality reduced to the output of the algorithm. So in calling for a scrapping of the algorithm, to me, this was a moment where the, kind of, cloud ethics broke the surface, in terms of, what about potentiality? You know, what about those things I have not yet achieved? How will they ever happen if you already claim to have known me on the basis of this data that you have on the past performance of others, you know, the attributes of others? So I think it's a turning point. But I think there are many of these, and this is not about, sort of, calling for a particular programme of resisting algorithmic decision making. It's just calling for us to pay attention to the deeper effects, beyond simply, you know, for example, a discriminatory training set that has unfair effects, and thinking about this broader harm of what it means to say that you can reduce to a single output the potential of a person, whether what they're doing is claiming asylum, or whether they are sitting their exams in the pandemic.
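A simplified, illustrative sketch of the kind of rules-based standardisation described above; it is not Ofqual's actual model, and the class-size cut-off and grade distributions are placeholders. Large cohorts are mapped onto their school's historical results, while small cohorts fall back on teacher-assessed grades.

```python
# A toy sketch of a rules-based grading procedure of the kind discussed above:
# pupils are ranked and mapped onto the school's historical grade distribution,
# except in very small classes, where teacher grades are used instead.
# Cut-off, grades and distributions below are invented placeholders.
def award_grades(teacher_grades, pupil_ranking, historical_distribution, small_class_cutoff=20):
    """teacher_grades: dict pupil -> grade; pupil_ranking: pupils ordered best to worst;
    historical_distribution: grades the school achieved in past years, best first."""
    if len(pupil_ranking) < small_class_cutoff:
        # small cohorts fall back on teacher judgement
        return {pupil: teacher_grades[pupil] for pupil in pupil_ranking}
    # otherwise each pupil can only receive a grade already present in the school's past results
    return {pupil: historical_distribution[i] for i, pupil in enumerate(pupil_ranking)}

# A large cohort is capped by the school's history, whatever the teachers predicted.
ranking = [f"pupil_{i}" for i in range(25)]
teachers = {p: "A" for p in ranking}
history = ["A"] * 3 + ["B"] * 8 + ["C"] * 10 + ["D"] * 4   # the school's past distribution
print(award_grades(teachers, ranking, history))
```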
Fleur Johns: So that speaks, I think, to one of the questions that was pre-submitted. There was a question about how civil society organisations and marginalised communities might forge counter-strategies to deal with the unintended consequences of AI, although you're very much not just talking about unintended consequences, you're sort of talking about routine operations. But beyond that last point about closing off futures and possibilities at the level of the individual, your book also speaks to non-individualising effects. And, you say, when we're predicting other things, like whether crops might fail, or, you know, large things that appear to have no stakes for particular people, how should we think about the political effects of these algorithmic predictions that don't speak to a particular person, apparently, in any direct way? It's a fairly general question, but maybe you can make something of it.
Louise Amoore: No, I think it's a wonderful question. And I suppose the starting point for me is that there are no low stakes experiments in machine learning. That's something that's really stayed with me, actually, pretty much throughout the research, and still now. And one of the moments where I think I did talk about it in the book is when the, kind of, Google employees considered there to be a threshold of high stakes, which they considered to be when object recognition, image recognition, video feed analysis and so on became used for military purposes; here was the line where the, kind of, social good crossed into something which was high stakes. And, you know, actually, in trying to trace in the book some of the apparently mundane things, I spent a little bit of time talking about how a robot learns to grasp a bottle. And this seems like a low stakes thing. It's like, how does the availability of a large database full of point cloud images of household objects, how is that part of the condition of possibility for a robot to be able to grasp something? But actually, you know, there was a worker killed in a Volkswagen factory because a robot misidentified the door of the car for the person. So I'm not sure that there are any aspects of this that are low stakes. In the case of the Google workers, I was really intrigued by, kind of, the way that actually all forms of object recognition, in terms of the kind of work that DeepMind are doing, for example, are available for use in a, kind of, multiplicity of spaces. And because of the centrality of features, you know, what machine learning algorithms need are features. So every new immersion in a new environment yields new features to the model. So, in a sense, even this, kind of, sense of, well, crops, and, you know, drone feeds from fields in terms of crops, and this, kind of, sense that the global south could kind of benefit from machine learning around the, kind of, questions of climate change. They're still high stakes to me, you know? Because of the condition of possibility for a machine to learn something in excess of its programmed rules. And I think, of course, this field is also shifting so quickly, so even in the year and a half that's passed since the book was published, there's an intensity of interest now in how to generate more variability. So one of the postdoctoral researchers on my team, Ben Jacobsen, for example, at the moment is looking at synthetic data. You know, to what extent are algorithms actually being used to generate synthetic data in order to train more algorithms, right? And it's very often considered low stakes in the sense that it doesn't need real people, or it doesn't need real data. But of course, the question all the time is about, you know, the, sort of, condition of possibility of that machine learning for other kinds of practices and forms. So he's doing some really great work on that, and we hope to, kind of, take that forward some more.
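A toy sketch of the synthetic-data loop mentioned here, not the research project's method: a simple generative step learns the shape of some "real" records, samples synthetic ones, and a downstream rule is then trained on the synthetic data alone.

```python
# A minimal, illustrative sketch of using one model's output as another model's training data:
# fit a distribution to stand-in "real" records, sample synthetic records from it, and train
# a trivial downstream rule on the synthetic records only. Everything here is invented.
import numpy as np

rng = np.random.default_rng(0)

real = rng.normal(loc=[1.0, 2.0], scale=0.5, size=(200, 2))    # stand-in for real records
mean, cov = real.mean(axis=0), np.cov(real, rowvar=False)      # "learn" the distribution
synthetic = rng.multivariate_normal(mean, cov, size=1000)      # sample synthetic records

# Train a trivial downstream rule on synthetic data only.
threshold = synthetic[:, 0].mean()

def classify(record):
    return "flagged" if record[0] > threshold else "clear"

print(classify(real[0]))   # decisions about real cases, shaped by data no person produced
```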
Fleur Johns: Well, that's great to hear, because I think there's so much room for work on this. I mean, the whole vocabulary and repertoire of personal data protection and privacy really doesn't speak to this at all. And so I think it's great to hear that you're working on that. So a final question about the politics of practice and the methods of your research. I could ask a lot, because I think one of the many intriguing aspects of your work is how you do it. And one feature that comes through in this book, and in your prior work, and you alluded to this earlier, is the way in which it traverses different genres of creative work, and takes just as seriously ideas and arguments that are put forward in the visual arts and in literary fiction as those in the social sciences or hard sciences. So, could you speak a little bit more to that aspect of your work and how you came to it? You spoke earlier about your training, and that it was in some way a natural approach. But how do you think of this in relation to the ethico-political approach that you argue for? Is there a relationship between the way in which you source and cite your interlocutors and the argument you're making about the ethico-politics of machine learning algorithms?
Louise Amoore: I think, in a sense, I have placed my recognition in geography at risk; there are definitely geographers who say, "well, she's not really a political geographer." So I think that's interesting, that the thing that makes me most excited about the research that I do, and the writing, is all the different people that I work with, and their, kind of, very different disciplinary backgrounds.
So I think one way to put it would be that, kind of, juxtaposition has always, for me, been a really important method. So, juxtaposing things that seem to, sort of, not quite dwell together, and then what happens when you kind of bring those things together? And you know, a very long time ago, I did work on technologies at the US-Mexico border, for example, with Alexandra Hall, where we worked quite closely with a Mexican artist, Marcos Ramirez, who was using installation artworks at the US-Mexico border to, sort of, puncture the scene almost, right? So his point of intervention was, you encounter this, kind of, border crossing every day, but actually does it fall beneath the visual register? So, we worked with him quite a lot, in terms of thinking about the sort of disruption of the scene of the border technology.
And also with the artist Meghan Trainor, who put RFID chips inside, kind of, beautiful handcrafted objects that I wrote about in my Politics of Possibility book, you know? And that was something I thought about, and it stays with me still, this idea that when the Department of Homeland Security asked Accenture to put the RFID into border crossing cards, this was considered a, kind of, moment in terms of technological advance, but when an artist puts it inside a handcrafted object, you somehow encounter it differently, and why would that not count? I'm finding that very interesting at the moment: in the UK, the humanities are under assault from all sides, and yet machine learning is apparently the future of our economy. So why is it that there's this notion that the arts or the humanities somehow just supply the, kind of, wrapping or the philosophy on something which is intrinsically a question of science? I find it really curious, because the history of discussion of machine learning, as I was saying earlier, you know, always involved mathematicians and philosophers and scientists working together. So it's a dangerous thing, this notion that the arts are not actually central.
And I think that for the cloud ethics, but also going back further than that, I've always been really inspired by Jonathan Crary, the art historian, and his reflections not just on visual culture, but specifically on installation artworks and how he describes them as, kind of, interrupting the space, you know? So, the spatiality, if you like, of an intervention that means you have to, as he puts it, walk around the object; you can't ignore it. So it's a kind of Brechtian point, you know, Bertolt Brecht's epic theatre, that it's about interrupting the scene so that the audience might see anew those things that they'd otherwise taken for granted. And I suppose for me, that's the, kind of, method that I used also with the novels that I refer to in Politics of Possibility, as well as, kind of, asking why the post-9/11 novel does not have the same status as a knowledge object as a report that's written by the Department of Homeland Security. So for me, there's always a kind of infiltration and interpenetration of these different ideas. And you know, at the moment, a lot of my, kind of, thought about this is also about trying to challenge the notion that the humanities' role in, you know, artificial intelligence and the future of the world is just to write the regulations, which is pretty much how the UK government, but also research councils, are seeing this, you know, that what happens is, the AI models are built, and then of course, we need the humanities scholars to, kind of, dress this in regulatory legal frameworks or philosophical discussion. And yet, these are practices of vision and visuality, practices of sensing, of meaning making. You know, to me, at the heart of them, they are questions of the humanities, and perhaps only secondarily about computer science. They are fundamentally about how human beings come to perceive things, make meanings in the world, relate to one another. You know, how are those things not also questions of art, dance, music, philosophy? It's a question. And I think it might also be something of a struggle for us all to engage in.
Fleur Johns: Well, that's a great place to end, although I would love to talk more with you. Thank you so much for interrupting our space, and bringing your distinctive scholarly voice to these debates; they're both timely and ongoing points of struggle. So I really appreciate it, and I'm sure that the audience does too, and I hope that we have another opportunity, perhaps when the borders open up, to have the conversation in person, IRL, as they say. So thank you, to you again, and thank you to our audience tonight. And I would encourage everyone who hasn't already, to read Cloud Ethics, and engage with it. There is much more in it than we've been able to traverse today and I recommend it to you wholeheartedly. So thanks so much, as always, we really appreciate it, and I look forward to some other time, some other place, when we might meet in person.
Louise Amoore: Thanks so much, Fleur. It's been wonderful to talk to you, and I hope to see you soon.
Ann Mossop: Thanks for listening. For more information, visit centreforideas.com, and don't forget to subscribe wherever you get your podcasts.
Louise Amoore
Louise Amoore is a Professor of Political Geography at Durham University in the UK. Her research and teaching focus on aspects of geopolitics, technology and security, and she is particularly interested in how contemporary forms of data and algorithmic analysis are changing the pursuit of state security and the idea of society. Her most recent book is Cloud Ethics: Algorithms and the Attributes of Ourselves and Others, which has been reviewed as “essential reading for anyone interested in the ethical and political analysis of our digital condition”. She is appointed to the UK independent body responsible for the ethics of biometric and data-driven technologies and is co-editor of the journal Progress in Human Geography.
Fleur Johns
As a Professor in the Faculty of Law & Justice at UNSW Sydney, Fleur Johns works in the areas of public international law, legal theory and law and technology. She studies emergent patterns of governance on the global plane and their social, political and economic implications. Her most recent work has focused on the role of automation and digital technology in global legal relations, especially in development, humanitarian aid and disaster relief. She is currently leading an Australian Research Council-funded project entitled 'Data Science in Humanitarianism: Confronting Novel Law and Policy Challenges’.