MAY 6, 2019
THIS IS THE 28th in a series of dialogues with artists, writers, and critical thinkers on the question of violence. This conversation is with Davide Panagia, who teaches political science at UCLA. He specializes in the relationship between aesthetics and politics, and his recent publications include Rancière’s Sentiments (Duke University Press, 2018), Ten Theses for an Aesthetics of Politics (University of Minnesota Press, 2016), and Impressions of Hume: Cinematic Thinking and the Politics of Discontinuity (Rowman and Littlefield, 2013).
BRAD EVANS: Following on from your brilliant work on the importance of poetics in political thinking, more recently you have become concerned with the political nature of algorithms. What is it about such algorithms that commands your attention, and why are they important in terms of considering new forms of violence in the world today?
DAVIDE PANAGIA: First off, Brad, thank you very much for inviting me to participate in this fantastic series, and for your overly generous estimation of my past work.
In very concrete ways, the answer to the second part of your question regarding the forms of violence that emerge out of algorithmic technology aligns with some of your work on contemporary threat imaginaries. That is to say, I don’t believe that algorithms offer new forms of violence per se. What they do afford is a scalar intensification of forms of governmental rationality, and of all the violences these bring.
Beginning in the 1990s, there was a concerted effort across diverse philosophical orientations to attend to language as the specific medium of political life, and of political theory. Whether one frames questions around the study of discursive structures, communicative action, consensus-oriented deliberation, deconstruction, hermeneutics, speech act theory, or psychoanalysis — what seems central to me about all these approaches that otherwise fall under the heading of “the linguistic turn in political theory” is a tacit acceptance that language (and thus the use of language through oral or written speech) is the specific medium of political thinking. My own way of approaching these concerns is to consider the aesthetics of political thinking and thus reconsider the role and function of sensation therein. And here, discovering the work of Gilles Deleuze and Jacques Rancière early on was central, because what they offer is a way of reflecting on sensation and political aesthetics in a manner that is not reducible to either an objectivist account of beauty or a subjective judgment of meaning. I rely here on poetics so as to emphasize the aesthetic sensibilities at work in our normative commitments about how to think the political — or, to use Rancière’s own language, how our aesthetic sensibilities partition what counts as political thinking, and thus the forms of legitimacy that authorize specific modes of political attention. Such an approach to the aesthetics of politics considers the participation of technical objects in thinking as relevant to any account of political reflection.
With this in mind, the question “Why algorithms?” means that we have to propose a rigorous study of the transformations and continuities of a medium of communication and power that has entered our political lives — a study that extends beyond the traditional analysis of the instrumentalization of technical media. Any critical analysis of algorithms cannot be constrained by our normative concerns over their use. Rather, to appreciate the specificity of the forms of political violence that algorithms enable, we need to attend to the “what” and “how” of their specific forms of participation in everyday life. In short, I feel it urgent to be able to answer the question: Are algorithms political media? And if so, how? In other words, I don’t think it is enough to ask how algorithms are used for political benefit and (implicitly) political coercion and domination.
Can you elaborate more on what you think needs to be done to overcome the intellectual challenges of seeing algorithms as political media?
By way of contrast, consider the analysis of film over the past one hundred years, which offers us a rich and inexhaustible morphology of critical insights into (among many other things) forms of directing and redirecting attention. Such forms emerge out of the technical specificities of the visual medium: camera lenses, focus techniques, editing, the quadrant system, narrative continuity, and every other aspect of film’s aesthetic arsenal. By looking at what film does, and how it does it, theorists and critics alike have offered crucial insights into the political aesthetics of modern visual culture. The results are diverse theories of criticism emergent from careful analyses of what cinema is and does.
Do we have a comparable theory of criticism of algorithms? We certainly have diverse modes of criticism for the study of language, of photography, of film, of bureaucracies, et cetera. But can we say with ease that the terms and conditions of criticism for these technical systems, not to mention the normative ambitions of critique, are transferable from one medium to the other? In short, is critique a transcendental category? If not (and I do not think it is) then what are the conditions of criticism — and especially political criticism — in what I call #datapolitik? To me, this raises three challenges that I struggle with and that are central for understanding political violence in the age of algorithms:
- We don’t experience algorithms. We experience inputs and outputs. But not algorithms. All of our Western theories of political criticism (even the most idealist) are, as I suggest throughout my work, rooted in an experience of media. Even Plato required the medium of the cave for his idealist epistemology of dialectics and justice! Thus, all critical thinking is at some level at once aesthetic and experiential — that is, born of a dynamic between perception and sensation. But algorithms can’t be perceived or sensed. And so the question of transference is relevant: can we transport the theories of criticism that we have developed through our engagements with and reflections on other forms of media to the algorithm, which is fundamentally in-experiential?
- Most of our theories of criticism (from Plato to Adorno), in some way, shape, or form, arrive at the epistemic insight that resistance to mediatic domination (and thus political coercion or intellectual stultification) comes via a turning away movement. Indeed, the critical gesture of turning away (i.e., negation) is crucial to any form of dialectical thinking and thus to our intuitions about political resistance and change. But one of the characteristics of #datapolitik is that it has augured an age of ubiquity, thus making negation’s “turning away” a near impossible critical posture. What would it mean to turn away from, or turn off, algorithms?
- And so herein lies the question of political violence today: What is the nature of this new algorithmic rationality, and what forms of violence does it introduce? And if the forms of political violence materially do not change in #datapolitik, then what do we make of their scalar magnification and intensification? Finally, what is resistance and critique in #datapolitik?
Ever since Heidegger, political philosophy has been concerned with the triumph of technical modes of thinking and how it can become a mask of mastery for systems of oppression often concealed within objective — or should we say objecting — epistemological paradigms, which work through the reduction of life to statistical measure. What do you find particularly disturbing about contemporary forms of algorithmic power? And how does it speak directly to questions of discrimination?
You’re absolutely right that Heidegger gives centrality to technical modes of thinking in political philosophy. But more than the normative thrust of his reflections (which are debatable) regarding the reduction of life via the rise of what he calls “mechanistic thinking,” I actually think that Heidegger’s larger contribution is to show how political philosophy has always been complicit and entangled with techne. For Heidegger, technical thinking is a thinking that is fundamentally attuned to the “movedness” of being; crudely put, being is never static but always in motion.
To get at your question, then, what I find particularly disturbing about contemporary forms of algorithmic power is precisely this commitment to treating difference or change as a distance between two points, and thus relying on an account of value as a quantifiable interstitial measure. The midcentury English cyberneticist W. Ross Ashby puts it best in his classic work An Introduction to Cybernetics when he writes: “The most fundamental concept in cybernetics is that of ‘difference’, either that two things are recognisably different or that one thing has changed with time.” In short, at the root of cybernetics is the measurement and control of difference defined exclusively in terms of interstitial positionality — that is, defined in terms of the distance between two points.
To bring this directly into frame on the issue of discrimination, let’s turn for a moment to Andrew Ferguson’s study on the rise and development of predictive policing in the United States in an important book entitled The Rise of Big Data Policing. Here, Ferguson is invested in exploring the high risks involved in the scalar intensification of bias that algorithmic policing can produce. [This is something that Julia Angwin, in a series of articles on machine bias at ProPublica, has also explored; and I am fortunate to also be exploring similar issues with a fabulous group of scholars here at UCLA at AI.PULSE.]
Risk analysis coupled with computational developments in applied mathematics and AI (especially in the areas of correlationism and causal inference) creates models of prediction that are indifferent to content. What do I mean by this? Take the case of one predictive policing algorithm (used for predicting theft) that was adapted from an algorithmic equation originally used to predict earthquake aftershocks. Like an earthquake aftershock, “[a] burglary in one neighborhood might trigger a second or third burglary in that same area close in time,” Ferguson writes. The idea here is the following: there is no qualitative difference between an aftershock and a crime. The pattern of behavior is correlated such that both are predictable according to the same criteria. This is because movedness (of the Earth’s tectonic plates and/or of the removal of property) shares behavioral characteristics. What matters, in short, is mechanical movement. Predictive policing needs neither an explanation nor an understanding of crime, of racial biases, of economic inequality, et cetera. All that is necessary is the datafication of action and a computational calculation that correlates patterns.
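To make the aftershock analogy concrete, here is a minimal, hypothetical sketch of a self-exciting (Hawkes-style) point process, the family of models from which the earthquake-derived policing algorithms Ferguson describes are drawn. The parameter values and events below are invented for illustration, and actual systems are far more elaborate. The point to notice is that nothing in the computation knows, or needs to know, what an “event” is:

```python
import math

def intensity(t, past_events, mu=0.2, alpha=0.8, beta=1.0):
    """Conditional intensity of a self-exciting (Hawkes) process:
    a background rate mu plus an exponentially decaying 'aftershock'
    contribution from every past event. Nothing here encodes what an
    'event' is -- earthquakes and burglaries are treated identically."""
    return mu + sum(alpha * math.exp(-beta * (t - ti))
                    for ti in past_events if ti < t)

# Three events clustered close together in time ...
events = [1.0, 1.5, 2.0]

# ... sharply raise the predicted risk just afterwards,
risk_just_after = intensity(2.1, events)

# ... while long afterwards the risk decays back to the background rate.
risk_much_later = intensity(20.0, events)

print(risk_just_after)   # clustering raises risk well above baseline
print(risk_much_later)   # decays back toward mu = 0.2
```

The indifference to content is visible in the signature itself: the model consumes only timestamps, so swapping seismic readings for burglary reports changes nothing about the computation.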
Though the application of predictive policing is, as Ferguson rightly shows, a concern, what is of specific concern to me is the gesture of ubiquity described above that renders earthquakes and crime indistinct (because what matters is the movement of data points), such that both — indeed, everything — can be policed in the same way. This means several things. First and foremost, it means that criminality is now not simply associated with virality (as was proposed in James Q. Wilson and George L. Kelling’s “broken windows” theory that was subsequently operationalized by Mayor Rudy Giuliani and his police commissioner William Bratton in New York City in the 1990s), but also with disaster. And this means that the criminalized body is now both a plague and a disaster. What concerns me further is how the use of algorithms for policing speaks to a truth about our contemporary condition: all of life (i.e., anything that moves on its own) is policeable because it can be datafied, tracked, correlated, and predicted. To police, in short, means to track movement.
What your question about discrimination exposes is just this fact about #datapolitik: that all movement is policeable. And thus, to the extent that algorithms do what Ashby said they do, what we are currently living through is a moment of ubiquitous policing. And this mode of policing is not the interpellative hailing that Louis Althusser had theorized — the “Hey, you there!” that stops you in your tracks. What we find in #datapolitik are practices of policing dependent on total movement and not simply on identity recognition. Thus, a curious emergence of a new political corporeality — the perpetually mobile body — is now the object of policing alongside (or, better, superimposed upon) racial bodies and docile bodies.
Mindful of these corporeal actors, I’d like to pursue further these connections between the virtual world of encoding and how it has distinct material effects upon the human body. A number of post-human scholars have dealt with the advent of digitalized bodies and how this has transformed what it means to be human, for better and certainly worse when thought about from the perspective of control societies. How much of this computational turn do you think is driven by a vision for a military diagram for society?
Part of me would like to simply answer: “All of it.” But that’s neither a satisfying nor particularly compelling answer. But let me suggest at least three distinct ways in which we might appreciate the computational turn in relation to the militarization of society.
The first is, once again, the issue of cybernetics and its development during the immediate aftermath of World War II. While much has been written about this, I’d like to especially highlight Orit Halpern’s book Beautiful Data because it was that work that made me realize the importance of the connection between cybernetics and contemporary algorithmic culture. After the war, concerns emerged among military officials that American soldiers fighting in Europe had hardly ever fired their weapons, and when they did, they were rarely successful in hitting their targets. The result was innovations in the behavioral modification of soldiers in basic training that attempted to affect their willingness and capacity to kill. The success of these programs was staggering. During World War II, it was estimated that 85 percent of soldiers did not fire their weapons. By the time of Operation Enduring Freedom, the US non-firing ratio was reportedly zero percent. That means that within a span of 50 years, the United States successfully trained its soldiers to be 100 percent effective killing machines.
What was required in order to do so was to innovate and adapt a series of ideas and ideals that came straight out of the behavioral innovations of cybernetics — an emerging science of information control that was born of the work that Norbert Wiener and many others had been doing in developing negative feedback computations for radar targeting technology and missile firing. And here is my point: the central military contribution of cybernetics to society is the innovation, development, and application of negative feedback at every level of social life and thus the transformation of everyday life into a kind of target practice. Hence, I would say, the ubiquity of gamification for all aspects of life.
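The negative-feedback principle at issue here can be sketched in a few lines. What follows is a toy proportional controller in the spirit of the anti-aircraft targeting problem Wiener worked on, not a reconstruction of any actual system; the target value and gain are invented for illustration:

```python
def track(target, aim=0.0, gain=0.5, steps=20):
    """Toy negative-feedback loop: at each step, measure the error
    between aim and target and feed a fraction of that error back
    as a correction. The error shrinks geometrically toward zero.
    (Target, gain, and step count are invented for illustration.)"""
    history = []
    for _ in range(steps):
        error = target - aim      # measure the miss
        aim += gain * error       # correct against the error (negative feedback)
        history.append(aim)
    return history

hits = track(target=10.0)
print(hits[0])    # first correction: halfway to the target
print(hits[-1])   # after 20 corrections: essentially on target
```

Gamified systems run on the same schema: score the miss, feed the miss back, repeat until the behavior converges on the target.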
This leads to a second (and related) point concerning what we might call “cynegetic predation.” This is an idea I adopt and adapt from Grégoire Chamayou’s study, Manhunts, which is a book about the historical emergence of police logics from the ancient and classical traditions of hunting manuals to the present — one of the most famous being Xenophon’s Cynegeticus. “Cynegetics” refers to the ancient art of training dogs for hunting. What Chamayou’s Manhunts offers us is a genealogy of tracking and capturing techniques, practices, sensibilities, and mentalities and their eventual adoption as tactics of police predation of criminals and escaped slaves in the 19th century. Cynegetic predation is at the heart of the police logic of #datapolitik — to wit, the pursuit and tracking of moving bodies whose motions are, as noted earlier, identified as data points along a negative feedback loop. If this is a little too abstract and abrupt, simply think of how recommendation algorithms like the ones used at Amazon or Netflix operate so as to capture taste preferences according to a correlational feedback loop. The better they are at hunting your tastes, the more you are inclined to use their specific recommendation platform. This kind of operation is what I mean by the ubiquitous police logic of cynegetic predation which, as I noted earlier, requires constant and incessant movement. Only things that move are subject to predation. But everything moves, including information.
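The correlational feedback loop behind such recommendation platforms can also be sketched minimally. Everything here is hypothetical — the two-feature “items” and the learning rate are invented, and real systems at Amazon or Netflix are vastly more complex — but the mechanism is the one described above: the stored profile is nudged toward whatever the user engages with, and the closer it tracks, the better the hunt:

```python
def update_profile(profile, clicked_item, lr=0.3):
    """One step of a correlational feedback loop: nudge the stored
    taste profile toward whatever the user just engaged with.
    (Item features and learning rate are invented toy values.)"""
    return [p + lr * (x - p) for p, x in zip(profile, clicked_item)]

def score(profile, item):
    """Recommend by correlation: dot product of profile and item."""
    return sum(p * x for p, x in zip(profile, item))

# Hypothetical items described by two features: [action, documentary]
thriller = [1.0, 0.0]
nature_doc = [0.0, 1.0]

profile = [0.5, 0.5]            # no information about the user yet
for _ in range(5):              # the user keeps clicking documentaries
    profile = update_profile(profile, nature_doc)

# The hunt closes in: documentaries now outscore thrillers.
print(score(profile, nature_doc))
print(score(profile, thriller))
```

Note that the loop never asks why the user clicks — like the policing model above, it tracks movement (here, of attention) and correlates patterns.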
The final point is the matter of logistics. And this is more or less straightforward. A networked society is a logistical society, and logistics is an ancient military development (most readily available in the advice-to-princes literature on the art of war, like de Vauban’s 1706 Traité de la défense des places) that attempts to deal with the highly complex problem of how to deploy troops and equipment in the most efficient way possible, in the quickest way possible, and with the maximal amount of effects. Deborah Cowen’s The Deadly Life of Logistics: Mapping Violence in Global Trade is excellent on this topic, and a must-read for anyone interested in contemporary violence. Crucial to logistics is the development of mechanical rules to maximize deployment effectivity — and precisely because of this, we can see how algorithms quickly became the best technology for rendering logistics at once highly efficient and ubiquitous. After all, and as anyone who has ever written about algorithms will typically affirm, algorithms are simply mechanical rules that operate within a logistical framework. Tarleton Gillespie’s entry in the book Digital Keywords offers a helpful definition of an algorithm as
a particular kind of sociotechnical ensemble, one of a family of systems for knowledge production or decision making: in this one, people, representations, and information are rendered as data, are put into systematic/mathematical relationships with one another, and then are assigned value based on calculated assessments about them.
For an excellent recent discussion of the issue of military logistics and mechanical rules, I highly recommend Lorraine Daston’s 2019 lecture “Mechanical Rules Before Machines: Rules and Paradigms.” That said, I would also add this important passage from Cowen’s book, which sums up the matter of military logistics, complexity, and violence today:
For most of its martial life, logistics played a subservient role, enabling rather than defining military strategy. But things began to change with the rise of modern states and then petroleum warfare. The logistical complexity of mobilization in this context meant that the success or failure of campaigns came to rely on logistics. Over the course of the twentieth century, a reversal of sorts took place, and logistics began to lead strategy rather than serve it. This military history reminds us that logistics is not only about circulating stuff but about sustaining life.
And so, the triangle of violence of #datapolitik: feedback-cynegetics-logistics. Couple this with the operationalization of ubiquity and we can begin to observe the contours of a recursive political aesthetic of the police that (for me) requires urgent and engaged critical attention.
To conclude, I’d like to press you on the question of resistance, which you already alluded to above. If the age of the algorithm invokes a sophisticated and seductive assay of all life’s movements, such that to be on the outside is tantamount to a certain social death, what alternative grammars might we draw upon to expose more fully its discriminatory and predatory violence?
As I stated earlier, I’m not persuaded that our extant critical vocabularies and concepts are sufficient to the task, principally because the utopian and idealist strand of normative criticism (from Kant to Adorno, Althusser, and beyond) tends not to focus on the metaphysics of movement but on the idea of causal influence (typically defined in terms of political coercion or false consciousness). The point here is that the available norms of critique operate on a model of determinate causality. But I think that the algorithm introduces a series of complexities that requires a different critical ontology; and if I’m right, a metaphysics of movement is central to a critical ontology of the algorithm as political medium. In my own work, I explore the metaphysics of movement of the algorithm via the tradition of radical empiricism from David Hume through some contemporary thinkers, including Richard Grusin, Samantha Frost, and Colin Koopman — all of whom articulate alternative grammars of critique to the ones available in normative idealism and/or dialectics. What I draw most from the tradition of empiricism is the emphasis it places on associationism, and hence relationality. The significant critical insight in this regard is that forms of relation are not dependent on the terms they relate (this is Gilles Deleuze’s great insight about Hume). To speak specifically in grammatical terms, the difference that empiricism introduces is an attention to conjunctions rather than oppositions.
In some recent work, I explore this difference in critical grammars in terms of the “dispositional powers” of technical objects. The matter of dispositionality — that is, a medium’s powers of arranging forces and movements in time and space — is central because that’s what the algorithm does: it disposes bodies, spatialities, temporalities, perceptibilities, and attentions. In this regard (and here my political aesthetic inclinations shine forth), we are dealing with a compositional technology whose task is the formal disposal of living systems, from biology to ecology and beyond: a total system of perpetual arrangement, if you will.
Brad Evans is a political philosopher, critical theorist, and writer, who specializes on the problem of violence. He is the founder/director of the Histories of Violence project, which has a global user base covering 143 countries.