WHEN I WROTE this review, the COVID-19 pandemic had claimed approximately 5.8 million lives worldwide. Given the threat the virus poses, ongoing surveillance that uses personal data for public health purposes has been routinely deployed. Unfortunately, contends David Lyon in Pandemic Surveillance, surveillance itself has now gone “viral,” and its rapid spread and varied mutations have created a “surveillance pandemic.”

Fearing that time wasted meant lives lost at the beginning of the pandemic, the public and private sectors acted quickly. In many cases, they chose technologies hastily. In other instances, they appear to have selected monitoring tools that could easily be repurposed. Lyon, a renowned surveillance scholar, argues that the world got devices and systems plagued by technical and social problems. Issues with transparency, functionality, safeguards, and stakeholder participation have created distrust. And there’s been outrage over access issues that exacerbate inequality. Just as tests and vaccines haven’t been equally available to everyone, exposure notification systems haven’t been accessible to all, because not everyone owns a mobile phone (or a suitable model).

Lyon examines these issues as they happened on the global stage from 2020 to 2021. He readily concedes that “[f]or public health responses to be equal to a pandemic, timely, reliable, and robust information is required.” And he has no problem admitting public health surveillance “has clear human benefit and should be a priority among available tools for confronting a pandemic, especially a global one.” Nevertheless, Lyon still reaches a depressing conclusion: “COVID-generated tech solutionism is creating digital infrastructures that tend to downplay negative effects on human life and are likely to persist into the post-pandemic world, endangering human rights and data justice.”

Pandemic Surveillance Technologies

So many overwhelming events have occurred in the past two years that it’s easy to forget how many controversial surveillance technologies have been deployed. At the beginning of 2020, a month before the World Health Organization (WHO) declared COVID-19 a pandemic, The Global Times, a daily tabloid newspaper sponsored by the Chinese government, released footage of a drone hovering above an elderly woman. The drone pursues her as she hurries indoors. A disembodied voice projected over a loudspeaker eerily admonishes, “Yes Auntie, this drone is speaking to you. You shouldn’t walk about without wearing a mask. You’d better go home, and don’t forget to wash your hands.”

This dystopian approach to public health management didn’t stay contained, even though this type of drone typically “could stay in the air for no longer than 30 minutes at a time.” Two months later, in Westport, Connecticut, the police department announced plans to use drones to detect fevers and monitor social distancing. Alarmed activists and citizens raised privacy concerns and questioned the validity of using fever as a criterion for determining whether someone has COVID-19. They were also discomfited by the possibility that a drone manufacturing company might be using the pandemic to create a pipeline for future business. After the pushback, it only took a few days for the police to back down. Were the authorities persuaded to scrap the program by compelling arguments? Or, since Westport is a homogeneous (predominantly white and Democratic) and extremely wealthy town, did they cower because a critic threatened to use political clout? We’ll probably never know.

But we do know that law enforcement used creepy drones in places like Italy, Spain, and other US states (e.g., Arizona, Georgia, New Jersey, and Hawaii), leading commentators to coin a new term: “shout drone.” And we also know that the Electronic Frontier Foundation, an organization committed to defending civil liberties, recently warned that “a flood of COVID relief money” is likely to contribute to more US police departments using drones in the future. Furthermore, as Lyon chronicles, once the global surveillance experiment began, robots that spy and yell became only one of many contentious tools. Phone apps have monitored COVID-19 exposure. They have also been used for policing quarantine and stay-at-home orders, as have cameras with facial recognition technology. Thermal imaging body scanners have monitored body temperatures, and students have had their vital signs scanned by devices like a BioButton. Remote workers’ productivity has been observed through “bossware” algorithms, and remote test-takers have been scrutinized by AI-powered proctoring software. Digital vaccine credential services, sometimes called vaccine passports, and health code apps have been used to determine who can travel.

Theorizing Pandemic Surveillance

Since surveillance is always a contentious issue, it was bound to generate debate. Consider the uproar around surveillance when it is supposed to keep us safe from non-pandemic threats. Monitoring communications for signs of terrorism is a classic example.

Perhaps the main reason why surveillance is controversial is that it’s fundamentally an instrument of control and power that divides groups by systematically sorting them. For example, suppose the government is aggressively monitoring online searches for possible evidence of terrorism. In that case, people might believe that they’ll be labeled suspicious and placed in a law enforcement database if they search Wikipedia for articles that contain words like “dirty bomb.” To avoid becoming targets of a government investigation, law-abiding citizens might limit their freedom of expression and self-censor their online activity.

During the COVID-19 pandemic, pronounced power imbalances have made public health surveillance especially contentious. Governments had emergency powers because the right to life was at stake. Furthermore, while the tech industry was eager to play the role of savior, it had a checkered history of overestimating product quality, underestimating the complexity of social life, and prioritizing profit over social welfare. Finally, because people everywhere have been scared, they’ve been desperate for things to get better. In light of these conditions, Lyon contends that pandemic surveillance has been a central issue for civil liberties and human rights activists and scholars. It’s shaped policies and practices that determine how people live, work, learn, socialize, shop, receive or are excluded from medical care, and become socially accepted or stigmatized.

Again, Lyon focuses on events between January 2020, when it was announced that a “novel coronavirus” was discovered in Wuhan, China, and some time in 2021. By integrating some of the core insights from privacy theory, data justice, and care ethics, he creates a novel conceptual toolkit that’s a solid theoretical starting point for critically analyzing pandemic surveillance. In my opinion, Lyon’s analysis has six key insights. The first is that privacy advocates were rightly concerned about abuses of data. We should remember this point in the future when we’re told not to worry about privacy because public health data will only be used responsibly. For example, “a cluster of gay men was ‘outed’ in South Korea when a number of COVID-19 cases came to light in a Seoul district well known for its gay bars. Also, a Minnesota law enforcement official appeared to claim the state was using ‘contact tracing’ to identify connections between Black Lives Matter protestors.” More recently, the Public Health Agency of Canada acknowledged it secretly monitored the movements of its citizens during lockdown by “tracking 33 million phones.”

The second insight is that while the COVID-19 pandemic in some ways resembles past public health emergencies, it also has unique dimensions. For example, Lyon notes how Albert Camus’s The Plague, which was “based on histories of a cholera epidemic that hit Oran in 1849,” shines a light on many of the complex social dimensions of pandemic governance. Furthermore, Lyon contends that Michel Foucault’s account in Discipline and Punish of surveillance and registration in 18th-century plague towns “sounds a lot like” what is currently happening.

At the same time, he emphasizes that pandemic surveillance takes in the “crucially important” historically novel context of surveillance capitalism: technology companies have figured out how to “make profits from apparently inconsequential data.” Since the public can’t possibly know what value tech companies might extract from their health-related information or from establishing pandemic surveillance infrastructure, it’s inherently difficult to trust either the companies themselves or the partnerships being created between them and governments.

Echoing Naomi Klein, Lyon maintains that trust is all the more elusive because disasters are opportunities for restructuring organizations, even society. That’s called the “pandemic shock doctrine,” which Lyon illustrates by giving the example of former New York Governor Andrew Cuomo touting a vision of transforming New York through collaborations with Google and Microsoft. Success means “permanently integrating technology into every aspect of civic life.” Relatedly, as Lyon notes, Amazon has been immensely successful during the pandemic because of the massive demand for at-home shopping. We shouldn’t lose sight of the fact that Amazon is criticized constantly for its brutal fulfillment center surveillance and that the pandemic became a flashpoint for worker protests. Amazon also sells Ring surveillance systems, technologies characterized by scholars and activists as “fundamentally incompatible with basic human rights and democracy.”

The third insight is that a lot of pandemic surveillance technology suffers from “solutionism.” Solutionism minimizes the importance of social and cultural factors in determining who adopts technologies or gets excluded from their use. In so doing, it turns a blind eye to basic issues of distributive justice. Here Lyon points to the “more than 250 million elderly people” who don’t have smartphones and couldn’t use China’s Health Code contact tracing system, a system that is “required for access to public transport as well as to most public places.” Another issue with solutionism is that it’s closely associated with the attitude that it’s better to be seen “doing something” than not doing anything at all. But during the pandemic, this mindset allowed security theater technologies to get rolled out, such as thermal imaging scanners that have well-known efficacy problems. These gave the public a false sense of confidence that COVID-19 was being carefully monitored.

Finally, solutionist technology applications are disconnected from essential perspectives — of stakeholders and professionals alike. For example, contact tracing is a complex process. Rather than focusing on automating it, more effort could have been made to provide professional contact tracers with backend resources, such as better data management software. Relatedly, many privacy advocates didn’t cheer when Apple and Google set aside their rivalry to create an exposure notification platform that uses phones to track those we’re in close proximity to. They noted that while the decentralized design might better protect privacy than a centralized one, it’s exceptionally difficult to build trust when the private tech sector — rather than public health professionals — determines the values that guide public health surveillance.

The fourth insight is that data is not neutral, and during emergencies like pandemics, we need to be especially vigilant about politicizing poor interpretations of it. “For instance,” Lyon writes, “the Indian government blamed the Muslim community for the spread of the virus, after a Tablighi Jamaat event in New Delhi in March 2020 that attracted outsiders.” This racialized accusation was amplified over social media but seems to have obscured sampling errors that involved one group being tested more than others. Because data is easily distorted, Lyon cautions against expecting that using better data to represent the difficulties experienced by marginalized groups during the pandemic will benefit them. He contends that there’s always the risk that showing how a group is having problems could help racists by “inadvertently” reinforcing “biological understandings of race.”

The fifth insight is that emergencies like pandemics exacerbate prior data justice problems. For example, before the pandemic started, facial recognition technology had been widely criticized for being biased, for not working well on certain demographics, such as people with darker skin. And yet, during the pandemic, facial recognition technology became a core component of identity verification systems. Lyon gives one of many possible examples to illustrate why the technology is flawed.

Alivardi Khan experienced great difficulty trying to get the ExamSoft facial recognition system to recognize him as he took the New York State Bar exam. He tried sitting in front of a window at home where sunlight flooded in, and even set himself up in a bright bathroom with light shining off white tiles.

Nobody should experience this level of stress when taking an important exam, and, unfortunately, that’s not the end of Khan’s plight. Lyon adds, “Even when set up with a college room in which to write the exam, he had to wave his arms to prevent the automatic space lighting from switching off, thus further risking suspicion of exam-room misbehavior.”

The sixth insight is that even when surveillance is oppressive, it doesn’t necessarily have to extinguish agency. Principled pushback against proctoring systems like the one Khan struggled with led to some schools abandoning them. And as Lyon recounts, citing Marcella Cassiano, Kevin D. Haggerty, and Ausma Bernot, the heightened surveillance in China “sparked the emergence of a new kind of ‘circumscribed individual autonomy.’” In other words, if given a green code on the app, users have some “flexibility […] to make individual decisions about their activities,” an outcome that marks a major contrast from “the dynamics of governance and control in China’s previous Mao era, when the government used surveillance […] primarily to silence personal interests and impose state-mandated decisions on the population.”

Expanding the Pandemic Surveillance Conversation

Pandemic Surveillance is a short book (160 pages, not including notes). The biggest virtue of this concise format is its digestible, big-picture narrative. It means that the book differs markedly from place-based accounts, such as Guobin Yang’s The Wuhan Lockdown and Lawrence Wright’s The Plague Year: America in the Time of Covid.

At the same time, because Lyon doesn’t offer any robust case studies, he leaves the reader hungry for more detail. For example, Lyon’s broad-brush comparison of how China and the United States responded to the pandemic paints a reductive picture that leaves out essential information. Lyon is right to emphasize that the pandemic is prompting discussion of comparative casualty rates and China’s use of more aggressive public health measures. It’s significant that Xi Jinping, general secretary of the Chinese Communist Party, frames China’s high-tech, labor-intensive pandemic surveillance measures in geopolitical terms “as a model of secure order, in contrast to the ‘chaos of the West.’”

But you can’t unpack a statement like this simply by making broad generalizations about China taking a centralized approach to contact tracing and the United States following a decentralized one. China has complex internal politics, and grappling seriously with US pandemic governance requires thinking carefully about America’s deep divides: our fractured politics and antagonistic political identities. When I wrote this review, in early 2022, the controversy over vaccine and mask mandates was leading some people to move to parts of the country that better align with their political sensibilities. We can’t understand this phenomenon without acknowledging that social surveillance plays a major role in amplifying Americans’ ongoing experiences of alienation and anger. If you’re wearing a mask in a part of America that rejects them, or vice versa, you’re likely to get judgmental looks, and maybe worse. Since the same type of polarization is happening elsewhere, Lyon’s lack of detailed discussion of social surveillance occurring in-person and online is a glaring omission.

Furthermore, even though Lyon employs normative concepts throughout the book, he doesn’t draw any red lines. To be sure, he explains why surveillance issues are ethically and politically charged and poses probing governance questions that require further consideration. But I would have liked him to specify when surveillance practices infringe too deeply on civil liberties and human rights, as well as, more broadly, what “civil liberties” and “human rights” actually mean in the COVID era. For example, Lyon cites the UK Equality and Human Rights Commission warning that requiring vaccine certificates to enter places and use services “could easily produce a […] two-tier society” that resembles South Africa under apartheid. Does this criticism have merit? Or is the comparison hyperbolic and possibly inappropriate, much like dangerous comparisons between the Nazi regime and pandemic measures? Obviously, context matters when answering these questions, but so does having a clear sense of the boundaries and basis of rights.

One of the most striking aspects of the pandemic is that people from all sides appeal to “rights” to justify their positions. Anti-mask and anti-vaccine mandate advocates claim they’re fighting for fundamental liberties. Some depict their struggle as civil liberties resistance to the “new biosecurity surveillance machine.” Meanwhile, in September 2021, the American Civil Liberties Union, the historical paragon of freedom defenders, took the opposite stance, claiming “vaccine mandates actually further civil liberties” in specific settings, like schools and workplaces, by protecting “the most vulnerable among us, including people with disabilities and fragile immune systems, children too young to be vaccinated and communities of color hit hard by the disease.” With “rights” functioning as an inherently contested concept, people have a hard time distinguishing good faith from bad faith claims. I wish Lyon did more to help here.

To put my cards on the table, I believe the ACLU, which recognizes the necessity of “limited exceptions” and the centrality of removing obstacles to access, took the right position for the United States. I agree with them as a matter of justice and because evidence suggests vaccine mandates help incentivize those who are vaccine-hesitant. To be sure, it’s a problem that security issues have marred some digital apps that store vaccine status information. However, the deficiency doesn’t defeat the reasons for being pro-mandate in America because “local, state, and federal officials continue to support paper proof of vaccination.”

I also hoped Lyon would give more detailed governance ideas for avoiding some of the problems he lists. For example, back in 2020, Brenda Leong and I wrote about the problem of government agencies like the Centers for Disease Control and Prevention “effectively endorsing unreliable surveillance systems and giving intelligent people reason to believe those measures increase public health and safety.” But in order for Lyon to give concrete policy advice, he would have to engage more substantially with legal, design, and political theory. And perhaps he also should have expanded the bibliography to include contributions from people who come at surveillance from an entirely different angle, like advocates for using machine-learning surveillance tools to better understand, monitor, and respond to vaccine-related misinformation. Instead of clarifying whether endeavors like this can help, Lyon’s comments on machine learning mainly underscore that biased and limited training data will produce undesirable results.

The limits of Lyon’s analysis shouldn’t detract from its importance. Since future pandemics will undoubtedly occur, it is essential that we establish trustworthy institutions to conduct public health surveillance. Hopefully Lyon’s insights will help shape the hard conversations that lie ahead.

¤

Evan Selinger (@evanselinger) is a professor of philosophy at Rochester Institute of Technology.