Peak Brain: The Metaphors of Neuroscience

By Henry M. Cowles
November 30, 2020

The Idea of the Brain by Matthew Cobb

UP UNTIL 2013, I carried a digital camera wherever I went: to the archive or the bar, and on the occasional trip. I took thousands of photos, sorting through them at odd intervals to post on social media or send to my mom. There was a rhythm to my relationship with the camera: I pointed, I shot, I uploaded, and (usually) I forgot. But that rhythm fell apart once I got an iPhone. Like many other tasks, taking and keeping photos didn’t just get easier — it fundamentally changed. If the move from film to digital lowered the bar for what was photographable, then the camera phone wiped that bar out entirely. The images seemed just as good, but they were no longer mindful records of my days so much as mindless drafts of myself. I was taking more photos and thinking less about each. It was almost as if the camera, now a part of my phone, had become an autonomous point-and-shoot, sort-and-post extension of myself. Or maybe I had become a part of it.

But this isn’t a story about the good old days. Instead, it’s about how tools like my camera and iPhone shift how we understand ourselves. They do so at a personal level, when we use them so much that we feel naked without them. But technologies have also shaped the sciences of mind and brain by anchoring conceptual metaphors. As far back as the telegraph, our tools for capturing, storing, and communicating information have served as analogies for the brain’s hidden processes. Such metaphors are enabling: they do work, as it were, as convenient shorthands that then spur scientific research and medical treatments. But metaphors also constrain our research and treatments, making some things visible while shielding others from view. In other words, metaphors have a politics. The question is: What kind of politics do they have?

¤


The role, if not the politics, of technological metaphors in neuroscience is the subject of Matthew Cobb’s new book, The Idea of the Brain: The Past and Future of Neuroscience. It proceeds from a simple idea that Cobb attributes to the 17th-century anatomist Nicolaus Steno. “The brain being indeed a machine,” Steno reasoned, “we must not hope to find its artifice through other ways than […] to dismantle it piece by piece and to consider what these can do separately and together.” Brains have been many things since then, as Cobb shows: voltaic piles and power grids, tiny factories and immense circuits, willful robots and programmable computers. The history of neuroscience can be read as a history of such metaphors. Whatever the tool, we find a way to see ourselves in it.

Cobb should know. A behavioral geneticist focused on the fruit fly, he has spent his career studying observable behavior as a proxy for hidden function. To be sure, using a model organism like Drosophila isn’t the same as analogizing a brain to a computer, but both approaches are about visualizing opaque processes and discovering broader patterns. That search for patterns is what The Idea of the Brain is all about.

After a brief tour of the ancient world, Cobb describes brains being dissected in early modern Europe, prodded and shocked in the 19th century, and injected and imaged in the 20th. Without falling prey to determinism or teleology, he maps metaphorical flows between neuroscience and technology. New machines do not cause theories to change, he cautions. Rather, causal arrows fly both ways. Human computers preceded digital ones, after all, and machine learning has been inspired by our own. Metaphors have slipped back and forth: our brains are special “hardware” even as my iPhone acts as “my brain.” Cobb’s history reveals something deep: the complex, codependent relationships we develop with our favorite tools do more than alter how we think. They become how we think. And we can’t do it without them!

One might expect Cobb, as a practitioner, to sing the praises of this or that metaphor: to side with the computer, say, or the camera. Instead, he is sensitive to how any metaphor can help or hinder self-understanding. What matters is acknowledging that potential — and recognizing that, with or without metaphors, we have a long way to go. “[W]e are still at the very beginning,” Cobb cautions, nodding along with the Scottish philosopher John Abercrombie’s cynical 1830 summary: “The truth is, we understand nothing.” How is it, Cobb’s book asks, that we can know so much while understanding so little? The answer might lie in our machines.

¤


The year I abandoned my Nikon, it popped up in a surprising place: cognitive science. That year, Joshua D. Greene published Moral Tribes, a work of philosophy that draws on neuroscience to explore why and how we make moral judgments. According to Greene, we make them using two different modes — not unlike a digital camera. “The human brain,” he writes, “is like a dual-mode camera with both automatic settings and a manual mode.” Sometimes, the analogy goes, you want to optimize your aperture and shutter speed for specific light conditions — say, when faced with a big life decision. Other times, probably most of the time, tinkering with the settings is just too much of a hassle. You don’t want to build a pro-and-con list every time you order at a restaurant, just like you don’t want to adjust the aperture manually for each selfie you take.

Greene’s “point-and-shoot morality” is an example of dual-process theory, made famous by Daniel Kahneman’s Thinking, Fast and Slow. This improbable best seller summarized decades of work, much of it by Kahneman and his longtime collaborator, the late Amos Tversky. Adopting terms proposed elsewhere, Kahneman organizes mental function into two “systems,” corresponding to Greene’s automatic and manual modes. “System 1” is fast and, often, involuntary; “System 2” is slower and more deliberate. All animals have some version of the first, while the second system is limited almost entirely to humans. We may be machines, Kahneman and Greene acknowledge, but we are reflective machines — or at least we can be.

Kahneman’s metaphorical debts are not as clear as Greene’s, but they are still very much present. Systems 1 and 2 “program” us, automatically or manually, through the development of “functions.” The “fictitious characters” of Systems 1 and 2 are cloaked in metaphor. System 1 is “the associative machine” and System 2 “the lazy controller.” Drawing on canonical, if controversial, work in social psychology, Kahneman argues that we are “primed” to respond to stimuli in particular ways — an analogy to the priming of the pump, either mechanical or economic. On and on the metaphors go, a litany of mechanical means for self-reflection. Taken together, they figure us as the cyborg products of an evolutionary assembly line, imperfect Terminators scrolling through the data of life in search of a comparative advantage or quick fix.

Mental life, on this view, is a constant, often subconscious assessment of one’s surroundings for threats or opportunities. And of course, Kahneman has a metaphor for that:

Is anything new going on? Is there a threat? […] You can think of a cockpit, with a set of dials that indicate the current values of each of these essential variables. The assessments are carried out automatically by System 1, and one of their functions is to determine whether extra effort is required from System 2.


But this raises an important issue: how helpful is the cockpit metaphor, given how little most of us know about operating airplanes? And what does it suggest about how we imagine ourselves, our capacities and purposes, and how we interact with one another? The same might be asked of the computer, or of Greene’s camera: what do we gain by framing our mental and moral lives in these technological terms, and what do we lose — in scientific or ethical terms?

¤


According to Cobb, it depends. Such metaphors keep us going for a while, in part because, as the cognitive linguists George Lakoff and Mark Johnson famously pointed out, we can’t help but filter our world through them. Like everyday life, scientific research is steeped in metaphor — which presents a productive paradox. We know brains are not computers. And yet, treating them as if they are computers is useful in suggesting new directions for research. The path toward truth may, then, be paved with fiction. Put slightly differently: fiction, by way of a conceptual metaphor, may be a shortcut to improving a scientific theory.

Or it may be a dead end. The problem starts when we forget analogies’ fictional origins, or fail to account for their baggage. Take our ubiquitous military metaphors, such as when you “shoot someone down” in an argument. Health, too, can be warfare, as when we “fight” COVID-19 or lose the “battle” to cancer. These metaphors are not mere rhetorical flourishes — they can be costly. As Susan Sontag argued before Lakoff and Johnson, metaphors of illness are “a vehicle for the larger insufficiencies of this culture.” When cancer is war, patients are soldiers, an image that can reinforce the individualism hampering American health care. In the case of AIDS, military metaphors divide sufferers, pitting them against each other. Such metaphors get in medicine’s way by shaping how we think in arenas that (should) have nothing to do with battle. But they are hard to escape.

The same goes for metaphors in science. We can see the limits of the computer metaphor — and in fact, when the mathematician Claude Shannon looked back on his wartime conversations with Alan Turing, it seemed to him that they had been overly optimistic, if not hubristic: “Turing and I used to talk about the possibility of simulating entirely the human brain, could we really get a computer which would be the equivalent of the human brain or even a lot better? And it seemed easier then than it does now maybe.” The computer metaphor made some tasks seem easy while hiding others from view. It emboldened Turing to predict that, by the year 2000, a machine would fool a human into believing in its humanity, passing what came to be called the Turing Test. The metaphor led Shannon to program robots to learn simple tasks and shaped his co-development of the modern concept of “information,” soon cemented as the core of cognition. Computers were productive as a metaphor, but their effect was to narrow as much as to expand subsequent research.

On one level, Turing’s optimism and Shannon’s robots were right. The machines began to turn the tables in 1997, with Deep Blue’s defeat of Garry Kasparov at chess. Conversation came next, with the striking (if controversial) success of the chatbot Eugene Goostman fooling human interlocutors in the 2010s. With bots influencing elections and deepfake videos now causing us to doubt our eyes, it would seem that real-world Terminators are around the corner (though they will probably resemble internet trolls more than Arnold Schwarzenegger). In the 2014 film Ex Machina, the coming war between man and machines is less about precision artillery (drones have been around for decades, after all) and more about weaponizing affect. The android, Ava, flirts her way to freedom. Who knows if Turing saw that one coming?

The problem, in both Ex Machina and The Idea of the Brain, isn’t the computers. It’s us. When we try to get machines to mimic our minds, or imagine brains as engines or calculators (or cameras), what we should really be asking is: To what end? Is everything reducible to information, to data? Are these the best projects to pursue, or are we simply following a metaphor’s well-worn tracks? This is not to disparage basic research, or to think that we could proceed without metaphors. Rather, it is a reminder that motives, like metaphors, matter. Losing sight of them is an easy way to forget our aim: understanding the brain — or, in Cobb’s case, brains with an s. By the end of the book, he proposes dropping the metaphors and starting over again at a smaller scale — with the fruit fly. Surely, he seems to suggest, this will be more illuminating than programming machines to beat us at backgammon — or believing that backgammon is the kind of behavior we should be programming at all.

¤


But would that be enough? Might we be better served by more radically reassessing what we want to learn and how best to learn it? We may be reaching Peak Brain, the saturation point for what we can ask of neuroscience and what it can deliver with its current metaphors. As with Peak Oil, the question of Peak Brain is not if, but when. When will we need new ways to answer ancient questions about how we think and how we might think better? Rather than swap out the computer for “the cloud” as our operating metaphor, we could loosen the ties binding brains to technology. This kind of unweaving, not to say unraveling, could help us rethink how thinking works.

Pressing pause must be done with caution. We are mired in science denial, willful ignorance, and grotesque gaslighting; we need science, including neuroscience, more than ever. But, in what I take to be the spirit of Cobb’s conclusion, we could reset without giving an inch to ignorance. We could heed a call made by Philip K. Dick almost 50 years ago to step back and take stock. “Rather than learning about ourselves by studying our constructs,” Dick wrote in 1972, “perhaps we should make the attempt to comprehend what our constructs are up to by looking into what we ourselves are up to.” On one reading, this is exactly what Cobb wants: to go small, prioritizing the fruit fly at the expense of grand projects. But one can also read it as a call to go big, to see neuroscience as just one aspect of “looking into what we ourselves are up to.” On this reading, we should reassess “our constructs” by looking not just in the brain, but also in other places: in our histories and cultures, our toxic politics and alternatives to it. Such a shared project would blend Cobb’s neuroscience with his history, seeing them not just as complementary but as symbiotic, even synthetic.

In the midst of a public health crisis that is rapidly revealing itself to be a mental health crisis as well, we may need a new idea of the brain — and its limits — as a tool for organizing both medicine and self-understanding. There are clues in Cobb’s book about how to reckon with neuroscience and its limits. One comes courtesy of Sigmund Freud a century ago: “I know nothing that could be of less interest to me for the psychological understanding of anxiety than a knowledge of the path of the nerves along which its excitations pass.” Freud may not invite the sympathies of many of Cobb’s readers, and he was himself drawn to the “anatomical analogy” he so blithely disparaged, but the point sticks. If you’re anxious about anxiety, the brain will furnish only partial answers to where it comes from and how to address it in yourself or in others.

That is, if we are drawn to the brain to understand “who we are” or to help those in pain, it is crucial to reckon with the limits of our tools — and our metaphors. We might ask whether our idea of the brain is right not only in the sense of accuracy, but also in the sense of justice. This is not a call to let politics guide science, at least in the familiar sense. Rather, it is a call to acknowledge one of the oldest and least understood lessons of the history of science: politics has been there all along, inside and outside the lab. Prioritizing one project over another, or deciding to see cognition as computing and not as feeling, is a political choice. Sometimes, that means capital-P Politics: metaphors guide federal funding, as when the 2013 BRAIN Initiative put “brain mapping” front and center. More often, though, our choice of metaphors is political in a lower-case sense. Who listens to whom, how we publicize neuroscientific findings, and the level of public trust or enthusiasm in such work are all dependent upon our ruling conceptual metaphors and what they make possible.

Where does this leave us? What’s next? Having given us a first-rate history, Cobb can’t tell us what lies ahead. “We are simply doing the best we can, just as our forebears did,” he writes early on. Of all the next steps we might take, perhaps the hardest is in the other direction: a step back, to reflect not just on the metaphors we live by — drawn from computing, warfare, and gaming — but on what we live for and how we might live better. History is one way to step back, and Cobb’s is a great model. But where we go from there, deciding what and how to study, whether to dig deeper into the brain or pull up for air, is the sort of choice no machine can make (yet). For now, it’s on us.

¤


Henry Cowles is a historian of science and medicine based at the University of Michigan. He is the author of The Scientific Method: An Evolution of Thinking from Darwin to Dewey, just out from Harvard University Press.
