FEBRUARY 27, 2018
THE REPORTERS WHO covered the Beatles’ first press conference in the United States, at JFK Airport on February 7, 1964, had never encountered anything remotely similar — and it showed. They asked the band a string of inane questions — about the accents, the hair, the money. Eventually someone asked what their secret was: what did these four lads have that made teenagers around the world scream at the mere sight of them, and spin their records until the needles were worn to a nub? Paul answered honestly: “We don’t know, really.” John cheekily chimed in: “If we knew, we’d form another group and be managers.”
We can forgive the Beatles for not being able to explain the Beatles — after all, creative types of all kinds have struggled to explain the creative process, and scientists haven’t had much more luck. Even the vocabulary we use is fraught: creativity, insight, talent, genius — these are ill-defined words with overlapping meanings. And yet, we somehow know it when we see it. We admire creativity, of course; but we also want to dissect it, to understand it. Can it be captured in a formula? Is there some magical combination of nature and nurture that produces the Fifth Symphony, or The Last Supper, or the theory of relativity?
Perhaps neuroscience can help. The Runaway Species: How Human Creativity Remakes the World is co-authored by Stanford neuroscientist David Eagleman — you may have seen him on TV, hosting PBS’s The Brain series — and Anthony Brandt, a composer and music professor at Rice University. While they don’t offer a magic formula, Brandt and Eagleman postulate three facets of creativity that might help us understand the nature of creative insight. The first is “bending” — taking existing ideas and materials and bending them into something new. A ballet dancer does this when she literally bends her body into a shape never seen before (the authors cite the late Martha Graham as an example), but so too does an artist when he paints something a little different from what he actually sees (like Claude Monet’s multiple, shimmering takes on Rouen Cathedral).

The second is “breaking” — dividing something into its component parts, rearranging them, and throwing away parts if necessary. The inventions of digital photography and digital sound recording are two examples — neither would be possible without the recognition that even the most seemingly continuous stimuli can be broken down into discrete “bits” of information. The authors describe the development of the MP3 — a digital audio format in which sounds are further compressed; only the most vital information is preserved (a JPEG does something similar with a photograph). Picasso, one might argue, did something analogous in his cubist portraits, keeping key elements of, say, a face — eyes, nose, mouth — but rearranging them in novel (even shocking) ways. Old rules of portraiture were discarded.

The third facet is “blending” — taking two or more good ideas and combining them into an even better idea. An example from our ancient past is the fusing of copper and tin to form bronze — a material much stronger than either ingredient on its own.
The blending can be mental rather than physical: novelists and filmmakers blend locations and time periods, and scientists might borrow ideas from one area of research for use in another.
The authors aim to persuade not so much by force of argument as by sheer volume of representative cases. We’re confronted with a parade of examples — so many that it’s all a bit dizzying, though the many color images are a help. New topics are introduced and dispatched in no more than a couple of paragraphs. There are, as one might expect, tips for nurturing creativity in the workplace and in the classroom. These seem quite sound, if rather unsurprising — a section in the chapter on creative schooling is titled “Encourage Creative Risk-taking.”
The picture that emerges is one of perpetual tension between the familiar and the novel: if something is too familiar, it’s boring; too unfamiliar, and we dismiss it as crazy or even dangerous. The Beatles, one might argue, found the perfect middle ground — familiar enough to make you want to sing and dance along; dangerous enough to rattle parents. We also find that the new builds on the old; creativity, after all, doesn’t arise in a vacuum. One might point to Shakespeare: a rudimentary version of Hamlet had existed for centuries as a Scandinavian folk tale, but add a sarcastic gravedigger and a pair of buffoonish courtiers, and you have a play for the ages. And neither Mozart nor Beethoven reinvented the orchestra — they just found new ways to make use of it.
Mind you, not every great idea takes hold; as Brandt and Eagleman remind us, an idea that’s too far “ahead of its time” may simply disappear into the fog of history. Occasionally, with luck, it’s rediscovered. They give the example of Alfred Wegener’s theory of continental drift, first put forward in 1912. It was met with ridicule. A few decades later (sadly, after Wegener’s death) it was accepted as a cornerstone of geological science. Sometimes there are sound reasons for a new idea to be met with skepticism; sometimes it boils down to prejudice, or worse. Some German physicists dismissed Einstein’s theory of relativity as “Jewish science.”
There are a few hiccups along the way. For example, in describing an improvement to Japan’s famous “bullet train,” the authors say that engineer Eiji Nakatsu struggled to make the train quieter: “[T]he flat prow of its locomotive would create ear-shattering noise when moving at high speeds.” Nakatsu, fortunately, was an avid birdwatcher, and solved the problem by modeling the train’s “nose” on the beak of a kingfisher. The authors say that this happened in the 1990s — but Japan has had high-speed rail service on its Tokyo-Osaka corridor since 1964, and those trains were already pretty streamlined; as well, airplanes — and for that matter, bullets — had pretty sharp noses by the ’90s. So it’s not quite clear why a bird was needed for the great moment of insight.
And there’s a small problem with the authors’ treatment of the invention of the marine chronometer back in the 18th century. It was John Harrison, a self-taught clockmaker from Yorkshire, who eventually found a workable design (here the authors cite Dava Sobel’s wonderful book, Longitude). After building a succession of ingenious but bulky clocks (dubbed H-1 through H-3), Harrison finally settled on a much smaller design, known as H-4, which did the trick. The authors write that Harrison’s breakthrough was “to get rid of the pendulum entirely” — but it was already well known that no shipboard clock could use a pendulum; all of Harrison’s timepieces employed a balance wheel and spring (a late 17th-century innovation) rather than a pendulum to regulate the turning of the gears.
Here’s an odder thing: in telling how we gradually adapt to change over time, they write,
When we learn to drive a car, we begin with the small steps: checking the rearview and sideview mirrors, signaling when changing lanes, attending to the traffic around us, watching the speedometer. Later, we can drive with a piping hot coffee in one hand, talking to our spouse and kids, with the radio on and our cellphone ringing, all while speeding along at sixty miles per hour.
I humbly suggest that while we perhaps can do this, we shouldn’t.
A larger issue is the way the authors lump seemingly disparate types of creativity and invention together. They say, for example, that “the final, conclusive mobile phone will never be developed, nor the perfect television show whose appeal doesn’t fade, nor the perfect umbrella, bicycle or pair of shoes.” But isn’t our wish for innovation wildly different in each case? New mobile phone designs seem to appear almost monthly, because there’s money to be made, and more features can always be crammed into them. In contrast, umbrellas, by my estimation, have evolved only imperceptibly in the last 40 years (if it keeps us dry, we’re happy); and while shoe designs reflect changing fashions, their essential properties don’t change much. Bicycles, though a bit more technology-laden, seem to fall in that category as well. TV shows seem quite different: we want each episode to bring something new and yet stay within an established framework, but even if a show is successful (The Simpsons is in its 29th year), it surely has a limited lifespan in a way that the umbrella and the shoe do not. A further complication, which the book only briefly addresses, is the question of whether creativity can be objectively measured, or if it is, at least to some extent, a label we bestow on things after the fact: a kind of social construct. (Some evidence for the latter view can be found in the way our verdicts evolve over time. There’s a long list of novelties that were initially met with derision, only to be recognized as iconic some years or decades later — think the Eiffel Tower, the Sydney Opera House, The Rite of Spring, and AC/DC’s Back in Black.)
Elkhonon Goldberg’s Creativity: The Human Brain in the Age of Innovation is a markedly different affair. For starters, Goldberg, a neuropsychologist at NYU, goes into much more detail about the actual workings of the human brain. While the ever-confident Brandt and Eagleman keep things moving along like a briskly paced PowerPoint presentation, Goldberg adopts a more scholarly tone; he’s more cautious, more willing to admit that sweeping conclusions may not be warranted. On the issue of whether we can truly nurture creativity, for example, he writes,
[There] will not be a binary “yes or no.” The answer — or rather answers — will have to be more nuanced, taking into account many types of creative accomplishments, their many degrees, and many kinds of creative minds. We will also need better ways of defining and measuring creativity in numerous arenas of human endeavors.
Such caution prevails throughout.
Indeed, the two books are framed quite differently: Brandt and Eagleman believe that humans are driving change, while Goldberg takes it as a given that the world is changing, and that we need to embrace creativity and novelty in order to adapt to it. While Brandt and Eagleman stress the uniqueness of our species, Goldberg explains that at least some nonhuman primates respond to familiarity and novelty in the same way that we do (at least, their brains respond in a similar fashion). And the authors differ starkly on the possibility of computers being creative. Brandt and Eagleman say that “[w]hatever you put in is exactly what you get back out” — but as Goldberg points out, computer algorithms have created art and music “judged by humans as being different and valuable.” Besides, are we humans not in some sense “programmed”? “Since even the most unorthodox creative individual is a product of his time and a beneficiary of the previously accumulated knowledge, insight and tradition,” Goldberg writes, “any creative product generated by that individual, no matter how brilliant, is also in a broad sense derivative.”
As with Brandt and Eagleman’s book, there are a few problems. While Goldberg is clearly in favor of gender equality, his language may trouble some readers. He suggests that the contributions of men and women to creative endeavors “can be addressed constructively and rationally, without hysteria, defensiveness, or the corrosive effects of ‘political correctness.’” A lesser concern is repetition; by my count, we’re introduced to Goldberg’s two dogs — a now-deceased Bullmastiff named Brit, and an English Mastiff puppy, Brutus — at least three times. And the neuroscience is occasionally so dense as to be off-putting, as in: “modulation of the dopaminergic but not noradrenergic systems facilitate performance on lexical tasks which are based on more automatic processing and require the use of well-established semantic relationships.”
Creativity is a fascinating subject, and the human brain — that three-pound lump of exquisitely connected gray matter — is the organ that makes it happen. If you could combine the best of these two books, you’d have an entertaining and scientifically rigorous exploration of that subject.