Creepy Futures: Nicholas Carr’s History of the Future

By Geoff Nunberg
November 21, 2016

Utopia Is Creepy by Nicholas Carr

IN 1964, having taken a year off from college, I got a job in the General Motors pavilion at the New York World’s Fair in Flushing Meadows. Sporting a red blazer emblazoned with the Cadillac crest, I escorted VIP guests on the Futurama II ride, a successor to the original 1939 Fair’s Futurama, billed as “a journey for everyone today into the everywhere of tomorrow.” To a sonorous narration, moving chairs carried visitors past dioramas depicting an imminent automotive Zion. In one, a vast road-building machine used lasers to carve highways through the Amazon Jungle, “bringing to the innermost depths of the tropic world the goods and materials of progress and prosperity.”

In another, robots worked the oil deposits of the continental shelf. On the continent, cities multiplied, teeming with electronically paced traffic and bristling with soaring skyscrapers and underground parking garages. When humans were visible, it was as tiny stick figures without discernible clothing or features.

Elsewhere on the fairgrounds, other corporations presented their own tomorrows. GE’s Progressland featured the Disney-designed Carousel of Progress, which traced the history of an audio-animatronic family from the 1890s to modern push-button living. In an egg-shaped pavilion designed by Eero Saarinen, IBM demonstrated machine translation and handwriting recognition. Bell debuted its picture phone — and a troupe at the Du Pont Pavilion celebrated the miracles of modern chemistry (Tedlar! Fabilite! Corfam!).

For the visitor, it was one-stop shopping for all the corporate versions of the technological sublime — the expression of a uniquely American conviction that technology would, as Leo Marx put it, “create the new garden of the world.” In earlier American apostrophes to the machine, awe was tinged with the terror that was essential to the Burkean sublime. Walt Whitman sounded that note in his paean to the “fierce-throated beauty” and “lawless music” of the steam locomotive in Leaves of Grass — as did Arturo Giovannitti in his “The Day of War” (1916), with its modernist description of the skyscraper as “[challenging] the skies, terrible like a brandished sword.”

But there was none of that here. Visitors to the Futurama saw nothing unsettling as they looked down at the miniaturized road-builders and skyscrapers. “Technology can point the way to a future of limitless promise,” the narration concluded. But what it presented was hardly limitless, just a familiar, domesticated, consumerist tomorrow — one that was “awesome” only in the degraded modern sense of the word. Like the Jetsons (then in their second season), we’d keep on doing what we always had, only more efficiently.

¤


This realization figures prominently in Nicholas Carr’s collection Utopia Is Creepy. The selections are mostly posts from Carr’s tech blog Rough Type, written between 2005 and 2015. The book is fleshed out with a nosegay of tweet-sized aphorisms and a few longer essays that contain some of the book’s best writing. The result is what has been called a blook (a term that inspired one self-publishing platform to launch a short-lived “Blooker Prize”). The genre has its limits. A blog entry is inevitably less compelling when it appears bare and linkless on the printed page, years after its posting — particularly when the topic is technology. If, as people say, one internet year corresponds to seven calendar years, then the earliest selections in this collection go back to the digital equivalent of the Truman presidency. It’s hard to work up any interest in Carr’s thoughts about Steve Jobs’s presentation of the first iPhone or the controversies over the commercialization of Second Life.

Still, a long view of the period can be useful. If nothing else, it gives us an idea of what a sprawling landscape the label “tech” encompasses. Over the years, Carr’s posts have touched on, among other things, social media, search engines, open source, Wikipedia, high-frequency trading, wearables, Big Data, self-tracking devices, smartphones, AI, video games, music streaming, and holography. Yet, as Carr’s compilation makes clear, even as technologies replace one another and vogues come and go, the rhetoric of progress simply readapts. As at the Fair, each new technological wave throws off its own utopian vision. The posts give us glimpses of an internet Eden, a Big Data Eden, a cyborg Eden, a biosensor Eden, an automation Eden. Carr has taken on some of these scenarios at length in earlier books like The Shallows and The Glass Cage, which established his reputation as a thoughtful critic. But it’s instructive to see him summarily dispatching them one after another here. He’s hardly a Luddite, at least in the loose modern sense of the term: someone with a naïve and unreasoning aversion to machines. Rather, he can be eloquent and engaging when discussing the human costs of technology and automation, or the self-serving delusions of enthusiasts.

¤


Technological utopianism is always self-aggrandizing. “We stand at the high peak between ages!” the poet Filippo Tommaso Marinetti wrote in his “Manifeste du Futurisme” in 1909, predicting, among other things, that the Futurist cinema would spell the end of drama and the book. Every other modern era has seen itself in exactly the same way, poised at the brink of an epochal transformation wrought by its newly dominant technology, which, as Carr notes, is always seen as “a benevolent, self-healing, autonomous force […] on the path to the human race’s eventual emancipation.”

At the moment Carr started his blog, the agent of millenarian change was the internet — in particular, what enthusiasts were touting as “Web 2.0,” with its promise of universal collaboration, connectedness, and participation. User-created content like wikis and blogs would displace the old media, and participants would ultimately congeal into a collective intelligence capable of acting on a global scale.

Carr’s blog first came to wide attention on the strength of his critique of an influential article called “We Are the Web,” by Wired’s “Senior Maverick” Kevin Kelly. Kelly wrote that the accumulation of content on the web — from music, videos, and news, to sports scores, guides, and maps — was providing a view of the world that was “spookily godlike.” By 2015, he predicted, the web would have evolved into “a megacomputer that encompasses the Internet […] and the billions of human minds entangled in this global network.” With chiliastic zeal, he announced, “There is only one time in the history of each planet when its inhabitants first wire up its innumerable parts to make one large Machine […] You and I are alive at this moment.” Future generations, he said, will “look back on those pivotal eras and wonder what it would have been like to be alive then.” Or, as Wordsworth might have put it, “Bliss was it in that dawn to be online.”

In a post called “The Amorality of Web 2.0,” Carr taxed Kelly with using a “language of rapture” that made objectivity impossible: “All the things that Web 2.0 represents — participation, collectivism, virtual communities, amateurism — become unarguably good things, things to be nurtured and applauded, emblems of progress toward a more enlightened state.” On the contrary, he countered, those features are invariably mixed blessings. As a manifestation of the age of participation, Wikipedia is certainly useful, but it’s also slipshod, factually unreliable, and appallingly written. “It seems fair to ask,” he said, “when the intelligence in ‘collective intelligence’ will begin to manifest itself.”

Similarly with blogs: Kelly described them as part of “a vast and growing gift economy, a visible underground of valuable creations” that turns consumers into producers. Carr, himself a blogger, pointed to the limits of the blogosphere: “its superficiality, its emphasis on opinion over reporting, its echolalia, its tendency to reinforce rather than challenge ideological polarization and extremism.” In short, “Web 2.0, like Web 1.0, is amoral. It’s a set of technologies — a machine, not a Machine — that alters the forms and economics of production and consumption.”

Carr’s post was widely discussed and contested. In retrospect, it seems largely unexceptionable. True, the promises of participation and amateurism have been realized. We’re awash in user-created content, including not only Wikipedia but also Facebook and Twitter, Yelp and Reddit, Pinterest and Flickr, and all the rest. But when data scientists run their analytics on those media, they find the interactions they enable to be extraordinarily fragmented and polarized. And what glimpses we get of the whole are apt to curl our hair. (As Clay Shirky observes: “The internet means we can now see what other people really think. This has been a huge huge disappointment.”) Worse still, in public discourse, the voices of amateurs, while plentiful, are increasingly drowned out by more-or-less corporate ones. Individual blogs like Carr’s are still around, but no one speaks of the Blogosphere anymore; rather, blogs exist cheek by jowl with a variety of new media like Mashable, Huffington Post, and the content octopus BuzzFeed, as well as the online operations of old ones. As Carr points out in another post, branded content has driven the peculiar and idiosyncratic to the bottoms of search pages and the margins of our attention — “the long tail has taken on the look of a vestigial organ.”

The persistent question is not when “the intelligence in ‘collective intelligence’” will begin to manifest itself, as Carr asks, but when we are going to get to the “collective” part. You can look in vain for a perch that provides a “godlike view” of the whole, or for an emergent “community of collaborative interaction.” We thought we were building the New Jerusalem, but we wound up with something more like the current one.

¤


Anyway, the conversation has moved on, as it always does. Looking back over the history of technological enthusiasms in his American Technological Sublime, the historian David Nye notes that, in each generation, “the radically new disappears into ordinary experience.” By now, the internet is ubiquitous, and for just that reason no longer a Thing. There are between 50 and 100 processors in a modern luxury car, about as many as there are electric motors (think power steering, seats, wipers, windows, mirrors, CD players, fans, etc.). But you wouldn’t describe the automobile as an application of either technology.

So the futurists have to keep moving the horizon. One feature that makes this era truly different is the number of labels that we’ve assigned to it. Carr himself lists “the digital age, the information age, the internet age, the computer age, the connected age, the Google age, the emoji age, the cloud age, the smartphone age, the data age, the Facebook age, the robot age”; he could have added the gamification age, the social age, the wearable age, and plenty of others. Whatever you call it, he notes, this age is tailored to the talents of the brand manager.

In his more recent posts, Carr is reacting to these varying visions of a new millennium, where the internet is taken for granted and the transformative forces are innovations like wearables, biosensors, and data analytics. The 2011 post from which he draws his title, “Utopia is creepy,” was inspired by a Microsoft “envisionment scenario.” Direct digital descendants of the World’s Fair pavilions, these are the videos that companies produce to depict a future in which their products have become ubiquitous and essential, like the worlds pervaded by self-driving cars or synthetics described above. The Microsoft video portrays “a not-too-distant future populated by exceedingly well-groomed people who spend their hyperproductive days going from one computer display to another.” A black-clad businesswoman walks through an airport, touches her computerized eyeglasses, and a digitized voice pipes up to define a personal “pick up” zone:

As soon as she settles into the backseat the car’s windows turn into computer monitors, displaying her upcoming schedule […] [h]er phone, meanwhile, transmits her estimated time of arrival to a hotel bellhop, who tracks her approach through a screen the size of a business card.


One thing that makes these scenarios disquieting, Carr suggests, is the robotic affectlessness of the humans — who bring to mind the “uncanny valley” that unsettles us when we watch their digital replicas. These figures are the direct descendants of those audio-animatronic families that Disney designed for the 1964 World’s Fair. As technologies become the protagonists of the drama, people become props. The machines do the work — observing us, anticipating our needs or desires, and acting, as they suppose, on our behalf.

It’s that sense of ubiquitous presence that has made “creepy” our reflexive aesthetic reaction to the intrusiveness of new technologies — there is already a whole body of scholarly literature on the subject, with journal articles titled “On the Nature of Creepiness” and “Leakiness and Creepiness in App Space,” etc. Creepy is a more elusive notion than scary. Scary things set our imaginations racing with dire thoughts of cyberstalkers, identity thieves, or government surveillance. With creepy things, our imagination doesn’t really know where to start — there is only the unease that comes from sensing that we are the object of someone or something’s unbidden gaze.

That creepy note is endemic to the enthusiasms of the Quantified Self (think of wearables like Fitbits and personal trackers). Like many new technologies, writes Carr, these were originally envisioned as liberating but have wound up as instruments of social control. He mentions the Hitachi Business Microscope, a sensor worn by employees on a lanyard, which monitors their movements, their interactions, and how often they speak up at meetings, all in the interest of “contributing to organization productivity by increasing the happiness of a group,” as Hitachi puts it. It’s symptomatic of what Carr calls a New Taylorism; these tools extend and amplify the reach of employee measurement from Frederick Winslow Taylor’s time-and-motion studies of factory workers to the behavior of white-collar ones, who are already under surveillance by software that registers their every keyboard click. In the modern case, though, the supervisor is a hovering but unseen presence, unobservable on the jobsite.

¤


What’s most striking about these pictures of the sensor-saturated world isn’t just their creepiness, but how trivial and pedestrian they can be. The head of Google’s Android division touts interconnected technology that can “assist people in a meaningful way,” and then offers as an example automatically changing the music in your car to an age-appropriate selection when you pick up your kids. Microsoft’s prototype “Nudge Bra” monitors heart rate, respiration, and body movements to detect stress and, via a smartphone app, triggers “just-in-time interventions to support behavior modification for emotional eating.” (A similar application for men was judged unfeasible since their underwear was too far from the heart — “That has always been the problem,” Carr deadpans.) They’re symptomatic of Silicon Valley’s reigning assumption, writes Carr, that anything that can be automated should be automated. But automatic music programming and diet encouragement — really, is that all?

Others extend these technologies to scenarios in which everything is centralized, rationalized, and Taylorized. In the futuristic reveries of PayPal’s co-founder Max Levchin, the ubiquity of networked sensors — in the world and in our bodies — will make it possible to eliminate the chronic inefficiencies of “analog resources” like houses, cars, and humans. That’s exactly what companies from Walmart to Uber are doing right now, but this only scratches the surface. Why not introduce “dynamically-priced queues for confession-taking priests, and therapists”? Why not have a car seat equipped with sensors that can notify your insurance company to increase the day’s premium when you put your toddler in it, and then reduce it when it turns out you’ve only driven two miles to the park? Levchin even contemplates maximizing the power of the human mind. Imagine dynamic pricing for brain cycles. With brain plug firmware installed, why not rent out your spare cycles while you sleep to solve problems like factoring products of large primes?

But, as Carr notes, if your insurance company can adjust your premium according to who’s in your car, it can also adjust it according to how many slices of pizza you eat. This is the nightmare world of Big Data, he says, where “puritanism and fascism meet and exchange fist bumps.” But Levchin offers it as a utopian vision, where technology can improve people’s lives by delivering “amazing opportunities they wouldn’t have in today’s world.” True, it introduces new risks — Levchin mentions bias and threats to privacy, and it hardly stops there — “but as a species,” he adds, “we simply must take these risks, to continue advancing, to use all available resources to their maximum.” As Carr notes, it’s that conspicuous willingness to break a few eggs that enables tech visionaries like Levchin to cast themselves as the protagonists of a heroic narrative. Yet when it comes to the crunch, all that centralized control doesn’t promise to make the future more exciting, just more efficient, like the electronically paced traffic on the Futurama freeways or the push-button kitchens in GE’s Progressland. To paraphrase what Karel Čapek said about intellectuals, has there ever been anything so awful and nonsensical that some technologist wouldn’t want to save the world with it?

¤


Geoff Nunberg teaches at the UC Berkeley School of Information. He writes on language and technology and is often heard on NPR as Fresh Air’s Language Guy. His most recent book is Ascent of the A-Word.
