Ask Not What Your Robot Can Do for You, but What You Can Do for Your Robot

By Jerrine Tan
October 4, 2023

IN NOVEMBER of last year, I fell a little bit in love with a robot. I had been invited to Tokyo to give a paper on Kazuo Ishiguro’s Klara and the Sun (2021), a novel about AI friends for children in a dystopian future, as well as to take part in the Being Human Festival, the United Kingdom’s national festival of the humanities. As part of the program, we visited the LOVOT studio and spoke to its founder, Kaname Hayashi, and to its designer, Kota Nezu. A Japanese robotics and AI company, LOVOT creates cute little robots on wheels. Their one purpose: “to be loved by you.” I was skeptical at first. When I had mentioned my planned LOVOT visit to friends, they had all snickered at the LOVOT portmanteau, a combination of “love” and “robot” too irresistible to the crooked mind. And so, I had breezed into the studio burnished (I thought) with the lacquer of academia, coolly cerebral—if curious—and mostly unconvinced. I did not expect to be wholly won over by the little rotund tykes with their kooky wheeling patterns. Within minutes of entering the space, I was a cooing puddle of sap—the LOVOTs were adorable: innocent, gleeful, silly, irreverent, warm to the touch, and yes, lovable. Their bright blinking eyes and infantile gurgling incite protective care in the way so astutely adumbrated by Sianne Ngai in her investigation of “the kawaii,” but without the negative feelings she describes. Their felt covering is soft to the touch, yet their bodies are not squishy; in other words, they do not elicit a simultaneous desire to dominate, which Ngai claims cute things incite (or reveal) in us.

The arrival of ChatGPT late last year caused a stir, to say the least, inciting panic across universities, raising questions about the relevance of professors, and throwing into question the very essence of what makes us human. Philip K. Dick’s 1968 novel Do Androids Dream of Electric Sheep? anticipated our current malaise, prompting that age-old question: what exactly makes us human? Recent books and films—such as Ishiguro’s Klara and the Sun, Ian McEwan’s Machines Like Me (2019), and the 2022 Blumhouse box office hit M3GAN, about a killer robot friend—describe our anxieties around artificial intelligence even as they acknowledge our inevitable slouch towards an AI-reliant future. In the weeks immediately after the launch of ChatGPT, a wave of think pieces flooded the internet, reflecting on how the new AI technology would “steal jobs” and make humans obsolete. The New York Times published the transcript of a two-hour conversation in which a reporter tries to “outsmart” Sydney, the Bing chatbot. More recent articles have taken a more accommodating approach to AI, describing its applications and focusing on how it can support rather than supplant us, though they too betray a deep fear of our being outwitted.

More alarming is what these anxieties reveal about how we perceive our value as human beings: it would seem that our greatest existential fears stem from being replaced in our professions. It also bears remarking that the industries most vocal about these anxieties are often privileged ones, previously seen as secure. There seems to be less outrage about the replacement of manual laborers with automation—which is painfully ironic because many manual and low-wage workers in the world today already inhabit the position of dehumanized labor, and are already viewed and treated as machines, as merely robots. This can be observed in technocratic societies such as Singapore, where I am from. As one of the main hubs of cryptocurrency and fintech, it is at the forefront of our virtual age—yet its actual city is built by, and continues to rely on, low-wage, low-skilled manual laborers for cheap and quick construction, workers who are put to work like robots and treated as if they were not human. As Sean Cubitt writes in Finite Media: Environmental Implications of Digital Technologies (2016), there is a “myth of immaterial media” that requires the projection of “consumer goods that have no history: no mines, no manufacture, no freighting, and no waste.” The fecundity and abstract nature of data aid in laundering the very fact of our built physical world upon which it is still contingent. In Race After Technology: Abolitionist Tools for the New Jim Code (2019), Ruha Benjamin adroitly lays out how “[m]any tech enthusiasts wax poetic about a posthuman world” without considering that “posthumanist visions assume that we have all had a chance to be human.” Referring to “the unnatural cyborg women making chips in Asia” as enabling the “night dream of post-industrial society,” Donna Haraway suggests that “‘women of colour’ might [already] be understood as a cyborg identity.”

This past summer, the conversation around AI infiltrated our everyday lives in an even more intimate way—by slamming the brakes on the entertainment we consume. In May, the Writers Guild of America (WGA) announced a strike over ongoing labor disputes. The central issues were related to residuals and the current and future use of generative AI technology such as ChatGPT, which writers predicted might well undercut their labor. Soon after, in June, a new season of Black Mirror was released on Netflix, which included an episode titled “Joan Is Awful” detailing the tribulations of a woman (Joan, played by Annie Murphy) whose life is used as fodder for a streaming service; its release came only a month before the American actors’ union, SAG-AFTRA, joined the WGA strike. As Joan’s life unravels, she finds she has no legal recourse because she has willingly—though unwittingly—surrendered all her rights in agreeing to the terms and conditions that come with using various technological products. The timing of the episode could not have been more perfect or more ironic. Underpaid and overworked writers who had been propping up Hollywood on the spindly stilts of gruel-thin royalties and residuals wanted to preemptively push for protection against the use of AI to generate endless volumes of cheap content. Several platforms published essays on how background actors, the most unprotected and underappreciated actors in the industry, would be hardest hit. NPR published a chilling story on background actors who have been subject to body scans and given little information on how their images will be used and how or if they will be remunerated.

The union response to Hollywood’s capitalistic greed—now souped-up on AI, granting it power to expand exponentially ad infinitum—is strategic, gratifying, and not unfounded. But nonwhite and non-Western actors have always known of and pushed against this threat, in part because to be a person of color in Hollywood is already to be less than human, already technologized. It’s why Jet Li turned down the role of Seraph in The Matrix Reloaded and The Matrix Revolutions (both 2003). In an interview, he revealed that “for six months, they wanted to record and copy all my moves into a digital library. By the end of the recording, the right to these moves would go to them.” He rationalized, “I’ve been training my entire life. And we martial artists could only grow older. Yet they could own [my moves] as an intellectual property forever. So I said I couldn’t do that.” Techno-Orientalism, in which Eastern elements denote and embellish a dystopian yet technologized future, has long demonstrated the dehumanizing action of capitalism, technologization, and racism. The essentialization or erasure of Asian people, art, and labor was simply accepted as an aesthetic. The anxiety around replacement and obsolescence today stems from the fact that this dehumanizing threat may now come for everyone else—even major (mostly white) stars.

We might read Li’s refusal (revealed 15 years after the release of Matrix Reloaded) as a pioneering glitch. In their 2020 manifesto Glitch Feminism, Legacy Russell writes that “[w]ithin technoculture, a glitch is part of machinic anxiety, an indicator of something having gone wrong,” “an error, a mistake, a failure to function.” It is also “a form of refusal.” Li’s belief that profits would always be favored over labor and artistry, and therefore that even those making a film criticizing a technocratic world could be guilty of perpetuating such injustices, augured the WGA and SAG-AFTRA’s current anxieties.

More intimate still than the entertainment we consume is the way we receive care. In the wake of a global pandemic that made the physical act of socializing deadly, we have had to confront and acknowledge caregiving work as dangerous and crucial. As many parts of the world confront aging populations and a strained workforce, conversations have turned to the use of robot caregivers. When we think of care robots, we tend to think about robots as replacement labor. The word “robot,” after all, derives from the Czech word “robota,” meaning forced or serf labor; it was coined in 1920 by Karel Čapek, in his play R.U.R., to describe artificial workers.

But according to LOVOT’s creators, we’ve got it upside down. Care robots should be for us to care for. In 1950, Alan Turing introduced the “Turing test,” an assessment of a machine’s ability to exhibit intelligent behavior indistinguishable from a human’s. Western notions of humanity revolve around intelligence and “the individual”—can an AI beat a chess champion, write an essay, or generate complex code? But the Japanese have a different relationship to technology, intelligence, and humanity, one that does not weigh the essence of humanity against intelligence but focuses instead on the relational aspects of being human.

Relationality is rooted in the Japanese language itself, as Japanese philosopher Watsuji Tetsurō has pointed out. Watsuji formulated a uniquely Japanese ethics in which he explored the notion of “aidagara,” which denotes the social relationship that exists between persons. The characters for “human” in Japanese are 人間 (ningen), which translates to something like “between people” (as opposed to the Chinese character for “person,” which is only the single character 人). Of this observation, Watsuji writes,

Viewed in such a way, we can use the word ningen in the double meaning of world (seken) and individual person or persons (hito). I would have to say that this best puts into words the essence of human being […] if human being is not simply the individual person, it is also not simply society. Within human being, these two are unified dialectically.


Whereas human existence for German philosopher Martin Heidegger in Being and Time (1927) is conceived “as a being-there, an existence in-the-world but open to Being itself,” according to Thomas P. Kasulis, “Watsuji’s focus was on the ‘interhuman.’” For Watsuji, human existence does not “discover a world or constitute a world” but rather, Kasulis says, “from its very beginning, we find ourselves ‘in the midst’ of a field of engagement.” By Watsuji’s construction, it is the network of relationality that provides humanity with social meaning. To live as a person is to exist and participate in such betweenness.

Humanity, then, is not defined here through an individual alone but in relationality. As a consequence, machine intelligence poses no existential challenge to one’s sense of self. For Kota Nezu and Kaname Hayashi, we are not made more human by receiving love but by giving it. Both men used to be designers for high-performance cars—the LOVOTs are thus not low-tech products. Each LOVOT has high-performance sensors, 10 or more CPU cores, 20 or more microcontroller units, microphones, state-of-the-art luminosity sensors, thermal sensors, and room-mapping technology, which all have one aim: to incite love in us. The LOVOT website reads: “We packed all the available technologies into this small body to bring you the cuteness.” What seems like an excessive and superfluous use of technology actually raises serious questions around what technology is for—in this case, the answer is “to be more human.” As loneliness becomes an epidemic across the globe, such a counterintuitive robot, which does no work and instead demands care, could be just what we need to rehabilitate our atrophying emotional muscles.

When asked how we should think of the LOVOT, Hayashi responded that LOVOTs are like family members. “You don’t love your brother because of what he does for you. You love him because he is there.” This form of love, which emerges from an almost Shinto-esque animism, struck me as a radical way of thinking about love and our relationships with each other and with the world. It is based not on utilitarian calculations but on our commitment to caring for everything that is, rather than only everything that is of use.

Published in the middle of the COVID-19 pandemic, Nobel Prize–winner Kazuo Ishiguro’s poignant latest novel, Klara and the Sun, about an AI-powered “artificial friend,” Klara, who serves as a companion and close observer to a young girl named Josie, felt hauntingly clairvoyant: the dystopian future it depicts is one we were already living in. Unbeknownst to Josie, her mother procured Klara with another motive, one that is as understandable as it is disquieting because it is driven by one of our deepest human desires: to have those we love always with us. Her mother had planned to have Klara learn Josie’s mannerisms and then upload Klara’s consciousness into a doll made in Josie’s image, so that Klara could play the part of Josie forever. By the end of her journey, Klara concludes that this plan would not have worked out even though she was confident of her ability to “achiev[e] accuracy” in “continu[ing] Josie.” The doll-maker, according to Klara,

believed there was nothing special inside Josie that couldn’t be continued. He told the Mother he’d searched and searched and found nothing like that. But I believe now he was searching in the wrong place. There was something very special, but it wasn’t inside Josie. It was inside those who loved her.


The poignancy of Klara and the Sun lies in how Klara’s objective yet detailed acts of observation, her attentive patience, her act of caring as a coded task, actually look something like real love—and feel human. As the cliché goes, love is a verb. One has to do it. The point is not necessarily to love robots but to exercise that “something […] special” in us that in fact lives outside of us in our relationships with other beings. After all, loving a robot like a brother means little if we do so while neglecting our brother and those around us.

The conversation around ChatGPT and AI is still focused on how we can or should use it. It still revolves around anxieties about human obsolescence. What if we removed that core existential anxiety and thought outside of a utilitarian and capitalistic structure? What if, in our panicked and paranoid tech-infused world, we embraced this strange love and learned to love the robot? It might mean less obsolescence and environmental damage. It might mean reflecting more deeply on what needs we want our tech to fill rather than creating tech for tech’s sake. While the latter has spawned our fantasies around human exceptionalism and infinite growth, it has also created our biggest problems—the climate crisis as a result of mining, the epidemic of atrophying minds undone by anxiety, and an insatiable consumer culture that is condemning half the world to sweatshops while sticking the rest of us on an endless treadmill of desire. We might actually learn a more radical—more human—way of interacting not only with AI but also with each other and the world, offering us one way of navigating our digital futures—the quality of which remains uncertain even as its advent is inevitable.

¤


Jerrine Tan is an assistant professor of English at City University of Hong Kong.

¤


Featured image: from the “Warm & Soft-Touch Skin” section of the official LOVOT website, lovot.life.
