Humans, machines, and characters: A not so modern love story
In 1966, MIT professor Joseph Weizenbaum wrote one of the first computer programmes to process natural language. His aim? To show how superficial communication between humans and machines really was. Its name was Eliza. And contrary to Weizenbaum’s intention, Eliza convinced many of its users that it understood them, and it provided the blueprint for interactive assistant technology, including Microsoft’s infamous “Clippy”.
This computer programme turned paperclip paved the way for Apple to introduce the first smartphone digital assistant 45 years later, and for Microsoft’s Cortana and Amazon’s Alexa after that.
No longer limited to the family computer, personal assistants can be found in our back pockets, on our mantelpieces, and in our children’s bedrooms. This physical proximity (a major determinant of relationship formation) and instantaneous access have made them omnipresent companions. They don’t just tell us whether to pack an umbrella; they teach our kids to read and make our parents feel less lonely.
This article explores our relationships with machines, from social robots to virtual influencers, and their ethical, psychological, and societal consequences.
The Meaning of “Things”
Before looking at our relationship with machines today, it’s worth considering our history with “things”, and what they have meant to us, more broadly.
We’ve attached meaning to things for as long as we’ve been able to make them. So much so that in Western cultures, broad stages of history are marked by the things we could make or how we used them, like the Bronze and Iron Ages, and later the Industrial Revolution. In that sense, the evolution of humanity hasn’t been measured by gains in intellect, but by the “things” we possess. As psychologist Mihaly Csikszentmihalyi explains, the things we interact with aren’t just tools for survival; they embody our goals and shape our identity. More than Homo sapiens, we’re also Homo faber: “the maker and user of objects, his self to a large extent a reflection of things with which he interacts.”
Today this meaning is no longer limited to the physical.
Apps like Instagram, Pinterest and Depop can be used as mood boards, enabling users to virtually collect goods as a means of generating new identities on- and offline. According to Mike Molesworth and Janice Denegri-Knott, these temporary states of ownership enable users “to initiate a journey of self-knowing through object knowing.” In other words, physical ownership isn’t necessary to facilitate identity discovery or projection, which explains why adding items to your wishlist (or forking out on an NFT) is sometimes enough in itself.
From Objects to Assistants and From Assistants to Companions
Technological advancement hasn’t just changed how we form our identity and create meaning; it’s also changed our relationship with machines. Before, technology was just another category of our “things”. The machine was an information system for humans to control: we were the masters, and our technological things our servants. Automation made this arrangement more efficient, but it upheld the existing dynamic: we still told the machine what to do, only now through a set of predefined instructions.
Then machine learning was introduced, and our relationship changed.
Why? Because, eventually, machines will be able to complete tasks without instructions from us, making them autonomous rather than merely automated. This allows our once passive information systems to become intelligent companions: more of an equal than a servant.
Kate Darling, a leading expert in robot ethics, talks of how “people will treat technologies like they’re alive, even though they know that they’re just machines.” Or, as Jeffrey Van Camp noted when he introduced a social robot, Jibo, into his home: “in just a week there’s a chasm between how I interact with Jibo and Alexa. For better and worse, I treat Jibo more like a person and Alexa like an appliance.”
Love, death, and robots
As you’d expect, higher-order robots, like the aforementioned Jibo, the entertaining AIBO and the therapeutic PARO, have been shown to encourage intimate relationships. But, as our meaning of “things” extends to the virtual, physicality is no longer necessary to facilitate this type of intimacy. This phenomenon is perhaps best explained by Ali, a participant in one of Pamela Pavliscak’s emotional design studies:
“Netflix’s recommendations have become so right for me that even though I know it’s an algorithm, it feels like a friend.”
But media coverage and science fiction have created a moral panic around this type of connection between human and machine. Take dating simulation games: when they first became popular in Japan, they were reported on with a tone of moralizing disgust, an attitude echoed in Western media. Following the story of the man who married Nene Anegasaki, a character from the dating simulation Love Plus, The New York Times Magazine described these games as a last resort for men who needed virtual women as a “substitute for real, monogamous romance.” Along with anime and manga, dating sims were blamed for Japan’s low fertility rate, with commentators sometimes going so far as to call the men who played these games “herbivores”, as if they were incapable of physical sexual desire.
With love simulation games growing in popularity outside of Japan more recently, the same concerns continue to arise. One particularly cynical take came from a Chinese reporter who argued that “the simplicity, consumerism, and hypocrisy of romantic simulation games reflects the love-free disease that belongs to this era.” By contrast, many of those who play these games, like Mystic Messenger player Wild Rose, don’t see virtual love as a replacement for real love but as an addition to it, arguing that these spaces can make your emotional life more stable and fulfilling because you’re able to freely explore unmet emotional needs and other ways of loving.
Companionship without Judgement
The perceived freedom and lack of judgement algorithms offer isn’t limited to dating and companionship simulation games. When Weizenbaum developed Eliza, much to his surprise, many users began forming an emotional attachment to the algorithm. They would confide in the machine, confessing problems and their innermost feelings. His secretary even requested that he leave the room for their conversations.
Although this type of attachment wasn’t his intention, this serendipitous finding has informed the design of other human-computer programmes, like the University of Southern California’s veteran triage system. This system, named Ellie, was built to help doctors diagnose those who may be experiencing mental illnesses like PTSD and depression by recognising speech and facial expressions. Gale Lucas, a social psychologist working on the project, explains how people react differently depending on whether they are told the system is operated by a person or is an autonomous computer program; when it’s the latter, people are more likely to open up. In fact, Lucas explains that “people are more willing to express negative emotions like sadness non-verbally, when they think Ellie is a computer compared to when they think she’s a human.”
“While human touch is necessary for our mental wellbeing, there are instances where its absence, and the absence of the social norms attached to it, is beneficial.”
The Commodification of Humanity: To Serve, To Entertain, To Sell (?)
Evidently, machines don’t need to be human-like, or even real, for us to form a connection with them. And technological advancement is only expanding the range of machines and characters we’re able to interact with, including virtual celebrities like Lu do Magalu, Guggimon, and Lil Miquela.
Virtual celebrities might be a new medium, but they aren’t necessarily a new concept. Characters have existed for as long as we’ve been able to tell stories. And the stories we tell are integral to our evolution and survival. As Yuval Harari explains, fiction has enabled us to not just imagine things, but to do so collectively: “We can weave common myths [...] that give Sapiens the unprecedented ability to cooperate flexibly in large numbers.” Arguably, virtual celebrities are just another form (albeit machine enabled) of storytelling.
These virtual celebrities are able to appear in a multitude of media. Take virtual YouTuber Kizuna AI as an example—while created for YouTube, she’s since appeared on prime-time talk shows, performed at festivals and sold out her own virtual tour. No longer passive voyeurs, humans now have the capacity to interact with their favourite characters, no doubt leading to more intimate relationships. While these characters may serve us by offering a more immersive form of entertainment, they boast significant earning power for their creators: an example of the evolving definition of celebrity and a new type of commercial commodity.
Virtual influencers have been applauded for their malleability. As influencer agency co-founder Harry Hugo explains, unlike their human counterparts, virtual influencers “can be available 24/7 and [...] can be whatever you want them to be.” Not only is their image controllable; through our interactions they can also learn from us. According to Aslada Gu, a product and innovation director, virtual influencers “can process and transform thousands, if not millions, of consumer behaviors into direct data.” As a result of their marketability and data-capturing capabilities, virtual influencers have been criticized for turning identity into a form of currency: “a money-making vessel that bends to the will of corporate interests.”
While the income virtual celebrities generate is undeniable, identity was already a form of currency before their creation. Perhaps virtual celebrities are just a consequence of one of the main climates they exist in: social media. Virtual or human, celebrity or otherwise, social media users are at the mercy of an algorithm created to drive engagement and conversion, resulting in an arena where individuals mine details of their lives in exchange for views: commodifying personal moments and examples of humanity.
But more than just consumers, social media users are commodities themselves. As Terry Nguyen writes, the social internet is “a place where users are induced in a state of ambient shopping and function as both consumer and commodity; they are virtual commodities in the data, advertising, and monetized interactions they provide, and consumers of the commodities touted by the targeted ads, influencers, and brands that float across their feeds.”
The social media companies that define the parameters of our digital worlds are now trying to expand our digital lives by creating the metaverse. Facebook defines the metaverse as “a set of virtual spaces where you can create and explore with other people who aren’t in the same physical space as you.”
In theory, the metaverse offers an opportunity to rewrite social norms and value systems, freed from cultural and economic pressures. But in reality, the metaverse, like our real world, is likely to become an exercise in ownership, awarding large sections to those who can afford to be there first, or at least the first to lay claim to it. Some commentators warn that it will be a place you’ll be forced to be, “where every device is smart (tracking you and charging fees and functioning as a constant advertisement for itself) and every other entity is swathed in proprietary content meant to signal its social credit.” The metaverse risks becoming, as one software developer described it, “virtual reality with unskippable ads”, and collaborations like Fenty and Arcane (curated beauty looks for an animated series) indicate that everything in this new world is for sale.
Sadly, signs of the potential monopolization of the metaverse are already appearing; it’s no coincidence that Facebook recently changed its name to “Meta”. And let’s not forget that Web3, the ambition to decentralise the world wide web as we know it, is predominantly being funded by big venture capital firms.
Looking to the Future: Implementing Ethical Design to Go Beyond “Payback” and Instead Measure the “True Cost” of Human-Made Creations
Change through technology has always been a part of our lives, whether through the invention of new materials, as in the Bronze and Iron Ages, or the new energy sources, processes and means of communication of the Industrial Revolution, and these changes have societal implications. As with previous advancements, we need to challenge ourselves to design and innovate ethically, ensuring effective safeguarding and, where necessary, regulation.
Casey Fiesler, an assistant professor at the University of Colorado Boulder and an expert on tech ethics, encourages tech designers and entrepreneurs to look beyond the bottom line. She argues that while financial health is critical, it shouldn’t be the only measure of success. Take YouTube’s algorithm as an example: in a bid to boost ad revenue, it rewards content with high engagement, which has had the consequence of promoting conspiracy theories on the platform. In response to public outrage, YouTube changed its algorithm to ensure harmful content wouldn’t be recommended. As Fiesler explains, “it was an ethical call, and I’m sure it led to lost revenue - but sometimes you have to do what’s right for the good of society.”
We can’t, and shouldn’t, treat big tech as the solution to our civilisation’s problems, but it does have a critical role to play in meeting the challenges of a new century. As journalist Derek Thompson writes, transformative advances will require participation from local, state, and federal government, as well as from the people they serve.
Our things, our machines and, more recently, our virtual characters and worlds are a great source of comfort. It’s why people name and dress up their Roombas, attach emotional value to their Netflix recommendations and dedicate hours to creating the perfect island in Animal Crossing. But as with everything we humans create, we must consider the potential societal impacts and evaluate who’s really benefiting, and at what cost.
“Fundamentally, we must challenge ourselves to create machines that enhance, not just monetise, our reality.”