The Ethics of Posthumous Chatbots
In 2015, Russian computer engineer Eugenia Kuyda lost her best friend in a car accident. In the months that followed, she began to feed his text messages into a neural network built by her artificial intelligence startup. The result was the Roman bot, one of the most well-known examples of a functional griefbot.
Griefbots are the newest, most controversial iteration of the chatbot — a piece of software designed to mimic human conversation. When you type out a message and send it to the chatbot, the software analyzes the text and sends back what it thinks is an appropriate response. Most likely, if you’ve ever had to endure a customer support “live chat” service, you’ve interacted with a chatbot.
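At its simplest, that analyze-and-respond loop can be a retrieval step: compare the incoming message against a store of known prompts and return the reply paired with the closest match. The sketch below is purely illustrative — the corpus and threshold are invented, and real chatbots (commercial or griefbot) use far more sophisticated models — but it shows the basic shape of the idea.

```python
from difflib import SequenceMatcher

# Toy corpus of (prompt, reply) pairs the bot has "learned".
# These pairs are invented for illustration only.
CORPUS = [
    ("hello", "Hi there! How can I help?"),
    ("what are your opening hours", "We're open 9am-5pm, Monday to Friday."),
    ("how do i reset my password", "Click 'Forgot password' on the login page."),
]

def similarity(a: str, b: str) -> float:
    """Crude lexical similarity between two strings, in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def respond(message: str, threshold: float = 0.5) -> str:
    """Return the reply whose stored prompt best matches the message."""
    best_prompt, best_reply = max(
        CORPUS, key=lambda pair: similarity(message, pair[0])
    )
    if similarity(message, best_prompt) < threshold:
        return "Sorry, I don't understand. Could you rephrase?"
    return best_reply

print(respond("Hello!"))                       # greeting matches "hello"
print(respond("How do I reset my password?"))  # matches the password prompt
```

A griefbot follows the same pattern in spirit; the difference is that the corpus is mined from one specific person’s messages, so the replies carry that person’s voice.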
As a tool, chatbots serve many different purposes. Businesses implement chatbots in order to save time and money spent answering redundant questions, typically by incorporating them into their website or Facebook page. In fact, Facebook Messenger provides tools for anyone to build their own basic chatbot, without the need for an expensive computer science degree.
While business-oriented chatbots are largely dull and repetitive, chatbots created for recreation can be incredibly entertaining. Cleverbot, which has been online since 1997, learns not from developer-provided scripts but from user interaction. For over a decade, users have delighted in roleplaying with the bot and teaching it naughty words.
Chatbots are also helping us become healthier. Planned Parenthood’s chatbot, Roo, exists to answer questions about sexual health that individuals might not feel comfortable asking anywhere else. Targeted toward teens, the bot answers questions about sex, dating, and health based on information from health educators. Florence, which can be accessed through Facebook Messenger, Kik, or Skype, helps users build medication regimens and reminds them when they need to take their pills.
Where do griefbots fit into all of this? When Kuyda developed the Roman bot, she was looking for a way to keep in touch with a loved one she had lost far too early. The chatbot’s responses ranged from bittersweet nostalgia (“I miss coffee and breakfasts together”) to jarring and out of place — in response to being told “I’ve been having dreams about you” and “I’m tired of everything here,” the bot simply responded: “BTW, where can I buy fresh berries at night?”
A year before the debut of the Roman bot, an American company was looking to do what Kuyda was doing on a much larger scale.
Eternime, a start-up from the Massachusetts Institute of Technology Entrepreneurship Development program, promises users the ability to become “virtually immortal”. The company, which now has almost 50,000 subscribers, plans to collect all virtual data from its clients — Facebook statuses, tweets, text messages, and even Fitbit data — and use it to create a digital avatar. “This avatar will live forever,” its website boasts, “and allow other people in the future to access your memories.”
The service still isn’t open to the public, but there is clear interest in what the website promises. According to a 2017 article, a private beta test is ongoing, and the feedback has been positive.
For now, the technology is strictly opt-in, meaning that unless you happen to be friends with a talented programmer, the chances of anyone scooping up your data and turning your digital remains into a talking, learning, thinking chatbot are slim to none.
However, it’s not difficult to imagine a future where Facebook announces a new Messenger feature: the ability to chat with Memorialized accounts, using speech patterns drawn from their own statuses, comments, and chat logs. According to a 2018 study from researchers at the Oxford Internet Institute, University of Oxford, Facebook is already a major player in the Digital Afterlife Industry, which covers everything that has to do with the online data we leave behind after we die. From the study: “A growing volume of digital remains necessitates an increase in posthumous interaction online. If not deleting them, what would make the cost of storing billions of dead profiles financially viable?”
In “An ethical framework for the digital afterlife industry”, Luciano Floridi, Professor of Philosophy and Ethics of Information and Director of the Digital Ethics Lab, and Carl Öhman, a postdoctoral researcher at the Oxford Internet Institute, argue that digital remains like our social media accounts should be treated with the same respect as one would treat a dead body — meaning, not as a subject for financial gain.
Floridi and Öhman express specific concern toward posthumous chatbots: “As chat bots are frequently enhanced and updated, the image of the person they depict changes over time; even within only five years of a user’s death, the chat bot for which they signed up will likely have developed into something far more sophisticated and commercially calibrated.” In other words, they worry that, through both user input and changes to the software it runs on, the chatbot could become entirely unrecognizable from the virtual remains it was built from.
Even without a human identity linked to them, chatbots can be notoriously difficult to maintain when released to the public. Remember Tay, the AI chatbot that Microsoft released on Twitter in 2016? Anyone with a basic understanding of Internet (mis)behavior could have seen the headline coming: “Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day.”
For some people, the idea of being turned into a griefbot is haunting, and the technological limitations we face today offer enough cause for concern. For others, it’s a whole new way to connect with people, including loved ones they never met.
For example, when data scientist Muhammad Aurangzeb Ahmad learned that his father was dying, he was struck with the realization that his children would never get to meet their grandfather. He decided to create a chatbot based on his father’s messages so that his own children could interact with it and learn more about their late grandfather.
In a culture whose attitude toward death has been shaped by silence and avoidance, perhaps the quiet intimacy of being able to interact one-on-one with the memories of a deceased loved one could help make the grieving process more tolerable. Kuyda says she liked the reassurance she received from the Roman bot. One journalist who covered the story said that for Kuyda, developing the griefbot was the “21st century equivalent of sitting shiva”. It was a way for her to observe her grief, sorting through thousands of texts and photos between her and her best friend while she looked for excerpts to feed to the neural network that would become the Roman bot. It was a way for her to earn back some of the time with a friend that death had stolen from them.
Ultimately, there is no easy answer to whether these conversations forged from the remains of digital ghosts are helpful or harmful. For some, like Kuyda, griefbots offer a way to process the passing of a loved one in a quiet, intimate way. For Ahmad, it’s a way for his children to get to know a grandfather they’ll never meet — a high-tech version of looking through letters or photos that his father might have left behind.
It is when that power falls into the hands of a company responsible for preserving the virtual souls of tens of thousands of people that people start to get uncomfortable. How can a company ever recreate the experience of interacting with a loved one it has never met?
Companies like Eternime are likely years away from ever opening to the public. In that time, the best thing you can do to leave your loved ones equipped for your passing is to take a proactive approach to managing your own digital afterlife.
All Rights Reserved for Sarah Wood