AI assistants continue to reinforce sexist stereotypes, but queering these devices could help reimagine their relationship to gender altogether.
This November, the Smithsonian’s FUTURES festival, featuring innovations that are set to change the world, will include a familiar face. Or, rather, voice: Q, introduced in 2019 as the first “genderless AI voice,” is a human voice for use in digital assistants specifically created to be gender-ambiguous.
“Q was designed to start a conversation around why we gender technology when technology has no gender to begin with,” says Ryan Sherman, one of Q’s co-creators. To design the voice, a team of linguists, sound engineers, and creatives collaborated with nonbinary individuals and sampled different voices to land on a sound range they felt had the potential to disrupt the status quo and represent nonbinary people in the world of AI.
When Q was announced several years ago, it was hailed as “the genderless digital voice the world needs right now,” and an acknowledgment of the harm of feminizing assistants, which perpetuates misogynistic stereotypes of women as submissive and obedient. It won praise from a United Nations report on gender divides in digital skills. The same report warned that “nearly all of these assistants have been feminized—in name, in voice, in patterns of speech and in personality.” The title of the report—“I’d blush if I could”—was the answer Siri originally provided to users who called it a bitch. (Nowadays, Siri simply replies, “I won’t respond to that.”) In another sign of progress, earlier this year Apple eliminated the default “female” voice for Siri, now including the option for a male voice and allowing US users to choose from a set of voices referred to as voices 1, 2, and 3. Similarly, Google Assistant and Cortana currently let users select a male voice, further proving that companies do respond to public pushback about their products.
Yet uprooting the feminization of digital assistants will take more than just adding a male voice option to the roster. And even the idea of a “genderless AI” voice that registers somewhere between what would be traditionally considered masculine and feminine pitch ranges reveals some of the misconceptions we still confront when thinking about ways to avoid reinforcing stereotypes. In particular, Q might strengthen the outdated belief that nonbinary individuals are neither men nor women, but something in the middle of the binary, rather than outside of it. Instead of striving for “neutrality,” we must reimagine the future of the relationship between digital assistants and gender altogether.
One path forward comes from Yolande Strengers, associate professor of human-centered computing at Monash University and coauthor, with Jenny Kennedy, of The Smart Wife: Why Siri and Alexa Need a Feminist Reboot. They don’t think the solution is to remove gender from the equation altogether, because “this oversimplifies the ways in which these devices treat gender, which are not only gendered by voice, but by the types of things that they say, their personalities, their form, and their purpose,” Strengers says. Instead, they propose queering the smart wife so that digital assistants may exist in defiance of gender stereotypes.
Queering the smart wife could mean, in its simplest form, affording digital assistants different personalities that more accurately represent the many versions of femininity that exist around the world, as opposed to the pleasing, subservient personality that many companies have chosen to adopt.
Q would be a fair example of what queering these devices could look like, Strengers adds, “but that can’t be the only solution.” Another option could be bringing in masculinity in different ways. One example might be Pepper, a humanoid robot developed by Softbank Robotics that is often ascribed he/him pronouns and is able to recognize faces and basic human emotions. Or Jibo, another robot, introduced back in 2017, that also used masculine pronouns and was marketed as a social robot for the home, though it has since been given a second life as a device focused on health care and education. Given the “gentle and effeminate” masculinity performed by Pepper and Jibo—for instance, the former responds to questions in a polite manner and frequently offers flirtatious looks, and the latter often swiveled whimsically and approached users with an endearing demeanor—Strengers and Kennedy see them as steps in the right direction.
Queering digital assistants could also mean creating bot personalities to replace humanized notions of technology. When Eno, the Capital One banking chatbot launched in 2019, is asked about its gender, it playfully replies: “I’m binary. I don’t mean I’m both, I mean I’m actually just ones and zeroes. Think of me as a bot.”
Similarly, Kai, an online banking chatbot developed by Kasisto—an organization that builds AI software for online banking—abandons human characteristics altogether. Jacqueline Feldman, the Massachusetts-based writer and UX designer who created Kai, explained that the bot “was designed to be genderless.” Not by assuming a nonbinary identity, as Q does, but rather by assuming a robot-specific identity and using “it” pronouns. “From my perspective as a designer, a bot could be beautifully designed and charming in new ways that are specific to the bot, without it pretending to be human,” she says.
When asked if it was a real person, Kai would say, “A bot is a bot is a bot. Next question, please,” clearly signaling to users that it wasn’t human nor pretending to be. And if asked about gender, it would answer, “As a bot, I’m not a human. But I learn. That’s machine learning.”
A bot identity doesn’t mean Kai takes abuse. A few years ago, Feldman also talked about deliberately designing Kai with an ability to deflect and shut down harassment. For example, if a user repeatedly harassed the bot, Kai would respond with something like “I’m envisioning white sand and a hammock, please try me later!” “I really did my best to give the bot some dignity,” Feldman told the Australian Broadcasting Corporation in 2017.
Still, Feldman believes there’s an ethical imperative for bots to self-identify as bots. “There’s a lack of transparency when companies that design [bots] make it easy for the person interacting with the bot to forget that it’s a bot,” she says, and gendering bots or giving them a human voice makes that much more difficult. Since many consumer experiences with chatbots can be frustrating, and so many people would rather speak to a person, Feldman thinks affording bots human qualities could be a case of “over-designing.”
This precise issue garnered a great deal of attention with the launch of Google Duplex, a technology now integrated into Google Assistant, which eerily mimics a human voice to execute tasks such as making restaurant reservations or setting up an appointment for a haircut. Following a slew of accusations that the technology was unethical and “horrifying,” Google stated the robot would identify itself as a bot when calling on behalf of users. In 2019, California became the first state to require bots to identify themselves as such online, and though the law has been described as hollow and deeply flawed, it remains the only legal advancement on the matter in the US.
To reimagine the future of the relationship between digital assistants and gender, companies must be willing to take a hard look in the mirror and ask difficult questions about how truly groundbreaking they are willing to be. As of this moment, 75 percent of professionals in the fields of artificial intelligence and data science are male. And it shows. Queering these products will only be possible if diverse women and nonbinary people play a significant role in the process of designing them. To think outside of traditional binary boxes, companies must be willing to understand that innovation cannot exist in the absence of difference. There are many ways forward, and exploring the numerous possibilities is precisely the point. “Queering is about diversity. It is not about saying, ‘OK, here’s the one solution that’s going to solve all of your problems,’” Strengers said. “It’s about representing a very diverse set of experiences and options that take us away from the norm and disrupt the heteronormative model that’s been very embedded into these devices so far.”
All Rights Reserved for Salomé Gómez-Upegui