We need to question our motivation for developing robots that automate blessings, hear confessions, or chant at funerals.
One of the charges against Socrates was that his arguments were like robots. As the Greek philosopher approached his own trial, Euthyphro told Socrates, “You are like Daedalus.” He meant that just as Daedalus made automata that moved on their own in Greek myth, Socrates’ arguments were so persuasive that his ideas seemed to move under their own power. Even 2,500 years ago, automata inspired both fascination and fear.
I recently speculated about whether a machine could have a mystical experience. If we aren’t careful, the claim of divine inspiration can make the mystic’s words influential. When someone, whether human or machine, claims to have peeked behind the veil, we don’t know whether the prophet or the mystic has really glimpsed the divine. We only know what they claim, and it’s up to us to decide whether to trust them.
Deus ex machina
My interest in the connection between religion and robots is related to the charge against Socrates, and it’s a pragmatic interest rather than a technical one. What matters is not whether we have invented true artificial intelligence, but whether we believe we have invented it. If we trust the machine, we might let it function as a mystic or a priest, even if it isn’t one.
This raises the interesting question of what to do when someone makes a machine that is actually intended to play the role of clergy. Some pastors joke that they help people “hatch, match, and dispatch,” by celebrating births, weddings, and funerals. They joke, but even if we aren’t religious, we do tend to trust professionals to guide us through those serious moments. A few years ago, Mark Zuckerberg suggested that Facebook could play a similar role, giving meaning to lives just as a pastor does for a church. Given the amount of trust we put in clergy — and given the many examples of Facebook’s untrustworthiness — Zuckerberg’s suggestion is alarming. What does that trust entail?
That’s an important question, because we’re being given more and more opportunities to trust machines to act in the roles of clergy. The company SoftBank Robotics created Pepper the robot to chant at Buddhist funerals in Japan, and a church in Germany programmed a machine to pronounce traditional blessings. Very recently in Dubai, the government’s cultural and Islamic affairs agency IACAD launched the first-ever “Virtual Ifta” that uses A.I. to issue fatwas. Other groups have experimented with machines that can hear confessions, offer prayers, or even administer sacraments.
Religious communities will need to decide whether they accept machines performing these functions within their traditions, but there’s a bigger issue that affects all of us: these machines are tools we have made, and to various degrees, they already “make arguments move around.” If they persuade us with voices that sound divine, we only have ourselves to blame.
Ursula Le Guin once wrote that “a machine is more blameless, more sinless even than any animal. It has no intentions whatsoever but our own.” The function of machines is the result of their design, even if the designers did not intend that function. As Charles Sanders Peirce wrote, even if we eventually make machines that can “wind their way through the labyrinths” of complex thinking, “the machine would be utterly devoid of original initiative, and would only do the special kind of thing it had been calculated to do.”
Perhaps someday Peirce will be proven wrong, and we will have machines that act originally and creatively. But in general we want machines that do what we tell them to do. We might want a machine to write original music, but we don’t want too much creativity; what we want is a machine that figures out what people already like, and writes songs that will sell. Only quirky academics are likely to pay for a machine that writes songs that machines want to hear. Peirce adds, with some irony, “We no more want an original machine, than a housebuilder would want an original journeyman, or an American board of college trustees would hire an original professor.”
So we might not want a truly mystical machine, but maybe we could use machines that do the best things clergy do for us. A machine that resembles a human could chat all night with a lonely person, and might make a very good counselor. It could offer comforting words at the bedside of someone who suffers from dementia, or who needs a listening ear. It could read stories or sing songs. Why not automate the singing of hymns, the reciting of scripture, the chanting of prayer, the pronouncement of blessings? All of those things are desirable, at least to some people.
But are there kinds of work, like caring for our communities and for our own bodies, that we should not automate? Tools amplify our efforts. They also amplify our intentions, and maybe our intention is to distance ourselves from the difficult work of care. Our machines might offer one kind of care, while being the physical expression of our lack of interest in those who need the care.
Here’s another question: What risks come with the benefits of care-machines? As Euthyphro and Socrates point out, automated ideas and religious authority can be very persuasive. Automata that speak and act with religious authority could be doubly persuasive. We worry about the influence of corrupt human clergy; what political, ethical, and economic influence could automated clergy have?
And here’s a third question: A machine can repeat ritualized “hatch, match, and dispatch” words for us, but can it share our experience as an empathetic companion? And if it can’t, does that diminish the meaning of the ritual?
What has it got in its pocketses?
In Jonathan Swift’s Gulliver’s Travels, the Lilliputians try to understand Gulliver by looking in his pockets. They have never seen a pocket watch before, so they observe how he uses it. They decide it must be “the god he worships: for he seldom did any thing without consulting it. He called it his oracle, and said it pointed out the time for every action of his life.”
The pocket watch was a new technology in Swift’s time. At first, pocket watches helped us to be on time. Little by little, we shifted from measuring our lives in hours to measuring them in seconds. The technology we invented to help us observe time wound up changing the way we viewed our own lives. There is a lesson here.
Paul Virilio puts a finer point on this: “When you invent the ship, you also invent the shipwreck; when you invent the plane you also invent the plane crash; and when you invent electricity, you invent electrocution… Every technology carries its own negativity, which is invented at the same time as technical progress.”
Whether we believe in gods or not, our technologies can begin to function like gods, or like the priests that tell us how to behave. Even if we don’t intend them to, our machines can become our oracles, and where there are oracles, there are people ready to profit from those oracles.
Pandora’s Facebook Box has been opened. I don’t know if robots can be priests, but some are beginning to function like priests. This calls for care on our part, and I don’t think it is wise to expect a machine to care on our behalf.
All Rights Reserved for David O’Hara