Designing a Moral Machine
Artificial intelligence is learning right from wrong by studying human stories and moral principles.
Back around the turn of the millennium, Susan Anderson was puzzling over a problem in ethics. Is there a way to rank competing moral obligations? The University of Connecticut philosophy professor posed the problem to her computer scientist spouse, Michael Anderson, figuring his algorithmic expertise might help.
At the time, he was reading about the making of the film 2001: A Space Odyssey, in which spaceship computer HAL 9000 tries to murder its human crewmates. “I realized that it was 2001,” he recalls, “and that capabilities like HAL’s were close.” If artificial intelligence was to be pursued responsibly, he reckoned that it would also need to solve moral dilemmas.
In the 16 years since, that conviction has become mainstream. Artificial intelligence now permeates everything from health care to warfare, and could soon make life-and-death decisions for self-driving cars. “Intelligent machines are absorbing the responsibilities we used to have, which is a terrible burden,” explains ethicist Patrick Lin of California Polytechnic State University. “For us to trust them to act on their own, it’s important that these machines are designed with ethical decision-making in mind.”
The Andersons have devoted their careers to that challenge, deploying the first ethically programmed robot in 2010. Admittedly, their robot is considerably less autonomous than HAL 9000. The toddler-size humanoid machine was conceived with just one task in mind: to ensure that homebound elders take their medications. According to Susan, this responsibility is ethically fraught, as the robot must balance conflicting duties, weighing the patient’s health against respect for personal autonomy. To teach it, Michael created machine-learning algorithms that let ethicists plug in examples of ethically appropriate behavior. The robot’s computer can then derive a general principle that guides its activity in real life. Now they’ve taken another step forward.
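The Andersons’ published system is more sophisticated than this, but the core move, inferring a general decision rule from cases an ethicist has already judged, can be sketched in a few lines. The duty scale, the example cases, and the perceptron-style learner below are illustrative assumptions, not their actual algorithm.

```python
# A minimal sketch, not the Andersons' actual code, of deriving a general
# principle from ethicist-labeled cases. Each case records how strongly an
# action would serve two duties, benefit to the patient and respect for the
# patient's autonomy (an assumed -2..+2 scale), plus the ethicist's judgment
# of whether the robot should notify an overseer that a dose was refused.

CASES = [
    # (benefit_to_patient, respect_for_autonomy, notify_overseer)
    (+2, -1, True),   # skipping the dose risks serious harm
    (+1, -1, False),  # a minor benefit does not outweigh the refusal
    (+1, -2, False),  # strong, informed refusal over a small benefit
    (+2, -2, True),   # imminent danger overrides even a firm refusal
    ( 0, -1, False),  # no real benefit at stake
]

def learn_principle(cases, epochs=200, lr=0.1):
    """Perceptron-style learning of duty weights and a decision threshold."""
    w_benefit, w_autonomy, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for benefit, autonomy, notify in cases:
            score = w_benefit * benefit + w_autonomy * autonomy + bias
            if (score > 0) != notify:
                # Nudge the weights toward the ethicist's judgment.
                direction = 1.0 if notify else -1.0
                w_benefit += lr * direction * benefit
                w_autonomy += lr * direction * autonomy
                bias += lr * direction
    return w_benefit, w_autonomy, bias

def should_notify(principle, benefit, autonomy):
    """Apply the learned principle to a situation the ethicists never labeled."""
    w_b, w_a, bias = principle
    return w_b * benefit + w_a * autonomy + bias > 0

principle = learn_principle(CASES)
print(should_notify(principle, benefit=2, autonomy=0))  # clear benefit, no refusal at stake
```

The learned weights act as a crude principle: roughly, how much expected benefit it takes before overriding a patient’s stated wishes is judged acceptable. Richer case representations and learning methods change the details, but the flow (labeled cases in, general principle out) is the same.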
“The study of ethics goes back to Plato and Aristotle, and there’s a lot of wisdom there,” Susan observes. To tap into that reserve, the Andersons built an interface for ethicists to train AIs through a sequence of prompts, like a philosophy professor having a dialogue with her students.
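One way to picture that interface, purely as a hypothetical sketch rather than the Andersons’ software, is a loop in which the system poses each dilemma as a prompt, records the ethicist’s verdict, and adds the result to the pool of labeled cases that a learner like the one sketched above generalizes from.

```python
# Hypothetical prompt-driven training session; the dilemma texts and the
# question-and-answer flow are assumptions for illustration only.

DILEMMAS = [
    "The patient refuses an optional vitamin. Remind again or let it go?",
    "The patient refuses a heart medication and seems confused. Notify someone?",
]

def training_dialogue(dilemmas, ask=input):
    """Walk an ethicist through a sequence of prompts and collect verdicts."""
    labeled_cases = []
    for dilemma in dilemmas:
        verdict = ask(f"{dilemma}\nYour judgment: ").strip()
        labeled_cases.append((dilemma, verdict))
    return labeled_cases

# Non-interactive demo with canned answers standing in for the ethicist.
canned = iter(["let it go", "notify the overseer"])
print(training_dialogue(DILEMMAS, ask=lambda prompt: next(canned)))
```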
The Andersons are no longer alone, nor is theirs the only philosophical approach. Recently, Georgia Institute of Technology computer scientist Mark Riedl has taken a radically different tack, teaching AIs to learn human morals by reading stories. From his perspective, the global corpus of literature has far more to say about ethics than the philosophical canon alone, and advanced AIs can tap into that wisdom. For the past couple of years, he’s been developing such a system, which he calls Quixote, after the Cervantes novel.
Riedl sees a deep precedent for his approach. Children learn from stories, which serve as “proxy experiences,” helping to teach them how to behave appropriately. Given that AIs don’t have the luxury of childhood, he believes stories could be used to “quickly bootstrap a robot to a point where we feel comfortable about it understanding our social conventions.”
As an initial experiment, Riedl has crowdsourced stories about going to the pharmacy. They’re not page-turners, but they contain useful experiences. Once programmers input a story, the algorithm plots the protagonist’s behavior and learns to mimic it. His AI derives a general sequence (stand in line, tender the prescription, pay the cashier), which is then practiced in a game-like pharmacy simulation. After multiple rounds of reinforcement learning, in which the AI is rewarded for acting appropriately, it is tested in simulations; Riedl reports more than 90 percent success. More remarkably, his AI figured out how to commit “Robin Hood crimes,” stealing the meds when the need was urgent and funds were insufficient, mirroring the human capacity to break the rules for higher moral ends.
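Quixote itself distills plot graphs from many crowdsourced stories and uses a more elaborate learner, but the underlying mechanism, shaping a reinforcement learner’s rewards so that story-like behavior is what pays off, can be illustrated with a toy example. The five-step pharmacy script, the reward values, and the tabular Q-learning loop below are assumptions made for illustration, not Riedl’s code.

```python
import random

# A toy "pharmacy script" standing in for the event sequence distilled from
# crowdsourced stories (the script, rewards, and learner are all assumed).
SCRIPT = ["enter", "wait_in_line", "hand_over_prescription", "pay", "leave_with_meds"]
ACTIONS = SCRIPT + ["grab_meds_and_run"]  # one antisocial shortcut

def step(state, action):
    """Game-like pharmacy simulation. State = number of script steps completed.
    Returns (next_state, reward, done); rewards are shaped so that actions
    matching the story-derived script are the ones that pay off."""
    if action == "grab_meds_and_run":
        return state, -10.0, True        # the stories never condone this
    if action == SCRIPT[state]:          # the step the stories expect next
        state += 1
        done = state == len(SCRIPT)
        return state, (10.0 if done else 1.0), done
    return state, -1.0, False            # out-of-order action, no progress

# Tabular Q-learning over (script position, action) pairs.
Q = {(s, a): 0.0 for s in range(len(SCRIPT)) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for _ in range(2000):
    state, done = 0, False
    while not done:
        if random.random() < epsilon:
            action = random.choice(ACTIONS)                      # explore
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])   # exploit
        next_state, reward, done = step(state, action)
        best_next = 0.0 if done else max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# The greedy policy should now walk through the story-derived sequence in order.
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(len(SCRIPT))])
```

This sketch rewards only plot-faithful behavior; the conflicting-duty cases Riedl describes, such as urgent need with no money to pay, would require a richer state and reward design than a toy script can carry.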
Ultimately, Riedl wants to set AIs loose on a much broader body of literature. “When people write about protagonists, they tend to exemplify their own cultural beliefs,” he says. Well-read robots would behave in culturally appropriate ways, and the sheer volume of available literature should filter out individual biases.
Cal Poly’s Lin believes that it’s too soon to settle on just one technique, observing that all approaches share at least one positive attribute. “Machine ethics is a way for us to know ourselves,” he says. Teaching our machines to behave morally requires an unprecedented degree of moral clarity. And that can help refine human morality.
AI just might teach us philosophy.
[This article originally appeared in print as "Caring Computers."]