Each week, In Theory takes on a big idea in the news and explores it from a range of perspectives. This week we’re talking about robot intelligence. Need a primer? Catch up here.
Patrick Lin is an associate philosophy professor at California Polytechnic State University and an affiliate scholar at Stanford Law School’s Center for Internet and Society. He works with government and industry on technology ethics, and his book “Robot Ethics” was published in 2014.
Forget about losing your job to a robot. And don’t worry about a super-smart, but somehow evil, computer. We have more urgent ethical issues to deal with right now.
Artificial intelligence is replacing human roles, and it’s assumed that those systems should mimic human behavior — or at least an idealized version of it. This may make sense for limited tasks such as product assembly, but for more autonomous systems — robots and AI systems that can “make decisions” for themselves — that goal gets complicated.
There are two problems with the assumption that AI should act like we do. First, it’s not always clear how we humans ought to behave, so programming robots becomes a soul-searching exercise in ethics, raising questions we don’t yet have answers to. Second, if artificial intelligence does end up being more capable than we are, that could mean it has different moral duties, ones that require it to act differently than we would.
[Other perspectives: If robots can become like us, what does that say about humanity?]
Let’s look at robot cars to illustrate the first problem. How should they be programmed? This is important, because they’re driving alongside our families right now. Should they always obey the law? Always protect their passengers? Minimize harm in an accident if they can? Or just slam the brakes when there’s trouble?
These and other design principles are reasonable, but sometimes they conflict. For instance, an automated car may have to break the law or risk its passengers’ safety to spare the greatest number of lives outside the car. The right decision, whatever it is, is fundamentally an ethical call based on human values, one that can’t be answered by science and engineering alone.
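To make that conflict concrete, here is a deliberately toy sketch in Python. Nothing in it reflects how any real vehicle is actually programmed; the maneuvers, risk estimates and weights are invented for illustration. The point is only that choosing the weights is itself the ethical call.

```python
# Hypothetical illustration only -- not any real vehicle's software.
# It shows how "obey the law," "protect passengers" and "minimize harm to others"
# can pull a crash decision in different directions depending on how they are weighted.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    breaks_law: bool        # e.g., crossing a double yellow line
    passenger_risk: float   # invented estimate of injury risk to passengers
    bystander_risk: float   # invented estimate of injury risk to others

def choose(maneuvers, law_weight, passenger_weight, bystander_weight):
    """Pick the maneuver with the lowest weighted 'badness' score.
    The weights encode whose safety and which rules matter most."""
    def score(m):
        return (law_weight * (1.0 if m.breaks_law else 0.0)
                + passenger_weight * m.passenger_risk
                + bystander_weight * m.bystander_risk)
    return min(maneuvers, key=score)

options = [
    Maneuver("brake in lane", breaks_law=False, passenger_risk=0.6, bystander_risk=0.1),
    Maneuver("swerve across the line", breaks_law=True, passenger_risk=0.1, bystander_risk=0.3),
]

# Two reasonable-sounding weightings pick different maneuvers.
print(choose(options, law_weight=1.0, passenger_weight=1.0, bystander_weight=1.0).name)
print(choose(options, law_weight=0.1, passenger_weight=2.0, bystander_weight=1.0).name)
```

Run it and the first weighting brakes in lane while the second swerves: same car, same facts, different values, different outcome.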
That leads us to the second, related problem. With their unblinking sensors and networked awareness, robot cars can detect risks and react much faster than we can — that’s what artificial intelligence is meant to do. In addition, their behavior is programmed, which means crash decisions are already scripted. Therein lies a dilemma. If a human driver makes a bad decision in a sudden crash, it’s a forgivable accident; but when AI makes any decision, it’s not a reflex but premeditated.
This isn’t just philosophical; it has real implications. Being thoughtful about a crash decision — accounting for numerous factors that a human brain cannot process in a split-second — would be assumed to lead to better outcomes overall, yet it is where new liability arises. An “accidental” accident caused by a person and a “deliberate” accident involving a computer system could have vastly different legal implications.
Why would we hold artificial intelligence to a higher standard? Because, as any comic-book fan could tell you, “With great power comes great responsibility.” The abilities of AI and robots are effectively superpowers. While it may not be our moral duty to throw a ticking bomb into outer space to save people on the ground, it’s arguably Superman’s duty because he can. Where we may duck out of harm’s way, a robot may be expected to sacrifice itself for others, since it has no life to protect.
But even superheroes need a Justice League or a Professor X for a sanity check; where one is missing, campaigners emerge to fill the vacuum, on issues from love to war. Some companies, such as Google DeepMind, recognize the value of an “ethics board” to help guide their AI research and its resulting products in uncharted territory. Berkeley’s Stuart Russell, a computer science professor, supports bringing ethics into these discussions: “In the future, moral philosophy will be a key industry sector.” Stanford’s Jerry Kaplan, another AI expert, predicts that, within 10 years, a “moral programming” course will be required for a degree in computer science.
Our society is increasingly becoming a black box — we don’t know how things work anymore, because it’s hard to inspect the algorithms on which many of our products run. These formulas are mostly hidden away as corporate trade secrets, whether they are financial trading bots, car operating systems or security screening software. Even within a company, its own algorithms can be too complex to understand: New code is stacked on top of old code over time, sometimes resulting in “spaghetti code” that can literally kill. The unintended acceleration in Toyota vehicles, resulting from badly structured code, may have been involved in the deaths of at least 89 people since 2000.
But then again, fears about AI may just reflect fears about ourselves. We know what kind of animals we are, and we worry that AI might wreak the same havoc (some algorithms have already been accused of discrimination). But in the same way that we can raise our children to do the right thing, we can ease our worries about unprincipled artificial intelligence systems by building ethics into the design. Ethics creates transparency, which builds trust. We’ll need trust to co-exist with the technological superheroes we’ve created to save us all.
Explore these other perspectives:
Q&A: Philosopher Nick Bostrom on superintelligence, human enhancement and existential risk
Francesca Rossi: Can you teach morality to a machine?
Patrick Lin: We’re building superhuman robots. Will they be heroes, or villains?
Ari Schulman: If robots can become like us, what does that say about humanity?
Murray Shanahan: Machines may seem intelligent, but it’ll be a while before they actually are
Dileep George: Killer robots? Superintelligence? Let’s not get ahead of ourselves.