
Terrible Questions

September 29, 2016

A train is careering down a track out of control. There is a switch up ahead. Currently, the switch is set to send the train down a track where it will hit four people. If you flip the switch, it will go down a different track where it will hit only one person. You cannot warn the people, only choose to flip the switch or not. What do you do?

Okay, now say the four people on the first track are all in their 90s, and two of them are former Nazis. The one person on the other track is a kindergartner. What do you do?

A different scenario:

Your spouse is badly injured in an accident at home. There’s no time to wait for an ambulance. Is it okay to drive twice the legal speed limit down crowded city streets, weaving in and out of oncoming traffic, to get to the emergency room?

Okay, say that injured person is you. You’re bleeding pretty heavily, and it’s likely that as you drive recklessly down these crowded city streets, you will pass out from loss of blood. Is it okay to try to make it?

These are the kinds of discussions I remember from my ethics class in grad school (along with, vaguely, something called the Potter Box). They were interesting to debate, but they were almost always contrived hypotheticals.

This morning, I caught a bit of an NPR story about a group being formed to discuss these sorts of ethical dilemmas as they pertain to artificial intelligence. We’re in the very early stages of AI, but we’re quickly headed toward a world where computer-controlled machines will be operating out there “among us.” Perhaps the most prominent recent example is the self-driving car.

The thought of self-driving vehicles freaks some people out. But if they’re safer than human drivers (which statistically they are) and are only going to get safer via machine learning, then I wouldn’t hesitate to get into a Johnny Cab if it showed up at my house to take me to the airport.

But what the folks on NPR were talking about was taking those ethics-class hypotheticals and working through them to set industry standards. A person, faced with the unlikely decision of flipping a train switch to save four ninety-year-olds or one kindergartner, wouldn’t run the decision through the Potter Box like in ethics class. In real life, that decision would likely fall more to the reptile brain. It would be instinctual, good or bad.

But computers don’t have reptile brains. A computer couldn’t look at the face of the child and be emotionally moved. It has to be programmed for these situations. Which gets into the morally unsettling territory of assigning quantitative values to human lives. Can a quantitative value be assigned to a life based on age, fitness, accomplishments, transgressions, etc.?

Of course, we do this every day with actuarial science—the mathematics behind insurance. What is the likelihood that someone will die during the term of this life insurance policy? What is the likelihood of an earthquake, the likely loss of life from a bridge collapse, and how does that compare to the cost to retrofit the bridge? And we do it in the military, though maybe with less statistical precision: if we take out that ISIS leader with a drone strike, what is the likelihood of also killing innocent civilians in the market across the street?
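
To make that kind of arithmetic concrete, here is a toy sketch in Python of the expected-value comparison behind the bridge example. Every number in it is invented for illustration, and real actuarial models are far more elaborate; this only shows the shape of the calculation.

    # Toy expected-value comparison for the bridge-retrofit question.
    # All figures below are invented for illustration only.

    p_quake_per_year = 0.02        # assumed annual chance of a damaging earthquake
    p_collapse_given_quake = 0.10  # assumed chance the bridge collapses if one hits
    expected_deaths = 12           # assumed average loss of life in a collapse
    value_per_life = 10_000_000    # an assumed "value of a statistical life," in dollars
    other_losses = 50_000_000      # assumed property damage, disruption, etc.

    retrofit_cost = 25_000_000     # assumed one-time cost to retrofit the bridge
    service_years = 50             # assumed remaining service life of the bridge

    annual_expected_loss = (p_quake_per_year * p_collapse_given_quake
                            * (expected_deaths * value_per_life + other_losses))
    lifetime_expected_loss = annual_expected_loss * service_years

    print(f"Expected loss without retrofit: ${lifetime_expected_loss:,.0f}")
    print(f"Cost of retrofit:               ${retrofit_cost:,.0f}")
    if retrofit_cost < lifetime_expected_loss:
        print("By this math, the retrofit pays for itself.")
    else:
        print("By this math, it doesn't -- which is exactly the unsettling part.")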


What people find unsettling about self-driving cars is that it feels like we’re turning our decision-making power over to a machine. That’s true in some regard, and if it’s a decision of whether to take the freeway or side streets based on traffic data, most of us are fine with that. But when it comes to the life-or-death decisions, that’s when we get queasy.

There are school children in the street. Do you plow through them or jerk the car onto the sidewalk and into the front of a McDonald’s?

If you took a moment to think about that, it’s too late. That is a life-or-death decision; it comes out of the blue, and it hinges not on rational deliberation but on immediate reaction. And that is what we are really admitting if we climb into a self-driving car: that the machine can react faster than we could. This is without a doubt true. Automatic braking systems, technology that senses an impending collision and slams on the brakes if the human driver is too slow to react, are already standard on a dozen automobile models. And auto insurance companies believe in them, with some offering premium discounts as high as 16%.

So what we cede to the machine is the reaction time. We cede the decision itself to a committee of ethicists and engineers in a room, people who do have time to deliberate, and debate, and apply the Potter Box to all kinds of hypothetical dilemmas. They’re asking terrible questions that may one day play out in a fraction of a second in a car that you’re riding in. Or maybe not one that you’re riding in, but one that you own and have sent, driverless and passengerless, to the grocery store to pick up a few things for dinner.

You start going down this rabbit hole and it’s easy to see why AI is such an inspiration for science fiction. It opens a world of hypotheticals. I listened to maybe four minutes of that NPR story, but here are a few other things I started to spin on:

  • Could I sign my driverless car up for Uber and send it out to make money for me while I sleep? In that case, the transaction is cash for liability—after all, I’d be held responsible if my car did plow into the front of a McDonald’s, right?
  • If my car did plow into a McDonald’s, could I sue the auto manufacturer who programmed it to do so? After all, I ceded my decision to those people in that room. I trusted their algorithm in the same way I trust their brake design.
  • Could I tweak the algorithm in my own car? Tell it, for example, that I’d prefer it never leave the road, regardless of the situation?
  • To what other situations could this kind of instantaneous, carefully calculated algorithm be applied? In the news lately, what about an interaction between police and a motorist they’ve pulled over? Instead of putting a gun (and the life-or-death decision) in the hand of a human police officer, what if there were an armed robot able to make that decision? Plenty of research shows the impact of implicit bias in our decision-making, particularly split-second decisions. A police officer need not be an overt racist for race to influence a shoot/don’t-shoot decision. They may not even be aware of their bias (research shows we all have bias, much of which we’re unaware of or in denial about). This could potentially be eliminated if the decision is removed from the amygdala of a police officer and given to a robot.
  • Uncomfortable yet? What about robot soldiers? We already have AI baked into our drones, so why not ground troops? Wouldn’t we prefer to put a machine in harm’s way rather than one of our human soldiers? And if we trust AI with tough decisions in cars, wouldn’t we trust the cold, calculating machine (backed by the room full of ethicists and engineers, of course) to decide whether that woman approaching on a street in Ramadi is friendly or wearing a bomb vest? In these situations with human soldiers, and in the police/suspect encounters, the conflict is so highly charged because there are two lives at stake: the cop’s or soldier’s and the civilian’s. Any error in judgment, an over- or under-estimation of the danger, can result in a death one way or the other. But by inserting a machine into the soldier/cop slot, we can weight the decision more heavily toward restraint, back away from the need to pull the trigger. Because if the AI underestimates the threat, the only thing lost is a robot.

Of course, those robots are expensive—millions of dollars each. So then there is another room, this one full of businessmen and bean counters (no ethicists in attendance), debating whether or not we should tweak the algorithm, just a little nudge toward pulling the trigger earlier. Which puts us back in the more familiar territory of calculating human life in terms of money.


I should stop there. You could do hypotheticals for decades (and sci-fi writers have). The only thing is, these hypotheticals are now getting closer and closer to reality. We’ll see how they play out. In the meantime, maybe I should trust AI to write my blog posts and keep them on the road. A robot would know better.

 

 

2 Comments
  1. September 29, 2016 8:19 pm

    Today, I read an article on self-driving trucks. Many believe there is a significant financial incentive to push trucks onto the highway faster than cars, which could account for a 40% drop in truck-driving jobs in the next 10 years. I realize your point about the ethics of AI is much more complicated, but I can’t help but think about the practical ethics of wiping out another middle-class livelihood. Then again, maybe a computerized truck will save thousands of lives.

    • September 29, 2016 9:42 pm

      It’s a fair point. I think like any technology (maybe MOST technologies), AI isn’t inherently good or bad. It’s how we use it. Fewer trucking jobs may be the tradeoff for safer roads. But that may also create other jobs (at distribution centers, if there’s more shipping) and an uptick in overall productivity. But that could also increase pollution. So many different ways to look at it, it’s hard to know where it could go.
