The Road to Self-driving Cars Is Full of Speed Bumps

A Plan for the Inevitable

Though this is a serious problem, there is an alternative. The car companies could accept that humans will be humans and acknowledge that our minds will wander. After all, being able to read a book while driving is part of the appeal of self-driving cars.

Some manufacturers have already started to build their cars to accommodate our inattention. Audi’s Traffic Jam Pilot is one example: it can take over completely when you’re in slow-moving highway traffic, leaving you to sit back and enjoy the ride. Just be prepared to step in if something goes wrong. But there’s a reason Audi has limited its system to slow-moving traffic on limited-access roads: the risks of catastrophe are far lower in highway congestion.

And that’s an important distinction. As soon as you stop monitoring the road, an emergency presents the worst possible combination of circumstances: a driver who isn’t paying attention has very little time to assess their surroundings and decide what to do.

Imagine sitting in a self-driving car, hearing an alarm and looking up from your book to see a truck ahead shedding its load onto your path. In an instant, you’ll have to process all the information around you: the motorbike in the left lane, the van braking hard ahead, the car in the blind spot on your right. You’d be most unfamiliar with the road at precisely the moment you need to know it best.

Add in the lack of practice, and you’ll be as poorly equipped as you could be to deal with the situations demanding the highest level of skill.

A 2016 study put people in a simulated self-driving car as passengers, reading a book or playing on their cell phones. The researchers found that, once an alarm sounded telling them to retake control, it took them about 40 seconds to do so.

Ironically, the better self-driving technology gets, the worse these problems become. A sloppy autopilot that sets off an alarm every 15 minutes will keep a driver continually engaged and in regular practice. It’s the smooth and sophisticated automatic systems that are almost always reliable that you’ve got to watch out for.

“The worst case is a car that will need driver intervention once every 200,000 miles,” Gill Pratt, head of Toyota’s research institute, told technology magazine IEEE Spectrum in 2017.

Pratt says someone who buys a new car every 100,000 miles might never need to take over control from the car. “But every once in a while, maybe once for every two cars that I own, there would be that one time where it suddenly goes ‘beep beep beep, now it’s your turn!’ ” Pratt told the magazine. “And the person, typically having not seen this for years and years, would . . . not be prepared when that happened.”
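To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python, using the round numbers from the interview purely for illustration (the variable names and figures are assumptions taken from Pratt's example, not anything Toyota publishes):

    # Illustrative only: Pratt's round numbers, not measured data.
    miles_between_interventions = 200_000  # hypothetical: one handover needed per 200,000 miles
    miles_per_car_owned = 100_000          # hypothetical: the owner trades the car in every 100,000 miles

    expected_interventions_per_car = miles_per_car_owned / miles_between_interventions
    print(expected_interventions_per_car)  # 0.5, i.e. roughly one handover for every two cars owned

That 0.5 is the crux of Pratt's point: the event is rare enough that a given owner may go years without practice, yet common enough that someone, somewhere, will face it unprepared.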

Adjusting Expectations

As is the case with much of the driverless technology that is so keenly discussed, we’ll have to wait and see how this turns out. But one thing is for sure: As time goes on, autonomous driving will have a few lessons to teach us that apply well beyond the world of motoring — not just about the messiness of handing over control, but about being realistic in our expectations of what algorithms can do.

If this is going to work, we’ll have to adjust our way of thinking. We’re going to need to throw away the idea that cars should work perfectly every time, and accept that, while mechanical failure might be a rare event, algorithmic failure almost certainly won’t be.

So, knowing that errors are inevitable, and knowing that if we proceed we have no choice but to embrace uncertainty, the conundrums of driverless cars will force us to decide how good something needs to be before we’re willing to let it loose on our streets. That’s an important question, and it applies well beyond driving. How good is good enough? Once you’ve built an algorithm that can calculate something useful, knowing it will sometimes be wrong, should you use it?