Why fully automated cars are a lot further away than you think

By Jason Contant | July 8, 2018 | Last updated on October 2, 2024
2 min read

Don’t hold your breath waiting for the first fully autonomous car to hit the streets anytime soon.

Car manufacturers have projected for years that fully automated cars might be on the roads by 2018. But for all the hype, it may be years, if not decades, before self-driving systems can reliably avoid accidents, according to a blog published Tuesday in The Verge.

The million-dollar question is whether self-driving cars will keep getting better – like image search, voice recognition and other artificial intelligence “success stories” – or whether they will run into a “generalization” problem, as chatbots did when many proved unable to produce appropriate responses to questions they hadn’t seen before.

Generalization, author Russell Brandom explained in the blog “Self-driving cars are headed toward an AI roadblock,” can be difficult for conventional deep learning systems. Deep learning requires massive amounts of training data to work properly – data that incorporates nearly every scenario the algorithm will encounter.

That challenge has clear implications for self-driving vehicles – as in the recent fatal crash in which Uber’s software misidentified a pedestrian.

Brandom said that the same kind of algorithm can’t recognize an ocelot unless it has seen thousands of pictures of the wild cat – even if it has seen pictures of house cats and jaguars, and knows ocelots are somewhere in between.

“For a long time, researchers thought they could improve generalization skills with the right algorithms, but recent research has shown that conventional deep learning is even worse at generalizing than we thought,” Brandom wrote. “One study found that conventional deep learning systems have a hard time even generalizing across different frames of a video, labelling the same polar bear as a baboon, mongoose or weasel depending on minor shifts in the background,” meaning that even small changes to pictures can completely change the system’s judgement.

In March, a self-driving Uber struck and killed a woman who was pushing a bicycle across the road, outside of a crosswalk, near Phoenix. A preliminary U.S. National Transportation Safety Board report found that Uber’s software first classified the woman as an unknown object, then as a vehicle, and finally as a bicycle, updating its projections each time.

“Nearly every car accident involves some sort of unforeseen circumstance, and without the power to generalize, self-driving cars will have to confront each of these scenarios as if for the first time,” Brandom wrote.

One study by the RAND Corporation estimated that self-driving cars would have to drive 275 million miles without a fatality to prove they were as safe as human drivers. The first death linked to Tesla’s Autopilot system came roughly 130 million miles into the program – less than halfway to that mark.
