Pulled from the pages of science fiction, self-driving cars seem to be on their way to becoming reality. Cruise and Waymo are expanding the reach of their autonomous taxi experiments, and Tesla (TSLA), of course, has been amping up its Full Self-Driving rollout amid repeated promises that true FSD is right around the corner.
But the technology remains heavily flawed. TheStreet reported last week that there are a number of vulnerabilities in the artificial intelligence models that power self-driving cars, not least the industry's lack of standardized testing platforms for independently verifying that those models are safe.
Safe, human-level self-driving, however, isn't just around the bend, according to Navy veteran and engineer Michael DeKort. The costs in human lives, time and money, he says, are too high for true, safe self-driving to ever be achieved.
The issue for DeKort — the engineer who exposed Lockheed Martin's subpar safety practices in 2006 — is that artificial general intelligence (an AI with human-level intelligence and reasoning capabilities) does not exist. So the AI that makes self-driving cars work learns through extensive pattern recognition.
Human drivers, he said, are scanning their environment all the time. When they see something, whether it's a group of people about to cross an intersection or a deer at the side of the road, they react without needing to classify the details of a potential threat (its color, for example).
The system has to experience something to learn it
"The problem with these systems is they work from the pixels out. They have to hyperclassify," DeKort told TheStreet. Pattern recognition, he added, is just not feasible, "because one, you have to stumble on all the variations. Two, you have to re-stumble on them hundreds if not thousands of times because the process is extremely inefficient. It doesn't learn right away."
"You can never spend the money or the time, or sacrifice the lives to get there," he said. "You have to experience to learn and you have to experience over and over again."
Self-driving cars would have to clock billions to hundreds of billions of miles under current methods to demonstrate a fatality rate in line with that of human drivers, roughly one death per 100 million miles, a 2016 study by Rand found. Rand also found that as self-driving cars improve, analyzing their performance accurately gets harder, because the failures that remain are increasingly rare edge cases.
Tesla's beta version of FSD, according to Elon Musk, has covered some 300 million miles; by Rand's math, the company would have to scale that mileage up by a factor of 100 to 1,000 to show a system as safe as a human driver. Still, as Musk himself inadvertently demonstrated in a recent demo, drivers can't yet take a nap while their Tesla takes them somewhere; they need to be ready to take control at a moment's notice.
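For a rough sense of scale, here is a back-of-the-envelope calculation using only the figures quoted above (the roughly 300 million FSD beta miles and the one-fatality-per-100-million-miles human benchmark). It is an illustration of the arithmetic, not a reproduction of Rand's statistical model.

```python
# Illustrative arithmetic only, using the figures quoted in this article.
HUMAN_FATALITY_RATE = 1 / 100_000_000   # ~1 fatality per 100 million miles driven
FSD_BETA_MILES = 300_000_000            # ~300 million miles, per Musk's public figure

# Rand's scaling range quoted above: 100x to 1,000x the current mileage.
for factor in (100, 1_000):
    required_miles = FSD_BETA_MILES * factor
    human_baseline_fatalities = required_miles * HUMAN_FATALITY_RATE
    print(f"{factor:>5}x scale-up: {required_miles / 1e9:,.0f} billion miles, "
          f"over which human drivers average ~{human_baseline_fatalities:,.0f} fatalities")
```

Run as written, the sketch prints 30 billion miles for the 100x case and 300 billion miles for the 1,000x case, which is where the "billions to hundreds of billions" framing comes from.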
Tesla is currently facing a series of investigations into the safety of its FSD software.
The focus, DeKort said, then shifts to simulation, because "you can't spend the time and you can't sacrifice enough people" to teach pattern recognition in the real world. The problem is that the game-engine simulators currently in use aren't good enough: they lack the fidelity and real-time performance needed to train a truly safe system.
"They can do a lot of it. They'll make progress, which is why they are where they are," he said. "But they will not get far enough to where they're better than a human."
DeKort doesn't even think a simple scenario — such as highway driving — can be considered remotely safe.
"As soon as you go out into the public domain, it could be in front of your house, you incur so many of these problems that it's impossible to do," he said. There are just too many crash scenarios to cover.
The only path to true FSD, DeKort said, would be one lit by artificial general intelligence (AGI). And even setting aside the debate over whether AGI is possible at all, unlocking it would not suddenly solve the problems at hand; companies would still need to improve their simulators and find ways to verify the safety of their vehicles and the robustness of their AI models.
Carmakers, DeKort said, would additionally have to enhance their sensor systems, combining the cameras that Tesla is known for with Lidar, 3D imaging radar and a sound localization system.
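To make that redundancy argument concrete, below is a minimal, hypothetical sketch of cross-checking detections across sensor modalities before an object is treated as confirmed. The Detection structure, the modality names and the two-sensor threshold are assumptions for illustration, not a description of any carmaker's actual software.

```python
# Hypothetical illustration of sensor redundancy: an object counts as confirmed
# only when enough independent modalities agree on it.
from dataclasses import dataclass

MODALITIES = ("camera", "lidar", "imaging_radar", "sound_localization")

@dataclass
class Detection:
    modality: str        # which sensor produced this detection
    object_id: str       # track identifier assigned by that sensor's pipeline
    confidence: float    # detection confidence in [0, 1]

def confirmed_objects(detections, min_modalities=2, min_confidence=0.5):
    """Return object IDs seen by at least `min_modalities` distinct sensors."""
    seen = {}
    for d in detections:
        if d.modality in MODALITIES and d.confidence >= min_confidence:
            seen.setdefault(d.object_id, set()).add(d.modality)
    return {obj for obj, mods in seen.items() if len(mods) >= min_modalities}

if __name__ == "__main__":
    frame = [
        Detection("camera", "pedestrian-1", 0.9),
        Detection("lidar", "pedestrian-1", 0.8),
        Detection("imaging_radar", "debris-7", 0.4),  # low confidence, single sensor
    ]
    print(confirmed_objects(frame))  # {'pedestrian-1'}
```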
Waymo and Cruise have collectively driven more than 8 million driverless miles and reported around 100 crashes between them, a rate of roughly one crash for every 60,000 miles. In the last quarter of 2022, Tesla recorded one crash for every 4.85 million miles driven on Autopilot (which is not a full self-driving technology). Drivers in the U.S., meanwhile, crash around once every 600,000 miles.
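Expressed per million miles, the rates quoted above compare as follows. This is simply a restatement of the article's figures; the three data sets come from different reporting rules and driving conditions (city robotaxi testing, largely highway Autopilot use, and all U.S. driving), so they are not directly comparable.

```python
# Per-million-mile crash rates, computed from the miles-per-crash figures quoted above.
miles_per_crash = {
    "Waymo + Cruise (driverless)": 60_000,
    "Tesla Autopilot (Q4 2022)": 4_850_000,
    "U.S. human drivers (average)": 600_000,
}

for label, miles in miles_per_crash.items():
    print(f"{label:<30} ~{1_000_000 / miles:.2f} crashes per million miles")
```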
The systems are getting better; they have become good enough to play the odds, DeKort said. But there are potentially insurmountable limitations and vulnerabilities baked into them, and according to DeKort and other experts, that means human-level self-driving, as things stand, is likely not achievable.
"I'm not against autonomous vehicles. I'm against using people needlessly to do it and companies going bankrupt for no reason trying to do it," DeKort said. "In general, I think there's use for autonomy and I'd like to help people get there."
"It's just not this way."
If you work for Tesla, contact Ian by email at ian.krietzberg@thearenagroup.net or on Signal at 732-804-1223