InsideEVs

Tesla Has A Lot To Prove On Robotaxi Day. Top Experts Have Doubts

The Tesla Robotaxi Day event on Thursday at a Warner Bros. Hollywood studio is a high-stakes moment for CEO Elon Musk. He has hinged the company's future on the idea that Tesla isn’t just an electric carmaker, but a rising force in AI and robotics.

But Tesla’s technical approach to self-driving cars—including what we know of it so far and what's expected to happen in Los Angeles—raises major red flags, artificial intelligence and autonomous vehicle experts told InsideEVs.

Some warned that deploying Tesla Robotaxis at scale would be dangerous, since Tesla’s technology remains unproven and its safety data stays mostly under wraps. Others said Tesla is at least a decade away from legally launching a self-driving taxi service, and many agreed that its approach to autonomy is fundamentally flawed, barring some big shift in thinking.

The automaker is set to reveal a purpose-built autonomous vehicle, potentially called the “Cybercab,” that could underpin a future rival to Uber and Google’s Waymo. Musk is also expected to lay out plans for a robotaxi service incorporating both Cybercabs and regular Tesla owners’ cars, which he has long promised will someday gain autonomous capability.

Even so, critics and experts in the space—many of whom have been in it for decades—said that this demonstration may be less about future products and more about proving to investors that Tesla is on the right track to “solving” full autonomy. Musk himself has claimed that Tesla could be worth trillions if it does, but essentially worthless if it does not.

“There’s just no corroborating evidence that would suggest that they’re anywhere close to having actual self-driving cars,” said Missy Cummings, the director of the Autonomy and Robotics Center at George Mason University and former safety adviser to the National Highway Traffic Safety Administration. “This is just another attempt for [Musk] to raise cash.”

Some FSD Basics First 

(Pictured: Tesla FSD V12.4.1)

It’s worth noting at the outset that there are no truly self-driving vehicles for sale to consumers today. Yet nearly all automakers offer advanced driver assistance systems (ADAS) that can operate with close driver supervision in some situations, including on highways and in traffic.

Tesla’s autonomous ambitions revolve around software that customers can buy today called Full Self-Driving (FSD). Despite its misleading name, FSD doesn’t make Teslas fully autonomous. It is certified as a Level 2 ADAS that requires constant driver supervision, but Musk has said for years that a game-changing software update is coming.

The most important thing to know here is that Tesla is taking a radically different approach to autonomous driving than others in the space.

To make FSD work, Tesla uses multiple cameras acting as the vehicle’s "eyes." This visual data feeds into what the company calls neural networks—machine-learning models inspired by the human brain. These networks process the information, make sense of it and then help the car make active decisions based on what it “sees.”
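Tesla publishes almost nothing about FSD’s internals, so any concrete illustration is necessarily a guess. Still, the basic shape of a vision-to-control network can be sketched in a few lines of PyTorch. Everything below, from the layer sizes to the three-output control head, is invented for illustration and bears no resemblance to Tesla’s actual, vastly larger models.

```python
# A toy vision-to-control network. Purely illustrative: Tesla's real
# FSD models are proprietary and orders of magnitude larger.
import torch
import torch.nn as nn

class ToyDrivingNet(nn.Module):
    def __init__(self):
        super().__init__()
        # The "eyes": convolutional layers that compress camera pixels
        # into a compact feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # The decision head: features in, driving commands out.
        self.head = nn.Linear(32, 3)  # [steering, throttle, brake]

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(frames))

net = ToyDrivingNet()
frame = torch.rand(1, 3, 240, 320)  # one fake 240x320 RGB camera frame
print(net(frame).shape)             # torch.Size([1, 3])
```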

Around mid-2023, Tesla started shifting to this neural network approach and away from a system built on more than 300,000 lines of explicitly programmed code that dictated the vehicle’s behavior in specific situations. Last June, it explained in a thread on X how the new system was already operational in customer vehicles.

The backbone of these neural networks is, supposedly, a growing number of AI-powered “supercomputer clusters.” They process billions of data points to train FSD to drive more like humans.

Tesla’s rivals have taken a different approach. Google’s autonomous ride-hailing service Waymo operates on pre-mapped roads and uses a full suite of sensors including cameras, radar and LIDAR, whereas Tesla uses only cameras and AI. Waymo’s EVs, white Jaguar I-Paces outfitted with that hardware, are legally operating in four U.S. cities: San Francisco, Phoenix, Los Angeles and Austin.

General Motors’ Cruise self-driving division takes a similar approach to Waymo’s, but it suspended operations last year after one of its cars dragged a pedestrian in an accident. It recently resumed testing in Phoenix, Houston and Dallas with human drivers on board. All three companies are under federal safety investigations.

On the consumer side, an increasing number of automakers are turning to LIDAR and expanding their ADAS options, although broadly speaking, all have been more cautious than Tesla in the space. But Tesla insists its outside-the-box approach will create a “generalized” solution to self-driving that will let cars operate virtually anywhere. Cruise and Waymo, on the other hand, focus on mastering discrete areas and then expanding from there.

Many experts have their doubts about Tesla’s approach on both hardware and software.

The Hallucination Problem

“Wherever you have a neural net, you will always have the possibility of hallucination,” Cummings said.

“It’s just that they do it infrequently enough to give people false confidence,” she added. Hallucinations are the same phenomenon that causes ChatGPT to spit out a totally nonsensical answer.

Tesla’s system could be prone to “statistical inference errors,” she said, which basically means analyzing a particular set of data inaccurately, leading to wrong conclusions. In Tesla’s case, that means making wrong decisions on the road.

The automaker is still a decade away from being a legitimate self-driving car company, according to Cummings. The key problem, she said, is that Tesla hasn’t made its FSD safety data public. It periodically releases some Autopilot and FSD data showing the number of accidents per million miles driven with those systems, but the reports are not detailed and not nearly enough to prove that the system is safe, she said.

Independent testing has found that FSD averages about one disengagement every 13 miles. That’s a big red flag, according to Cummings.

"It’s just not a reality until we see a Tesla reporting actual testing with bonafide testing drivers and/or testing the vehicles with no drivers in them." 

The Problem With Edge Cases

So-called “edge cases,” or rare events, are another potential problem area, experts said.

“What matters in safety is not the average day. What matters is the bad day and the bad days are extremely rare,” said Phil Koopman, a professor of electrical and computer engineering at Carnegie Mellon University who has worked extensively on autonomous vehicle safety.

According to the Federal Highway Administration, the fatality rate for human drivers is 1.33 deaths per 100 million miles driven in the U.S. “Saying ‘I drove 10 miles without an intervention’ means nothing,” Koopman said, referring to Tesla owners who post videos of their experiences using FSD. That’s statistically insignificant. After all, humans can log “99,999,999 miles without a fatality.”
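Koopman’s math is easy to make concrete. A standard statistical shortcut, the “rule of three” (our framing, not his), says that observing zero failures over N miles only bounds the true failure rate below roughly 3/N with 95% confidence:

```python
# Back-of-envelope version of Koopman's point, using the FHWA figure
# of 1.33 fatalities per 100 million miles. The Poisson "rule of three"
# (our simplification) bounds the rate at ~3/N after N clean miles.
human_rate = 1.33 / 100_000_000  # fatalities per mile

for miles in (10, 13, 1_000_000, 100_000_000):
    bound = 3 / miles  # 95% upper bound on fatality rate per mile
    print(f"{miles:>12,} fatality-free miles -> rate < {bound:.1e} "
          f"(~{bound / human_rate:,.0f}x the human baseline)")
```

Ten clean miles only show the fatality rate is below 0.3 per mile, roughly 23 million times worse than the human baseline; even 100 million clean miles would only pin it at about twice the human level. That is why short FSD demo videos carry no statistical weight.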

Tesla uses end-to-end machine learning in the latest version 12 of FSD. That means the neural networks are fed raw data (vast amounts of video, in this case) and directly output actions on the road (accelerating, braking, turning). Koopman said this approach works well for common driving scenarios but is “horrible at handling rare events.”
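In generic terms (this is not Tesla’s actual pipeline), end-to-end training is imitation learning: show the network recorded frames, have it predict what the human driver did, and adjust its weights to shrink the error. A minimal, hypothetical sketch:

```python
# Minimal imitation-learning step: camera frames in, the human
# driver's recorded controls as labels. Generic illustration only.
import torch
import torch.nn as nn

policy = nn.Sequential(              # tiny stand-in for a real model
    nn.Conv2d(3, 8, kernel_size=5, stride=4), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 3),                 # -> [steer, throttle, brake]
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

frames = torch.rand(16, 3, 120, 160)   # fake batch of dashcam frames
human_controls = torch.rand(16, 3)     # what the human actually did

loss = nn.functional.mse_loss(policy(frames), human_controls)
loss.backward()                        # nudge weights toward imitation
optimizer.step()
```

Koopman’s rare-event worry falls out of that averaged loss: an edge case that appears in one of millions of frames barely moves the number being minimized, so the network feels little pressure to master it.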

The issue there is that extremely uncommon situations—like a house fire or an odd object on the road—may not be represented in even a large data set, said Dan McGehee, who directs the University of Iowa’s Driving Safety Research Institute. Rather, those kinds of hyper-specific events need to be painstakingly taught to a self-driving system, he said.

AI-based self-driving systems can also make it more difficult for engineers to trace back why a vehicle made a certain decision—good or bad—industry experts say.

The Hardware Dilemma

Waymo relies on a few hundred expensive LIDAR-equipped cars, while Tesla has sidestepped those costs to deploy millions of camera-equipped vehicles.

Both strategies come with trade-offs, but Koopman likened skipping LIDAR to “tying one hand behind your back while trying to solve an impossible problem.” LIDAR sensors, which use lasers to create a 3D understanding of the surrounding world, are far superior at depth perception and fare better in adverse weather.

Tesla’s FSD user manual admits that cameras struggle in such scenarios. “Visibility is critical for FSD to operate. Low visibility, such as low light or poor weather conditions (rain, snow, direct sun, fog, etc.) can significantly degrade performance,” the disclaimer reads.

For that exact reason, McGehee, of the University of Iowa, says it’s critical to think about redundancy when designing driverless cars.

“Not only do you have to have a 360-degree view of the world, but you have to have an overlapping view of the world with a different modality,” he said, adding that Tesla’s decision to go with cameras only is “problematic.”
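What McGehee describes can be illustrated with a toy cross-check (the function, names and thresholds below are all invented; no real AV stack is this simple): two independent modalities measure the same obstacle, and the planner only trusts a distance when they agree.

```python
# Toy illustration of cross-modality redundancy. All names and
# thresholds are hypothetical, for illustration only.
def fuse_distance(camera_m: float | None, lidar_m: float | None,
                  max_disagreement_m: float = 2.0) -> float | None:
    """Return a trusted obstacle distance, or None to tell the
    planner to fall back to a safe stop."""
    if camera_m is None and lidar_m is None:
        return None                      # both blind: fail safe
    if camera_m is None or lidar_m is None:
        return lidar_m if camera_m is None else camera_m  # degraded mode
    if abs(camera_m - lidar_m) > max_disagreement_m:
        return None                      # modalities disagree: don't guess
    return min(camera_m, lidar_m)        # agree: take the cautious value

print(fuse_distance(30.0, 29.2))  # 29.2  (agreement)
print(fuse_distance(30.0, 12.0))  # None  (disagreement -> fail safe)
print(fuse_distance(None, 25.0))  # 25.0  (camera washed out, LIDAR holds)
```

A camera-only car has no second modality to disagree with, which is exactly the gap McGehee calls problematic.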

Krzysztof Czarnecki, a professor of electrical and computer engineering at the University of Waterloo and a member of SAE task forces for automated driving, said that a Tesla Robotaxi with its current hardware and software “would cause mayhem and accidents and [the cars] will disappear very quickly from the road.”

(Pictured: Tesla FSD’s new “Hurry” mode)

“This is like taking ChatGPT and putting it behind the wheels,” Czarnecki said. “Not literally, of course, because it's fed with driving data, but the underlying technology is kind of that, and you can’t build a safe system that way,” he added.

Tesla could create a driverless service using a vision-only system, said Alex Roy, a former executive at the now-defunct self-driving startup Argo AI and a co-founder of New Industry VC. However, that would mean either deploying far and wide while compromising safety and performance, or deploying in a highly constrained environment.

“I’m absolutely convinced that a camera-first or camera-only system will be able to do this. The only question is when,” Roy said, acknowledging that he’s in the minority. Even so, he said he doesn’t think Tesla’s event will yield anything that can be commercialized in the near term.

While none of the experts opposed robotaxis, they emphasized the need for extensive real-world testing, along with increased data sharing with regulators to address issues transparently. “Self-driving cars can succeed in limited domains,” Cummings noted, adding that she advocates for controlled pilot testing to make that happen.

Koopman, on the other hand, said he has very low expectations for the Robotaxi reveal. A prototype car that sparks discussion is perfectly fine, he said.

"But that would have no predictive power whatsoever as to when robotaxis will be on the road at scale."

Additional reporting by Tim Levin.

Contact the authors: suvrat.kothari@insideevs.com, tim.levin@insideevs.com
