Fortune
Sage Lazzaro

Uber's deadly self-driving car crash exposed tricky A.I. workplace issues

(Credit: Justin Sullivan/Getty Images)

Hello and welcome back to Eye on A.I. I'm Sage Lazzaro, filling in for Jeremy.

As most people logged off Friday evening to enjoy another sweltering summer weekend, a landmark legal case about who bears responsibility when A.I. is directly involved in physical real-world harm finally came to a close after five years. 

Rafaela Vasquez, the operator behind the wheel of a self-driving Uber test car that struck and killed a pedestrian in Tempe, Ariz., in 2018, pled guilty to one count of endangerment. Maricopa County Superior Court Judge David Garbarino accepted the plea deal and sentenced her to three years of supervised probation, closing out the case for good. Vasquez was originally charged with negligent homicide, a felony that carries a sentence of up to eight years in prison.

The 2018 crash, in which a woman named Elaine Herzberg was killed as she walked her bicycle across the street, was the first fatal collision involving a fully autonomous vehicle. The case gripped observers as Uber and Vasquez each sought to deflect blame for a situation that not only lacked precedent, but raised several questions about responsibility in a world where human workers are increasingly monitoring A.I. machines, taking direction from algorithms, and sitting on the frontlines of imperfect A.I. systems built by corporate engineers. 

When the crash initially happened, Vasquez thought Uber would stand behind her, according to an in-depth interview with Wired published last year. She was genuinely excited about the burgeoning industry and saw herself as a proud steward of the company, doing her job to monitor the company’s self-driving vehicles as they racked up practice miles. Arizona, which was loosening restrictions to bring in more business from Silicon Valley companies, had recently become a haven for Uber’s on-road testing program after California revoked the Uber cars’ registrations. The medical examiner officially labeled Herzberg’s death an accident, and Uber initially provided Vasquez with an attorney, but interactions with her supervisor quickly went from “consoling to unnerving,” according to Wired.

The tide turned for Vasquez when the investigation revealed that her personal phone was streaming the TV show The Voice at the time of the crash. Dashcam footage also showed that she was looking down in the moments before the collision, and the police analysis later determined that Vasquez could have taken over the car in time, deeming the incident "entirely avoidable." 

While the case didn't go to trial, Vasquez's defense was stacked with arguments that shifted culpability to her employer. In legal filings, Vasquez claimed that she wasn't watching but only listening to The Voice, which was permitted under Uber's guidelines. And when she was looking down, it was to check Slack messages on her work device, which she said needed to be monitored in real time. That task was historically handled by a second operator, but Uber had recently dropped the requirement to have two test operators in every vehicle and now had backup drivers like Vasquez working alone. This changed the dynamics of the job, including how operators logged their feedback on the driving system, and led to long, lonely shifts circling the same roads, usually without incident or the need to intervene. 

In another key part of her pre-trial defense, Vasquez’s attorneys cited a ruling from the National Transportation Safety Board that found the car failed to identify Herzberg as a pedestrian, which caused the failure to brake. The board also found that Uber had “an inadequate safety culture” and failed to prevent “automation complacency” among its test operators, a well-documented phenomenon wherein workers tasked with monitoring automated systems come to trust that the machines have it under control and stop paying attention. Additionally, a former operations manager at the company submitted a whistleblower complaint about a pattern of poor safety practices in the self-driving car division just days before the crash. 

“This story highlights once more that accidents involving A.I. are often ‘problems of many hands’ where different agents have a share of responsibility,” said Filippo Santoni de Sio, a professor of tech ethics and philosophy at Delft University of Technology who specializes in the moral and legal responsibility of A.I. and robotics. He has previously written about this case.

“While Uber or the regulators have come out clear from the legal investigations,” he added, “they clearly have a big share of moral responsibility for the death of Elaine Herzberg.”

As companies across industries integrate A.I. at a breakneck pace, there’s a pressing need to interrogate the moral, ethical, and business questions that arise when human workers increasingly work with, and at the behest of, A.I. systems they had no role in creating.

Just last week, Pennsylvania Democratic Senator Bob Casey argued that A.I. will be the next frontier in the fight for workers’ rights and introduced two bills to regulate the technology in the workplace. One, called the “No Robot Bosses Act,” would forbid companies from using automated and algorithmic systems to make decisions that affect employment, while the other targets A.I. workplace surveillance. Neither bill relates directly to situations like Vasquez’s (though both would seemingly affect Uber’s algorithm-directed rideshare drivers and corporate employees), but they’re just a small taste of what Congress, the EU, and other governments around the world are considering in terms of A.I. regulation for the workplace and beyond. Workers’ rights in the age of A.I. are even taking center stage in the current Hollywood strike, where actors are fighting a clause in their contract that would allow studios to pay them for one day of work and then replicate their likeness in perpetuity using A.I. 

“The legal dispute over the liability is over,” Santoni de Sio said of Vasquez’s case, “but the ethical and political debate has just begun.” 

With that, here's the rest of this week's A.I. news.

Sage Lazzaro
sagelazzaro.com
