- Uber's former self-driving lead, Raffi Krikorian, has a warning about AI.
- He wrote in The Atlantic that his Tesla was "totalled" after he crashed it while in Full Self-Driving mode last year.
- Krikorian said he knew the risks of giving up control to AI, but the tech was so good that it was hard not to trust it.
The former head of Uber's self-driving program says a Tesla crash taught him a lesson about "overtrusting" AI.
In an essay published Tuesday in The Atlantic, Raffi Krikorian wrote that his Tesla Model X was "totalled" last year after it crashed into a wall, with him at the wheel, while in Full Self-Driving mode.
"Something felt off—the steering wheel jerked one way, then the other, and the car decelerated in a way I didn't expect. I turned the wheel to take over," Krikorian, who is now the CTO of Mozilla, wrote.
"I don't know exactly what the system was doing, or why. I only know that somewhere in those seconds, we ended up colliding with a wall," he added.
Krikorian said he suffered a concussion but that no one else was hurt in the crash, which happened while he was driving his son to a Boy Scouts meeting in San Francisco. Tesla did not respond to a request for comment.
But the tech executive, who ran Uber's self-driving division from 2015 to 2017 and oversaw testing of a fleet of autonomous vehicles, said the incident taught him a disquieting lesson about AI-powered driving tech: it's easy for human attention to switch off when systems are so close to perfect.
"I was asked to snap from passenger back to pilot in a fraction of a second—to override months of conditioning in the time it takes to blink," he wrote.
Modern driver assist systems, like Tesla's Full Self-Driving (FSD), can handle nearly every driving situation.
However, these AI systems are not infallible. Tesla and other manufacturers require drivers to supervise the vehicle and be ready to take over at a moment's notice if something goes wrong.
"A machine that works perfectly needs no oversight. But a machine that works almost perfectly? That's where the danger lies," Krikorian wrote.
FSD has a history of incidents
Incidents of FSD making mistakes are well-documented. Tesla is facing regulatory investigations over reports that vehicles equipped with the software drove on the wrong side of the road, ran red lights, and attempted to drive through railroad crossings.
The company has faced several lawsuits over fatal crashes involving FSD and its predecessor, Autopilot, and last year was ordered to pay just over $242 million in damages over a 2019 crash that left a 22-year-old woman dead.
Elon Musk's automaker has also been scrutinized over its marketing of FSD and Autopilot. Last December, a California judge ruled that Tesla's advertising falsely suggested cars equipped with the tech could drive themselves.
Tesla is not the only manufacturer to get into hot water over its driver assist system.
Ford is facing a regulatory investigation over two fatal crashes in 2024 involving the Detroit automaker's BlueCruise tech. Documents released by the National Transportation Safety Board (NTSB) earlier this month suggested that both drivers were distracted before the crashes.
Uber had its own run-in with the risks of autonomous driving tech in 2018. A year after Krikorian left the company, an Uber test vehicle driving autonomously with a safety operator behind the wheel struck and killed a pedestrian in Arizona.
An NTSB investigation found that the safety operator failed to monitor the environment and was distracted by her cellphone, and criticized Uber's "inadequate safety culture." Uber abandoned its attempt to build self-driving cars in-house in 2020.