- Tesla says cameras alone are enough to help autonomous cars safely drive themselves.
- Waymo robotaxis use multiple sensors, including cameras, radar, and lidar.
- A Waymo executive said the company isn't comparing its robotaxis' capabilities to humans.
If people can drive with their eyes, can an AI drive only with cameras?
Tesla leans on that analogy to defend its hotly debated cameras-only approach to autonomous cars.
"It should be solved with cameras just like how every other human or animal lives around this world," Ashok Elluswamy, Tesla's vice president of AI, said at the ScaledML Conference on January 29. "Self-driving problem is thought of as a sensor problem. It's actually not a sensor problem, it's an AI problem."
Alphabet's Waymo has a fundamentally different engineering approach to autonomy. Srikanth Thirumalai, Waymo's vice president of onboard software, pushed back on Elluswamy's comparison.
"I think the bar is higher than human driving," he told Business Insider.
The contrast between Waymo and Tesla goes beyond philosophy and is built into the hardware.
Tesla wants to reach autonomy with fewer than 10 cameras and an AI trained on billions of miles of real-world driving data. Waymo also relies on AI but pairs it with a multi-sensor system of 29 cameras, five lidars, and six radars, giving its AI driver different ways to perceive an environment. The Alphabet company has so far deployed about 2,500 robotaxis across multiple US cities.
The debate often boils down to cost and safety: More sensors could increase costs, which could be a barrier to scale. Fewer sensors could present safety challenges, some say, which is another constraint for mass robotaxi adoption.
Thirumalai manages a team of more than 600 people building Waymo's AI driver software. During a rare interview at Waymo's HQ, which spans multiple buildings, the vice president told Business Insider he expects the sensor suite to shrink over time as the hardware improves and gets cheaper. But he framed the lidar-versus-no-lidar debate as a distraction from the company's safety-oriented objective.
"Given where the technology is right now, the question is what is it going to take for that product to be safe?" he said. "So you work backwards from that safety bar and say, 'What does it take to build a safe product?' And then keep pushing and iterating and innovating to reduce the cost of the sensors, and to improve the quality of the software and how it uses the sensors."
The soft-spoken Thirumalai looked to the future and explained his position.
"In three to five years, will our sensor stack look different than it is right now? Absolutely."
Waymo has previously said it expects the next generation of robotaxis to have fewer sensors: 13 cameras, four lidars, and six radars. A Waymo spokesperson previously told Business Insider that the company expects to serve public riders by late 2026.
A Tesla spokesperson did not respond to a request for comment.
How safe should a robotaxi be?
Humans can be bad drivers. They're easily distracted, swayed by emotions, and can be slow to make the right decisions. Leaders in autonomy will say they're driven by a mission to build something safer than humans. The challenge is defining what "safer" means in a way that regulators, riders, and engineers can measure.
"This notion of what the bar is is a very important question," Thirumalai said. "And one that we have only refined over the years, and in some cases, we're still sort of discovering what the bar is."
Instead of setting an arbitrary goalpost, such as robots being multiple times safer than a human driver, Thirumalai said Waymo looks at individual types of driving events and assesses how often they occur.
"We break it down and say, 'Well, how often do those events actually occur per million miles of driving? And how serious are those events?" he said, adding that his team can then aim for a lower incident rate.
Thirumalai and even Waymo's top brass aren't selling perfection. A human fatality caused by a robotaxi isn't a matter of if but when, Waymo co-CEO Tekedra Mawakana has said.
Reports and videos shared across social media have shown that AVs can make mistakes, whether in school zones, emergency response scenes, bad weather, or even seemingly ordinary driving scenarios.
"People might say, 'Hey, look, this is AI. We never want it to make a mistake.' That is an unachievable bar," Thirumalai said.