“Oh Jeeeesus”: Drivers react to Tesla’s full self-driving beta release

YouTuber Brandon M captured this drone footage of his Tesla steering toward a parked car. “Oh Jeeeesus,” he said as he grabbed the steering wheel.

Last week, Tesla released an early version of its long-awaited “full self-driving” software to a limited number of customers. It was arguably Tesla’s biggest Autopilot update ever. The software enables Tesla vehicles to autonomously navigate the vast majority of common roadway situations and complete many trips from start to finish.

Tesla considers it to be beta software and says it’s not intended for fully autonomous operation. Drivers are expected to keep their eyes on the road and hands on the wheel at all times.

To understand the new software, I watched more than three hours of driving footage from three Tesla owners who got the FSD update. These YouTube videos underscored how important it is for drivers to actively supervise Tesla’s new software. Over the course of that footage, the drivers took control more than a dozen times, including at least two cases in which the car seemed to be on the verge of crashing into another vehicle.

On the one hand, it’s impressive that Tesla has gotten as far as it has. On the other, the software clearly has a long way to go before it’s anywhere close to human levels of driving performance. An experienced human driver can drive for thousands of miles without making a serious mistake. Tesla’s new software falls far short of that.

“It’s crazy, it’s scary, and it’s unbelievably good”

"That car was going so fast," Brandon M said as he disabled Autopilot and hit the brakes.
Enlarge / “That car was going so fast,” Brandon M said as he disabled Autopilot and hit the brakes.

In one video posted last Friday, Tesla owner Brandon M approached a four-way intersection at night with the full self-driving software engaged. The car stopped at the stop sign and then began to accelerate. A second later, Brandon disengaged the FSD software and hit the brakes—just as another car zoomed by on the cross street.

“That car was going so fast,” Brandon said. “I had to disengage there because it didn’t detect that car for some reason.”

In another video, Brandon’s Tesla was making a left turn but wasn’t turning sharply enough to avoid hitting a car parked on the opposite side of the cross street. “Oh Jeeeesus,” Brandon said as he grabbed the steering wheel and jerked it to the left. “Oh my God,” Brandon’s passenger added.

“That was a good example of this is still beta and how important it is to have control at all times,” Brandon said. “It just steered directly into the back of this parked car, and it wasn’t going to brake.”

To be fair to Tesla, we don’t know that either of these incidents would necessarily have led to a crash. Maybe the software would have realized its mistake and hit the brakes at the last second. And Brandon’s overall impression of the technology was positive.

Minutes earlier, Brandon had raved about the software’s performance. “Compared to when we did the drive two days ago, it’s so much smoother,” Brandon said. “The improvements from two software releases ago are incredible.”

The other two drivers also had mixed experiences. They were impressed at how quickly the software had improved, but each intervened multiple times when the software’s behavior made them nervous.

“It’s crazy, it’s scary, and it’s unbelievably good,” Tesla owner Zeb Hallock said in a video posted on Sunday. Hallock had just taken over control as his car was passing a bicyclist at a place where the road was curving. While the car did move over to give the bicyclist room, Hallock said that “it was weaving, and it was a curve, and I just wasn’t sure. Even if it was completely safe, it could scare someone with the whole weaving thing.”

Tesla owner James Locke experienced fewer glitches than the other two YouTubers. But at one point he took over because the car was “veering too far right” as it approached an intersection. He was impressed when the vehicle recognized construction cones and changed lanes to avoid them—a capability Tesla introduced a few months ago.

Tesla’s rivals have been cautious

One of the earliest families in Waymo’s public trial in Phoenix poses with a Waymo minivan.

Tesla rivals like Alphabet’s Waymo and GM’s Cruise have spent billions developing self-driving technology. In recent months, they have started to believe their vehicles are ready for fully autonomous operation. But finding out if that’s really true requires taking a leap of faith: putting the cars on public roads and letting them drive without direct human oversight. If the companies do this too early, they could get people killed.

So the companies have cautiously tiptoed up to this line, looking for ways to test their software as thoroughly as possible before they fully take off the training wheels.

Since early 2017, for example, Waymo has operated a self-driving taxi service in the Phoenix suburb of Chandler with safety drivers behind the wheel of almost every vehicle. Earlier this month, after more than three years of testing, Waymo finally began offering driverless taxi rides to the general public. But it was the most cautious launch imaginable. The service is limited to a 50-square-mile corner of the Phoenix metropolitan area. The company is initially offering fewer than 100 driverless rides per week, and the rides are closely monitored by staff at Waymo’s operations center in Chandler.

Cruise is taking a similar approach with plans to launch a low-speed taxi service in a single San Francisco neighborhood later this year.

Tesla’s business model is selling cars, not running a taxi service. And so the company has pursued a radically different testing strategy. Instead of trying to leap straight to a fully self-driving service, the company started with a basic lane-keeping system and has gradually added capabilities over the last four years. That strategy culminated in last week’s full self-driving release that enables Tesla cars to complete most trips end to end.

Tesla’s big gamble

Tesla’s full self-driving software successfully recognized these traffic cones and moved to the right lane.

Instead of hiring professional safety drivers, Tesla has counted on customers to supervise their vehicles and prevent crashes. This hasn’t worked perfectly. At least three Tesla customers in the United States have lost their lives after failing to prevent Autopilot from steering into obstacles.

Tesla has reaped a wealth of real-world data that it can use to improve its software. If Tesla follows through on its plan to release full self-driving widely in the next few months, the company could accelerate that development further. But it could also be taking another gamble with its customers’ lives—and the lives of others on the roads.

Tesla drivers may not be able to effectively supervise a car that drives safely 99 percent (or even 99.99 percent) of the time. Indeed, Google found itself in a similar situation around 2012, when it let employees test-drive an early version of its self-driving technology on freeways. The company found that some Googlers quickly started trusting the vehicle and stopped paying close attention to the road. The experience scared Google executives so much that they abandoned plans to roll out self-driving systems incrementally—the very strategy Tesla is pursuing now.

Even professional safety drivers have trouble paying attention to the road. In 2018, an Arizona woman died after being run over by an Uber self-driving prototype. The vehicle had a safety driver behind the wheel, but she was allegedly looking at her phone in the final seconds before the crash.

So far, Tesla has only released its full self-driving software to a limited number of early adopters—people who may be fully aware that it’s beta software and hyper-vigilant as a result. But even if these drivers are careful at the outset, they may become complacent over time. And it will be much harder to maintain high levels of driver engagement if Tesla makes the software available more broadly.

In the past, Autopilot has been primarily used on freeways, which tend to have wide shoulders and few obstacles. Supervising a self-driving system is more difficult on city streets crowded with pedestrians, bicycles, and other obstacles. Even if a driver is paying close attention, human reaction times might not be fast enough to intervene and prevent tragedy.

While Tesla has made progress over the last year, the company clearly has a long way to go. Based on early videos, Tesla’s software seems to make mistakes much more frequently than experienced human drivers.

Indeed, it’s not clear whether Tesla actually has better self-driving technology than other carmakers or is simply willing to accept more risk than its more established rivals. Mercedes-Benz, for example, had a prototype vehicle in 2013 that appeared to have many of the same capabilities as Tesla’s current FSD software. Many other carmakers have worked on similar technology since then. Have they failed to bring products to market because their technology is not as good as Tesla’s? Or have they simply been unwilling to take the risk Tesla is taking now?
