The transparency site PlainSite recently published a pair of letters Tesla wrote to the California Department of Motor Vehicles in late 2020. The letters cast doubt on Elon Musk’s optimistic timeline for the development of fully driverless technology.
For years, Elon Musk has been predicting that fully driverless technology is right around the corner. At an April 2019 event, Musk predicted that Teslas would be capable of fully driverless operation—known in industry jargon as “level 5”—by the end of 2020.
“There’s three steps to self-driving,” Musk told Tesla investors at the event. “There’s being feature complete. Then there’s being feature complete to the degree where we think the person in the car does not need to pay attention. And then there’s being at a reliability level where we also convince regulators that that is true.”
Tesla obviously missed Musk’s 2020 deadline. But you might be forgiven for thinking Tesla is now belatedly executing the strategy he described two years ago. In October, Tesla released what it called its “full self-driving beta” software to a few dozen Tesla owners. A few days ago, Musk announced plans to expand the program to more customers.
Given that the product is called “full self-driving,” this might seem like the first step in Musk’s three-step progression. After a few more months of testing, perhaps it will become reliable enough to operate without human supervision. That could allow Musk to make good on his latest optimistic timeline for Autopilot: in a December 2020 interview, Musk said he was “extremely confident” that Tesla vehicles would reach level 5 by the end of 2021.
But a letter Tesla sent to California regulators the same month had a different tone. Despite the “full self-driving” name, Tesla admitted it doesn’t consider the current beta software suitable for fully driverless operation. The company said it wouldn’t start testing “true autonomous features” until some unspecified point in the future.
“We do not expect significant enhancements”
In a pair of letters sent last November and December, Tesla responded to questions from the California DMV about the FSD beta program. Tesla requires drivers using the beta software to actively supervise it so they can quickly intervene if needed. The DMV wanted to know whether Tesla planned to relax requirements for human supervision once the software was made available to the general public.
In its first response, sent in November, Tesla emphasized that the beta software had limited functionality. Tesla told state regulators that the software is “not capable of recognizing or responding” to “static objects and road debris, emergency vehicles, construction zones, large uncontrolled intersections with multiple incoming ways, occlusions, adverse weather, complicated or adversarial vehicles in the driving path, and unmapped roads.”
In a December follow-up, Tesla said that “we expect the functionality to remain largely unchanged in a future, full release to the customer fleet.” The company added that “we do not expect significant enhancements” that would “shift the responsibility for the entire dynamic driving task to the system.” The system “will continue to be an SAE Level 2, advanced driver-assistance feature.”
SAE level 2 is industry jargon for driver-assistance systems that perform functions like lane-keeping and adaptive cruise control. By definition, level 2 systems require continual human oversight. Fully driverless systems—like the taxi service Waymo is operating in the Phoenix area—are considered level 4 systems.
In its letter to California officials, Tesla added that “Tesla’s development of true autonomous features will follow our iterative process (development, validation, early release, etc.) and any such features will not be released to the general public until we have fully validated them.”
Critics pounced on the disclosure. “Here it is, straight from Tesla,” tweeted prominent Tesla skeptic Ed Niedermeyer. “‘Full Self-Driving’ is not, and will never be, actually self-driving.”
This might not be quite fair to Tesla—the company apparently does plan to develop more advanced software eventually. But at a minimum, Tesla’s public communication about the full self-driving package could easily give customers the wrong impression about the software’s future capabilities.
Full autonomy is always right around the corner
Since 2016, Tesla has given customers every reason to expect that its “full self-driving” software would be, well, fully self-driving.
Early promotional materials for the FSD package described a driver getting out of the vehicle and having it find a parking spot on its own. Tesla has repeatedly talked about the FSD package enabling a Tesla vehicle to operate as an autonomous taxi—an application that requires the car to drive itself without anyone behind the wheel. In 2016, Musk predicted that, within two years, a Tesla owner in Los Angeles would be able to summon their vehicle from New York City.
If Tesla is really going to achieve fully driverless operation in 2021, that doesn’t leave much time to develop, test, and validate complex, safety-critical software. So it would be natural for customers to assume that the software Tesla named the “Full Self-Driving beta” is, in fact, a beta version of Tesla’s long-awaited fully self-driving software. But in its communications with California officials, Tesla makes it clear that’s not true.
Of course, Elon Musk has a long history of announcing over-optimistic timelines for his products. It’s not really news that Tesla failed to meet an optimistic deadline set by its CEO.
But there’s a deeper philosophical issue that may go beyond a few blown deadlines.
The long road to full autonomy
Tesla’s overall Autopilot strategy is to start with a driver-assistance system and gradually evolve it into a fully driverless system. Many other companies in the industry—led by Google’s Waymo—believe that this is a mistake. They think the requirements of the two products are so different that it makes more sense to create a driverless taxi, shuttle, or delivery service from scratch.
In particular, companies like Waymo argue that it’s too difficult to get regular customers to pay close attention to an almost-but-not-fully driverless vehicle. If a car drives perfectly for 1,000 miles and then makes a big mistake, there’s a significant risk the human driver won’t be paying close enough attention to prevent a crash. Waymo initially considered creating an Autopilot-like driver assistance system and licensing it to automakers, but the company ultimately decided that doing so would be too risky.
Musk has always shrugged this critique off. As we’ve seen, he believes improvements to Autopilot’s driver-assistance features will transform it into a system capable of fully driverless operation.
But in its comments to the DMV, Tesla seems to endorse the opposite viewpoint: that adding “true autonomous features” to Autopilot will require more than just incrementally improving the performance of its existing software. Tesla acknowledged that it needs more sophisticated systems to handle “static objects and road debris, emergency vehicles, construction zones,” and the other situations listed in its letters.
And this makes it a little hard to believe Musk’s boast that Tesla will achieve level 5 autonomy by the end of 2021. Notably, Google’s prototype self-driving vehicles have been able to navigate most roadway conditions—much like today’s Tesla FSD beta software—since roughly 2015. Yet the company needed another five years to refine the technology enough to enable fully driverless operation.
And that was within a limited geographic area and with help from powerful lidar sensors. Tesla is trying to achieve the same feat for every street nationwide—and using only cameras and radar.
Perhaps Tesla will move faster than Waymo, and it won’t take another five years to achieve fully driverless operation. But customers considering whether to pay $10,000 for Tesla’s full self-driving software package should certainly take Musk’s optimistic timeline with a grain of salt.
https://arstechnica.com/?p=1748020