Tesla tests a new version of FSD with a fully enhanced visualization

On July 9, Tesla officially pushed the FSD Beta 9 update to car owners who had successfully applied for the test program. Although Tesla's FSD is pitched as full self-driving capability, it is still an assisted driving system; vehicles will have to rely on future OTA software upgrades before they gain genuinely full autonomous driving capability.

Since its launch, Tesla's pricing for FSD has kept changing. In 2018, the option cost $3,000 when ordered with a new car and $4,000 if purchased afterward. Since then, the price has continued to rise, from $5,000 to $8,000, and last year it was raised to $10,000 after the FSD software beta launched. Of course, these price changes reflect not only significant improvements in the FSD software but also hardware updates.

The launch of FSD Beta 9 can be described as the system's most important update yet, because it now relies entirely on Tesla Vision: the computer vision system uses only camera images and no longer uses the radar sensor. Users who have received FSD Beta 9 can use this advanced driver-assistance feature on local, non-highway roads.

FSD Beta 9 brings three major updates, covering assisted driving, driving visualization, and the in-car camera. Assisted driving now uses a purely vision-based solution. Tesla's AI director has previously described using deep learning to provide the driving capability and to largely automate data labeling. In addition to automatic labeling, Tesla also maintains a large manual labeling team; according to Musk, Tesla currently has more than 500 skilled labelers and will expand that to 1,000. In object recognition, therefore, Tesla's pure vision approach may not lose out to sensors such as lidar. A minimal sketch of what camera-only detection looks like in practice follows below.
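
To make the idea of a camera-only perception stack concrete, here is a minimal sketch of vision-only object detection. It uses a generic pretrained detector from torchvision as a stand-in; the model, the COCO class IDs, and the score threshold are illustrative assumptions, not Tesla's actual network or pipeline.

```python
# Minimal sketch of camera-only (vision-only) object detection.
# Assumption: an off-the-shelf torchvision detector stands in for a
# production vision network; class IDs and thresholds are illustrative.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Load a generic pretrained detector (not Tesla's model).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# COCO class IDs we care about for this sketch: pedestrian, vehicle, pet.
CLASSES_OF_INTEREST = {1: "person", 3: "car", 18: "dog"}

def detect(image_path: str, score_threshold: float = 0.6):
    """Run single-frame detection and keep only relevant, confident boxes."""
    frame = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        output = model([frame])[0]  # dict with "boxes", "labels", "scores"
    detections = []
    for box, label, score in zip(output["boxes"], output["labels"], output["scores"]):
        label_id = int(label)
        if score >= score_threshold and label_id in CLASSES_OF_INTEREST:
            detections.append((CLASSES_OF_INTEREST[label_id], float(score), box.tolist()))
    return detections

if __name__ == "__main__":
    for name, score, box in detect("dashcam_frame.jpg"):
        print(f"{name}: {score:.2f} at {box}")
```

In a real vehicle this per-frame detection would feed a tracking and planning stack; the sketch only shows the vision-only input side that the radar-free approach depends on.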

The biggest change testers feel is naturally the improved visualization UI on the screen. When Autopilot is engaged, the screen presents more information about the surroundings. The Beta 9 UI can now display not only the taillights of surrounding cars but also passing pedestrians and pets. Under screen photos posted by netizens, Musk replied that FSD will soon render turn and hazard signals, emergency vehicle and police lights, and even hand gestures on the screen, so that Tesla vehicles can also respond appropriately to emergencies.

The last update is the camera system in the cockpit. For safety and regulatory reasons, current driver assistance still requires the driver to keep their hands on the steering wheel and watch the route, surrounding vehicles, and other objects. The camera on the rearview mirror detects whether the driver is paying attention and issues voice reminders. If the owner has not turned on data sharing, this image data is not transmitted outside the car for analysis by Tesla. Driver monitoring is also a very important function for any autonomous driving system below L5: last year a Tesla owner was stopped and fined by traffic police after sleeping at the wheel at 93 miles per hour, and many others, for video clips or publicity, have driven after tricking the system into thinking someone was in the driver's seat. A sketch of how such a monitoring loop could be structured follows below.
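
As an illustration of how cabin monitoring of this kind could be wired together, here is a hedged sketch of an attention-check loop. The camera source, the attention classifier, and the alert and upload hooks are hypothetical placeholders, not Tesla's implementation; the opt-in check simply mirrors the data-sharing behavior described above.

```python
# Illustrative sketch of an in-cabin driver-monitoring loop.
# All hooks below are hypothetical placeholders, not Tesla's actual code.
import time

ATTENTION_TIMEOUT_S = 3.0     # seconds of inattention before a voice reminder (assumed)
data_sharing_enabled = False  # mirrors the opt-in described in the article

def read_cabin_frame():
    """Grab one frame from the rearview-mirror camera (placeholder)."""
    raise NotImplementedError

def driver_is_attentive(frame) -> bool:
    """Classify gaze/head pose from the frame (placeholder model)."""
    raise NotImplementedError

def play_voice_reminder():
    """Ask the driver to pay attention to the road (placeholder)."""
    print("Please keep your eyes on the road.")

def maybe_upload(frame):
    """Only send imagery off the vehicle if the owner opted in."""
    if data_sharing_enabled:
        pass  # upload for fleet analysis

def monitoring_loop():
    last_attentive = time.monotonic()
    while True:
        frame = read_cabin_frame()
        if driver_is_attentive(frame):
            last_attentive = time.monotonic()
        elif time.monotonic() - last_attentive > ATTENTION_TIMEOUT_S:
            play_voice_reminder()
            last_attentive = time.monotonic()  # avoid nagging on every frame
        maybe_upload(frame)
        time.sleep(0.1)  # roughly 10 Hz check rate
```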


Musk has been making unrealistic Autopilot promises to Tesla owners since 2018, but so-called full self-driving keeps being postponed, and the L5 autonomy he claimed remains out of reach. By the SAE's driving automation levels, Tesla's system still only qualifies as L2 partial automation. In early July, Musk himself replied to netizens acknowledging that generalized self-driving is a hard problem because it requires solving real-world AI, and that in hindsight he had not expected it to be so difficult.

Summary

At present, many vehicles claiming self-driving capability above L2 like to show off their visual processing, and Tesla has not been stingy with in-car display capabilities either. Until fully autonomous driving is actually realized, giving the driver a more intuitive visualization UI is what makes assisted driving work better. Tesla provides 10 TFLOPS of computing power for the IVI system of its new models, enough to render detailed models of objects and lanes rather than simply marking them with boxes, which undoubtedly holds greater appeal for consumers.

However, Tesla's FSD system is still in beta. Although it has solved many of the problems an ordinary camera-based vision system must overcome, it still struggles to match lidar when measuring object distance, so static objects can appear to drift around on the display. Before an official version can be released to the public, Tesla's self-driving team still has plenty of work to do. The small sketch below illustrates the kind of jitter involved and one common way to damp it.
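
To illustrate why noisy camera-based range estimates make static objects appear to drift, and one common way to damp the effect, here is a small sketch that applies exponential smoothing to per-object distance estimates before they are rendered. The noise magnitude and smoothing factor are illustrative assumptions, not a description of Tesla's pipeline.

```python
# Sketch: damping jitter in vision-only distance estimates before display.
# Noise level and smoothing factor are illustrative assumptions.
import random

class SmoothedTrack:
    """Exponentially smooth noisy per-frame range estimates for one object."""

    def __init__(self, alpha: float = 0.2):
        self.alpha = alpha   # lower alpha = steadier display, but more lag
        self.estimate = None

    def update(self, measured_distance_m: float) -> float:
        if self.estimate is None:
            self.estimate = measured_distance_m
        else:
            self.estimate = (1 - self.alpha) * self.estimate + self.alpha * measured_distance_m
        return self.estimate

if __name__ == "__main__":
    true_distance = 25.0  # a parked car 25 m ahead
    track = SmoothedTrack(alpha=0.2)
    for frame in range(10):
        # Monocular depth estimates are noisier than lidar returns (assumed +/- 2 m here).
        raw = true_distance + random.gauss(0.0, 2.0)
        print(f"frame {frame}: raw {raw:5.1f} m -> displayed {track.update(raw):5.1f} m")
```

Production systems typically use full multi-frame tracking filters rather than this one-line smoother, but the trade-off is the same: suppressing measurement noise so rendered objects hold still on screen.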