Tesla has expanded the testing group for its Full Self-Driving (FSD) Beta, deploying the latest version of the software, V11.3, to more employee vehicles over the weekend. The release notes from one of those employee vehicles leaked online, giving us our second look at what to expect from this version, which unites the city and highway neural nets into a single stack.
We say this is the second look because all the way back in November, the release notes for the first iteration of “V11” also leaked online. This latest version, contained in 2022.45.5, has some changes and additions, but the first paragraph of the release notes remains the same, explaining that it enables FSD Beta on the highway, replacing the four-year-old legacy highway stack.
“Enabled FSD Beta on highway. This unifies the vision and planning stack on and off-highway and replaces the legacy highway stack, which is over four years old. The legacy highway stack still relies on several single-camera and single-frame networks, and was set up to handle simple lane-specific maneuvers. FSD Beta’s multi-camera video networks and next-gen planner, that allows for more complex agent interactions with less reliance on lanes, make way for adding more intelligent behaviors, smoother control and better decision-making.” (via Teslaupdates.org)
The second point in the release notes is brand new: the addition of voice memos. Until recently, FSD Beta testers could provide feedback to Tesla engineers by tapping the video camera icon on their screen, which would send a short video clip back to the mothership for the engineers to analyze. While this provided valuable feedback, it lacked critical context that only the owner would know. Tesla never explained the decision, but we believe this is why it removed the camera icon in a recent update.
Now you can provide feedback again, and supply that critical context, through voice memos, or voice drive-notes as Tesla calls them. The release notes explain that “you can now send Tesla an anonymous voice message describing your experience to help improve Autopilot.”
The remaining points in the release notes are, as always, very technical, but the focus appears to be on increased safety. Reaction times to red-light runners and to blocked lanes have been improved, as has recall of road lines and road edges. One point that many users were excited about has to do with lane changes.
“Improved lane changes, including: earlier detection and handling for simultaneous lane changes, better gap selection when approaching deadlines, better integration between speed-based and nav-based lane change decisions and more differentiation between the FSD driving profiles with respect to speed lane changes.”
We still don’t know when the single-stack FSD Beta V11 will be released to the public, as it has seen numerous delays and missed deadlines provided by Elon Musk. However, if testing goes well, we could see a limited public release this week, meaning a wider expansion likely won’t happen until early March.
Here are the full 2022.45.5 (FSD Beta V11.3) release notes. (via Teslaupdates.org)
• Enabled FSD Beta on highway. This unifies the vision and planning stack on and off-highway and replaces the legacy highway stack, which is over four years old. The legacy highway stack still relies on several single-camera and single-frame networks, and was set up to handle simple lane-specific maneuvers. FSD Beta’s multi-camera video networks and next-gen planner, that allows for more complex agent interactions with less reliance on lanes, make way for adding more intelligent behaviors, smoother control and better decision-making.
• Added voice drive-notes. After an intervention, you can now send Tesla an anonymous voice message describing your experience to help improve Autopilot.
• Expanded Automatic Emergency Braking (AEB) to handle vehicles that cross ego’s path. This includes cases where other vehicles run their red light or turn across ego’s path, stealing the right-of-way. Replay of previous collisions of this type suggests that 49% of the events would be mitigated by the new behavior. This improvement is now active in both manual driving and autopilot operation.
• Improved autopilot reaction time to red light runners and stop sign runners by 500ms, by increased reliance on object’s instantaneous kinematics along with trajectory estimates.
• Added a long-range highway lanes network to enable earlier response to blocked lanes and high curvature.
• Reduced goal pose prediction error for candidate trajectory neural network by 40% and reduced runtime by 3X. This was achieved by improving the dataset using heavier and more robust offline optimization, increasing the size of this improved dataset by 4X, and implementing a better architecture and feature space.
• Improved occupancy network detections by oversampling on 180K challenging videos including rain reflections, road debris, and high curvature.
• Improved recall for close-by cut-in cases by 20% by adding 40k autolabeled fleet clips of this scenario to the dataset. Also improved handling of cut-in cases by improved modeling of their motion into ego’s lane, leveraging the same for smoother lateral and longitudinal control for cut-in objects.
• Added “lane guidance” module and perceptual loss to the Road Edges and Lines network, improving the absolute recall of lines by 6% and the absolute recall of road edges by 7%.
• Improved overall geometry and stability of lane predictions by updating the “lane guidance” module representation with information relevant to predicting crossing and oncoming lanes.
• Improved handling through high speed and high curvature scenarios by offsetting towards inner lane lines.
• Improved lane changes, including: earlier detection and handling for simultaneous lane changes, better gap selection when approaching deadlines, better integration between speed-based and nav-based lane change decisions and more differentiation between the FSD driving profiles with respect to speed lane changes.
• Improved longitudinal control response smoothness when following lead vehicles by better modeling the possible effect of lead vehicles’ brake lights on their future speed profiles.
• Improved detection of rare objects by 18% and reduced the depth error to large trucks by 9%, primarily from migrating to more densely supervised autolabeled datasets.
• Improved semantic detections for school buses by 12% and vehicles transitioning from stationary-to-driving by 15%. This was achieved by improving dataset label accuracy and increasing dataset size by 5%.
• Improved decision making at crosswalks by leveraging neural network based ego trajectory estimation in place of approximated kinematic models.
• Improved reliability and smoothness of merge control, by deprecating legacy merge region tasks in favor of merge topologies derived from vector lanes.
• Unlocked longer fleet telemetry clips (by up to 26%) by balancing compressed IPC buffers and optimized write scheduling across twin SOCs.
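The bullets above are Tesla’s own high-level summaries, so any implementation detail is guesswork. Still, the red-light-runner point, reacting from an object’s instantaneous kinematics (its measured position and speed right now) rather than waiting for a multi-frame trajectory estimate to converge, can be illustrated with a minimal sketch. All names, numbers, and thresholds below are illustrative assumptions, not Tesla’s actual code.

```python
# Hypothetical sketch: deciding to brake for a red-light runner from the
# crossing vehicle's instantaneous kinematics alone. Illustrative only.
from dataclasses import dataclass


@dataclass
class CrossingVehicle:
    distance_to_conflict_m: float  # distance until it enters ego's path
    speed_mps: float               # current measured speed


def time_to_conflict(v: CrossingVehicle) -> float:
    """Time until the vehicle reaches ego's path, from its current state."""
    if v.speed_mps <= 0.0:
        return float("inf")  # not moving toward the conflict point
    return v.distance_to_conflict_m / v.speed_mps


def should_brake(v: CrossingVehicle, ego_eta_s: float,
                 margin_s: float = 1.0) -> bool:
    """Brake if the crossing vehicle and ego would occupy the conflict
    zone within `margin_s` seconds of each other (assumed threshold)."""
    return abs(time_to_conflict(v) - ego_eta_s) < margin_s


# A runner 20 m from the intersection at 15 m/s reaches the conflict
# point in ~1.33 s; if ego arrives around 1.5 s, the gap is unsafe.
runner = CrossingVehicle(distance_to_conflict_m=20.0, speed_mps=15.0)
print(should_brake(runner, ego_eta_s=1.5))  # prints True
```

The point of using the instantaneous state is latency: a decision like this needs only the current frame’s measurements, whereas a smoothed trajectory estimate takes several frames to build confidence, which is consistent with the 500 ms reaction-time gain the notes describe.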