In the release notes of the latest Tesla FSD Beta v11 update, Tesla explains what is happening to Autopilot, and it adds the ability to send voice feedback.
Tesla FSD Beta v11 is both an exciting and scary step as it is supposed to merge Tesla’s FSD and Autopilot highway stacks.
FSD Beta enables Tesla vehicles to drive autonomously to a destination entered in the car’s navigation system, but the driver needs to remain vigilant and ready to take control at all times.
Since the responsibility rests with the driver and not Tesla’s system, it is still considered a level-two driver-assist system, despite its name. It has been sort of a “two steps forward, one step back” type of program, as some updates have seen regressions in terms of driving capabilities.
Tesla has frequently been releasing new software updates to the FSD Beta program and adding more owners to it.
Since the wider release of the beta last year, over 400,000 Tesla owners in North America have joined the program – virtually every owner who bought the FSD package for their vehicle.
The update is an important step because it includes many new neural networks, as Elon Musk stated, but from a consumer perspective, it’s also important because it is expected to merge Tesla’s FSD Beta software stack primarily used on roads and city streets with Tesla’s Autopilot software stack, which is used as a level 2 driver assist system on highways.
It has been delayed several times, but recently, Musk confirmed that a new version (v11.3) is going to a closed beta fleet this week – indicating that it might finally be about to be more widely released.
Now NotaTeslaapp, which tracks Tesla software updates, has obtained the FSD Beta v11.3 release notes, and they contain some interesting information.
Tesla starts out by explaining in more detail what is going to happen to Autopilot with this update:
Enabled FSD Beta on highway. This unifies the vision and planning stack on and off-highway and replaces the legacy highway stack, which is over four years old. The legacy highway stack still relies on several single-camera and single-frame networks, and was set up to handle simple lane-specific maneuvers. FSD Beta’s multi-camera video networks and next-gen planner, that allows for more complex agent interactions with less reliance on lanes, make way for adding more intelligent behaviors, smoother control and better decision making.
As expected, this leaves the door open for some regression at first, but Tesla makes it clear that it believes this is the way to go long-term.
Another interesting new feature revealed by the release notes is the ability to send Tesla voice memos about your FSD Beta experience. That’s something beta testers have been requesting for a while, as they can use it to give Tesla more details about specific situations they experience with the system.
A big part of the rest of the notes appears to focus on curbing some potentially dangerous driving behaviors that FSD Beta has been known to exhibit, which NHTSA recently described in its FSD Beta recall notice.
As we noted in our reporting of the recall, the notice made it sound like Tesla’s “fix” for the “recall” was simply its usual next software update, but now it looks like they did try to address some of these things more specifically as described in the release notes.
Here are the full Tesla FSD Beta v11.3 release notes:
Enabled FSD Beta on highway. This unifies the vision and planning stack on and off-highway and replaces the legacy highway stack, which is over four years old. The legacy highway stack still relies on several single-camera and single-frame networks, and was set up to handle simple lane-specific maneuvers. FSD Beta’s multi-camera video networks and next-gen planner, that allows for more complex agent interactions with less reliance on lanes, make way for adding more intelligent behaviors, smoother control and better decision-making.
Added voice drive-notes. After an intervention, you can now send Tesla an anonymous voice message describing your experience to help improve Autopilot.
Expanded Automatic Emergency Braking (AEB) to handle vehicles that cross ego’s path. This includes cases where other vehicles run their red light or turn across ego’s path, stealing the right-of-way. Replay of previous collisions of this type suggests that 49% of the events would be mitigated by the new behavior. This improvement is now active in both manual driving and autopilot operation.
Improved autopilot reaction time to red light runners and stop sign runners by 500ms, by increased reliance on object’s instantaneous kinematics along with trajectory estimates.
Added a long-range highway lanes network to enable earlier response to blocked lanes and high curvature.
Reduced goal pose prediction error for candidate trajectory neural network by 40% and reduced runtime by 3X. This was achieved by improving the dataset using heavier and more robust offline optimization, increasing the size of this improved dataset by 4X, and implementing a better architecture and feature space.
Improved occupancy network detections by oversampling on 180K challenging videos including rain reflections, road debris, and high curvature.
Improved recall for close-by cut-in cases by 20% by adding 40k autolabeled fleet clips of this scenario to the dataset. Also improved handling of cut-in cases by improved modeling of their motion into ego’s lane, leveraging the same for smoother lateral and longitudinal control for cut-in objects.
Added “lane guidance” module and perceptual loss to the Road Edges and Lines network, improving the absolute recall of lines by 6% and the absolute recall of road edges by 7%.
Improved overall geometry and stability of lane predictions by updating the “lane guidance” module representation with information relevant to predicting crossing and oncoming lanes.
Improved handling through high speed and high curvature scenarios by offsetting towards inner lane lines.
Improved lane changes, including: earlier detection and handling for simultaneous lane changes, better gap selection when approaching deadlines, better integration between speed-based and nav-based lane change decisions and more differentiation between the FSD driving profiles with respect to speed-based lane changes.
Improved longitudinal control response smoothness when following lead vehicles by better modeling the possible effect of lead vehicles’ brake lights on their future speed profiles.
Improved detection of rare objects by 18% and reduced the depth error to large trucks by 9%, primarily from migrating to more densely supervised autolabeled datasets.
Improved semantic detections for school buses by 12% and vehicles transitioning from stationary-to-driving by 15%. This was achieved by improving dataset label accuracy and increasing dataset size by 5%.
Improved decision-making at crosswalks by leveraging neural network-based ego trajectory estimation in place of approximated kinematic models.
Improved reliability and smoothness of merge control, by deprecating legacy merge region tasks in favor of merge topologies derived from vector lanes.
Unlocked longer fleet telemetry clips (by up to 26%) by balancing compressed IPC buffers and optimized write scheduling across twin SOCs.
FTC: We use income earning auto affiliate links.
Anthropic and Google officially announced their cloud partnership Thursday, a deal that gives the artificial intelligence company access to up to one million of Google’s custom-designed Tensor Processing Units, or TPUs.
The deal, which is worth tens of billions of dollars, is the company’s largest TPU commitment yet and is expected to bring well over a gigawatt of AI compute capacity online in 2026.
Industry estimates peg the cost of a 1-gigawatt data center at around $50 billion, with roughly $35 billion of that typically allocated to chips.
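Those two round numbers imply a simple cost split that is easy to sanity-check. A minimal sketch in Python, using only the estimates quoted above (industry approximations, not confirmed figures):

```python
# Back-of-envelope check of the industry estimate cited above.
# Both figures are the article's round numbers, not confirmed costs.
total_cost_per_gw = 50e9   # ~$50 billion to build a 1-gigawatt data center
chip_cost = 35e9           # ~$35 billion of that typically allocated to chips

chip_fraction = chip_cost / total_cost_per_gw
print(f"Chips: {chip_fraction:.0%} of the estimated build cost")  # 70%
```

In other words, by these estimates, silicon accounts for roughly 70% of the bill for a build-out on this scale.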
While competitors tout even loftier projections — OpenAI’s 33-gigawatt “Stargate” chief among them — Anthropic’s move is a quiet power play rooted in execution, not spectacle.
Founded by former OpenAI researchers, the company has deliberately adopted a slower, steadier ethos, one that is efficient, diversified, and laser-focused on the enterprise market.
A key to Anthropic’s infrastructure strategy is its multi-cloud architecture.
The company’s Claude family of language models runs across Google’s TPUs, Amazon’s custom Trainium chips, and Nvidia’s GPUs, with each platform assigned to specialized workloads like training, inference, and research.
Google said the TPUs offer Anthropic “strong price-performance and efficiency.”
“Anthropic and Google have a longstanding partnership and this latest expansion will help us continue to grow the compute we need to define the frontier of AI,” said Anthropic CFO Krishna Rao in a release.
Anthropic’s ability to spread workloads across vendors lets it fine-tune for price, performance, and power constraints.
According to a person familiar with the company’s infrastructure strategy, every dollar of compute stretches further under this model than those locked into single-vendor architectures.
Google, for its part, is leaning into the partnership.
“Anthropic’s choice to significantly expand its usage of TPUs reflects the strong price-performance and efficiency its teams have seen with TPUs for several years,” said Google Cloud CEO Thomas Kurian in a release, touting the company’s seventh-generation “Ironwood” accelerator as part of a maturing portfolio.
Claude’s breakneck revenue growth
Anthropic’s escalating compute demand reflects its explosive business growth.
The company’s annual revenue run rate is now approaching $7 billion, and Claude powers more than 300,000 businesses — a staggering 300× increase over the past two years. The number of large customers, each contributing more than $100,000 in run-rate revenue, has grown nearly sevenfold in the past year.
Claude Code, the company’s agentic coding assistant, generated $500 million in annualized revenue within just two months of launch, which Anthropic claims makes it the “fastest-growing product” in history.
While Google is powering Anthropic’s next phase of compute expansion, Amazon remains its most deeply embedded partner.
The retail and cloud giant has invested $8 billion in Anthropic to date, more than double Google’s confirmed $3 billion in equity.
Moreover, AWS is considered Anthropic’s chief cloud provider, making its influence structural and not just financial.
Its custom-built supercomputer for Claude, known as Project Rainier, runs on Amazon’s Trainium 2 chips. That shift matters not just for speed, but for cost: Trainium avoids the premium margins of other chips, enabling more compute per dollar spent.
Wall Street is already seeing results.
Rothschild & Co Redburn analyst Alex Haissl estimated that Anthropic added one to two percentage points to AWS’s growth in last year’s fourth quarter and this year’s first, with its contribution expected to exceed five points in the second half of 2025.
Wedbush’s Scott Devitt previously told CNBC that once Claude becomes a default tool for enterprise developers, that usage flows directly into AWS revenue — a dynamic he believes will drive AWS growth for “many, many years.”
Google, meanwhile, continues to play a pivotal role. In January, the company agreed to a new $1 billion investment in Anthropic, adding to its previous $2 billion and 10% equity stake.
Critically, Anthropic’s multicloud approach proved resilient during Monday’s AWS outage, which did not impact Claude thanks to its diversified architecture.
Still, Anthropic isn’t playing favorites. The company maintains control over model weights, pricing, and customer data — and has no exclusivity with any cloud provider. That neutral stance could prove key as competition among hyperscalers intensifies.
Redwood Materials, founded by former Tesla CTO and cofounder JB Straubel, has raised $350 million in new funding to scale its US-made battery storage systems and critical materials operations. The company is ramping up to meet surging demand from AI data centers and the clean energy sector.
The oversubscribed Series E round was led by Eclipse, with participation from NVentures, NVIDIA’s venture capital arm, and other new strategic investors.
As global supplies tighten, the US is racing to secure domestic production of critical materials like lithium, nickel, cobalt, and copper. In July, Redwood and GM signed a non-binding memorandum of understanding to turn new and second-life GM batteries into energy storage systems. Redwood launched a new venture in June called Redwood Energy that repurposes both new and used EV battery packs into fast and cost-effective energy storage systems.
Redwood says large-scale battery storage is the fastest and most scalable way to enable new AI data center rollout while unlocking stranded generation capacity and stabilizing the grid. Battery storage also helps industrial facilities electrify and balance renewable energy output. The company aims to deliver a new generation of affordable, US-built energy storage systems designed to serve the grid, heavy industry, and AI data centers, reducing dependence on imported lithium iron phosphate (LFP) batteries.
Redwood will use the new capital to expand energy storage deployments, refining and materials production capacity, and its engineering and operations teams.
The 30% federal solar tax credit is ending this year. If you’ve ever considered going solar, now’s the time to act. To make sure you find a trusted, reliable solar installer near you that offers competitive pricing, check out EnergySage, a free service that makes it easy for you to go solar. It has hundreds of pre-vetted solar installers competing for your business, ensuring you get high-quality solutions and save 20-30% compared to going it alone. Plus, it’s free to use, and you won’t get sales calls until you select an installer and share your phone number with them.
Your personalized solar quotes are easy to compare online and you’ll get access to unbiased Energy Advisors to help you every step of the way. Get started here.
A report this morning detailed American EV automaker Rivian’s plans to lay off a portion of its current workforce as it tries to conserve cash while gearing up for the launch of its newest model, the R2, next year.
Update 10/23/25: As promised, Rivian followed up with more details of this morning’s report regarding layoffs. The following letter from Rivian founder and CEO, RJ Scaringe, was sent out to the automaker’s workforce moments ago:
Hi Team,
I am writing to share a difficult update.
With the launch of R2 in front of us and the need to profitably scale our business, we have made the very difficult decision to make a number of structural adjustments to our teams. These changes result in a reduction in the size of our team by roughly 4.5%.
These are not changes that were made lightly. With the changing operating backdrop, we had to rethink how we are scaling our go-to-market functions. This news is challenging to hear, and the hard work and contributions of the team members who are leaving are greatly appreciated.
To ensure we move forward with clarity, I want to summarize the areas most impacted.
Streamlining the Customer Journey: To provide a seamless experience for our customers, we are integrating the Vehicle Operations workstreams into the Service organization to create fewer customer handoffs and clearer ownership. We are also integrating the Delivery and Mobile Operations into the Sales organization to ensure the purchase experience is as seamless as possible with a single touchpoint throughout the entire sales process and to delivery.
Elevating Our Marketing Efforts: Historically we have had multiple functions that collectively capture what would typically be housed in a single marketing organization. We have made the decision to form a single marketing organization, and while we recruit our first Chief Marketing Officer (CMO), I will be acting as Interim CMO. Our Marketing Experiences team, led by Denise Cherry, and the Creative Studio team, led by Matt Soldan, will both report directly to me for now.
These changes are being made to ensure we can deliver on our potential by scaling efficiently towards building a healthy and profitable business. I am incredibly confident in R2 and the hard work of our teams to deliver and ramp this incredible product.
Thanks again everyone.
RJ
Not much backstory here, so we’ll get right into it.
A report from the Wall Street Journal this morning shared brief details of Rivian’s layoff plans, which could affect approximately 4% of the current staff. At the end of 2024, Rivian’s workforce tally sat around 15,000 people, so the reported layoff could affect as many as 600 individuals, possibly more.
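The headcount math behind that estimate is straightforward. A quick sketch using the figures in the report (both approximations):

```python
# Rough layoff estimate from the figures reported above.
workforce = 15_000   # approximate Rivian headcount at the end of 2024
cut_rate = 0.04      # ~4% reduction, per the Wall Street Journal report

affected = int(workforce * cut_rate)
print(affected)  # 600 employees at the reported 4%; the CEO letter cites 4.5%
```

At the 4.5% figure in Scaringe’s letter, the count would land closer to 675.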
Other outlets have pointed out that EV automakers like Rivian have faced a tougher market following the end of the $7,500 federal tax incentive. While that may be true to a certain extent, most of Rivian’s R1 variants didn’t qualify, unless it was a lease, and the automaker has deployed its own incentive programs.
In fact, Rivian’s Q3 2025 deliveries exceeded expectations. It remains speculative at this point until we receive an official statement from Rivian explaining the plans to lay off staff, but this could be a preemptive decision based on market forecasts.
Furthermore, Rivian is closer than ever to launching R2 in 2026, which has the makings of a bestseller in the EV industry if sales match even a portion of the hype surrounding it. The layoffs could also be an effort to lean down and conserve funds through the home stretch of that development process before staffing back up in 2026 or 2027, when demand is (ideally) higher.
We won’t really know the reasoning behind the decision until Rivian shares more information.
We reached out to Rivian for comment and were told the automaker will have more to share this afternoon. We will update this story as new information becomes available.