The race for autonomous driving has three fronts: software, hardware, and regulation. For years, we’ve watched Tesla try to brute-force its way to “Full Self-Driving (FSD)” with its own custom hardware, while the rest of the automotive industry is increasingly lining up behind NVIDIA.
Here’s a table comparing the two chips, using the most reliable specs I could find. greentheonly’s teardown was particularly useful. If you spot anything inaccurate, please don’t hesitate to reach out:
| Feature / Specification | Tesla AI4 (Hardware 4.0) | NVIDIA Drive Thor (AGX / Jetson) |
| --- | --- | --- |
| Developer / Architect | Tesla (in-house) | NVIDIA |
| Manufacturing Process | Samsung 7nm (7LPP class) | TSMC 4N (custom 5nm class) |
| Release Status | In production (shipping since 2023) | In production since 2025 |
| CPU Architecture | ARM Cortex-A72 (legacy) | ARM Neoverse V3AE (server-grade) |
| CPU Core Count | 20 cores (5× clusters of 4 cores) | 14 cores (Jetson T5000 configuration) |
| AI Performance (INT8) | ~100–150 TOPS (dual-SoC system) | 1,000 TOPS (per chip) |
| AI Performance (FP4) | Not supported / not disclosed | 2,000 TFLOPS (per chip) |
| Neural Processing Unit | 3× custom NPU cores per SoC | Blackwell Tensor Cores + Transformer Engine |
| Memory Type | GDDR6 | LPDDR5X |
| Memory Bus Width | 256-bit | 256-bit |
| Memory Bandwidth | ~384 GB/s | ~273 GB/s |
| Memory Capacity | ~16 GB (typical system) | Up to 128 GB (Jetson Thor) |
| Power Consumption | Est. 80–100 W (system) | 40–130 W (configurable) |
| Camera Support | 5 MP proprietary Tesla cameras | Scalable; supports 8 MP+ and GMSL3 |
| Special Features | Dual-SoC redundancy on one board | Native Transformer Engine, NVLink-C2C |
The most striking difference right off the bat is the manufacturing process. NVIDIA is throwing everything at Drive Thor, using TSMC’s cutting-edge 4N process (a custom 5nm-class node). This allows them to pack in the new Blackwell architecture, which is essentially the same tech powering the world’s most advanced AI data centers.
Tesla, on the other hand, pulled a move that might surprise spec-sheet warriors. Teardowns confirm that AI4 is built on Samsung’s 7nm process. This is mature, reliable, and much cheaper than TSMC’s bleeding-edge nodes.
When you look at the compute power, NVIDIA claims a staggering 2,000 TFLOPS for Thor. But there’s a catch. That number uses FP4 (4-bit floating point) precision, a new format designed specifically for the Transformer models used in generative AI.
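To put those headline numbers on a common footing, you can normalize them to one precision. A rough rule of thumb for recent NVIDIA tensor cores is that peak throughput roughly doubles each time the operand width halves, so 2,000 TFLOPS at FP4 works out to about 1,000 TOPS at INT8-equivalent, which matches the table. Here’s a minimal sketch of that normalization (the doubling rule is an assumption, and real workloads rarely hit peak):

```python
# Back-of-envelope: put claimed AI throughput on a common precision footing.
# Assumption: peak throughput roughly doubles each time operand width halves
# (the pattern for recent NVIDIA tensor cores); real workloads rarely hit peak.

def normalize_tops(claimed_tops: float, claimed_bits: int, target_bits: int) -> float:
    """Scale a claimed peak throughput to its equivalent at another precision."""
    return claimed_tops * claimed_bits / target_bits

# NVIDIA Drive Thor: 2,000 TFLOPS claimed at FP4, expressed as INT8-equivalent
thor_int8 = normalize_tops(2000, claimed_bits=4, target_bits=8)  # -> 1000.0

# Tesla AI4: ~100-150 TOPS (INT8) across the dual-SoC system (estimate)
print(f"Thor: ~{thor_int8:.0f} TOPS INT8-equivalent vs AI4: ~100-150 TOPS INT8")
```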
Tesla’s AI4 is estimated to hit around 100–150 TOPS (INT8) across its dual-SoC redundant system. On paper, it looks like a slaughter, but Tesla made a very specific engineering trade-off that tells us exactly what was bottlenecking its software: memory bandwidth.
Tesla switched from LPDDR4 in HW3 to GDDR6 in HW4, the same power-hungry memory found in gaming graphics cards. This gives AI4 a massive memory bandwidth of approximately 384 GB/s, compared to Thor’s 273 GB/s (in the single-chip Jetson configuration) using LPDDR5X.
This suggests Tesla’s vision-only approach, which ingests massive amounts of raw video from high-res cameras, was starving for data.
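For what it’s worth, those bandwidth figures fall straight out of bus width times per-pin data rate. A quick back-of-envelope check (the per-pin rates, 12 Gb/s for GDDR6 and 8.533 Gb/s for LPDDR5X, are my assumptions about the likely parts):

```python
# Peak bandwidth (GB/s) = bus width in bytes * per-pin data rate (GT/s).
# Assumed per-pin rates for the likely parts: 12 Gb/s GDDR6, 8.533 Gb/s LPDDR5X.

def bandwidth_gb_s(bus_bits: int, gbps_per_pin: float) -> float:
    """Peak memory bandwidth in GB/s for a given bus width and per-pin rate."""
    return bus_bits / 8 * gbps_per_pin

hw4  = bandwidth_gb_s(256, 12.0)    # ~384 GB/s (GDDR6)
thor = bandwidth_gb_s(256, 8.533)   # ~273 GB/s (LPDDR5X)

print(f"Tesla AI4 (GDDR6):    {hw4:.0f} GB/s")
print(f"Drive Thor (LPDDR5X): {thor:.0f} GB/s")
```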
Based on Elon Musk’s comments that Tesla’s AI5 chip will have 5x the memory bandwidth, it sounds like bandwidth might still be Tesla’s bottleneck.
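One way to see why bandwidth is the number to watch: at a fixed camera frame rate, it caps how many bytes of weights and activations the network can touch per frame. A rough sketch, assuming a ~36 fps camera rate (not confirmed for HW4) and taking the “5x” literally against HW4’s ~384 GB/s:

```python
# Bandwidth caps how many bytes of weights/activations the network can touch
# per camera frame. Assumptions: ~36 fps camera rate (not confirmed for HW4)
# and Musk's "5x" applied literally to HW4's ~384 GB/s.

FPS = 36
HW4_BW = 384e9                 # bytes/s
AI5_BW = 5 * HW4_BW            # projected: ~1.92 TB/s

print(f"HW4: {HW4_BW / FPS / 1e9:.1f} GB of memory traffic per frame")  # ~10.7
print(f"AI5: {AI5_BW / 1e12:.2f} TB/s projected, "
      f"{AI5_BW / FPS / 1e9:.1f} GB per frame")                         # ~53.3
```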
Here is where Tesla’s cost-cutting really shows. AI4 is still running on ARM Cortex-A72 cores, an architecture that is nearly a decade old. They bumped the core count to 20, but it’s still old tech.
NVIDIA Thor, meanwhile, uses the ARM Neoverse V3AE, a server-grade CPU explicitly designed for the modern software-defined vehicle. This allows Thor to run not just the autonomous driving stack, but the entire infotainment system, dashboard, and potentially even an in-car AI assistant, all on one chip.
Thor has found plenty of takers among Tesla’s EV competitors, including BYD, Zeekr, Lucid, and Xiaomi.
Electrek’s Take
There’s one thing that is not in the table: price. I would assume Tesla wins on that front, and that was a big part of the project. Tesla developed a chip that it needed and that didn’t exist at the time.
It was an impressive feat, but it doesn’t make Tesla the leader in silicon for self-driving.
Tesla is maxing out AI4. FSD now runs on both SoCs, consuming the spare capacity that was meant for redundancy and making it less likely that AI4 can deliver the fail-safe operation Level 4-5 autonomy requires.
Meanwhile, we don’t have a solution for HW3 yet, and AI5 is apparently not coming to save the day until 2027.
By then, there will likely be millions of vehicles on the road with NVIDIA Thor processors.