NVIDIA has officially unveiled Alpamayo, a new artificial-intelligence platform that CEO Jensen Huang describes as the world’s first “thinking, reasoning” AI designed specifically for self-driving vehicles. The system is set to begin rolling out on U.S. roads later this year, starting with the Mercedes-Benz CLA.
At the core of Alpamayo is a new Vision-Language-Action (VLA) model architecture. Unlike traditional autonomy stacks, which split perception, prediction, and planning into separate modules, VLA models are designed to see, reason, and act in an integrated loop. According to Huang, the model is trained end-to-end, much like Tesla’s approach, allowing it to translate raw camera input directly into vehicle control while also generating an explanation for its decisions.
“It’s trained end-to-end. Literally from camera in to actuation out. It reasons what action it is about to take, the reason by which it came about that action, and the trajectory,” Huang said at CES 2026 on Monday.
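The perceive-reason-act loop Huang describes can be sketched in a few lines. The snippet below is purely illustrative: the function name, interface, and placeholder logic are assumptions for the sake of the sketch, not Alpamayo's actual API, but it shows the key idea that a single forward pass maps camera frames to both a trajectory and a human-readable reasoning trace.

```python
import numpy as np

def vla_plan_stub(frames: np.ndarray) -> tuple[np.ndarray, str]:
    """Toy stand-in for a VLA policy: camera frames in, trajectory + reasoning out.

    In a real VLA model a single network would jointly produce the
    waypoints and the natural-language trace; here placeholder logic
    mimics that joint output. All names are hypothetical.
    """
    # "Perceive": collapse the camera stack into a coarse scene feature.
    scene_feature = frames.mean(axis=(1, 2, 3))  # one scalar per frame
    # "Reason": choose a maneuver from the feature (placeholder heuristic).
    maneuver = "yield" if scene_feature.mean() > 0.5 else "proceed"
    # "Act": emit a short trajectory of (x, y) waypoints plus the trace.
    steps = np.linspace(0.0, 1.0, 5)
    trajectory = np.stack([steps, np.zeros_like(steps)], axis=1)  # straight ahead
    reasoning = f"Scene feature suggests '{maneuver}'; emitting 5 waypoints."
    return trajectory, reasoning

frames = np.random.default_rng(0).random((4, 8, 8, 3))  # 4 fake camera frames
traj, trace = vla_plan_stub(frames)
```

The point of the paired return value is exactly what NVIDIA is claiming as a transparency win: the trajectory and the explanation come from the same forward pass, rather than an explanation being bolted on after the fact.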
“$NVDA CEO Jensen Huang just announced Alpamayo which he calls the world’s first thinking and reasoning model built for autonomous vehicles. By open sourcing the Alpamayo stack, Nvidia is pushing self driving forward as a category after years of work by thousands of engineers,” wrote Shay Boloor (@StockSavvyShay) on X on January 5, 2026.
NVIDIA says this approach brings meaningful improvements in transparency, safety, and robustness—especially in complex, real-world driving environments. Alpamayo combines large reasoning models with simulation tools capable of stress-testing rare and edge-case scenarios, alongside open datasets for training and validation.
With a reported 10-billion-parameter architecture, Alpamayo 1 can generate driving trajectories alongside detailed reasoning traces, allowing developers to see not just what the system decided, but why it made that choice.
Importantly, NVIDIA is positioning Alpamayo as both a deployable foundation and a developer platform. The company says developers can adapt the core model into smaller, vehicle-ready runtime systems or use it as the basis for advanced AV tooling such as reasoning-based evaluators and automated data-labeling systems.
NVIDIA also confirmed that Alpamayo 1 will ship with open model weights and open-source inference scripts, with future versions expected to scale up in size, reasoning depth, flexibility, and commercial options.
The announcement quickly drew reactions from Tesla. Elon Musk weighed in on X, pointing to what he believes remains the hardest problem in autonomy. “What they will find is that it’s easy to get to 99% and then super hard to solve the long tail of the distribution,” Musk wrote, adding later, “I honestly hope they succeed.”
Tesla’s AI chief, Ashok Elluswamy, echoed that sentiment, noting that the “long tail is sooo long, that most people can’t grasp it.”
While NVIDIA is pushing a reasoning-first, open-platform approach, Tesla continues to bet on large-scale real-world data collection and vertically integrated hardware and software through its Full Self-Driving (FSD) and Robotaxi programs. Tesla has yet to launch fully driverless operations at scale, but Musk has said production of its purpose-built Cybercab is expected to begin in April 2026.