GTC 2025 Announcements
-
You'll find all things GTC here.
The NVIDIA GTC, set for 17-21 March 2025 in California, is a premier event on AI and accelerated computing. This hybrid conference includes a keynote by CEO Jensen Huang on 18 March, revealing new tech, and over 1,000 sessions with talks and workshops. Developers, engineers, and leaders attend to explore innovations, network, and view exhibitions. The NVIDIA Deep Learning Institute provides certifications. GTC showcases NVIDIA's real-world tech applications, promoting collaboration and progress in areas like healthcare and autonomous vehicles, making it essential for tech pioneers.
-
During the keynote, which ran about 3.5 hours, Huang said in reference to Blackwell:
"Volume supply will be broadly available in Q2" (GB200), as hoped.
"Partners like Super Micro are delivering GB300 at scale. GB300 is ramping smoothly." GB300 is Blackwell Ultra, expected in Q2/Q3. "We've streamlined the Blackwell Ultra rollout—partners are ready."
The product and software announcements were profound. I'll drip-feed them over the next day or so.
Nvidia’s Co-Packaged Optics (CPO) photonics delivers significant benefits by integrating optical and electronic components, slashing power consumption by up to 3.5x, boosting bandwidth with speeds like 800 Gb/s per port, and reducing latency for massive AI and data center workloads. This technology enhances scalability and efficiency, enabling clusters of over 100,000 GPUs with fewer components and lower costs over time. By pioneering CPO in their Quantum InfiniBand and Spectrum-X Ethernet switches, Nvidia asserts its leadership in high-performance networking, positioning itself ahead of competitors like Broadcom and Intel in the race to power next-generation AI infrastructure.
-
Huang announced that the top four U.S. hyperscalers—presumed to be Amazon Web Services, Microsoft Azure, Google Cloud, and Meta—purchased 3.6 million Blackwell GPUs in 2025, compared to 1.3 million Hopper GPUs in 2024.
Note: this excludes Oracle, xAI, CoreWeave, second-tier cloud, sovereign, enterprise, and Stargate.
-
Huang also showed off GR00T N1, Nvidia's open foundation model for humanoid robots.
"Physical AI and robotics are moving so fast," Huang said. "Everybody pay attention: this could be the largest industry of all."
Huang speculated that global factories will deploy 50 million humanoid robots.
-
https://www.youtube.com/watch?v=_waPvOwL9Z8
For anyone who wants to watch the presentation. One impressive point: Huang didn't use a teleprompter or notes.
The keynote starts at the 31-minute mark.
The pace of change is staggering, absolutely. This is why "competitors" like AMD and Intel are left in the dust. And it is so misunderstood: the goal is to drive down the cost of compute. Despite the ever-increasing vast sums being spent, computational power is rising exponentially, and importantly it is needed. A 2x increase in spend (Hopper vs Blackwell) has so far delivered roughly a 40x increase in combined full-stack performance (hardware, software, and switches together).
It is all about the cost per token! To date, from the very first GPU to Blackwell, the cost per token has fallen 97%!
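The Hopper-to-Blackwell claim above can be sanity-checked with a quick back-of-the-envelope calculation. This is just a sketch built from the post's own figures (the 2x spend and 40x performance multipliers are the author's estimates, not official NVIDIA numbers):

```python
# Back-of-the-envelope: how "2x spend, 40x performance" translates
# into cost per unit of compute. Figures are the post's estimates.

spend_multiplier = 2.0   # Blackwell spend relative to Hopper
perf_multiplier = 40.0   # claimed full-stack performance gain

# Cost per unit of work scales with spend divided by performance.
relative_cost = spend_multiplier / perf_multiplier   # 0.05
reduction_pct = (1 - relative_cost) * 100            # 95.0

print(f"Hopper -> Blackwell: cost per unit of compute falls {reduction_pct:.0f}%")
```

So even this single generational step implies roughly a 95% drop in the cost of a unit of compute, which makes the longer-run 97% cost-per-token decline from the first GPU to Blackwell plausible.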
To reach their near-term goal of real-time zero-shot inference, the computational power needs to be even greater. Just think about a Feynman (the generation after Rubin) rack rated at 600 kW.
Here's an example of the GPU cost/efficiency and why it's not just "AI", but rather accelerated computing.
One Feynman rack (GPU), based on the specs revealed, will cost anywhere from $30-50M. It will have the computational power of 250,000 of the latest Intel-based CPU servers, which would cost $3-5 billion to buy and $150M annually to run. The GPU rack will cost around $12M to run annually. Not to mention that the CPU servers would take up a few million square feet of data centre building.
Now one can see that the transition to GPU servers is a no-brainer.
Huang also said that by 2028, annual DC capex spend will hit $1T. In my opinion, Nvidia will make more money in the long run from non-DC segments, specifically non-DC hardware segments.
-
Here is the compute comparison. One rack = the entire DC.
The DC costs $3B to build and $150M annually to operate, excluding the land/lease.
The single rack costs $30M to build and $12M annually to operate, and could live in a broom closet.
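That comparison can be worked through over a multi-year horizon. A minimal sketch, assuming a five-year window (my assumption) and using the post's estimated dollar figures:

```python
# Rough total-cost comparison from the post's figures.
# All dollar amounts are the author's estimates, not official pricing.

YEARS = 5  # assumed operating horizon

# The "entire data centre": 250,000 Intel-based CPU servers
cpu_capex = 3_000e6   # $3B to build
cpu_opex = 150e6      # $150M per year to operate

# One GPU rack with equivalent compute
gpu_capex = 30e6      # $30M to build
gpu_opex = 12e6       # $12M per year to operate

cpu_total = cpu_capex + YEARS * cpu_opex   # $3.75B
gpu_total = gpu_capex + YEARS * gpu_opex   # $90M

print(f"5-year CPU DC cost:   ${cpu_total / 1e9:.2f}B")
print(f"5-year GPU rack cost: ${gpu_total / 1e9:.2f}B")
print(f"The DC costs {cpu_total / gpu_total:.1f}x more")
```

Even before counting the land, lease, and floor space the CPU fleet would need, the rack comes out over 40x cheaper for the same compute over five years.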