Nvidia News
-
Happy NY, C!
If you recall, we discussed Micron's HBM4 being in production early (December), which led us to speculate that Rubin would be early. So yes, it's confirmation that Nvidia are ahead of schedule, excellent news. The flow-on effects will be significant. Given the packaging utilises CoWoS-L (the same process as Blackwell) and the 3nm process is mature, the transition should be seamless. AMD are going all in on 2nm, but from what I've heard (as expected) yields are still poor, and this will likely cause them problems in 2026. I don't think they had any choice, as they can't compete with their current systems (not even against Hopper).
The wind is very much at Nvidia's back (as expected).
-
NVIDIA's CES 2026 announcements from Jensen Huang's keynote.
NVIDIA unveiled the Vera Rubin platform (successor to Blackwell) as its first extreme co-designed, six-chip AI supercomputer, now in full production. This includes the Vera CPU, Rubin GPU, NVLink 6 switch, ConnectX-9 SuperNIC, BlueField-4 DPU, and Spectrum-6 Ethernet switch, all redesigned simultaneously for rack-scale efficiency. Those in the know are calling this 'insane' (that much complexity redesigned all at once).
Key confirmed details:
The Rubin GPU delivers up to 5x greater inference performance (and 10x lower cost per token for some workloads) than Blackwell, with architectural gains (e.g., HBM4 memory providing ~2.8x the bandwidth) far exceeding traditional Moore's Law increments (25% per generation).
Transistor count is around 1.6x higher (e.g., ~336 billion vs Blackwell's ~208 billion in comparable configs), enabling the outsized leap through co-design.
The Vera Rubin NVL72 rack offers massive scale-up bandwidth (~260 TB/s, that's 260 terabytes per second, via 6th-gen NVLink), a figure often compared to, and said to exceed, total global internet traffic. Expected price per rack: circa $8M-$10M. Expected volume: around 7k units in 2026 as they ramp, and up to 30k units in 2027 (a rough revenue sketch follows after these details).
Advanced liquid cooling supports hot water (up to 45°C), boosting efficiency and reducing data centre power consumption.
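As a rough, purely illustrative back-of-envelope, the rack price and volume figures quoted above imply the following revenue ranges, assuming every rack sells within the quoted $8M-$10M band and the unit estimates are fully realised:

```python
# Back-of-envelope revenue implied by the quoted NVL72 figures above.
# Illustrative only: assumes every rack sells within the quoted $8M-$10M
# range and that the 7k (2026) / 30k (2027) unit estimates are fully realised.

PRICE_PER_RACK = (8e6, 10e6)                      # circa $8M-$10M per rack
UNITS = {"2026 (ramp)": 7_000, "2027": 30_000}    # expected volumes

for year, units in UNITS.items():
    low, high = (units * price for price in PRICE_PER_RACK)
    print(f"{year}: ~${low / 1e9:.0f}B to ~${high / 1e9:.0f}B implied rack revenue")

# 2026 (ramp): ~$56B to ~$70B implied rack revenue
# 2027: ~$240B to ~$300B implied rack revenue
```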
Alpamayo is the world's first open-source family of reasoning models (including chain-of-thought vision-language-action models) for autonomous vehicles, with full code, datasets, and tools (e.g., AlpaSim) released on Hugging Face/GitHub.
The all-new Mercedes-Benz CLA will be the first production vehicle with NVIDIA's full AV stack (including Alpamayo for explainable reasoning, e.g., narrating decisions like braking for a truck/cyclist). Launch: US in Q1 2026, Europe Q2, Asia later in the year. Huang demoed it hands-free in San Francisco traffic.
Strategy: Open models accelerate ecosystem adoption (like DeepSeek's impact), driving demand for NVIDIA hardware ("give away the recipes, sell the kitchen"). Partnerships include Siemens (industrial tools), Cadence/Synopsys (chip design), Palantir/ServiceNow/Snowflake (platforms), and AV players like Lucid/JLR/Uber.
NVIDIA positions itself ~18 months ahead, with competitors chasing Blackwell while Rubin ramps.
-
Nvidia's CEO said (7 Jan) that demand for Nvidia products is high across the board, and that "I'm fully expecting a really giant year for our business with TSMC," which makes most of Nvidia's chips, media report.
Nvidia CEO Jensen Huang said (7 Jan) that H200 AI chip demand from China clients is “very high” and that “we’ve fired up our supply chain, and H200s are flowing through the line,” adding that the signal of Beijing’s approval will be purchase orders. “If the purchase orders come, it’s because they’re able to place purchase orders,” he said.
None of this is factored into the stock. Let's see how this pans out, but the takeaway, as always, is to listen to what the man himself says, not the FUD; he has a great track record of being the only credible source of the facts.
A reminder: unlike most companies, Nvidia's quarter end is Jan 31 (not Dec 31).
-
News just reported by Bloomberg:
China approves H200 imports
China is moving towards reopening its market to Nvidia’s H200 AI chips, allowing imports for selected commercial uses while restricting access for the military, sensitive government bodies, critical infrastructure, and state-owned enterprises. The policy mirrors China’s broader approach of balancing foreign technology access with national security.
The H200 is an older-generation chip cleared for export under US rules, with shipments approved in December under a 25% surcharge. Demand is strong, with Alibaba and ByteDance each reportedly interested in orders exceeding 200,000 units. NB: the 25% surcharge, to be passed on to the US Govt, is believed to be a premium paid by the purchaser, not borne by Nvidia.
Imo demand is closer to 2M chips. Nvidia have 700k on hand and have asked TSM to restart production. Monetisation should start in February, which is Q1 of the next fiscal year; the quantum is in the order of several billion, which can only help the guide.
-
Nvidia's rise in pre-market trading is driven by a mix of fresh headlines and ongoing sentiment.
The main focus is China. Reports indicate that Jensen Huang is visiting China, which has reignited speculation around Nvidia’s ability to maintain or expand its presence there despite export restrictions.
China remains a huge end-market for AI and data-centre chips, and even modest signs of regulatory flexibility or creative workarounds tend to lift investor confidence. Nothing concrete has changed yet, but markets are reacting to the possibility of improved access or stabilised demand rather than confirmed policy shifts.
Another contributor is talk around pricing power. There are reports circulating that Nvidia has been able to raise prices on certain GPUs and related components due to overwhelming demand for AI compute. While these claims are not formally confirmed by the company, they reinforce the prevailing narrative that Nvidia is operating from a position of strength, with customers willing to pay up for scarce, high-performance hardware. For investors, this translates into expectations of stronger margins and resilient revenue growth.
Momentum also plays a role. Nvidia has been a major driver of recent gains across US equity indices, and strong prior performance often spills into pre-market trading as traders position ahead of the open. Pre-market volumes are relatively thin, so even modest buying can exaggerate price moves.
In short, the stock is up on sentiment, speculation, and momentum — encouraging signals, but not a fundamental game-changer on their own.
-
It's coming. Rumoured to be an initial $18 billion, which should be a nice addition to Q1 (Feb through April '26).

-
Semiconductor news just in:
Orders for AI servers have outstripped supply at Foxconn, the world’s biggest AI server maker, media report; it is working overtime to fulfill Nvidia GB300-based server orders now. Foxconn is boosting automation to raise output. Trial production of Vera Rubin-based servers has begun.
-
China has approved the first official imports of Nvidia’s H200 AI chips, reversing an earlier de-facto blockade at customs. That’s a big shift — Beijing had been blocking physical shipments even after the U.S. gave export permission.
The approval covers hundreds of thousands of chips, with ByteDance, Alibaba and Tencent cleared to buy a combined ~400,000+ H200 units.
Other Chinese companies are reportedly waiting in line for subsequent approvals — but the total number that will get licensed is still unknown.
The move came during a visit by Nvidia CEO Jensen Huang to China, underscoring the political timing.
This isn't just a supply issue; it's a policy shift in how China deals with foreign high-end chips. It's speculated that Nvidia have 700k H200s in hand, and I would expect these orders to be booked in Q1, so theoretically they should be included in the Q1 guide, reported alongside their Q4 earnings next month (the quarter ends 30 January). 400k chips would equate to around $13-$14B, plus memory and ancillary components, plus the 25% tariff.
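To sanity-check those numbers, here is a minimal sketch, assuming (per the note further up) that the 25% surcharge is a premium paid by the purchaser on top of the chip price rather than coming out of Nvidia's take:

```python
# Sanity check of the quoted China H200 order economics.
# Assumption (per the note above): the 25% surcharge is a premium paid by
# the purchaser to the US Govt, so it sits on top of the $13-$14B figure.

units = 400_000                          # combined ByteDance/Alibaba/Tencent clearance
revenue_low, revenue_high = 13e9, 14e9   # quoted $13B-$14B for the chips alone

# Implied average selling price per H200, backed out of the quoted totals
asp_low, asp_high = revenue_low / units, revenue_high / units
print(f"Implied ASP per H200: ~${asp_low:,.0f} to ~${asp_high:,.0f}")
# Implied ASP per H200: ~$32,500 to ~$35,000

# Size of the 25% surcharge if levied on the chip revenue alone
print(f"25% surcharge: ~${0.25 * revenue_low / 1e9:.2f}B to ~${0.25 * revenue_high / 1e9:.2f}B")
# 25% surcharge: ~$3.25B to ~$3.50B
```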
-
Big numbers. Thanks Adam.
The thing that slightly bothers me with Nvidia is that they seem to be too good to be true. And we all know the phrase that applies when that's the case. They are sitting pretty, having created the biggest ticket in the fattest technological revolution since the internet, in the juiciest industrial advance since the industrial revolution, and have a stranglehold on the good stuff so tight the established players can't even see which way they are going. Every man and his dog who thinks they want a slice of the AI pie is queuing around the block to buy their chips, and such is the demand that they are selling things now that won't exist for another year.
What are the chinks in their armour? What could go wrong? What are the risks? Huang is a seriously clever guy, and his brains are matched with a conservative integrity that is most appealing in business - what happens if he slips and falls under a big red bus? What if China does invade Taiwan? What if someone magics up some new technology (Organic Computing? Spintronics?) which stands the world on its ear and makes everything we currently use redundant? What if aliens invade and .... essentially, what could go wrong?
I hear and understand the argument that says that if China invades Taiwan then we probably will have bigger things to worry about than the value of share portfolios, but the general question stands: where could it go wrong?
-
Some fair points.
My concern is more around how much of the AI growth is actually vendors just moving things around.
The sort of picture drawn here. I don't think it is just that… but I still feel we have road bumps ahead.
-
Bear in mind Nvidia owns 10% of CoreWeave, so if it chooses to fund them $2B I don't have a problem with it. The naysayers obviously will; no surprises there. We paid $25 for a stock which is close to $200. We got in a long time ago, so whilst I'm sure anyone who is negative on tech would say 'well, you've done well cos Nvidia', that's being somewhat predictable. It reminds me of someone who last year and the prior year commented on our tech portfolio saying 'it's tech, of course it's done well', as if to suggest it's just expected and they all do that. They don't.
Jensen Huang has done right by us, consistently being right and seeing this revolution a decade before anyone else. The internet is full of opinions, largely sour grapes and broken clocks. The facts remain: the product is in very high demand and will be for the next 2 years at least, which is as far as we can reliably see.
The company makes so much money that it would seem only natural to re-invest in the sectors they themselves think will prosper. And why not? Does anyone really think $2B is material to Nvidia? Does anyone think they are running out of customers such that they need creative ways to sell GPUs? NB: they are not loans. It's equity.
At the end of the day it is your decision to make. We can give you our thoughts and some hard facts and you weigh the evidence.
If you worry about hypotheticals (China, Huang, an earthquake wiping out TSM), then maybe McDonald's stock is also a risk, because all the cows might get BSE or vegans take over the world ;). DT nationalises KO?
-
Reported today by SemiAnalysis: Vera Rubin is going to be a game changer.
IMPORTANT: NVIDIA announced that their compute tray assembly time fell by 36x, from 2 hours to 5 minutes, due to the new VR200 (Vera Rubin) cableless and hose-less design.
NVIDIA follows in the footsteps of AWS's Trainium2/3 cableless design, which led to faster manufacturing time and full automation, as robots still have a difficult time plugging cables into the correct ports. Note that due to the strength of NVIDIA's world-class SerDes team, they are able to have cableless NVSwitch trays too, without any retimers, leading to faster assembly for switch trays as well.
In contrast, due to AMD's lack of rack-scale experience, they did not opt for a cableless design, which will lead to a slower production ramp for rack and tray assembly. Furthermore, even though they are using Broadcom SerDes switches on the UALoE switch side, the weakness of AMD's in-house 224G SerDes used on the GPU side, plus the need for retimers, means the AMD switch tray requires tons of flyover cables. This will slow the rack production ramp and, as seen with the GB200 switch tray, such cabling is prone to errors due to the tight tolerances required.
-
Developing story… NVDA: Mercedes, Nvidia, and Uber to partner on large-scale commercial robotaxi deployment. Interesting.
Also: Amazon in talks to invest $50 billion in OpenAI.