Nvidia News
-
Developing story… NVDA: Mercedes, Nvidia, and Uber to partner on a large-scale commercial robotaxi deployment. Interesting.
Also: Amazon in talks to invest $50 billion in OpenAI.
-
AMD reported last night, proving yet again that, no, they aren't taking on Nvidia. $5.5B in data centre revenue, and they guided for contraction, not growth. Compare that to Nvidia's DC at $60B: 11X bigger and, importantly, twice, yes twice, the net margin. AMD nets 25.7% and Nvidia is at 57% (121%).
Some thoughts. It confirms the pattern: Nvidia owns ~85% of the accelerated compute market.
If one takes the view that AMD is fairly valued, then Nvidia's market cap should be 22X more, give or take; this is based on fundamental analysis: 11X more revenue and double the net margin (22X). AMD's cap is $392B, which would put Nvidia at roughly $8T. That is just one way to view the disparity in valuations: one is clearly overvalued on a relative basis. The only way that scenario breaks down is if AMD accelerates faster than Nvidia and drives margins much higher, and management have presented nothing to suggest it. Imo AMD shareholders... “You can’t handle the truth!” – A Few Good Men.
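The relative-valuation arithmetic above can be sketched as a quick back-of-envelope calculation. All inputs are the figures quoted in this note; treating DC revenue and net margin as the only scaling factors is the note's simplification, not a valuation model:

```python
# Back-of-envelope relative valuation using the figures quoted above.
# Assumption (the note's framing): if AMD is fairly valued, Nvidia's cap
# "should" scale with its revenue and net-margin advantage.

amd_dc_rev = 5.5       # $B, AMD data-centre revenue (quarter)
nvda_dc_rev = 60.0     # $B, Nvidia data-centre revenue (quarter)
amd_margin = 0.257     # AMD net margin
nvda_margin = 0.57     # Nvidia net margin
amd_cap = 392.0        # $B, AMD market cap

rev_multiple = nvda_dc_rev / amd_dc_rev       # ~10.9x
margin_multiple = nvda_margin / amd_margin    # ~2.2x
implied_cap = amd_cap * rev_multiple * margin_multiple

print(f"revenue multiple:  {rev_multiple:.1f}x")
print(f"margin multiple:   {margin_multiple:.1f}x")
print(f"implied NVDA cap:  ${implied_cap / 1000:.1f}T")  # ~$9.5T unrounded
# The note rounds these to 11x and 2x, i.e. 22x, giving ~$8T give or take.
```

The exact figures land a little above the note's rounded ~$8T, which only strengthens the relative-value point being made.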
The other qualitative factor is the disparity between the CEOs. Huang tells it straight and always delivers; the man has immense integrity. Su, on the other hand, talks a good talk, and that's where it ends. Here are some examples:
-
Su has repeatedly framed AMD as closing in on Nvidia’s dominance in AI compute, citing major partnerships (OpenAI, Oracle, hyperscalers) and annual AI chip rollout plans.
Reality: Nvidia still dominates the AI GPU market with ~90%+ share, and AMD’s share remains small. Analysts regularly point out that while AMD has some traction with the MI300/MI450 generation, it hasn’t materially dented Nvidia’s leadership yet. Market-share gains are slower than the rhetoric suggests.
-
Implicit promises around product timing, e.g. the Helios AI system roadmap.
Reality: These integrated, server-scale AI systems are harder to ship on schedule, and delays or performance gaps have been points of concern among analysts. Execution risk is non-trivial and timelines often get pushed.
-
NVIDIA Pushes Further Ahead in AI Infrastructure with 1.6T Silicon Photonics Deal
NVIDIA’s collaboration with Tower Semiconductor on 1.6-terabit optical modules is another clear signal that NVIDIA is moving faster — and widening the gap — in AI infrastructure, not just AI compute.
The technology at the centre of the deal is silicon photonics, which replaces electrical data transfer with light-based communication inside and between data centres.
As AI clusters scale, performance is no longer limited by GPU compute but by how fast data can move between GPUs. Copper wiring runs into hard limits around heat, power consumption and distance. Optical links solve this by delivering far higher bandwidth, better power efficiency and far greater scalability.
Tower’s silicon photonics platform allows these optical components to be built directly on silicon, enabling 1.6 terabits per second of throughput per module — roughly double the data rate of earlier solutions.
For NVIDIA, this is critical plumbing for large “AI factories” where thousands or millions of GPUs must act as a single system. Faster interconnects mean higher GPU utilisation, faster model training and lower operating costs.
This matters in competitive terms because NVIDIA is attacking the networking bottleneck aggressively and early. It is not just selling GPUs; it is building the entire AI system — compute, networking and now optical infrastructure — as a tightly integrated stack.
AMD, by contrast, is behind in this layer of the AI stack. AMD does have silicon photonics efforts underway, including recent acquisitions and internal R&D aimed at co-packaged optics. However, these moves are earlier-stage and defensive. AMD is building capability; NVIDIA is already announcing and deploying products at scale. There is no comparable AMD photonics networking platform in production today matching NVIDIA’s 1.6T-class roadmap (which is on schedule, as planned and announced 18 months ago).
That means this deal does more than advance NVIDIA’s technology — it extends the time gap. While AMD works to close yesterday’s interconnect bottlenecks, NVIDIA is removing tomorrow’s.
In AI infrastructure, being early matters because hyperscalers design around what exists, not what might arrive later (we have discussed this before). Grab the pipeline now: hyperscalers won't spend $100 billion on a slide-deck promise (Su).
Bottom line: silicon photonics is becoming essential for next-generation AI, and NVIDIA is moving faster, deeper and more comprehensively than its rivals. AMD isn’t absent — but it is now further behind, and this deal reinforces NVIDIA’s lead at the system level, where the real long-term advantage is being built.
-
Nobody works harder for his shareholders. Nobody is more reserved. If Jensen says 2026 will be a very big year for the company, it will be very big. In 17 days we will see some colour on just how big.

-
From an interview over the weekend

This aligns with our early-2025 thesis that Nvidia would grow at a 50% CAGR for five years.
It's no coincidence that C.C. Wei (TSM) also talks about his roadmap to grow at a 50% five-year CAGR.
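For scale, here is the compounding arithmetic behind a 50% CAGR sustained for five years. This is purely illustrative of the growth rate both CEOs are talking about; the starting revenue figure below is hypothetical, not guidance:

```python
# Compounding arithmetic for a 50%-per-annum, five-year growth thesis.

cagr = 0.50
years = 5
growth_factor = (1 + cagr) ** years
print(f"five-year multiple at 50% CAGR: {growth_factor:.2f}x")  # ~7.59x

base_rev = 100.0  # $B, hypothetical starting annual revenue (illustration only)
for y in range(1, years + 1):
    base_rev *= 1 + cagr
    print(f"year {y}: ${base_rev:.0f}B")
```

In other words, 50% per annum for five years is not "half again bigger", it is roughly 7.6x the starting base, which is why the claim is so striking at these revenue levels.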

Recap- Our previous view on quarterly revenue rhythm:
70/80/90/100, and this could be conservative if they get the supply. NB: this is Mag 7 spend only; tier-two and enterprise is additional.
Other segments (OEM/Gaming/Visualisation/Auto) amount to circa $27B/yr.
They could earn $200B in the FY ending Jan '27. For comparison, among the top five earning companies in 2025, earnings ranged from $96B (Saudi Aramco) to $128B (GOOG).
What is staggering is that their growth rate is expected (by some) to be circa 50% per annum for the next 4-5 years.
-
Evercore raises target to $352 and issues a research note… copied from X:
NVDA GPUs in High Demand. 1) Blackwell lead-times are 12-26 wks and hyperscalers are turning away customers bc they don't have enough. 2) Vera Rubin appears ahead of schedule; demand is healthy and expected to broaden to non-Cloud/non-LLM by EoY26. 3) NVDA appears to have locked down wafer capacity aggressively and is viewed as first in line for HBM. 4) NVDA long-term share expected to be 60%-70%, but near-term expected at 75%-85%. 5) Acqui-hire of Groq improves competitive position, particularly as the HBM market tightens.
NVDA Still Ecosystem of Choice. 1) ...due to robust and established software ecosystem, which makes it particularly attractive to enterprise customers. 2) It takes 12 months to switch hardware platforms and optimize software for a different hardware stack. 3) Many view LLM makers' strategy of optimizing compilers to run across heterogeneous hardware fleets as a Herculean Labor.
-
Adam… What do you think we will see when these boys report on Wednesday?
