Micron Technology
-
By 2030, edge AI is projected to dominate total memory demand, far surpassing datacenter use. Even with conservative assumptions, billions of devices running sophisticated AI locally could consume 5–7 EB (exabytes) of high-bandwidth memory (5–7 million TB), compared with roughly 1 EB in datacenters. Autonomous vehicles alone (to say nothing of drones, robots, tablets, and phones), potentially 100 million units worldwide, could carry 32–48 GB of HBM-equivalent memory per car, while industrial and service robots, numbering perhaps 50 million, might each need 24–32 GB.
Consumer devices like AR/VR headsets, tablets, and smart home devices add billions more, though individual HBM demand is smaller, collectively accounting for several exabytes.
Annual additions will be substantial: roughly 10–20 million cars and 5–10 million robots per year, each increment consuming hundreds of petabytes of high-speed memory. Even with edge memory adoption tempered by cheaper stacked DRAM and LPDDR alternatives, the rate of growth is staggering, exceeding the production scaling plans of current HBM fabs. I would use the term 'forever constrained'. The wedge of demand is steep: memory requirements increase not just linearly with new devices but also with the growing sophistication of AI models, which pushes per-device memory higher. As a result, planned memory fabrication capacity is unlikely to keep up, creating a structural bottleneck for ubiquitous, high-performance edge AI. If correct, that would drive prices up for years as demand continues to exceed supply.
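As a sanity check, the installed-base arithmetic above can be reproduced in a few lines. The unit counts and per-device capacities are midpoints of the post's own assumptions, not independent data:

```python
# Back-of-envelope check of the edge-memory figures above.
# Unit counts and per-device capacities are the post's assumptions (midpoints).
GB_PER_EB = 1e9  # 1 exabyte = 1 billion gigabytes

vehicles_eb = 100e6 * 40 / GB_PER_EB   # 100M vehicles x ~40 GB each
robots_eb = 50e6 * 28 / GB_PER_EB      # 50M robots x ~28 GB each
print(f"Vehicles: {vehicles_eb:.1f} EB, robots: {robots_eb:.1f} EB, "
      f"total: {vehicles_eb + robots_eb:.1f} EB")  # ~5.4 EB, inside the 5-7 EB range

# Annual additions: ~15M cars/yr x ~40 GB each, in petabytes
annual_pb = 15e6 * 40 / 1e6            # 1 petabyte = 1 million gigabytes
print(f"Annual vehicle additions: {annual_pb:.0f} PB")  # ~600 PB/yr
```

So the midpoint assumptions alone land in the 5–7 EB range the post cites, and a single year of vehicle additions is in the "hundreds of petabytes" claimed.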
This is my take. While the market today worries that the memory party will soon be over, I'm thinking even MU's planned $200B expansion will not be enough. That plan covers 10 years. I will speculate now that by the end of next year this $200B plan becomes $400B or more.
-
Micron Technology’s decision today to repurchase up to $5.4B of its senior notes is a clear positive signal about its financial strength and discipline. Using cash rather than issuing stock shows the company is generating solid cash flow and prioritising long-term balance sheet health over short-term optics.
By reducing higher-interest debt in the 5–6% range, Micron effectively locks in a risk-free return and lowers future interest expenses, which will support margins over time.
-
Just seen a report re Google developing chips that need less memory.
The report indicated that this was behind the drops in Micron this week.
-
Hi C,
It's an algorithm not a chip and it's a nothing-burger. It has no impact on memory requirements whatsoever and it shows you just how ignorant the participants in the market are.
Google’s “Turbo” narrative is intellectually lazy. The leap from “better AI efficiency” to “less memory demand” ignores how technology adoption actually works. Efficiency lowers costs, which expands usage—basic economics. Dumping Micron Technology on that headline assumes AI growth is fragile and linear, when it’s explosive and compounding. It’s a textbook case of headline-chasing algos and shallow thinking masquerading as insight. No serious analysis, no nuance—just reflexive selling. If this is the market’s level of reasoning, it’s not pricing risk; it’s broadcasting confusion.
And if you want to get technical....
First, the “post rack-scale GPU” reality: once you’re deploying clusters at that level, memory bandwidth and capacity (HBM, interconnect efficiency, etc.) are hard constraints, not optional luxuries.
Software improvements don’t remove that—they just let you push the hardware harder. That typically increases utilisation, not reduces demand.
Second, the token-growth point is the killer. If total tokens processed have exploded ~2500×, then a 6× efficiency gain is statistical noise. You're still looking at orders-of-magnitude net growth in compute and memory demand: token growth is outpacing efficiency gains by more than two orders of magnitude.
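Under the post's own figures (~2500× token growth against a ~6× efficiency gain, both the post's numbers, not independently verified), the net effect is a single division:

```python
# Net compute/memory demand ~= token growth / efficiency gain,
# using the post's own ~2500x and ~6x figures.
token_growth = 2500
efficiency_gain = 6
net_growth = token_growth / efficiency_gain
print(f"Net demand growth: ~{net_growth:.0f}x")  # ~417x: still orders of magnitude up
```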
Third, these optimisations aren’t new. Google and others have been shipping incremental efficiency gains for years—compiler improvements, sparsity tricks, better routing, quantisation, etc. The so-called “Turbo” angle isn’t some step-function event; it’s part of a continuous curve.
So the sell-off in Micron Technology assumes:
efficiency gains suddenly matter more than demand growth
and that this time is different from every prior cycle
That’s a weak assumption. In practice, efficiency gains + exploding demand = more total infrastructure, not less. Micron is so "worried" that it's about to buy another plant (4 million sq ft) and repurpose it to accelerate its roadmap. And customers are signing 5-year supply agreements, scrambling to secure their supply chains, including Google, which is a major customer. Rather than listen to the FUD, look at the evidence. The company can only fill 50% of orders, at >80% margins, and that imbalance is getting worse. The sell-off also completely ignores the edge device market, which will be orders of magnitude bigger than data centre.
It's like the DeepSeek moment: "we don't need these GPUs"... oh wait.
-
Thanks Adam… as always, appreciate your insights.

-
And right on cue, Morgan Stanley today said...

Morgan Stanley, in a client note, reiterated their overweight rating and $520 price target on MU.
-
They must be on this Forum

-
Micron’s “monster” Q3 guidance, issued in mid-March 2026, projected revenue of approximately $33.5 billion with an 81% gross margin. At the time, this was beyond strong. The guide represents the strongest growth in the company's history, but it just got even better.
Analysts and consensus models likely incorporated more conservative server DRAM contract price assumptions of around 10–15% QoQ for calendar Q2 2026 (April–June; Micron's fiscal Q3 covers March through May). Why? Because TrendForce provides market data on what customers are paying/bidding.
Yesterday's TrendForce revisions dramatically upgraded that outlook to roughly +45% QoQ for server DRAM prices. This meaningful positive surprise implies higher average selling prices (ASPs) than previously modelled, particularly as new contracts roll into Micron's fiscal Q3/Q4 2026 and Q1 2027. The impact could be substantial: elevated server and HBM pricing would lift revenue beyond current forecasts while further expanding already-record margins, thanks to the favourable product mix and limited near-term supply growth. Operating expenses are largely fixed whether they deliver $33B or $40B.
Modelling that ASP change, we are looking at $38B revenue and $25 EPS. That is one quarter, not a year, with the stock now at $357!
Worst case, prices stabilise; best case, they keep rising for the next year. They aren't going to fall and will likely rise a bit more. But if we model flat ASPs and just look at MU's bit growth of 25%, that's a base of $100 EPS over 12 months and a 25% growth rate going forward, on a P/E of about 3.
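A minimal sketch of that one-quarter revenue model, using last quarter's ~$24B base, ~+7% QoQ bit growth, and a blended ASP rise of roughly +45% (all figures taken from the posts above; mix effects and EPS flow-through are ignored):

```python
# Hypothetical one-quarter revenue model built from the post's assumptions.
base_revenue_b = 24.0   # last quarter's revenue, $B (post's figure)
bit_growth = 1.07       # ~+7% QoQ bit (volume) growth
asp_rise = 1.45         # revised blended ASP assumption, ~+45% QoQ

revenue_b = base_revenue_b * bit_growth * asp_rise
print(f"Modelled revenue: ${revenue_b:.1f}B")  # ~$37.2B, in line with the ~$38B call
```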
-
Official numbers on memory ASPs for the April–June quarter (Q2 2026), quarter-on-quarter (not annual) changes:
PC DRAM prices: revised up from +10~15% to +40~45%
Server DRAM prices: revised up from +10~15% to +43~48% (the big one)
Mobile DRAM (LP5X) prices: revised up from +13~18% to +58~63%
eSSD prices: revised up from +15~20% to +68~73% (the big one)
TLC/QLC NAND prices: revised up from +15~20% to +60~65% (the big one)
Overall NAND blended ASP: revised up from +18~23% to +70~75%
So when I said +45% above, it looks like it's actually a lot more. Nice!
Looking at last quarter: $24B, with a $33.5B guide. We know bit growth is limited to about +7% QoQ (maybe a bit more, but that's still a lot), so to get to $33.5B we can infer the ASP rise built into their model, notwithstanding any change in mix.

Given prices are up a lot more than 30.45%, a massive beat is a given. It wouldn't surprise me if they report close to $40B this quarter and $24–$25 EPS. Last year's Q3 revenue was $9.7B with $1.62 EPS.
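That 30.45% can be reproduced by backing the implied ASP assumption out of the guide, a sketch using the post's $24B base, $33.5B guide, and +7% bit growth:

```python
# Infer the ASP rise implied by the $33.5B guide, given ~+7% bit growth.
last_q_b = 24.0     # last quarter's revenue, $B (post's figure)
guide_b = 33.5      # guided revenue, $B
bit_growth = 1.07   # assumed QoQ bit growth

implied_asp_rise = guide_b / (last_q_b * bit_growth) - 1
print(f"Implied ASP rise in guide: +{implied_asp_rise * 100:.2f}%")  # ~+30.45%
```

With TrendForce now pointing at revisions in the +40~75% range, the ~30% ASP rise baked into the guide is the source of the expected beat.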
-
-
Micron taking a bit of a bloody nose again today ….