General News
-
I think, given the broader market, we are doing well, and if we ignore what the scoreboard has to say (this month, although it looks fine to me), I'm the happiest I've been in some time about the execution of our bigger holdings. By that I mean how the companies are operating and making progress. I'm not aware of any tech portfolios that are positive in 2026, with the exception of ours.
What would you prefer? The hypothetical, where stocks go on a 15% run in 4 weeks when the fundamentals don't support it (too fast, limited basis), or treading water while multiples contract and the fundamentals improve? One scenario suggests short-term exuberance and the other long-term growth. I believe we are squarely in the latter camp.
I don't generally make predictions, but 2026 is an inflexion point. The daily/monthly moves don't matter; it's what the holdings are worth when you reach your destination that matters. And that isn't some frivolous platitude. History, our history, has supported that.
You only need to look at some of the ARK funds, and those of other very new entrants, to see 'weeeee....oops'.
Of course past performance is no indication of future returns.
-
Think about this fact.
If markets were perfectly and instantly accurate about the future, it would be nearly impossible to consistently outperform them. But markets are driven by human behaviour, and humans are emotional, biased, impatient, overconfident, and sometimes irrational.
That’s where opportunity comes from.
And using MU as an example, I believe they will earn $150 per share in the next 21 months (not 24), and even if it's only $100, it is completely illogical that stockholders are willing to sell at $411 today. Emotion at work. That is why taking the myopic view, 'well, it's up 300% in 12 months = sell', is completely the wrong move without first determining why, imo. The why is that they got it completely wrong.
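To make the emotion point concrete, here is the forward P/E arithmetic on those figures (the EPS numbers are the estimates above, not consensus):

```python
# Forward P/E on MU at $411, using the per-share earnings estimates above.
price = 411.0
for eps in (150.0, 100.0):   # the bull case, and the 'even if only $100' case
    pe = price / eps
    print(f"EPS ${eps:.0f} -> forward P/E of {pe:.1f}x")
```

Even the conservative case implies a forward multiple of about 4x, which is the illogic being described.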
Multiples do not stay this low for long. We are watching developments closely, and yes, it's a focus, because when management say the company is performing at its best in almost 50 years, opportunities like this warrant the investment in DD, because the flywheel potential returns are worth it.
-
Lam Research's CEO, a direct KLAC competitor, was quoted yesterday as saying:
Rampant AI demand for memory is fueling a growing chip crisis
“We stand at the cusp of something that is bigger than anything we’ve faced before,” Tim Archer, CEO of chip equipment supplier Lam Research Corp., said at a chip conference in Seoul last week. “What is ahead of us between now and the end of this decade, in terms of demand, is bigger than anything we’ve seen in the past, and, in fact, will overwhelm all other sources of demand.”
-
Palo Alto reported tonight. It's a good business but overvalued imo. We sold it in Feb 25 in the 190s for one simple reason: it wasn't generating the growth its multiples suggested, i.e. it looked very expensive. You are paying today 45X earnings for a 15% growth rate, a PEG of 3! Why hold PEGs over 2 when there are others at half the price and lower, other things being equal? To put that one important metric into perspective: if Nvidia had these multiples it would be trading at over $1,000, and MU would be $2,000+. In fact we used some of the PANW proceeds to buy MU. PANW down 15%, MU up 340% since.
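For anyone wanting to check the PEG maths themselves, a quick sketch (the 45x multiple and 15% growth rate are the figures above):

```python
def peg(pe, growth_pct):
    """PEG ratio: the price/earnings multiple divided by the expected growth rate (%)."""
    return pe / growth_pct

print(peg(45, 15))   # PANW on the figures above -> 3.0
print(peg(45, 45))   # a stock 'growing into' its multiple -> 1.0
```

Anything much over 2 is the territory being flagged here.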
Nvidia moves up after hours as Meta signs a multi-year deal to further integrate additional Nvidia systems within its ecosystem. One week tomorrow... earnings and the guide!
-
AMD has agreed to supply up to 6 GW of AI compute to Meta over roughly five years. To secure that commitment, AMD issued a performance-based warrant for up to 160 million shares.
At current prices, that’s roughly $35–$40+ billion of stock, representing about 10–12% dilution if fully vested and exercised.
The shares only vest as shipment milestones are met, but if AMD delivers the full 6 GW, the dilution becomes real.
Nvidia’s total AI deployment footprint over the same period is widely expected to be closer to 9–10× larger when you aggregate hyperscalers, sovereign AI projects, enterprise, and cloud providers.
So the contrast is stark:
AMD: 6 GW tied largely to one mega-customer, secured with heavy equity incentives.
Nvidia: materially larger global deployment, dominant software ecosystem, and no need to offer double-digit dilution to win core business.
Bluntly, AMD is using meaningful shareholder dilution to force its way into relevance in a market where Nvidia remains an order of magnitude larger in overall scale. Whether that gamble pays off depends entirely on execution.
The deal speaks volumes about the quality of their product and the parties' relative bargaining power. The real winner is Meta, who could end up getting half the servers for free. I look forward to AMD's first bleeding-edge racks being released into the wild. Vera Rubin will be there to say hello.
For context my math suggests Nvidia will deploy 150GW over the same time period.
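A back-of-envelope on the warrant, for anyone checking the dilution claim. The share count and price here are my assumed round numbers, not deal terms:

```python
# Rough dilution maths on the AMD/Meta warrant (assumed inputs marked below).
warrant_shares = 160e6        # performance-based warrant, per the deal
shares_outstanding = 1.6e9    # assumed current AMD share count
price = 240.0                 # assumed share price

value = warrant_shares * price                    # stock handed over if fully vested
dilution = warrant_shares / shares_outstanding    # dilution on the current base
print(f"warrant value ~${value / 1e9:.0f}B, dilution ~{dilution:.0%}")
```

Which lands in the $35–40B and roughly 10% ballpark quoted above.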
-
A lesson here on why paying sky-high valuations rarely works out. It's a good business but extremely overvalued, even after tanking almost 40% in a matter of months. When you pay these prices there is zero margin of safety, and management must execute to perfection. They rarely do. This is also why we sold PANW, which was trading at a multiple of 50 and a PEG of almost 3 with growth rates falling.
CrowdStrike shares reached 12‑month highs near $565 before a sharp decline. They now trade around $350–$385, down roughly 35–40% from those highs. Revenue growth remains around 20% year‑on‑year, which is respectable for a large cybersecurity firm but nowhere near the hyper‑growth that once justified its stratospheric valuation. At its peak, the stock was trading on price‑to‑earnings multiples of 150–200, reflecting expectations of perpetual explosive growth.
Even after the fall, it still carries a premium valuation compared with peers, despite growth of only around 20%.
The launch of Anthropic’s Claude Code Security has sparked concerns that AI could automate parts of cybersecurity, threatening CrowdStrike’s business model. I have no opinion on this, and it doesn't matter: the market doesn't like the idea, and it's ripping the stock to pieces. The point being, buying stocks like this carries outsized risk that the potential return doesn't justify.
The combination of an historically obscene valuation, slower growth, and the perception that AI could disrupt the business has weighed heavily on sentiment. The fundamentals are solid, but investor psychology has shifted, leaving the stock far below its highs and the market wary of future risks.
-
A bit of investment news and comedy wrapped into one:
In early 2026, Russia slapped Google with a 91.5-quintillion-ruble fine, roughly 1.017 quintillion USD, a number so absurd it’s pure comedy. To put it in perspective, it would take Google millions of years of earnings, or around 9,200 years of the entire world’s GDP, to pay. The absurdity sparked ridicule worldwide. Unsurprisingly, Google responded by closing its Moscow office.

At first glance I thought, wow, the ruble has really tanked in value. It had, but not by that much.
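The sanity check is simple enough to run (the exchange rate and world-GDP figures are rough assumptions on my part):

```python
# How absurd is a 91.5-quintillion-ruble fine?
fine_rub = 91.5e18
rub_per_usd = 90.0        # assumed exchange rate
world_gdp_usd = 1.1e14    # assumed world GDP, ~$110T

fine_usd = fine_rub / rub_per_usd
print(f"fine in USD: {fine_usd:.3e}")                          # ~1e18, a quintillion dollars
print(f"years of world GDP: {fine_usd / world_gdp_usd:,.0f}")
```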

-
OpenAI confirmed on Friday that it has raised a massive $110B in its latest funding round, with $50B coming from Amazon and $30B apiece from Nvidia and SoftBank.
"Today we’re announcing $110B in new investment at a $730B pre-money valuation. This includes $30B from SoftBank, $30B from NVIDIA, and $50B from Amazon," OpenAI said in a statement. "We’ve also signed a strategic partnership with Amazon and secured next generation inference compute with NVIDIA. Additional financial investors are expected to join as the round progresses."
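The round maths, straight from the statement:

```python
# OpenAI round arithmetic per the statement above.
amazon, softbank, nvidia = 50e9, 30e9, 30e9
pre_money = 730e9

raised = amazon + softbank + nvidia
post_money = pre_money + raised
print(f"raised ${raised / 1e9:.0f}B, post-money ${post_money / 1e9:.0f}B")
```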
Big bets being placed. It's not as circular as many might think. Most of that cash will go to one company. I wonder what Altman showed Andy to get that sort of investment. I don't think it was an AI video featuring cats.

-
From what I've read it's a big leap forward in ability. All the big-name CEOs have commented on it.
Now, the lab versions OpenAI is running with all this new funding are probably on a whole different level—think of them as an IQ of 200 versus GPT‑5's 140. They can handle huge, multi-step problems that require combining logic, memory, maths, language, vision, even code.
They could plan projects, predict complex systems, design experiments, or coordinate multiple agents at the same time.
The usefulness comes down to real-world problem solving. Where public GPT‑5 is excellent for everyday tasks—writing an essay, helping with code, summarising documents—the lab models might: help scientists design new drugs, optimise supply chains globally, simulate economic or climate scenarios, or even run advanced robotics tasks.
In short: public GPT‑5 is smart, but the lab versions are the ones likely showing frontier-level reasoning, memory, and creative problem-solving—the kind of AI that could tackle tasks humans find extremely challenging or slow.
I believe the following is a realistic example based on reliable sources I read:
Drug discovery and design.
With public GPT‑5, you could ask it to summarise research papers on a disease, suggest plausible molecular targets, or draft a report on clinical trial data. It’s helpful, but a human scientist still has to do the heavy lifting: designing molecules, simulating their behaviour, and predicting side effects.
Now imagine the lab model: it could ingest millions of molecular structures, biochemical pathways, patient datasets, and research papers simultaneously, then design entirely new compounds, simulate their interactions, predict toxicity, and optimise for effectiveness—all in a fraction of the time a team of experts would take. It could even propose multiple variations, rank them by likelihood of success, and adapt its suggestions based on real-world lab results.
The difference is like going from a super-intelligent research assistant to an autonomous research team that can plan, iterate, and predict outcomes across disciplines. Public GPT‑5 gives you ideas; the lab model starts doing the actual work, making discoveries that would otherwise take years.
Anthropic's CEO described the leap as going from working with a good PhD student to working with a country of Nobel Prize winners. He means working with genius-level AI agents all working on the same task (millions of them, all working independently).
From what I've been reading and hearing, the increments in smarts we have seen publicly across GPT‑3, 4 and 5 are nothing next to the lab version: it's a leap from 5 straight to 10!
I think we will see something very impressive in the next 6 months. The funding is to scale out the compute so OpenAI can prepare for the huge influx of enterprise use. And it would appear as though Amazon just got the contract to provide the cloud capacity to do it.
