General News
-
From what I've read, it's a big leap forward in ability. All the big-name CEOs have commented on it.
Now, the lab versions OpenAI is running with all this new funding are probably on a whole different level—think of them more like an IQ of 200 vs GPT‑5's IQ of 140. They can handle huge, multi-step problems that require combining logic, memory, maths, language, vision, even code.
They could plan projects, predict complex systems, design experiments, or coordinate multiple agents at the same time.
The usefulness comes down to real-world problem solving. Where public GPT‑5 is excellent for everyday tasks—writing an essay, helping with code, summarising documents—the lab models might: help scientists design new drugs, optimise supply chains globally, simulate economic or climate scenarios, or even run advanced robotics tasks.
In short: public GPT‑5 is smart, but the lab versions are the ones likely showing frontier-level reasoning, memory, and creative problem-solving—the kind of AI that could tackle tasks humans find extremely challenging or slow.
I believe the following is a realistic example based on reliable sources I've read:
Drug discovery and design.
With public GPT‑5, you could ask it to summarise research papers on a disease, suggest plausible molecular targets, or draft a report on clinical trial data. It’s helpful, but a human scientist still has to do the heavy lifting: designing molecules, simulating their behaviour, and predicting side effects.
Now imagine the lab model: it could ingest millions of molecular structures, biochemical pathways, patient datasets, and research papers simultaneously, then design entirely new compounds, simulate their interactions, predict toxicity, and optimise for effectiveness—all in a fraction of the time a team of experts would take. It could even propose multiple variations, rank them by likelihood of success, and adapt its suggestions based on real-world lab results.
The difference is like going from a super-intelligent research assistant to an autonomous research team that can plan, iterate, and predict outcomes across disciplines. Public GPT‑5 gives you ideas; the lab model starts doing the actual work, making discoveries that would otherwise take years. Anthropic's CEO described the leap as going from working with a good PhD student to working with a country of Nobel Prize winners. He means working with genius-level AI agents all working on the same task (millions of them, all working independently).
From what I've been reading and hearing, the increments in smarts we've seen publicly from GPT‑3 to 4 to 5 are small compared with the lab version: that's a leap from 5 to 10!
I think we will see something very impressive in the next 6 months. The funding is to scale out the compute so OpenAI can prepare for the huge influx of enterprise use. And it would appear as though Amazon just got the contract to provide the cloud capacity to do it.
-
A quick update.
Taking yesterday and real-time today, net growth is down approx 1.25%.
Regardless, we feel this is a temporary blip that has no bearing on the health of the companies we invest in, nor will it. It's nothing more than the fastest risers ('short-duration' stocks) getting pulled back more than others, regardless of their quality (with a few exceptions).
There is plenty of company-specific positive news coming out (Micron, for example) which shows clear evidence of their business getting stronger whilst their stock gets cheaper.
-
Real-time growth +2%. This is no indication it will remain the same in 5 minutes.

-
We have spoken about this for a while. It's one thing having ambitions to compete. In a constrained market, what matters is having the components (supply chain). Most do not.

-
Nvidia & Texas Instruments collaborate to build physical AI (humanoid robots).
The collaboration combines Texas Instruments' real-time motor control, precision sensing, radar and power-management technology with Nvidia's advanced robotics compute platforms and simulation software to help developers design, test and deploy humanoid robots faster and more safely.
At the hardware level, TI’s mmWave radar sensors are being integrated with Nvidia’s Jetson Thor platform using Nvidia’s Holoscan Sensor Bridge. That matters because radar adds a layer of perception that cameras alone can’t provide. Cameras struggle in fog, glare, low light or when detecting transparent or reflective surfaces like glass. Radar works in those conditions and provides precise distance and velocity data in real time.
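To make the "radar complements cameras" point concrete, here is a minimal sketch of fusing a camera range estimate with a radar range estimate by inverse-variance weighting. The function name, numbers, and single-scalar setup are all illustrative assumptions on my part; real perception stacks (e.g. on Jetson-class hardware) fuse full state vectors with Kalman filters and synchronised sensor timestamps, not lone distances.

```python
# Illustrative sketch only: inverse-variance weighted fusion of two
# independent range estimates. Not Nvidia/TI code; all values hypothetical.

def fuse(camera_range_m, camera_var, radar_range_m, radar_var):
    """Combine two noisy range estimates, weighting each by 1/variance."""
    w_cam = 1.0 / camera_var
    w_rad = 1.0 / radar_var
    fused = (w_cam * camera_range_m + w_rad * radar_range_m) / (w_cam + w_rad)
    fused_var = 1.0 / (w_cam + w_rad)  # always tighter than either input
    return fused, fused_var

# In fog, the camera estimate is noisy (high variance), so the fused
# result leans heavily on radar; in clear conditions the weights even out.
foggy = fuse(camera_range_m=5.2, camera_var=4.0, radar_range_m=4.8, radar_var=0.1)
clear = fuse(camera_range_m=5.0, camera_var=0.2, radar_range_m=4.9, radar_var=0.1)
print(foggy)  # fused estimate sits very close to the radar's 4.8 m
print(clear)
```

The design point this illustrates is the one in the paragraph above: when one sensor degrades (fog, glare, glass), a variance-aware fusion automatically shifts trust to the other, which is why adding radar improves reliability rather than just adding data.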
The broader goal is improved 3D perception, sensor fusion and low-latency decision-making. For humanoid robots, that means better balance, safer human interaction, more reliable obstacle avoidance and tighter motor control. It also addresses a core robotics bottleneck: synchronising high-performance AI inference with deterministic, real-time physical control systems.
-
Headline CPI (US) holds at +2.4% Y/Y in February, as expected.
Solid gains today as a result
-
Breaking- US/Iran in talks to end hostilities.
Futures swung 700 points to +500
Oil plunges to $90
-
I get your message requesting this to be apolitical, but it does feel rather like market manipulation from the US President. No views on that?
@mikeiow I don’t think it “feels like” that - it’s blatantly obviously the case and is so for every idiotic initiative he embarks upon.
Have you seen the charts floating about which detail billions in trades in the few short minutes preceding his announcements? Both ways, buying and selling, depending on what the announcement is.
It’s insider dealing on an industrial scale.
