Comments from Oracle/Larry Ellison on building data centres
-
I picked up the following from the Oracle earnings call. It's an important detail regarding the capital spend on AI infrastructure, the debate about monetisation, and the contrarians' argument that the spending will slow dramatically. Some of you will recall I have always maintained it will not, for the simple reason that the goal is to achieve AGI, which is many years away. Organisations will invest trillions in the pursuit of this goal. There is a race, and it could be one where the winner takes all (or most) of the spoils. It is not difficult to understand that the ones who solve the world's biggest problems, the currently impossible, will generate significant returns. Quite possibly that will be in life science and the discovery of new materials which advance the human race. Larry Ellison explains the position in very clear and unequivocal terms in relation to the capital spending and its duration. I have often been asked what the duration will be, and I have stated that it is within our investment horizon of the next 5-10 years.
Question: Larry, how do you envision the market transitioning from the AI training phase to the AI inferencing phase? There's some debate out there on whether we have an imbalance or a bubble on the front end of the curve, because training is compute-intensive and then perhaps it recalibrates differently somehow for the inferencing stage, which might be less intensive. Or do you see the potential for high growth kind of all the way through both of these phases?
Lawrence Ellison
Well, a lot of people think that, Mark: I send a kid to college and then I'm done, they're done training. They got four years of training, and then I can put the kid to work and they'll be doing inferencing. And that's not true. This race goes on forever, to build a better and better neural network, and the cost of that training gets to be astronomical. When I talk about building gigawatt or multi-gigawatt data centres (AK note: xAI's latest 100k H200 cluster draws in the region of 150MW of power; 1GW is roughly 6.7x bigger), I mean these AI models, these frontier models -- the entry price for a real frontier model, for someone who wants to compete in that area, is about $100 billion. Let me repeat, around $100 billion. That's over the next four or five years for anyone who wants to play in that game. That's a lot of money. And it doesn't get easier.
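As a quick sanity check on my note above (the 150MW figure is my own estimate for that cluster, not a number from the call; the rest is simple arithmetic):

```python
# Back-of-envelope scaling of the note above: how much bigger is a 1 GW
# (or multi-gigawatt) data centre than a ~100k H200 cluster drawing ~150 MW?
cluster_mw = 150          # assumed draw of a ~100k-GPU H200 cluster, in MW
site_mw = 1_000           # 1 GW expressed in MW

print(f"A 1 GW site is roughly {site_mw / cluster_mw:.1f}x the cluster")   # ~6.7x
print(f"A 2 GW site would be roughly {2 * site_mw / cluster_mw:.1f}x")     # ~13.3x
```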
So there are not going to be a lot of those. I mean, this is not the place to list who can actually build one of these frontier models (AK: MSFT, Meta and Alphabet, and perhaps AWS). But in addition to that, there are going to be a lot of very, very specialised models. I can tell you things that I'm personally involved in, which are using computers to look at biopsy slides or CAT scans to discover cancer. There are also blood tests for discovering cancer. Those tend to be very specialised models. They don't necessarily use the foundational models, the Groks and the ChatGPTs, the Llamas and the Geminis; they tend to be highly specialised models, trained on image recognition on certain data, literally millions of biopsy slides, for example, and not much other training data is helpful.
So that goes on, and we'll see more and more applications like that. So I wouldn't -- if your horizon is over the next five years, maybe even the next 10 years, I wouldn't worry about, hey, we've now trained all the models we need and all we need to do is inferencing. I think this is an ongoing battle for technical supremacy that will be fought by a handful of companies and maybe one nation state over the next five years at least, but probably more like 10. So this business is just growing larger and larger and larger. There's no slowdown or shift coming.
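To make the distinction concrete, here is a minimal sketch (my own illustration, not anything Oracle or xAI has described) of the kind of narrow, image-only model Larry contrasts with the frontier models: a standard vision backbone fine-tuned purely on labelled slide images, with no general text or web training data involved. The paths, labels and settings are hypothetical.

```python
# Minimal sketch of a specialised image classifier fine-tuned only on
# (hypothetical) labelled biopsy slides. Illustration only.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms
from torch.utils.data import DataLoader

# Assumed layout: slides/train/<benign|malignant>/*.png
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("slides/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from a generic vision backbone and specialise its head for two classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):                      # short run, for illustration only
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```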
Mark Murphy
Thank you very much.
Lawrence Ellison
Let me say something that's going to sound really bizarre. Well, you'd probably say, well, he says bizarre things all the time, so why is he announcing this one? It must be really bizarre. We're in the middle of designing a data centre that's north of a gigawatt. We found the location and the power. The place we're looking at has already got building permits for three nuclear reactors -- these are the small modular nuclear reactors to power the data centre. This is how crazy it's getting. This is what's going on. (AK: we heard MSFT are doing the same, having hired nuclear scientists, and are now planning to build a 1 million GPU DC by the end of next year. The power draw on this cluster would be circa 1.5GW, the equivalent of the power consumption of roughly 1M houses.)
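Again as a rough, assumption-laden sketch (the per-GPU and per-household figures below are my own estimates, not numbers from the call or from MSFT):

```python
# Rough check of the note above: does 1 million GPUs land around 1.5 GW,
# and is that really in the region of 1 million homes? All inputs are assumptions.
num_gpus = 1_000_000
kw_per_gpu = 1.4            # assumed all-in draw per GPU incl. cooling/networking (kW)
household_kw = 1.2          # assumed average household draw (kW)

cluster_gw = num_gpus * kw_per_gpu / 1_000_000
homes = num_gpus * kw_per_gpu / household_kw
print(f"Estimated cluster draw: {cluster_gw:.1f} GW")        # ~1.4 GW
print(f"Equivalent households:  {homes:,.0f}")               # ~1.2 million
```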
Operator
Your next question comes from the line of Raimo Lenschow with Barclays. Your line is open.
Raimo Lenschow
Just a question, more on the database side, on the agreements that you announced today, or that you already have in place and have now added to with AWS. Now that we have all the hyperscaler agreements in place, how do you think about the migration of database workloads that are at the moment running on-premise or on Cloud@Customer to the public cloud? I mean, how should we think about that momentum? Thank you.
But we expect that private clouds will greatly outnumber public clouds as companies decide they want the Oracle Cloud behind their firewall, in their data centre, with no neighbours. And because our data centres are so automated, so scalable, and all identical in terms of function, we're organised to do this. We actually have 162 data centres now; I expect we will have 1,000 or 2,000 or more data centres.
-
Okay. Interesting. Very interesting.
Question
Thank you so much, and congratulations on the quarter. Very impressive, both the quarter and the guide. We've seen a lot of focus on the model training side, but less on applications and inferencing and the rest. You guys have a lot of expertise in the market and in the industry, and you already have traditional AI sprinkled throughout all the Oracle products and capabilities. But where do you see the monetisable value of GenAI on the app side? How long do you think it's going to take for GenAI to be meaningful revenue, not just for Oracle, but for software in general, on the app side, not on the training side? Thank you.
Lawrence Ellison
Let me start with health care: everything from us helping doctors diagnose different diseases. When someone goes in to get a sonogram, I've seen the nurses and the technicians and the doctors actually measure the baby's skull and measure the baby's spinal cord to see how -- it's utterly ridiculous. The computer should do all of that. And if there's an umbilical cord wrapped around the fetus, the computer should discover that, and it should all be recorded. The doctor could get assistance from a computer doing all of this stuff. Looking at the plaque in coronary arteries -- all of it should be done that way.
Already, we've delivered this: when a doctor gets ready to visit a patient, we prepare a summary for the doctor. We use AI to look at the electronic health records and the latest labs, which might have been taken just a few hours ago, and let the doctor know whether there's stability or disease progression or whatever the doctor needs to know prior to the consultation with the patient. That summary is created by AI, a human-readable summary. Then AI listens to the consultation between the doctor and the patient. This is already delivered; this is already out there. If the doctor orders a prescription, the AI checks to make sure the prescription is accurate and enters the prescription. The AI updates the electronic health records. The AI transcribes and distributes the doctor's orders, all from listening to the conversation. The doctor then gets a draft at the end of the conversation that the doctor can quickly review and approve. Then the prescriptions are filled, the orders are executed, and the electronic health records are updated. We're already doing all of that. But I could go on. In health care we need so many things, from the reading of X-rays to just the user interface.
Mark Moerdler
Larry, I think you are the first person to explain it that way. Thank you. It makes a lot of sense.
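For readers who think in code, here is a hypothetical sketch of the visit-documentation flow Larry describes above: pre-visit summary, transcription of the consultation, a draft of the orders, and a doctor review step before anything is executed. The function names, data layout and extraction logic are my own assumptions for illustration; this is not Oracle's implementation or API.

```python
# Hypothetical sketch of the visit-documentation flow described above.
# Names and steps are illustrative assumptions, not Oracle's product or APIs.
from dataclasses import dataclass, field

@dataclass
class VisitDraft:
    pre_visit_summary: str          # AI summary of the chart and latest labs
    transcript: str                 # transcription of the consultation
    draft_orders: list = field(default_factory=list)  # orders pulled from the conversation
    approved: bool = False          # set only after the doctor reviews the draft

def prepare_visit(ehr_notes: str, latest_labs: str) -> str:
    # Stand-in for an AI summarisation call over the record and recent labs.
    return f"Summary: {ehr_notes[:60]}... | Labs: {latest_labs[:60]}..."

def document_visit(summary: str, transcript: str) -> VisitDraft:
    # Stand-in extraction step: a model would pull orders from the conversation;
    # here we just pick out lines prefixed with 'Order:'.
    orders = [line.strip() for line in transcript.splitlines()
              if line.strip().lower().startswith("order:")]
    return VisitDraft(summary, transcript, orders)

def approve_and_execute(draft: VisitDraft) -> VisitDraft:
    # Nothing is filled or written back to the EHR until the doctor approves.
    draft.approved = True
    return draft

if __name__ == "__main__":
    summary = prepare_visit("Type 2 diabetes, last visit 3 months ago", "HbA1c drawn this morning")
    draft = document_visit(summary, "Discussed glucose control.\nOrder: metformin 500mg twice daily")
    print(approve_and_execute(draft))
```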
-
Assuming a machine can become an expert in any particular field, the use cases are vast. We already have smart hospitals: cameras monitor patients, and in fact monitor everyone, for certain illnesses based on certain biomarkers; they monitor vulnerable patients for falls and converse with patients about their treatment and meds; GE, Canon and Siemens MRI-type scans look for disease; and Brainomix, developed out of Oxford University, scans patients for stroke assessment.
The next decade is going to be exciting for medical advances. At its core is training on big data. Machines can now see, speak and move.
-
I do not fancy having a robot operating on me; it might cut off the wrong bits!