Nvidia's Huang Says He's Surprised AMD Offered 10% of the Company in 'Clever' OpenAI Deal
Nvidia CEO Jensen Huang said Wednesday that he's surprised Advanced Micro Devices offered 10% of itself to OpenAI as part of a multibillion-dollar partnership announced earlier this week. From a report: "It's imaginative, it's unique and surprising, considering they were so excited about their next generation product," Huang said in an interview on CNBC's "Squawk Box."
"I'm surprised that they would give away 10% of the company before they even built it. And so anyhow, it's clever, I guess." OpenAI and AMD reached a deal on Monday, with OpenAI committing to purchase 6 gigawatts worth of AMD chips over multiple years, including its forthcoming MI450 series. As part of the agreement, OpenAI will receive warrants for up to 160 million AMD shares, with vesting milestones based on deployment volume and AMD's share price.
"I'm surprised that they would give away 10% of the company before they even built it. And so anyhow, it's clever, I guess." OpenAI and AMD reached a deal on Monday, with OpenAI committing to purchase 6 gigawatts worth of AMD chips over multiple years, including its forthcoming MI450 series. As part of the agreement, OpenAI will receive warrants for up to 160 million AMD shares, with vesting milestones based on deployment volume and AMD's share price.
Didn't think anyone could.... (Score:5, Funny)
Re: (Score:2)
You know the CEO of AMD is his cousin, right?
Re: (Score:3)
Re: (Score:2)
I meant his cousin should, in theory, due to similar family upbringing/environment/genetics/whatever, have some of the same abilities as he does.
Re: (Score:2)
There's enough randomness in the universe that neither genetics nor environment are a reliable predictor of individual behavior.
Re: (Score:2)
Really, you believe environment/genetics/upbringing can't counteract randomness? Then how do you explain the 8 pairs of parent/kin science Nobel prize winners?
Re: (Score:2)
I'm not surprised at all (Score:3)
OpenAI is huge, and getting them to use AMD software will be a huge leg up in the entire industry.
It's less about hardware and more about software adoption that leads into hardware sales. The only surprising thing is AMD figured that out.
Re: (Score:3)
AMD's biggest problem is getting people to use their software so they can get a foot in the door and take over from CUDA.
This is indeed AMD's biggest problem.
However the main problem with AMD is... AMD.
AMD desperately wants to replace CUDA so people can buy an AMD GPU instead of an NVidia one and run their existing code on AMD GPUs. What they really care about is the expensive, high-margin datacentre or pro GPUs.
This is deeply stupid for a number of reasons.
First let's accept the obvious truth that currentl
Re: (Score:1)
Instructions:
1. Buy any (yes, any) Radeon 9xxx or 7xxx card and install it
2. If you use Windows, simply follow those simple instructions here: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Frocm.docs.amd.com%2Fproj... [amd.com]
3. If you use Linux, follow those simple instructions here: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Frocm.docs.amd.com%2Fproj... [amd.com]
All in all, it will take about 5-10 minutes for the installation to run, and you can use it...
Nvidia on the other hand ... Well, let's hope you won't need the external driver from Nvidia, or else ... prepare for some real pain.
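For what it's worth, once the ROCm driver and a ROCm build of PyTorch are in place, a quick sanity check looks something like this (a minimal sketch, assuming the ROCm wheel of torch is installed; on ROCm builds the GPU is exposed through the usual torch.cuda API, backed by HIP):

# Minimal sanity check for a ROCm PyTorch install.
# Assumes a ROCm build of torch (e.g. installed from the ROCm wheel index).
import torch

print("GPU visible:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    # Tiny matmul on the GPU to confirm kernels actually run.
    a = torch.randn(1024, 1024, device="cuda")
    b = torch.randn(1024, 1024, device="cuda")
    print("Matmul OK, norm:", (a @ b).norm().item())

If that prints your Radeon's name and a number, the stack is working.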
Re: (Score:3)
1. Buy any (yes, any) Radeon 9xxx or 7xxx card and install it
That's already several steps behind where NVidia are. About the oldest card that works well with PyTorch is something from the 2080 generation. I didn't upgrade it for the latest project; it still works well. But even so, the last few times I checked, it was CDNA officially supported and "lol good luck but maybe" for RDNA. I'm glad they've changed that for the better, though it's not enormously obvious.
3. If you use Linux, follow those simple instructions
Re: (Score:2)
Next time read the supported hardware list. This reads like an ad for NV based on tropes from a decade ago.
Re: (Score:2)
Next time read the supported hardware list.
As opposed to somehow scoring literally any GPU made after about 2018. You are not bolstering your point here.
This reads like an ad for NV based on tropes from a decade ago.
10 years or 10 days? If AMD have finally fixed their shit (and I do hope so, but they haven't with their supposed AI chips, that's for sure), the fix is recent, much more recent than 10 years ago.
Trawling through reddit, it seems like there was a spate of "OMG pytorch/rocm actually works now" from about a year ag
Re: (Score:2)
You really need/want a pseudo-modern dGPU with 32GB VRAM or higher to do anything. Just because you can sort of run an LLM on an old 8GB NV card doesn't mean that you'll get much done that way. AMD is not going to prioritize older hardware when all the money is on newer cards.
You can run a local instance on a 7900XTX, 9070XT, or 395+ and do very well. Many people already do, without particular difficulty.
Re: (Score:2)
(side note, the 7900XTX is 24GB and 9070XT is 16GB, but you can still do some decent work with them. Ryzen AI+ 395 Max is sold in configs up to 128GB for less than the price of a 5090, making it potentially a better deal, depending on your needs)
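Rough numbers, for anyone wondering how those VRAM figures map to model sizes. The sketch below is a back-of-the-envelope estimate, assuming roughly 0.5 bytes per parameter at 4-bit quantization plus ~20% overhead for KV cache and activations; none of this comes from the article, it's just illustrative arithmetic.

# Rough VRAM estimate for running a quantized LLM locally.
# Assumptions (illustrative, not from the article): ~0.5 bytes/parameter
# at 4-bit quantization, plus ~20% overhead for KV cache and activations.
def vram_needed_gb(params_billion, bytes_per_param=0.5, overhead=1.2):
    return params_billion * bytes_per_param * overhead

for size in (8, 14, 32, 70):
    print(f"{size}B model: ~{vram_needed_gb(size):.0f} GB VRAM")

# ~8B fits easily in 16 GB (9070XT), ~32B squeezes into 24 GB (7900XTX),
# and ~70B is where the 128 GB Ryzen AI Max config starts to make sense.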
Re: (Score:2)
Yeah if you want to run large transformer models a lot, you'll want something a bit beefier than a 2080Ti. With that said, there's a lot more out there than large transformer models.
AMD is not going to prioritize older hardware when all the money is on newer cards.
Their priority has ALWAYS been the cards where the money is. The strategy hasn't worked for them, and has led to a market where they're a minor player and everyone is pissed off with NVidia.
The problem is they're/were expecting peopl
Re: (Score:2)
That's not true given UDNA, they just realised too late.
Just like they realised too late that using worse-CUDA as their main programming language was not a good idea.
Re: (Score:2)
At the end of the day it's about selling chips, lots of them. Better drivers and libraries are one way to do that, but just selling a shitload of chips is another way to do it.
The big GPU-compute growth opportunity at the moment is AI, largely LLMs (but also photo/video generation), and hooking yourself into the success of OpenAI is not a bad way to do that. From OpenAI's POV this gives them an ability to play AMD and NVIDIA off against each other in terms of price, but that is really also the opportunity f
Clever (Score:1)
Re: (Score:2)
Re:Clever (Score:5, Insightful)
Clever, because they know it is all monopoly money once the AI crash comes.
Not just once the crash comes, but this sort of thing will help cause the crash.
Cory Doctorow talked about some of it in a recent post [pluralistic.net] that really helped me grok what I've been feeling about the inevitable AI bubble collapse.
The data-center buildout has genuinely absurd finances – there are data-center companies that are collateralizing their loans by staking their giant Nvidia GPUs. This is wild: there's pretty much nothing (apart from fresh-caught fish) that loses its value faster than silicon chips.
That barely scratches the surface of the funny accounting in the AI bubble. Microsoft "invests" in OpenAI by giving the company free access to its servers. OpenAI reports this as a ten billion dollar investment, then redeems these "tokens" at Microsoft's data-centers. Microsoft then books this as ten billion in revenue.
That's par for the course in AI, where it's normal for Nvidia to "invest" tens of billions in a data-center company, which then spends that investment buying Nvidia chips. The same chunk of money is being energetically passed back and forth between these closely related companies, all of which claim it as investment, as an asset, or as revenue (or all three).
AMD giving 10% of itself to OpenAI is almost entirely symbolic as OpenAI will turn around and "spend" that money on something like GPU hardware, probably AMD products. AMD is just paying itself via OpenAI and both companies will book it as revenue to try and obscure the colossal losses on AI spending.
It's all monopoly money laundering, and it's going to crash hard.
Re:That's just clever. (Score:3)
Two economists are walking in a forest when they come across a pile of shit.
The first economist says to the other “I’ll pay you $100 to eat that pile of shit.” The second economist takes the $100 and eats the pile of shit.
They continue walking until they come across a second pile of shit. The second economist turns to the first and says “I’ll pay you $100 to eat that pile of shit.” The first economist takes the $100 and eats the pile of shit.
Walking a little more, the first
Re: (Score:2)
Doesn't this ignore that Microsoft also incurs a cost, accounting-wise, when they give away access to the servers?
It's described as Microsoft creating a revenue increase, but they also book a cost of the same size.
Re: (Score:2)
Certain companies like OpenAI *may* crash, but AI itself is too compelling to ever crash. Even if the algorithmic capability plateaus at the current state, there'll be people working on it, utilizing it. There are still MANY domains of knowledge that haven't been properly added to AI. For example, I recently asked it for some CNC machining code and it spit out garbage, whereas it is objectively capable of writing complex algorithms and functions in Python or C. Obviously that's because the nerds wor
Re: (Score:1)
Still gonna crash.
Great Technique (Score:2)
Re: (Score:2)
Like someone cutting their wrists and gushing blood all over a stairway to make the bully coming after them slip... You gotta respect it.
If you don't, you're taking your bullying far too seriously.
It's just stock dilution. (Score:5, Informative)
Re: (Score:2)
However, I'm not familiar with any cases of a company sweetening a sale by adding 10% of their company in warrants to the sale.
It's the corporate equivalent of turning tricks. That being said, this may have been a great move on AMD's part.
Sure, they dilute their stock by a frankly huge fucking amount, but they're also taking a stab at NV's jugular while doing it.
Re: (Score:2)
Corporations do it all the time, and it's 10 percent over years with milestone requirements to receive those warrants: not only purchase requirements but also stock price minimums; the last tranche of warrants requires a $600 AMD trading price to activate.
No, companies do not do this all the time. I can't recall a single company negotiating a $10 billion sale by offering to give $30 billion in free money/stock. No sane company does this. This smacks of extreme desperation. That's why Nvidia stock didn't tank or even drop: it's obvious that this sale was not an arms-length sale. If we eventually see a $10 billion arms-length GPU sale, that means that Nvidia's dominance in AI accelerators will have been breached, and that would prompt a massive dr
Re: (Score:1)
Yes, companies don't do it all the time.
But for AMD it's worth every penny. If OpenAI can't meet those milestones, those shares won't be worth anything, so AMD doesn't lose anything from this deal.
Re: (Score:2)
The vesting of the warrants AMD is giving to OpenAI is dependent on AMD stock reaching certain target prices, with the final tranche only vesting at a $600 AMD stock price vs today's $225.
I doubt too many AMD shareholders are going to be crying about 10% dilution if the stock goes up by 200%.
Re: (Score:3)
AMD stock is just over $200 today, so if this deal can catapult it to $600 then shareholders are not going to be crying about the 10% dilution.
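To put some illustrative numbers on that (the 160 million warrant shares and the ~$200 / $600 prices come from the summary and thread; the share count is just the ~1.6 billion implied by 160M being about 10%, and this ignores the tranche structure entirely):

# Back-of-the-envelope on dilution vs. the price target. Illustrative only.
shares_outstanding = 1.6e9    # implied by 160M warrants being ~10% of AMD
warrant_shares = 160e6
price_today = 200.0           # roughly where AMD trades per the thread
price_final_tranche = 600.0   # required for the last tranche to vest

value_today = shares_outstanding * price_today
diluted_fraction = shares_outstanding / (shares_outstanding + warrant_shares)
value_at_target = (shares_outstanding + warrant_shares) * price_final_tranche * diluted_fraction

print(f"Existing holders today:   ${value_today / 1e9:.0f}B")
print(f"Existing holders at $600: ${value_at_target / 1e9:.0f}B "
      f"(after ~{(1 - diluted_fraction) * 100:.0f}% dilution)")

Even after giving up roughly 9% of the company, existing holders come out about 3x ahead if the final milestone price is ever reached.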
Re: (Score:2)
Even on GPUs that ROCm works on, it's a fucking pain in the ass.
Sure, part of that is the open source projects that use ROCm, the catnip-fuelled tabby cat of moving targets that it is, but the other part is that just fucking getting ROCm to build at all is a goddamn nightmare.
AMD has major software stack problems. They need to be fixed.
CUDA, fr
Re: (Score:1)
It *WAS* a pain.
It had a lot of issues.
I can tell from using it both through Ubuntu and WSL that ANY Radeon 7XXX and 9XXX will run the ROCm stuff without issues, and yes, it also works on mobile GPUs (even the 7600M).
Re: (Score:2)
Just went through the nightmare of getting it running on my AI Max mini-PC.
Still is a pain.
I'll grant them that it's "new", but come the fuck on.
Re: (Score:2)
Some speculate it's a corporate culture problem over there.
They have a long and distinguished history of shitty software going along with their hardware.
Either way- it's definitely a completely solvable problem.
Lisa Su decided to visit the guys who posted the scathing and honest analysis of the AMD MI ecosystem (SemiAnalysis [semianalysis.com]), the result of which was Anush Elangovan acknowledging (for the first time, rather than defensively gaslighting everyone who complained) that the software was a steaming
Re: (Score:2)
AMD is actually moving a lot of MI-series hardware, and they would continue to do so without this OpenAI deal. They aren't in any trouble.
Re: (Score:3)
I wouldn't say they're in trouble, but as it stands right now- AMD datacenter GPUs are a failed product.
They're very eager to not make them a failed product, I imagine.
It's in the cloud ... (Score:2)
That gives "it is in the cloud" a new meaning. Oh, wait: it went up in smoke! Hmmmm ....
Will it travel in time? (Score:2)
Remember when 1.21 gigawatts was a lot of power? Now we're talking about 6!
Re: (Score:2)
6 gigawatts isn't a quantity of chips! (Score:2)
Anyone else annoyed at this tendency to measure things in ridiculous units?
They couldn't say how many chips, or FLOPs, or even TOPS... no, let's say how much power the chips will consume.
Re: (Score:2)
I wonder how many horse power that is?
Better watch out; it go boom! (Score:2)
Who wants some Huang?
No surprises (Score:2)
AMD made a major stock deal to pick up Xilinx in 2022. They're more than happy to fritter away shares of stock that were nearly worthless 10 years ago.
Jiggawatts (Score:3)
In other news, we are now measuring chips in units of "gigawatts". About as smart (or less) as measuring light bulbs by wattage instead of lumens. When newer light bulb tech came along (fluorescent, LED), the advertised wattage figures became fake and now must be supplemented with the actual wattage and lumen output in the fine print.
The same would seem to apply to chips. If they want to measure AI chip deliverables in wattage, AMD could simply unload their warehouses full of old Bulldozer chips to meet the requirements. That would consume the requisite wattage without generating much useful work. Instead, they should measure chip delivery commitments in TeraFLOPS, or giga-inferences per second, or whatever performance criterion is critical to their needs. Measuring deliverables of anything by wattage is dumb for anything other than electrical supply. Especially for chips, where every generation is practically guaranteed to deliver more performance per watt.
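Just to illustrate how arbitrary the unit is: the number of chips "6 gigawatts" buys depends entirely on what you assume each accelerator draws. The per-chip wattages below are assumptions for illustration, not anything from the deal.

# How many accelerators is "6 GW"? Depends entirely on assumed per-chip
# power draw, which is exactly the complaint. The wattages are assumptions.
deal_power_watts = 6e9

for per_chip_watts in (1000, 1200, 1500):
    chips = deal_power_watts / per_chip_watts
    print(f"At {per_chip_watts} W per accelerator: ~{chips / 1e6:.1f} million chips")

# And since performance per watt improves every generation, the same
# "6 GW" buys very different amounts of compute depending on when it ships.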
Re: (Score:2)
Yeah, definitely a strange way to talk about chip/compute quantities, as if they are electric heaters where the power used is a feature, not a negative.
The only rationale for this I can think of is that AI datacenters are being discussed in the same way - a 1GW datacenter, etc - which I suppose makes somewhat more sense since power demand is becoming a critical factor.
So, I guess you need 1GW-ish of chips to build a 1GW datacenter. I just asked Gemini about this, and it said that GPU power usage might be appr
Desperation (Score:2)
Right now OpenAI accepts your money only if they wish to do so.
When big players throw 100-300B around, AMD can only watch.