
Nvidia's Huang Says He's Surprised AMD Offered 10% of the Company in 'Clever' OpenAI Deal 53

Nvidia CEO Jensen Huang said Wednesday that he's surprised Advanced Micro Devices offered 10% of itself to OpenAI as part of a multibillion-dollar partnership announced earlier this week. From a report: "It's imaginative, it's unique and surprising, considering they were so excited about their next generation product," Huang said in an interview on CNBC's "Squawk Box."

"I'm surprised that they would give away 10% of the company before they even built it. And so anyhow, it's clever, I guess." OpenAI and AMD reached a deal on Monday, with OpenAI committing to purchase 6 gigawatts worth of AMD chips over multiple years, including its forthcoming MI450 series. As part of the agreement, OpenAI will receive warrants for up to 160 million AMD shares, with vesting milestones based on deployment volume and AMD's share price.
  • by zurkeyon ( 1546501 ) on Wednesday October 08, 2025 @09:54AM (#65711960)
    Out maneuver him with his US Gov and Intel Deals. Those crafty humans! ;-D
    • You know the CEO of AMD is his cousin, right?

      • You realize when you are in the positions they both are, beholden to shareholders and the FTC, the fact they are "Cousins" means a flat nothing... Right? Do you honestly think that relationship somehow sways what either of them do in those positions? Openly? In public? With a Trillion dollars on the line? That would be clear and evident Fraud/Espionage... And I STRONGLY doubt that either of them go to the family BBQ on the weekends to discuss each others business strategies for trillion dollar companies. Lo
        • I meant his cousin should, in theory, due to similar family upbringing/environment/genetics/whatever, have some of the same abilities as he does.

          • There's enough randomness in the universe that neither genetics nor environment are a reliable predictor of individual behavior.

Really? If environment/genetics/upbringing can't counteract randomness, how do you explain the eight pairs of parent/kin science Nobel prize winners?

      • Just like the European leaders in WWI!
  • by rsilvergun ( 571051 ) on Wednesday October 08, 2025 @10:22AM (#65712016)
    Amd's biggest problem is getting people to use their software so they can get a foot in the door and take over from cuda.

    Open AI is huge and getting them to use AMD software will be a huge leg up in the entire industry.

    It's less about hardware and more about software adoption that leads into hardware sales. The only surprising thing is AMD figured that out.
    • Amd's biggest problem is getting people to use their software so they can get a foot in the door and take over from cuda.

      This is indeed AMD's biggest problem.

      However the main problem with AMD is... AMD.

AMD desperately want to replace CUDA so people can buy an AMD GPU instead of an NVidia one and run their existing code on AMD GPUs. What they really care about is the expensive, high margin datacentre or pro GPUs.

      This is deeply stupid for a number of reasons.

      First let's accept the obvious truth that currentl

      • by hetz ( 516550 )

        Instructions:
        1. Buy any (yes, any) Radeon 9xxx or 7xxx card and install it
        2. If you use Windows, simply follow those simple instructions here: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Frocm.docs.amd.com%2Fproj... [amd.com]
        3. If you use Linux, follow those simple instructions here: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Frocm.docs.amd.com%2Fproj... [amd.com]

        All in all, it will take about 5-10 minutes for the installation to run, and you can use it...

Nvidia on the other hand ... Well, let's hope you won't need the external driver from Nvidia, or else ... prepare for some real pain.
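[Ed. note: the reason the install steps above "just work" with existing code is that ROCm builds of PyTorch expose AMD GPUs through the same "cuda" device namespace. A minimal sketch of that device-selection pattern (the helper name is illustrative, not a PyTorch API):]

```python
# Sketch of why a ROCm install works with unmodified PyTorch scripts:
# on a ROCm build, torch.cuda.is_available() reports True for AMD GPUs,
# so CUDA-targeted device-selection code needs no changes.

def pick_device(gpu_available: bool) -> str:
    """Return the torch device string given a GPU-availability flag."""
    return "cuda" if gpu_available else "cpu"

# Hypothetical real-world usage (requires torch installed):
#   import torch
#   device = pick_device(torch.cuda.is_available())
#   model = model.to(device)

print(pick_device(True))   # cuda
print(pick_device(False))  # cpu
```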

        • 1. Buy any (yes, any) Radeon 9xxx or 7xxx card and install it

          That's already several steps behind where NVidia are. About the oldest card which will work well with Pytorch is something from the 2080 architecture. I didn't upgrade it for the latest project. Still works well. But even so, the last few times I checked, it was CDNA official and "lol good luck but maybe" for RDNA. I'm glad they've changed that for the better, though it's not enormously obvious.

          3. If you use Linux, follow those simple instructions

      • Next time read the supported hardware list. This reads like an ad for NV based on tropes from a decade ago.

        • Next time read the supported hardware list.

          As opposed to somehow scoring literally any GPU made after about 2018. You are not bolstering your point here.

          This reads like an ad for NV based on tropes from a decade ago.

10 years or 10 days? If AMD have finally fixed their shit (and I do hope so, but they haven't with their supposed AI chips, that's for sure), the fix is recent, much more recent than 10 years ago.

          Trawling through reddit, it seems like there was a spate of "OMG pytorch/rocm actually works now" from about a year ag

          • You really need/want a pseudo-modern dGPU with 32GB VRAM or higher to do anything. Just because you can sort of run an LLM on an old 8GB NV card doesn't mean that you'll get much done that way. AMD is not going to prioritize older hardware when all the money is on newer cards.

You can run a local instance very well on a 7900XTX, 9070XT, or 395+. Many people already do, without particular difficulty.

            • (side note, the 7900XTX is 24GB and 9070XT is 16GB, but you can still do some decent work with them. Ryzen AI+ 395 Max is sold in configs up to 128GB for less than the price of a 5090, making it potentially a better deal, depending on your needs)
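[Ed. note: a rough back-of-envelope makes the VRAM figures above concrete. The 1.2x overhead factor for KV cache and activations is a loose assumption, not a measured figure:]

```python
def model_vram_gb(params_billions: float, bytes_per_param: float,
                  overhead: float = 1.2) -> float:
    """Very rough VRAM estimate: weight storage plus ~20% for KV cache
    and activations (the overhead factor is a loose assumption)."""
    return params_billions * bytes_per_param * overhead

# A 13B-parameter model at fp16 (2 bytes/param): ~31 GB, too big for 24 GB
print(round(model_vram_gb(13, 2.0), 1))
# The same model 4-bit quantized (0.5 bytes/param): ~8 GB, fits a 16 GB card
print(round(model_vram_gb(13, 0.5), 1))
```

This is why a 24 GB 7900XTX or a 128 GB 395+ changes what you can run locally far more than raw compute does.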

            • Yeah if you want to run large transformer models a lot, you'll want something a bit beefier than a 2080Ti. With that said, there's a lot more out there than large transformer models.

              AMD is not going to prioritize older hardware when all the money is on newer cards.

Their priority has ALWAYS been to prioritize the cards where the money is. The strategy hasn't worked for them, and has led to a market where they're a minor player and everyone is pissed off with NVidia.

              The problem is they're/were expecting peopl

      • That's not true given UDNA, they just realised too late.

        Just like they realised too late that using worse-CUDA as their main programming language was not a good idea.

    • At the end of the day it's about selling chips, lots of them. Better drivers and libraries are one way to do that, but just selling a shitload of chips is another way to do it.

The big GPU-compute growth opportunity at the moment is AI, largely LLMs (but also photo/video generation), and hooking yourself into the success of OpenAI is not a bad way to do that. From OpenAI's POV this gives them an ability to play AMD and NVIDIA off against each other in terms of price, but that is really also the opportunity f

  • Clever, because they know it is all monopoly money once the AI crash comes.
    • OpenAI is basically built on scammy software. Someone could come along at any minute and come up with something better, remember Deepseek R1? Stable diffusion was long before OpenAI and ChatGPT, yet nobody went bonkers for AI. Somehow a scam started around the time the LLMs got going. The video/picture generation aspect of AI is far beyond the text generation, yet it only "blew up" when OpenAI released chatGPT? IMO this AI is way overhyped but at least Nvidia is in a position that physical hardware is good
    • Re:Clever (Score:5, Insightful)

      by nmb3000 ( 741169 ) on Wednesday October 08, 2025 @01:15PM (#65712464) Journal

      Clever, because they know it is all monopoly money once the AI crash comes.

      Not just once the crash comes, but this sort of thing will help cause the crash.

      Cory Doctorow talked about some of it in a recent post [pluralistic.net] that really helped me grok what I've been feeling about the inevitable AI bubble collapse.

      The data-center buildout has genuinely absurd finances – there are data-center companies that are collateralizing their loans by staking their giant Nvidia GPUs as collateral. This is wild: there's pretty much nothing (apart from fresh-caught fish) that loses its value faster than silicon chips.

      That barely scratches the surface of the funny accounting in the AI bubble. Microsoft "invests" in Openai by giving the company free access to its servers. Openai reports this as a ten billion dollar investment, then redeems these "tokens" at Microsoft's data-centers. Microsoft then books this as ten billion in revenue.

That's par for the course in AI, where it's normal for Nvidia to "invest" tens of billions in a data-center company, which then spends that investment buying Nvidia chips. The same chunk of money is energetically passed back and forth between these closely related companies, all of which claim it as investment, as an asset, or as revenue (or all three).

      AMD giving 10% of itself to OpenAI is almost entirely symbolic as OpenAI will turn around and "spend" that money on something like GPU hardware, probably AMD products. AMD is just paying itself via OpenAI and both companies will book it as revenue to try and obscure the colossal losses on AI spending.

      It's all monopoly money laundering, and it's going to crash hard.
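[Ed. note: the circular booking described above can be sketched with toy numbers. These are purely illustrative figures, not the companies' actual accounts:]

```python
# Toy illustration of circular booking: one credit of $10B shows up in
# three separate ledgers as it travels the loop. Illustrative only.
ledgers = {"vendor_equity_stake": 0.0,   # vendor books an asset
           "startup_capital": 0.0,       # startup books the investment
           "vendor_revenue": 0.0}        # startup spends it back on chips

def round_trip(amount_billions: float) -> float:
    """Run one investment round trip and return total booked activity."""
    ledgers["vendor_equity_stake"] += amount_billions
    ledgers["startup_capital"] += amount_billions
    ledgers["vendor_revenue"] += amount_billions
    return sum(ledgers.values())

# A single $10B round trip appears as $30B of combined bookings:
print(round_trip(10.0))  # 30.0
```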

      • Two economists are walking in a forest when they come across a pile of shit.

        The first economist says to the other “I’ll pay you $100 to eat that pile of shit.” The second economist takes the $100 and eats the pile of shit.

        They continue walking until they come across a second pile of shit. The second economist turns to the first and says “I’ll pay you $100 to eat that pile of shit.” The first economist takes the $100 and eats a pile of shit.

        Walking a little more, the first

Doesn't this ignore that Microsoft also has a cost, accounting-wise, when they give away access to the servers?

It's described as Microsoft creating a revenue increase, but they also book a cost of the same size.

    • Certain companies like OpenAI *may* crash, but AI itself is too compelling to ever crash. Even when if the algorithmic capability plateaus out at the current state there'll be people working on it, utilizing it. There are still MANY domains of knowledge that haven't been properly added to AI. For example, I recently asked it for some CNC machining code and it spit out garbage, whereas it is objectively capable of writing complex algorithms and functions in Python or C. Obviously that's because the nerds wor

  • It's rare to see such a good example of damning with faint praise. It even managed to cast aspersions on their next generation product all while seemingly heaping praise on them.
    • I think the sentiment is honest.

      Like someone cutting their wrists and gushing blood all over a stairway to make the bully coming after them slip... You gotta respect it.
      If you don't, you're taking your bullying far too seriously.
  • by Fly Swatter ( 30498 ) on Wednesday October 08, 2025 @10:35AM (#65712050) Homepage
Corporations do it all the time, and it's 10 percent over years, with milestone requirements to receive those warrants: not only purchase requirements but also stock-price minimums. The last tranche of warrants requires a $600 AMD trading price to activate.
    • Corporations do dilute stock all the time.
      However, I'm not familiar with any cases of a company sweetening a sale by adding 10% of their company in warrants to the sale.

      It's the corporate equivalent of turning tricks. That being said, this may have been a great move on AMD's part.
      Sure, they dilute their stock by a frankly huge fucking amount, but they're also taking a stab at NV's jugular while doing it.
    • Corporations do it all the time, and it's 10 percent over years with milestone requirements to receive those warrants - not only purchase requirements but also stock price minimum requirements, the last tranche of warrants requires an AMD $600 trading price to activate.

      No, companies do not do this all the time. I can't recall a single company negotiating a $10 billion sale by offering to give $30 billion in free money/stock. No sane company does this. This smacks of extreme desperation. That's why Nvidia stock didn't tank or even drop because it's obvious that this sale was not an arms-length sale. If we eventually see a $10 billion arms-length GPU sale, that means that Nvidia's dominance for AI accelerators will have been breached, and that would prompt a massive dr

      • by hetz ( 516550 )

True, companies don't do it all the time.

But for AMD it's worth every penny. If OpenAI cannot meet those milestones, those shares won't be worth anything, so AMD doesn't lose anything from this deal.

      • The vesting of the warrants AMD is giving to OpenAI is dependent on AMD stock reaching certain target prices, with the final tranche only vesting at a $600 AMD stock price vs today's $225.

        I doubt too many AMD shareholders are going to be crying about 10% dilution if the stock goes up by 200%.

    • AMD stock is just over $200 today, so if this deal can catapult it to $600 then shareholders are not going to be crying about the 10% dilution.
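[Ed. note: a quick sanity check on the 10% figure. The ~1.6 billion shares outstanding is an assumed round number for illustration, not an exact count:]

```python
def dilution(new_shares: float, outstanding: float) -> float:
    """Fraction of the post-issuance company the new shares represent."""
    return new_shares / (outstanding + new_shares)

# 160M warrant shares against roughly 1.6B outstanding (assumed figure):
print(f"{dilution(160e6, 1.6e9):.1%}")  # 9.1%
```

So "10% of the company" checks out as a round figure, and only fully materializes if every tranche vests.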

  • That gives "it is in the cloud" a new meaning. Oh, wait: it went up in smoke! Hmmmm ....

  • Remember when 1.21 gigawatts was a lot of power? Now we're talking about 6!

    • If you travel back a couple of centuries, your 6 jigowatts will only be worth about 8 million horses.
  • Anyone else annoyed at this tendency to measure things in ridiculous units?

    They couldn't say how many chips, or FLOPs, or even TOPS... no, let's say how much power the chips will consume.

  • Who want some Huang?

  • AMD made a major stock deal to pick up Xilinx in 2022. They're more than happy to fritter away shares of stock that were nearly worthless 10 years ago.

  • by Dawn Keyhotie ( 3145 ) on Wednesday October 08, 2025 @02:11PM (#65712626)

In other news, we are now measuring chips in units of "gigawatts". About as smart (or less) as measuring light bulbs by wattage instead of lumens. When newer light bulb tech came along (fluorescent, LED), the advertised wattage figures became fake and now must be supplemented with the actual wattage and lumen output in the fine print.

The same would seem to apply to chips. If they want to measure AI chip deliverables in wattage, AMD could simply unload their warehouses full of old Bulldozer chips to meet the requirements. That would consume the requisite wattage without generating much useful work. Instead, they should measure chip delivery commitments in TeraFLOPS, or Giga-inferences per second, or whatever performance criterion is critical to their needs. Measuring deliverables by wattage is dumb for anything other than electrical supply. Especially for chips, where every generation is practically guaranteed to deliver more performance per watt.
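[Ed. note: to see how loose a proxy wattage is, here is a rough count of accelerators implied by a 6 GW commitment. The ~1 kW per accelerator and the 1.4x facility overhead are assumptions for illustration only:]

```python
def accelerators_for(total_gw: float, chip_kw: float = 1.0,
                     facility_overhead: float = 1.4) -> int:
    """Rough count of accelerators a power budget supports. Both chip_kw
    and the PUE-style overhead factor are loose assumptions."""
    watts_per_chip = chip_kw * 1000 * facility_overhead
    return int(total_gw * 1e9 / watts_per_chip)

# 6 GW at ~1 kW per chip with 1.4x overhead: roughly 4.3 million chips
print(accelerators_for(6))  # 4285714
```

Halve the per-chip wattage with a new generation and the same 6 GW buys twice the silicon, which is exactly why watts say little about delivered performance.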

Yeah - definitely a strange way to talk about chip/compute quantities, as if they were electric heaters where the power used is a feature, not a negative.

      The only rationale for this I can think of is that AI datacenters are being discussed in the same way - a 1GW datacenter, etc - which I suppose makes somewhat more sense since power demand is becoming a critical factor.

      So, I guess you need 1GW-ish of chips to build a 1GW datacenter. I just asked Gemini about this, and it said that GPU power usage might be appr

  • Right now OpenAI accepts your money only if they wish to do so.

    When big players throw 100-300B around, AMD can only watch.
