OpenAI's Less-Flashy Rival Might Have a Better Business Model (msn.com)

OpenAI's rival Anthropic has a different approach — and "a clearer path to making a sustainable business out of AI," writes the Wall Street Journal. Outside of OpenAI's close partnership with Microsoft, which integrates OpenAI's models into Microsoft's software products, OpenAI mostly caters to the mass market... which has helped OpenAI reach an annual revenue run rate of around $13 billion, around 30% of which it says comes from businesses.

Anthropic has generated much less mass-market appeal. The company has said about 80% of its revenue comes from corporate customers. Last month it said it had some 300,000 of them... Its cutting-edge Claude language models have been praised for their aptitude in coding: A July report from Menlo Ventures — which has invested in Anthropic — estimated via a survey that Anthropic had a 42% market share for coding, compared with OpenAI's 21%. Anthropic is also now ahead of OpenAI in market share for overarching corporate AI use, Menlo Ventures estimated, at 32% to OpenAI's 25%. Anthropic is also surprisingly close to OpenAI when it comes to revenue. The company is already at a $7 billion annual run rate and expects to get to $9 billion by the end of the year — a big lead over its better-known rival in revenue per user.

Both companies have backing in the form of investments from big tech companies — Microsoft for OpenAI, and a combination of Amazon and Google for Anthropic — that help provide AI computing infrastructure and expose their products to a broad set of customers. But Anthropic's growth path is a lot easier to understand than OpenAI's. Corporate customers are devising a plethora of money-saving uses for AI in areas like coding, drafting legal documents and expediting billing. Those uses are likely to expand in the future and draw more customers to Anthropic, especially as the return on investment for them becomes easier to measure...

Demonstrating how much demand there is for Anthropic among corporate customers, Microsoft in September said Anthropic's leading language model, Claude, would be offered within its Copilot suite of software despite Microsoft's ties to OpenAI.

"There is also a possibility that OpenAI's mass-market appeal becomes a turnoff for corporate customers," the article adds, "who want AI to be more boring and useful than fun and edgy."

  • Business model? Yes. (Score:5, Interesting)

    by gweihir ( 88907 ) on Monday October 27, 2025 @07:48AM (#65752870)

    Scammers tend to specialize, because they cannot let the quality of their product do the advertising. Hence they create different narratives about why their product is a must-have, but they can only really push one of those, as the narratives tend to conflict. So all we see here is different scammers targeting different marks.

    That is not to say LLMs are useless. But they are far less useful and far more limited than the pushers claim and most people think.

    • by FictionPimp ( 712802 ) on Monday October 27, 2025 @09:45AM (#65753042) Homepage

      Anthropic really delivers on the use cases it advertises. Modern spec-driven strategies really do accelerate the amount of quality output a software engineer can produce. Over the last year my work with AI tooling has moved from letting it do foolproof boilerplate work and auto-complete, to providing it specs, a constitution, and rules and letting it tackle fairly complex tasks and features (work I'd give to junior engineers ready to move upward in their career).

      The results are impressive and the feedback loops natural. I easily perform 3-4X the work I did before, with at least the same quality. I also have more time while the AI is churning through the problem to work on those tasks that improve the team's quality of life, tackle complex problems that we put off for the emergent needs of the cycle, and most importantly grow my education to keep up with the needs of the business.
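      To make that concrete, here is a minimal Python sketch of the spec-plus-constitution prompt assembly; the file names and prompt structure are hypothetical illustrations of the pattern, not any particular tool's format:

          from pathlib import Path

          def build_agent_prompt(task: str) -> str:
              # Standing principles the agent must always follow (hypothetical path)
              constitution = Path("docs/constitution.md").read_text()
              # The feature specification to implement (hypothetical path)
              spec = Path("specs/feature.md").read_text()
              # Per-project style and process rules (hypothetical path)
              rules = Path("docs/rules.md").read_text()
              return "\n\n".join([
                  "You are a coding agent. Follow the constitution and rules strictly.",
                  constitution,
                  rules,
                  "Specification:\n" + spec,
                  "Task:\n" + task,
              ])

          print(build_agent_prompt("Implement the feature described in the spec."))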

      I've started to creep Claude into my personal life via MCPs that tackle low-risk tasks for me: producing a curated news feed for the day, organizing my task list by priority using the previous days and my notes as context, suggesting improvements to my schedule based on that previous step, reminding me to follow up on things, sorting my notes into categories and keywords to improve recall later, and lastly monitoring key websites for information I'm waiting on.

      Can all of this be done without AI? Yes. Would I have taken the steps to build scripts and programs or find and test software tools to accomplish this? Probably not. But with AI I did it in about 12 hours of effort over a few weekends, itch by itch.
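      For the website-monitoring piece, a toy MCP tool can be this small. The sketch assumes the official MCP Python SDK (pip install mcp) and its FastMCP helper; the tool name and logic are made up for illustration:

          import hashlib
          import urllib.request

          from mcp.server.fastmcp import FastMCP

          mcp = FastMCP("personal-tools")

          @mcp.tool()
          def page_fingerprint(url: str) -> str:
              """Return a SHA-256 digest of a page so the model can spot changes."""
              with urllib.request.urlopen(url, timeout=10) as resp:
                  return hashlib.sha256(resp.read()).hexdigest()

          if __name__ == "__main__":
              mcp.run()  # serve the tool over stdio for a local MCP client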

      Is it going to take a lot of jobs? Sure. Will it create a lot of jobs? Probably. Will you need to know it to be marketable in the near future? Yes.

      I liken it to every other incremental change in the tech space. There was a time when being a VMware engineer was a job. Now if you work in IT you know how to use virtualization. It's not a job, it's a job skill. There was a time when people hired you because you knew how AWS worked. Now that's just basic IT engineering skills. AI is just going to be yet another tool for 'reasoning tasks'. Sure, there are still specialists in IT who are experts in the complexities of networking, cloud, or virtualization, but everyone in IT knows how to do most of the day to day.

      • Re: (Score:2, Flamebait)

        by gweihir ( 88907 )

        Sounds like a mix of AI slop and delusion. Good luck!

          • Code solves problems or it does not. Code gets reviewed. Perhaps you are correct and I'm just really bad at writing software and reviewing PRs. What is worse is how bad all these corporations are at judging my ability over the last 25 years of promotions. I can't understand why they are so happy with my work.

            It is either that, or I'm good at what I do and at evaluating the output of a tool. I feel that is more likely, and the people who rally against the tool will eventually find themselves with the wrong market skills.

          • by gtall ( 79522 )

            ^^ Yet more AI slop ^^ Give it a rest.

          • by gweihir ( 88907 )

            Code solves problems or it does not.

            I see you have never heard of reliability, security, and maintainability. Quite an accomplishment if you have 25 years of relevant experience. Of course, if you only write throw-away code, "AI" may actually be good support for that.

              • Ahh yes, my Slashdot comment didn't include specific terms and generalized concepts into 'works'. I'm sorry, you win. I'm a failure. I hereby tender my resignation immediately. Thank you for your attention to this matter.

      • by GlennC ( 96879 )

        If you're giving AI work that you said you would "give to junior engineers ready to move upward in their career," then what are those junior engineers supposed to do?

        How are they to move upwards?

        What is going to happen when you retire or become unable to work any more and there's nobody who you have helped move upward?

        Or do you not care about the future?

        • If you're giving AI work that you said you would "give to junior engineers ready to move upward in their career," then what are those junior engineers supposed to do?

          I think the honest answer to this is "find another career". I'm not sure software engineering is a good choice today. But, really, I'm not sure what is.

          But FictionPimp and others refusing to take advantage of the tools that have become available wouldn't stop the transition. Junior engineers are going to have to find something else; they can't rely on their older colleagues to protect their jobs.

          • by gweihir ( 88907 )

            Crap like that has been done before. It always results in massive problems a bit later. One example I remember from something like 35 years ago is Siemens Germany not hiring EEs for a year, when before they had been hiring something like half of all new graduates. I think they had serious problems getting people for something like 10 years after that. Soooo stupid.

            What I tell the junior engineers is to make sure they have good skills and a real understanding of things, and to understand IT security as well. And in a few

        • The honest answer? The market will solve it. If it doesn't, there wasn't anything I could personally do to fix it.

          I build shit for a living; I don't run companies. I can get with the program and make money, or I can get fired and not make money. I'll take the money, please.

          I want to retire in my 50s. I'm not going to risk my career over my ideals. Fighting for ideals is a young person's game.

    • But the only thing to really happen in the last 2 years has been AI, and look at how MAG7/Nvidia have grown. On one hand you say it's just another product, but look at the complete change of the stock market allocation: Nvidia is the biggest company in history. 10 years ago they were pretty much just supplying gamers, 5 years ago maybe some crypto, then AI came along and blew them up. $4-5 trillion says AI is the biggest thing ever.
      • by gweihir ( 88907 )

        Who cares? Stock market valuation is not an indicator of product quality or of whether a company has a future. All it indicates is greed and projections.

  • So how has Apple been doing this for the past 40 years?

  • by serviscope_minor ( 664417 ) on Monday October 27, 2025 @08:09AM (#65752904) Journal

    OpenAI's business model is basically Sam Altman's incredible skill at trolling the media into giving him vast amounts of incredibly positive[*] publicity for free and using that to induce a massive sense of FOMO in venture capitalists.

    [*] That includes his performatively dire warnings about how dangerous AI, specifically his AI, is. "our AI is SO GOOD it will take all jobs!!111one!1onelevenONE1!" and the press breathlessly parrots it for clicks, because the media has basically been destroyed, but the take-home message is that ChatGPT is really good. Sure it's dangerous, but do YOU want to be the one left out when it takes over the world? No? Well, sign up here to invest!

    • by gweihir ( 88907 )

      Yes. My take as well. And all the indirect predictions that this will be AGI, and other crap.

      • My "favourite" one, and I kinda have a grudging respect for his skills was when he announced that full kill-all-the-jobs general AI was only a few thousand days away. Only a few thousand!!! That'e really close it's just round the corner!

        Then I was like... wait, a few thousand... say 3,000, at 365 days per year... oh. The better part of 10 years away.

        Except everyone knows 10 years away is code for "lol dunno maybe never", but somehow phrasing it that way bypassed all the usual bullshit checks.

        • by gweihir ( 88907 )

          I think that was the point where it became clear to me that he knows he is peddling crap. Not sure I saw this specific quote, but there were more like it. All FUD, no substance.

    • Maybe AI combined with robotics eventually, but not any time soon.
  • by Inglix the Mad ( 576601 ) on Monday October 27, 2025 @09:32AM (#65753026)
    These things are not really "artificial intelligence" at all yet. They're great expert systems, and to be fair it's rather amazing we've gotten to the point in computing power where we can build a library computer system like this, but it's not even the Librarian, much less Colossus/Guardian or the Andromeda Ascendant/Rommie.

    These things have no volition, no self-drive or inquisitiveness. They set no goals for themselves and have no true originality. They're fancy bots waiting for input to respond to, and they're not particularly good at many things. I mean, let them run free and they are worse than humans in that they have ZERO ability to vet data independently. Let one loose for too long unsupervised, and it can easily become Mecha-Hitler.

    They're fancy data storage / retrieval systems - that's it.
    • Re: (Score:3, Interesting)

      But they're not even good information retrieval systems. Their fundamental design is to mash their data together in a giant stew of probability.

      • They can be quite good information retrieval systems, or quite bad. Understanding the pitfalls of various attention mechanisms is the key to getting good performance out of them.

        Everything in this world can be represented in a giant stew of probability, so I'm not sure why you think that's a negative.
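        For the curious, the "stew" is literal: scaled dot-product attention returns a softmax-weighted blend of stored values, which is also why it can behave like a soft retrieval mechanism. A plain-NumPy toy, purely illustrative:

            import numpy as np

            def attention(query, keys, values):
                """Return a probability-weighted blend of values for one query."""
                d = keys.shape[-1]
                scores = keys @ query / np.sqrt(d)   # similarity of query to each key
                weights = np.exp(scores - scores.max())
                weights /= weights.sum()             # softmax: the probability "stew"
                return weights @ values, weights

            rng = np.random.default_rng(0)
            keys = rng.normal(size=(4, 8))               # 4 stored items
            values = rng.normal(size=(4, 8))
            query = keys[2] + 0.1 * rng.normal(size=8)   # query near item 2
            blended, w = attention(query, keys, values)
            print(w.round(3))  # weight concentrates on the matching item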
        • The biggest problem with them is they have no ability to govern themselves in regards to data. I mean, look at what happens when you muck with their sources at all. They can't tell what to trust in general, to the point where Elon had them set the rules for Grok to trust right-leaning sites more, but when people fed it some more data (linked to "trusted" sites) the thing fell down the rabbit hole and went full Mecha-H*tler.

          The problem is it's absurdly easy to poison one because they lack ANY intelligence.
          • The biggest problem with them is they have no ability to govern themselves in regards to data. I mean, look at what happens when you muck with their sources at all. They can't tell what to trust in general, to the point where Elon had them set the rules for Grok to trust right-leaning sites more, but when people fed it some more data (linked to "trusted" sites) the thing fell down the rabbit hole and went full Mecha-H*tler.

            These things don't self-learn. Grok has money to blow on constant fine-tuning (as do ChatGPT and others of that scale). That the owner of the model may train it to be evil is not a poisoning problem: no "feeding of sources" made Grok go Mecha-Hitler; the complete removal of the training not to be Mecha-Hitler is what led to that.

            The problem is it's absurdly easy to poison one because they lack ANY intelligence.

            No, it's actually quite difficult. LLM jailbreaks are hardcore science at this point. All the easy tricks have been figured out.
            You are again mistaking the deliberate poisoning of Grok with

  • I think the biggest revenue generator for any AI company is likely going to be people paying monthly fees for robots. It sounds silly, but I can totally imagine a situation where there are a LOT of households paying $200+ a month for a robot servant.

    How much would you pay for a personal chef, maid, butler, etc.? Imagine coming home from work and you have dinner on the table, the house is clean and decorated for the holiday, etc. Robots are getting friggin' darn close to being able to do that. I think a lot

    • by gtall ( 79522 )

      Unless I am disabled, why would I want a robot to live my life for me?

      • Unless I am disabled, why would I want a robot to live my life for me?

        Unless you derive great satisfaction from housework, why would you not want a robot to do it for you so you can spend your time on things you enjoy?

        • I see the world like this. I have 86400 seconds a day to spend. Many of those seconds are not negotiable. I have to eat, sleep, shit, work, etc. I can't buy time directly. So I have to place a value on every second I spend. If I can spend my money to redirect my time to things I do enjoy, why not do that? Especially if it costs me "less" than the value I've placed on my time?

          To that end I have a landscaper who cares for the lawn, a maid service that cleans the house, and a mechanic who changes my oil. Not b

  • "There is also a possibility that OpenAI's mass-market appeal becomes a turnoff for corporate customers," the article adds, "who want AI to be more boring and useful than fun and edgy."

    First of all, who has ever described OpenAI as "fun and edgy"? Grok, sure, but OpenAI? I can think of lots of adjectives for OpenAI, but "fun and edgy" are not on that list.

    Second, commercial customers are going to pick the company with what they (the customers) perceive as the better product for the price. Period. And right
    • OpenAI is about to add sex chat to ChatGPT, so perhaps this is what makes them "fun & edgy"?

      I could certainly see some corporations not wanting to have their company software or marketing material written by a sex-bot.

  • Two companies have two different business models. Wow, what an amazing story. The only thing that could possibly make this story cathartic is if we knew which company's AI wrote this slop.

    • Well, the story is that Anthropic's business model is BETTER than OpenAI's, which I'd tend to agree with, and that does seem somewhat newsworthy given that their valuations rank the opposite way.

      OpenAI seems to be emphasizing its chatbot, ChatGPT, whose revenue is limited by the number of people on the planet and how much they are willing to pay for it.

      Anthropic is emphasizing coding, now agentic coding, and API use, which are all much more token-hungry and scalable use cases.
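      As a rough illustration of why API-driven agentic use is so token-hungry: a single programmatic call with the Anthropic Python SDK (pip install anthropic) looks like the sketch below, and an agentic coding loop issues many such calls per task. The model id and prompt are illustrative only:

          import anthropic

          client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

          # One call; billing scales with tokens in and out, and an agentic
          # loop repeats this with large code contexts many times per task.
          response = client.messages.create(
              model="claude-sonnet-4-5",  # illustrative model id
              max_tokens=2048,
              messages=[{
                  "role": "user",
                  "content": "Review this diff and suggest test cases: ...",
              }],
          )
          print(response.content[0].text)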
