Enterprises Are Shunning Vendors in Favor of DIY Approach To AI, UBS Says

Established software companies hoping to ride the AI wave are facing a stiff headwind: many of their potential customers are building AI tools themselves. This do-it-yourself approach is channeling billions in spending towards cloud computing providers but leaving traditional software vendors struggling to capitalize, complicating their AI growth plans.

Cloud platforms like Microsoft Azure and Amazon Web Services are pulling in an estimated $22 billion from AI services, with Azure alone capturing $11.3 billion. Yet, software application vendors have collectively garnered only about $2 billion from selling AI products. Stripping out Microsoft's popular Copilot tools, that figure drops to a mere $450 million across all other vendors combined.

Why are companies choosing the harder path of building? Feedback gathered by UBS points to several key factors driving this "persistent DIY trend." Many business uses for AI are highly specific or narrow, making generic software unsuitable. Off-the-shelf AI products are often considered too expensive, and crucially, the essential ingredients -- powerful AI models, cloud computing access, and the company's own data -- are increasingly available directly, lessening the need for traditional software packages.
  • it could be (Score:4, Interesting)

    by FudRucker ( 866063 ) on Wednesday April 09, 2025 @06:02AM (#65291757)
    Building an AI system tailored to do a specific set of tasks would work better than a generic AI built by somebody outside the company.
    • by gweihir ( 88907 )

      Indeed. Narrow-purpose LLMs trained carefully may even work well enough to be used for real work.

  • The best utility is going to be from purpose-built tools for a specific task. Copilot is fucking useless, even at something basic like summarising email, but a trained AI that can rapidly contextually search and reference something from thousands and thousands of pages of a manual, that is highly valuable. I've lost countless days refining deconvolution parameters, building models, and then iteratively applying them to create a sharpened image only for an AI deconvolution model to outperform me at the press of a button and 20 seconds of processing.

    • Knowing when not to reinvent the wheel is a valuable skill. Another valuable skill is to work a problem, go down the blind alleys, simplify the final solution and reformulate it into a simple one-line recipe which gets to the heart of the matter.
    • Chat GPT can't tell you how many r's are in raspberry.

      Says there are three 'R's. [chatgpt.com]

        • Yeah, it still can't count the number of 'r's in strawberry. It has just now been trained on a bunch of stories about LLMs being unable to count the 'r's in strawberry.

        • Gemini 2.5 can, but that's because they taught it how to address that kind of question, not because it understands the question.
          • Incorrect. Gemini 2.5 can because it is a reasoning model. It requires no specific fine-tuning to be able to count letters in a word. That kind of job is literally the reason for making reasoning models.
            • You misunderstand the issue. It's an issue of tokenization, not reasoning. Human beings wrote code so it could count the R's. So-called "AI" has not been trained to do so with any sort of reasoning.
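              A rough illustration of the tokenization point, as a sketch assuming the tiktoken library and its cl100k_base encoding (actual token boundaries vary by model):

```python
import tiktoken  # pip install tiktoken; OpenAI's open-source tokenizer

enc = tiktoken.get_encoding("cl100k_base")
word = "strawberry"

# The model is fed token IDs, not characters, so it never "sees" individual letters.
tokens = enc.encode(word)
pieces = [enc.decode_single_token_bytes(t).decode() for t in tokens]
print(pieces)           # e.g. ['str', 'aw', 'berry'] -- chunks, not letters

# Counting letters directly is trivial in ordinary code.
print(word.count("r"))  # 3
```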

              • You misunderstand the issue.

                Nope.

                It's an issue of tokenization, not reasoning.

                Wrong.

                Human being wrote code so it could count the R's.

                Laughably wrong, lol.

                So-called "AI" has not been trained to do so with any sort of reasoning.

                It didn't need to be trained to. It's an ability it acquired by understanding the language needed to complete the task.

        • Wrong.
          Is there any way I can help you be less ignorant, and stop saying shit that is wrong?
      • by gweihir ( 88907 )

        That is because they added that data point...

        • Wrong.
          How can I help you be less ignorant, so that you stop saying wrong things?

          The advent of reasoning models has greatly expanded the computational power an LLM will use to answer its question.
          Previously, they were instructed to answer your question. They inferred the best they could, but if you asked them a question that they could not possibly compute within the context window (output tokens) and instructions given, then you were going to get a bullshit answer.

          Reasoning LLMs are trained to produce
      • Says there are three 'R's. [chatgpt.com]

        It does now, but largely because that specific problem was addressed with data after OpenAI got somewhat embarrassed about it. The point is that it definitely couldn't. In fact it made it 5 models in before that problem was fixed in o1 https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fprompt.16x.engineer%2Fbl... [prompt.16x.engineer]. But that's just one problem. It still regularly makes shit up as it goes. It still spits out basic math confidently and incorrectly. It still hallucinates references that just never happened.

        Relying on general purpose AI to do anything ot

        • It does now, but largely because that specific problem was addressed with data after OpenAI got somewhat embarrassed about it.

          Wrong.

          The point is that it definitely couldn't. In fact it made it 5 models in before that problem was fixed in o1 https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fprompt.16x.engineer%2Fbl... [prompt.16x.engineer].

          That's your hint for how it was actually fixed.

          Non-reasoning models have limited computational skills. They have to rely on single-shot inference.
          o1 is a reasoning model. In fact any reasoning model solves this task easily, as well as other things people often erroneously claim LLMs cannot solve, like balancing parentheses.

    • Chat GPT can't tell you how many r's are in raspberry.

      Yes, it can. Any reasoning model can.
      Hell, they'll gladly tell you the position and count of each letter in the word. Go ahead- give it a shot.

      General purpose AI models are a dead end.

      No, they are not.

      I've lost countless days refining deconvolution parameters, building models, and then iteratively applying them to create a sharpened image only for an AI deconvolution model to outperform me at the press of a button and 20 seconds of processing.

      You sound like yet another claimed slashdot "AI expert" who seems to be operating on many-year-old impressions of LLMs.
      What's leading to that phenomenon, do you think?
      Are you guys just full of shit, or are you too proud to keep your knowledge up to date?

      • Umm. Halfway through last year, a model said a book sounded futuristic. Asked why: "This book was written in 2024, which is the future." [Remember, this was asked halfway through 2024.] Knowing the year is the definition of a sanity-check question, the kind you get asked after a wreck, et cetera. About a year ago is not, I would say, "many-year old impressions of LLMs". As for any reasoning model counting the r's in raspberry, that is an issue even at this moment. I'll quote a very popular reasoning mod

    • Co-pilot still cannot generate a 7 pointed star.

      It will bounce between 6 and 8 pointed stars as you keep correcting it.

      This tells me that it doesn't do any sort of symmetry-demanding stuff, because all it knows of symmetry is a bunch of high-probability text about it.
      • Co-pilot still cannot generate a 7 pointed star.

        Oh, that's interesting.

        It will bounce between 6 and 8 pointed stars as you keep correcting it.

        Seems to indicate it doesn't understand the geometry of a multi-point star.

        This tells me that it doesn't do any sort of symmetry-demanding stuff, because all it knows of symmetry is a bunch of high-probability text about it.

        Na, that isn't what it tells you.
        It tells you that it hasn't, in its training data, really digested the concept of what the geometry of a multi-point star is.
        LLMs are perfectly capable of doing that, this one just isn't.

        I don't have access to co-pilot, but I do quite a bit of local LLM experimenting.

        To test this, I asked rawdog [github.com] to "draw a 7-pointed star using matplotlib".
        It's set up to use Qwen2.5-Coder-3
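        For reference, here is a minimal sketch of my own (not the script the model actually produced) of drawing a {7/3} star with matplotlib: place 7 points evenly on a circle and connect every third one.

```python
import numpy as np
import matplotlib.pyplot as plt

# A regular {7/3} star polygon: 7 evenly spaced points on the unit circle,
# visited in steps of 3 so the path closes after touching every point once.
n, step = 7, 3
angles = np.pi / 2 + 2 * np.pi * np.arange(n) / n   # first point at the top
x, y = np.cos(angles), np.sin(angles)
order = [(i * step) % n for i in range(n + 1)]       # 0, 3, 6, 2, 5, 1, 4, 0

plt.plot(x[order], y[order])
plt.gca().set_aspect("equal")
plt.axis("off")
plt.show()
```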

        • Saying it doesn't make it true, and altering the conditions of the test (hey AI, have this other specific program do it) doesn't give you a free pass to say it.
          • What in the fuck are you talking about?

            Way to prove you were talking out of your ass, lol.

            I linked you to rawdog so that you could see what it was.
            All it is, is a system that satisfies requests by telling the LLM to solve them by writing a script in Python, and then it runs that script automatically.
            matplotlib is just a library for displaying vertex and line plots.

            After that, I provided the second link for you to look at to demonstrate that the AI is indeed fully aware of angular symmetry.

            I had ho
  • LLM != AI (Score:3, Insightful)

    by gavron ( 1300111 ) on Wednesday April 09, 2025 @06:58AM (#65291811)

    AI doesn't exist.

    LLMs in 2024 are very sophisticated versions of the spellcheckers we sure loved in 2020.

    No AI.

    AGI isn't just a dream or a pipe-dream, it's a "you can't get from there to here."

    So say "AI" all you want. Just add Quantum and Blockchain and Neuro and get VC/PE funds and wast them.
    You'll be richer for it...but sadly there won't be AI.

    Gotta go, SkyNet's calling on line XAI..

    • by gweihir ( 88907 )

      Indeed. LLMs have no capability to analyze or understand anything. The newer "reasoning" models do not fix that. They just make it a bit harder to see, at the cost of a massive increase in effort.

    • You're just talking out of your ass. Confidently, yes, but still spouting horse shit.

    • The traditional definition of AI is a non-sentient system that does something an intelligent creature would do. "Artificial" refers to "fake", not "man-made". So yes, LLMs are very much AIs. So are spellcheckers. The automatic door openers at every grocery store are AI systems. Anything that demonstrates fake intelligence is AI.

      The movie industry uses a different definition for AI. We're on a tech website, not a movie website. If I told you I upgraded my OS this weekend, you wouldn't be asking me fo

    • You get your definition of AI from a science fiction movie, not from the actual technical use of the term as it has been applied to any trained model over the past 15 years. LLM is objectively AI by the industry definition of the phrase, just like AI models before it (yes, we've been doing this shit long before you started reading about it in the news, it just didn't speak to us using English phrases).

    • It's the Vocabulary Cop again. You lost, go away! Usage determines real-world vocabulary, NOT professors and NOT Vulcans, and the real world calls it "AI".

    • You're right, but it doesn't matter. The fact is, LLMs (and other types of AI models) can do many very useful things, and resemble human interactions and intelligence in many ways. So you can be pedantic about definitions, but these AI tools are actually useful.

  • by fuzzyfuzzyfungus ( 1223518 ) on Wednesday April 09, 2025 @07:17AM (#65291827) Journal
    The genre of "AI Products" is heavily tilted in the direction of low-effort garbage fires where the vendor just takes whatever they sold previously, tacks one of the more generic 'AI' operations ('summarize this'/'expand this'/'here's a RAG chatbot that misleads users about the FAQs') onto some fields they probably should offer programmatically for integration purposes but may not, then adds a stiff price increase for this 'innovation'.

    There's also a fair amount of...outright marketing...where outfits whose tools absolutely do rely on some sort of machine learning (like spam filtration or EDR anomaly detection) have been moved by some mixture of frothy hype and a lazy interest in avoiding the hard problem to talk about 'adding enhanced AI features'; but it's just adding a summarization somewhere rather than doing anything that moves the needle on what you are actually paying them for.

    There's a SOC I have to deal with that has gone hard in this direction: they were never very good at explaining why one thing got flagged and another didn't; or why a thing got flagged once when we knew that we had done the same thing on 500 endpoints. Now? They still either don't know or won't say when it comes to those important questions; but gosh if there isn't a summary of what some chatbot thinks about one of the binaries mentioned in the alert at the bottom of the email! 15-30% of them are internally inconsistent or at least somewhat nonsensical; most of the rest are just the blandest possible plagiarism of what virustotal and the top 10 or so Google hits say about the binary.

    If your 'AI product' can be replicated by me just ctrl+c-ing some text from your old product and pasting it into a chatbot if I feel like it, it's not really an 'AI Product'; it's you charging me silly amounts of money for a really low-value integration (often inferior to just an integration interface, since I can't even re-point it to a different backend if I have either performance or data governance concerns). And that's a lot of the people who thought that putting 'AI' in would somehow reignite the golden era of Sasshole hypebeast growth.
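    To put that concretely, here is a hypothetical sketch of roughly everything such a bolt-on 'AI feature' amounts to; the function, prompt, and model name are placeholders, not any vendor's actual code:

```python
from openai import OpenAI  # assumes the official openai Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_alert(alert_text: str) -> str:
    """Paste the product's existing text into a chat model and return whatever comes back."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system", "content": "Summarize this security alert for an analyst."},
            {"role": "user", "content": alert_text},
        ],
    )
    return response.choices[0].message.content
```

    Which is more or less the ctrl+c/ctrl+v workflow, wrapped in a line item on the invoice.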

    I assume that there are some outfits actually doing real work to try to fulfill actually nontrivial use cases; but the average 'AI Product' is not that; just the most half-assed exploitation of the fact that the software vendor has access to text in their program's UI or outputs and LLMs are good at delivering something that at least looks like a result if you hand it an arbitrary string and display whatever comes out. When being done as a box checking exercise it's at least an entire level beneath the people selling packaged reports for various products; who at least had to know about the structure of the data and so on to get meaningful results.
  • by Whateverthisis ( 7004192 ) on Wednesday April 09, 2025 @07:18AM (#65291829)
    We're not a traditional AI company, but we are building our own tools. Why? A few reasons, and they are all self-inflicted wounds on the part of the bigger AI companies.

    1) Commercial AI tools are expensive, either in cash cost or in their terms. We can use OpenAI for some software dev tasks very inexpensively, but their terms and conditions require us to give them the right to use said code in their training sets. Devon from Anthropic does not require this, and while it's been helpful with several coding tasks, the bill shot up quite quickly. As a result we use it for things we don't consider super proprietary, or for rote grunt-work coding, but use our internal people for our proprietary data and the more specialized things.

    2) The AI companies are shooting themselves in the foot with their labor practices. Many are starting to backtrack, saying "we want to make our coders more efficient!", but they started out saying they were going to eliminate jobs, and they are. When AI burst on the scene, many coders quickly trained up on AI because it was exciting. Now, with the bigger AI companies talking the way they are, many of those coders are looking for other places, and many companies would rather have their own proprietary system and keep their data in-house. And for the coders, they get to be the lead on the project, rather than another cog in a big AI company that might fire them.

    3) The AI companies are shooting themselves in the foot with their data harvesting practices. The whole strategy has been "grab everything fast and create tools before the law catches up!" That strategy is not playing out so well now: the law is catching up and they will likely face a reckoning. But what moves faster than the law is companies' decisions to ensure their data is proprietary and can't be stolen.

    And that gets to the core issue that many AI companies seem to have missed: what are the core bottlenecks here that would allow the AI companies to control the market? It isn't AI programmers, and they screwed the pooch on that anyway. It's data centers and processing power, which is controlled primarily by NVidia (but not forever), and it's relevant, useful, proprietary data that is well organized; it's not coders or algorithms either. A great AI tool with bad data in its training set will return useless results, but a halfway decent AI tool with well-organized, useful, relevant data will produce a valuable tool. The supposed customers of the AI companies will not hand over the data that is relevant for their products or markets, because in effect they'd be paying the AI companies while ceding control over their customers and markets. When there are now plenty of AI programmers on the market, it makes more sense to pay twice or three times as much to build it yourself if it means you control your data.

    • by ZiggyZiggyZig ( 5490070 ) on Wednesday April 09, 2025 @07:46AM (#65291857)

      I would also point out the risk of using a vendor AI with internal business data and knowledge - what guarantee do we have that it will not be used for further training and handed to the competition...

      • This is where we started with LLM vendors. We already had ML/data science teams for a few years. We didn't want any data leaking to unscrupulous actors, i.e. all of the big LLM vendors. We also don't want to pay rates that don't match up with what's being provided as a service. Since we understand how to build the same thing ourselves and provision and run them efficiently, paying OpenAI or Anthropic on a per-token basis makes no sense.

    • Sounds like arguments against cloud, and we know Slashdot LOVES clouds.

  • Did "companies" smarten up after being repeatedly screwed by Microsoft?
    Have companies finally realized that 100% dependence on a single supplier is a bad idea?
    I already said it many times: Stargate is another classic grift to suck up delicious Saudi $$$.
    There will be no payback to the early leaders. Sam will get what he wants, but the investors, not so much.
    The big surprise is that these models are available as open source for whoever wants to use them.
    The latest stuff will remain in the labs, but, purposely trained
  • Privacy (Score:4, Insightful)

    by neilo_1701D ( 2765337 ) on Wednesday April 09, 2025 @10:51AM (#65292599)

    You use someone else's AI system, all your documents get snarfed up into whatever system they are using. And your clients might not be too happy that their documents are getting snarfed up as well.

    If you roll your own, using publicly available models but running in your own datacenter (or on a rather capable Mac Mini sitting on a desk), you get 90% of the performance and 100% of the data privacy. Plus, a lot of companies with their own datacenters have overcapacity anyway, meaning they can use existing resources to run AI without the massive costs involved.
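    As a sketch of what rolling your own can look like at the small end -- assuming a local server such as Ollama exposing its OpenAI-compatible endpoint; the URL and model name below are examples, not a recommendation:

```python
from openai import OpenAI

# Same client, pointed at a locally hosted open-weight model instead of a vendor API.
# Documents never leave the machine; the endpoint below is Ollama's default.
client = OpenAI(base_url="https://ancillary-proxy.atarimworker.io?url=http%3A%2F%2Flocalhost%3A11434%2Fv1", api_key="not-needed-locally")

response = client.chat.completions.create(
    model="llama3.1:8b",  # whichever open-weight model was pulled locally
    messages=[{"role": "user", "content": "Summarize the attached contract clause."}],
)
print(response.choices[0].message.content)
```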

  • Is there a reference for this? All searches just return this article.
  • when they mean "forced-down-your-throat copilot tools"?
    • It's the only LLM I use because it's conveniently been installed on my system without my permission.

      Food for thought.
  • One is that most "AI software" has been just an existing solution with a system prompt. Hardly innovative, but a fair number have been doing it. Software that uses AI to achieve a feature is more engaging, but such software is generally not called "AI software", because it existed before AI and just added some AI to it.

    The other thing, as evidenced by my own organization, is that a key executive made the availability of an LLM to his employees part of his bonus evaluation.

    So they did a trial and found
