Comment Re:Very incomplete analysis (Score 1) 59

Well said.

Most use cases of LLMs/GenAI are actually just process automation by another name. The pivot point of nearly all automation efforts is process control, especially in the handoffs between systems or between systems and humans. Because humans are so adept at handling exceptions, most automation projects I've seen deal with only the most common exceptions. Well-run projects think carefully about how to structure handoffs such that the exceptions don't eat up all the labor saved by automating the base case (which is easy to do). Poorly-run projects just focus on throughput of the base case, and leave exception handling for later, resulting in extremely cumbersome situations that either degrade quality or require nearly as much labor to handle as the pre-automation state. I think many enterprises are about to get a crash course in this, which will dramatically affect how their labor picture looks going forward.

Another area where the job-loss analysis is pretty thin: the jobs linked to the so-called AI-exposed jobs (e.g., upstream and downstream in the process) are implicitly assumed to stay the same. This is almost certainly false.

One example I know well from healthcare is clinical documentation and payment. A bazillion AI companies claim that applying AI to clinical documentation "allows healthcare providers to focus more on clinical tasks". That's mostly marketing fluff, supported by a few trial studies; most of the labor-saving claim is what people hope for, or think should happen, rather than anything demonstrated.

What really happens is that when AI documents something, the provider can code for those services and try to get paid more. That's the quickest way to get an AI rollout to pay for itself. But insurers don't just sit still; they adjust their payment rules and systems to deal with this, and now somebody on the provider side has to deal with THAT. The system has changed, but often toward more complexity rather than less effort.

I've never seen any of these job loss models try to account for that phenomenon.

Comment Re:Current LLM's (Score 1) 211

Yes, exactly.

If you want to automate something, the automation not only has to be faster per unit of task or output, it also has to make up for the extra time spent checking or re-doing work when the automated way fails. To pull that off, you usually need to constrain the problem to the parts where the automated approach will succeed nearly always and where failures can be identified and mitigated quickly. That requires building a bunch of process-oversight machinery, which in turn requires a big investment in instrumenting the current and future process to identify exceptions and handle them correctly before failures move downstream and become much harder to address.
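A back-of-the-envelope version of that trade-off, as a sketch (all the rates and times below are made-up illustrative numbers, not measurements):

# Rough break-even model for automating a task. The automated path only
# wins if its expected cost, including checking and fixing failures,
# beats doing the task by hand. All numbers are illustrative.

def expected_automated_minutes(t_auto, t_check, p_fail, t_fix):
    # Every run costs machine time plus a human check; failures
    # additionally cost the time to redo or repair the output.
    return t_auto + t_check + p_fail * t_fix

t_manual = 10.0                 # minutes to just do the task yourself
cost = expected_automated_minutes(
    t_auto=1.0,                 # machine time per task
    t_check=3.0,                # human review per task
    p_fail=0.2,                 # how often the output is wrong
    t_fix=15.0,                 # redo/repair time when it is wrong
)
print(cost, "vs", t_manual)     # 7.0 vs 10.0 -- automation wins, barely
# At p_fail=0.5 the expected cost is 11.5 and automation loses outright.

Notice how fast the margin evaporates as the failure rate climbs; that's the whole game.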
Additionally, work that has a lot of unpredictability, that requires persuasion or consensus (such as defining what problem to solve), or where there's no pre-defined correct future state, only a series of choices and murky outcomes, is just hard to automate, period.

LLMs not only have regular failures, they have highly unpredictable failures. Yet they're being sold as though they can automate anything.

The reason the "agentic OS" stuff will fail is the same reason we didn't automate away our daily work using VBScript - the automation ends up clunkier and more annoying than just doing the steps on our own.

Comment Re: Clippy on steroids (Score 2) 26

No kidding. I don't know if you've ever tried using Explorer to search files in a directory for a filename, but it's unusable.

Everything from Void Tools does it in milliseconds. It does exactly what you'd expect - builds a list of filenames and searches them.

AFAICT there is nothing you can do in Explorer to make it only search the filenames - apparently it's necessary to search the web, the registry and everything else to find files by filename.
Can't wait for the agentic AI solution to ask Copilot what to do as well...

Comment Re:Damning (Score 2) 65

So true. Sadly, these days the market can stay irrational almost indefinitely. The factors that helped constrain this irrationality, like actual government oversight and discipline of equities and debt, willingness to let large corporations fail, and investor discipline in discerning real growth from financial games, have all been eroded.

I have no idea when valuations will fall, and I wouldn't want to be betting on a market that has no basis in fundamentals at all.

Comment Re:Fixed that for ya (Score 1) 98

HR often has an Orwellian aspect to their communication. They say things in a way that sounds like they are there to help you, but they are really there to gatekeep. Not everyone can have the salary, promotion, office, etc that they want, and HR is there to control those things, and minimize the company's legal problems in doing so. The double-speak and gatekeeping make them incredibly frustrating to deal with.

On top of that they also know a lot of private info, from salary to disciplinary actions to disputes they got involved in, so they're often in a position of quite a lot of leverage.

Comment Tyler Cowen is an AI fanboi (Score 5, Insightful) 69

He is mostly writing for attention *now*, nothing to do with immortality.

The whole premise is ridiculous, like SEO slop dressed up as something intellectual. Odds are high that OpenAI's "authoritativeness" is just Google PageRank, which means it will shift as traffic shifts, or whenever one of them changes the rules.

If you want to write for immortality, figure out something to say that is meaningful across human lifetimes.

That's pretty hard to do, which is why only a few works become and stay "classics". The way to even have a shot is not internet clout-seeking; it is genuine thought and creativity.

Comment Re:Radiologists spend time on patient communicatio (Score 2) 42

A lot is hanging on what's meant by "patient communication".

I worked for almost 10 years with pathologists, and there is a whole lot that goes into communication. A gastroenterologist friend once said to me, "pathologists are the only ones in a hospital who make an actual diagnosis". When a pathologist or radiologist writes their report, they are laying down key markers for how a patient will be managed, as much as or more than strictly documenting a diagnostic finding. That might mean creating a defensible case (i.e. so that insurance will pay for it) for a surgeon or GI to do a procedure they think would be beneficial, or, even more complicated, providing a reason not to do a procedure now while leaving the treating physician's options open for later. It's often more bootstrap-type thinking and meshing of incentive structures than looking at an image and writing a cut-and-dried set of words.
And then there are the questions from physicians, insurance, or patients about what the meaning of a particular written finding is, whether more imaging studies are needed, etc. Each of these handoffs requires attention and communication to avoid confusion and extra work.
And that's before doing insurance-related charting or paperwork, organizational bureaucracy, etc.

I'm not surprised that only a third of their time goes to doing actual reads.

Nor am I even slightly surprised that AI has not replaced radiologists. What's truly remarkable is the number of times CS people can convince themselves that one particularly data-intensive part of someone's work, the part that lends itself to the algorithmic approach du jour, is in fact the whole job.

Comment At least one interesting use case (Score 1) 66

It can be used for local messaging when people don't want to connect to the internet or be traced - protests or events of that nature.

I can imagine the routing considerations would get very complex with large numbers of people. My intuition is that discovery of routes to a particular user would be hard with non-persistent "server" nodes and could result in a lot of broadcast traffic. I presume he's thought about that, but I have to guess that the real world behavior will be hard to predict.
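To make the broadcast concern concrete, here's a toy flood-based route discovery over a random mesh (the node count, degree, and TTL are hypothetical, and real protocols cache and dedupe far more aggressively than this):

import random

# Toy flood-based route discovery: each node that hasn't yet seen a
# request rebroadcasts it to its neighbors until the TTL runs out.
# We count broadcasts to show how the traffic scales.

def flood_discovery(adj, source, target, ttl):
    seen = {source}
    frontier = [source]
    transmissions = 0
    for _ in range(ttl):
        next_frontier = []
        for node in frontier:
            transmissions += 1          # one radio broadcast per relaying node
            for nbr in adj[node]:
                if nbr not in seen:
                    seen.add(nbr)
                    next_frontier.append(nbr)
        if target in seen:
            return transmissions, True
        frontier = next_frontier
    return transmissions, False

random.seed(1)
n, degree = 200, 4  # made-up mesh size and radio neighborhood
adj = {i: random.sample([j for j in range(n) if j != i], degree) for i in range(n)}
print(flood_discovery(adj, source=0, target=n - 1, ttl=8))

Even in this small, static topology a single lookup fans out across much of the network; with churn and non-persistent nodes, that cost gets paid over and over.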

Another interesting thing to think about is very low-energy devices built only for this kind of use. The low energy requirements could enable hardware distinct from a phone - basically a pager for when you don't want to be on, or traceable on, the internet.

Comment Re:Over the target (Score 1) 73

This is a good perspective. It is almost certainly true that the capabilities of LLMs will continue to advance.

I also think a crash is very likely. That's because startups aren't really about inventing new technology; they are about finding new business models for some technology. A lot of the stuff people are applying LLMs to is likely not to be particularly value-generating. That's partly because of structural problems with accuracy and hallucination, but also because a lot of human work is more than just doing information transformations - it's about actions, accountability, decisions, and relationships with other people.

There are some genuine automation use cases where LLMs do and will continue to excel. But I suspect the ROI people expect won't really be there, because the effort for a human to find and fix LLM errors will continue to be quite high.

Comment Re:why start now? (Score 2) 43

The threat of AI is making their content of no value at all. Join the club.

OK, then who will make the content that feeds the LLM?

Most of the content that has been even lightly copy-edited, much less reviewed for clarity or coherence, comes from content creators who are making enough money to cover hosting, keep a few editorial employees, and maybe pay a little to contributors. Those may be news sites (don't think CNN, think Ratchet and Wrench or Tom's Hardware), Substacks, YouTubers, or even influencers, but somehow they're making enough money to make it worth their time.

The current business model and monopolism suck in too many ways to count, but there is money going to content creation, and the model also allows merchants to try to reach audiences.

It's really hard to see how "AI" stands up anything comparable, and that's before the bastards at OpenAI start paying for the content they stole from the rest of us.

Comment Re:Status quo has changed (Score 4, Insightful) 43

Perhaps be careful what you wish for.

The web's current advertising business model has a couple parts. A search engine shows an ad next to organic results and directs traffic to content creators who show ads (most of which happen to also be offered by the search engine company... what monopoly?).

The basic business model is that advertisers pay content producers and the platform takes a cut.

The search + display business model, together with the web making it much easier to switch between content producers, blew apart the old print media model (primarily magazines and newspapers), which was subscriptions + ads. Because of this, many publications struggled to get enough subscription revenue to keep the doors open and/or consolidated heavily. People don't want to pay for what they feel they can get for free. That has made advertising revenue paramount for most content producers, and it leads to the nasty ad farms that I also detest.

The thing is that LLM search engines require reasonably fresh content, and the content producers have to make money somehow or they'll stop making it. Right now, LLM search engines show no ads whatsoever, and their responses are built on, uhhh, "uncompensated" content. They're also all operating at enormous losses, with "awesomeness" or "AGI" as the answer for how they will make money.

To replace the existing business model, the LLM search engines need to find a way to direct payments to content producers so that these people keep making content. And that's before the content producers win back payments for their "uncompensated" content. Maybe OpenAI and Claude think their fancy "reasoning agents" can synthesize the content and cut out the content producers. There may be some modest opportunities to do that, but I have a hard time believing they can cut out content producers altogether - nothing I've seen suggests that LLMs can translate meatspace into digital content in any way that makes sense, much less is interesting or compelling to a human audience.

That means LLM search engines either need to get advertisers to pay them directly and send the money downstream to content producers (e.g. through some form of licensing), perhaps by embedding display ads into LLM results (a la paid search); or - more realistically - they need vastly larger subscription revenues to license content and still make money. That in turn requires a large proportion of the people who used to be the free users in a freemium model to become paid subscribers.

Let's make the absolutely heroic assumption that OpenAI manages to capture paid subscribers at the same rate as Netflix (~75%). Netflix's revenues are ~$40B, while Google's are $350B - an order of magnitude difference. To get anywhere near the revenues that Google makes, the average OpenAI/Claude subscriber would need to pay some 10x what a Netflix subscriber does. I find it awfully hard to see who all those people paying $100+ a month are. 85% of Prime Video subscribers are ad-supported, and Prime Video is just an extension of Amazon's modestly profitable sales business and highly profitable cloud infrastructure business.
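The arithmetic behind that 10x, using the round revenue numbers above (the ~300M subscriber count is my own rough figure for Netflix):

# Per-subscriber arithmetic with round numbers. Revenue figures are the
# ones cited above; the 300M subscriber count is an approximation.
netflix_revenue = 40e9
google_revenue = 350e9
subscribers = 300e6

netflix_monthly = netflix_revenue / subscribers / 12    # ~$11 per month
required_monthly = google_revenue / subscribers / 12    # ~$97 per month
print(round(netflix_monthly), round(required_monthly))  # 11 97

So matching Google's top line at a Netflix-scale subscriber base lands right at that $100-a-month figure.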

And that's without DeepSeek, LLaMa and everything else on HuggingFace competing with what OpenAI and Claude are producing.

It also means you should expect LLM search engines to start inserting ads, or even monetizing placement within responses, pretty soon. But as long as the LLM response is the end of the query, it's hard to see why anyone would pay to be placed, or how paid content doesn't erode the idea that the LLM "summarized what the internet says".

I find it hard to see an economic path forward for what OpenAI seems to want to do, much less plausible revenues to justify the hype and valuation.

Comment Re:Why not fix the basics? (Score 1) 67

I'm a big fan of VoidTools Everything. As far as I can tell, Everything just builds an index of filenames and lets you search it, with both simple search terms and things like path-based search and regex. No shade on VoidTools, but it doesn't seem like a particularly difficult thing to create if you are willing to keep the use case simple and straightforward.
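The core idea fits in a few lines. A minimal sketch under that assumption (Everything itself watches the NTFS change journal and is far faster and more polished; this toy version just walks the tree once):

import os
import re

# Minimal filename index in the spirit of Everything: walk the tree
# once, keep full paths in memory, then answer substring or regex
# queries against that in-memory list.

def build_index(root):
    index = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            index.append(os.path.join(dirpath, name))
    return index

def search(index, pattern, use_regex=False):
    if use_regex:
        rx = re.compile(pattern, re.IGNORECASE)
        return [p for p in index if rx.search(p)]
    needle = pattern.lower()
    return [p for p in index if needle in p.lower()]

index = build_index(os.path.expanduser("~"))
print(len(index), "files indexed")
print(search(index, r"\.log$", use_regex=True)[:5])

Searching an in-memory list like this is why results come back in milliseconds: there's no web, no registry, no content indexing, just filenames.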

Every time I get a new Windows machine or an OS update, I check whether they have managed to make it possible to do what Everything does, and the answer is always no. The search in the Start menu insists on doing some bastardized combination of Bing searches, content searches, and something that mixes searching application names with file names.

Even searches in Windows Explorer don't work in the simple way Everything does, which I find totally baffling. Why on earth would I not be looking for a filename in a particular directory when I type something into Explorer's search box?

AFAICT it's impossible to make Windows do what VoidTools does, simply and quickly. I presume this is corporate politics playing out in my taskbar - some muckety-mucks want more use cases for Bing, others want to promote their app, still others want to do something with Azure or AI or what have you.

At this point I'm quite sure none of the searches will ever make sense.
