
Comment Re:Over the target (Score 1) 70

This is a good perspective. It is almost certainly true that the capabilities of LLMs will continue to advance.

I also think a crash is very likely. That's because startups aren't really about inventing new technology; they're about finding new business models for some technology. I think a lot of the stuff that people are applying LLMs to is likely not to be particularly value-generating. That's in part because of structural problems with accuracy and hallucination, but also because a lot of human work is more than just doing information transformations - it's about actions, accountability, decisions, and relationships with other people.

There are some genuine automation use cases where LLMs do and will continue to excel. But I suspect that the ROI people expect won't really be there, because the effort for a human to find and fix LLM errors will continue to be quite high.

Comment Re:why start now? (Score 2) 41

The threat of AI is making their content of no value at all. Join the club.

OK, then who will make the content that feeds the LLM?

Most of the content that has been even lightly copy-edited, much less reviewed for clarity or coherence, comes from content creators who are making enough money to cover hosting, have a few editorial employees, and maybe pay a little to contributors. Those may be news sites (don't think CNN, think of Ratchet and Wrench or Tom's Hardware), or they may be Substacks, YouTubers, or even influencers, but somehow they're making enough money to make it worth their time.

The current business model and its monopolism suck in too many ways to count, but there is money going to content creation, and merchants get a way to try to reach audiences.

It's really hard to see how "AI" stands up anything comparable, and that's before the bastards at OpenAI start paying for the content they stole from the rest of us.

Comment Re:Status quo has changed (Score 4, Insightful) 41

Perhaps be careful what you wish for.

The web's current advertising business model has a couple of parts. A search engine shows an ad next to organic results and directs traffic to content creators who show ads (most of which also happen to be served by the search engine company... what monopoly?).

The basic business model is that advertisers pay content producers and the platform takes a cut.

The search + display business model, together with the web making it much easier to switch between content producers (primarily magazines and newspapers), blew apart the old print media model, which was subscriptions + ads. Because of this, many publications struggled to get enough subscription revenue to keep the doors open and/or consolidated heavily. People don't want to pay for what they feel they can get for free. That's made advertising revenue paramount for most content producers, and it leads to the nasty ad farms that I also detest.

The thing is that LLM search engines require content that is reasonably fresh, and the content producers have to make money somehow or they'll stop making content. Right now, LLM search engines are showing no ads whatsoever, and their responses are based on uhhh "uncompensated" content. They're also all operating at enormous losses right now, with "awesomeness" or "AGI" as the answer for how they will make money.

To replace the existing business model, the LLM search engines need to find a way to direct payments to content producers so that these people keep making content. And that's before the content producers win back payments for their "uncompensated" content. Maybe OpenAI and Anthropic think their fancy "reasoning agents" can synthesize the content and cut out the content producers. There may be some modest opportunities to do that, but I have a hard time believing they can cut out content producers altogether - nothing I've seen suggests that LLMs can translate meatspace into digital content in any way that makes sense, much less in a way that is interesting or compelling to a human audience.

That means LLM search engines need to get advertisers to pay them directly and send the money downstream to content producers (e.g. through some form of licensing), perhaps by embedding display ads into LLM results (a la paid search). Alternately - more realistically - they need vastly larger subscription revenues to license content and still make money. That in turn requires a large proportion of the people who used to be the free users in a freemium model to become paid subscribers.

Let's make the absolutely heroic assumption that OpenAI manages to capture paid subscribers at the same rate as Netflix (~75%). Netflix's revenues are ~$40B, while Google's are ~$350B - nearly an order of magnitude difference. To get anywhere near the revenues that Google makes, the average OpenAI/Anthropic subscriber would need to pay nearly 10x what a Netflix subscriber does. I find it awfully hard to see who all those people paying $100+ a month are. 85% of Prime Video subscribers are ad-supported, and Prime Video is just an extension of Amazon's modestly profitable retail business and highly profitable cloud infrastructure business.
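
Back-of-envelope, the numbers look something like this (a rough sketch; all inputs are the approximate figures quoted above, and the Netflix price is an assumed typical plan, not an official number):

    # Back-of-envelope arithmetic on the revenue gap.
    # All inputs are the rough approximations from this comment.
    netflix_annual_revenue = 40e9    # ~$40B
    google_annual_revenue = 350e9    # ~$350B
    netflix_monthly_price = 15.50    # assumed typical plan price (USD)

    revenue_ratio = google_annual_revenue / netflix_annual_revenue
    implied_monthly_price = netflix_monthly_price * revenue_ratio

    print(f"Google/Netflix revenue ratio: {revenue_ratio:.1f}x")  # ~8.8x
    print(f"Implied price at Netflix-like subscriber counts: "
          f"${implied_monthly_price:.0f}/month")                  # ~$136/month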

And that's without DeepSeek, Llama, and everything else on HuggingFace competing with what OpenAI and Anthropic are producing.

It also means you should expect LLM search engines to start inserting ads, or even monetized placement, into responses pretty soon. But as long as the LLM response is the end of the query, it's hard to see why anyone would pay to be placed, or how paid content doesn't erode the idea that the LLM "summarized what the internet says".

I find it hard to see an economic path forward for what OpenAI seems to want to do, much less plausible revenues to justify the hype and valuation.

Comment Re:Why not fix the basics? (Score 1) 67

I'm a big fan of VoidTools Everything. As far as I can tell, Everything just builds an index of filenames and lets you search it with both simple terms and things like path-based searches and regexes. No shade on VoidTools, but it doesn't seem like a particularly difficult thing to create if you are willing to keep the use case simple and straightforward.
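
To show how simple the core idea is, here's a toy Python sketch of an Everything-style filename index (purely illustrative; the real Everything reads the NTFS index structures and is enormously faster than walking directories like this):

    import os
    import re

    def build_index(root):
        """Walk the tree once and collect full paths - a crude stand-in
        for Everything's NTFS-based filename index."""
        index = []
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                index.append(os.path.join(dirpath, name))
        return index

    def search(index, term, use_regex=False):
        """Simple substring match by default; path-based regex on request."""
        if use_regex:
            pattern = re.compile(term, re.IGNORECASE)
            return [p for p in index if pattern.search(p)]
        needle = term.lower()
        return [p for p in index if needle in p.lower()]

    # Usage:
    # idx = build_index(r"C:\Users")
    # search(idx, "report")                  # simple term
    # search(idx, r"taxes\\.*\.pdf$", True)  # path-based regex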

Every time I get a new Windows machine or an OS update, I check whether they have managed to make it possible to do what Everything does, and the answer is always no. The search in the Start menu insists on doing some bastardized combination of Bing searches, content searches, and something that mixes searching for application names with file names.

Even searches in Windows Explorer don't work in the simple way that Everything does, which I find totally baffling. Why on earth would I not be looking for a filename in a particular directory when I put something in Explorer's search box?

AFAICT it's impossible to do with Windows what VoidTools does simply and quickly. I presume that this is corporate politics playing out in my taskbar - some muckety mucks want more use cases for Bing, others want to promote their app, still others want to do something with Azure or AI or what have you.

At this point I'm quite sure none of the searches will ever make sense.

Comment Re:"Hey! How can we monetise human relationships?" (Score 1) 129

Well said. The only consolation is that we can expect this to go about as well as the Metaverse.

One thing that's become more and more clear to me is that people need a classical liberal arts education more than ever. When you read philosophers, Western and otherwise, complicated pieces of literature, try to connect the dots in history, and generally struggle to understand how reality can be so complex, confusing, and contradictory, you end up with a whole different view of learning, thinking, and knowing than any of the twits trying to make artificial "intelligence" can grasp. And a whole lot more humility about what it really means to know something.

These companies are run by ignorant children who think that knowledge can always be turned into information, learning into computation, and that reality can be fully represented by data. They could not be more wrong.

Capturing what it really means to be walking down a tree-lined street on a pleasant spring afternoon is probably better done by an excellent poem than by zettabytes of data on light, temperature, sound, pulse rates, etc.

This article by Jill Lepore puts it extremely eloquently.
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.newyorker.com%2Fmaga...

I doubt Zuckerberg has spent any real time thinking about what it means.

It's heartbreaking to see the promise of technology and engineering for helping humans live better be hijacked to turn us into meaningless digital avatars of ourselves.

Comment Re:Could be useful (Score 1) 49

Sorry to say I think what you're imagining is not what you'll get.
The assumption that firms will provide the kind of data that allows the AI agent to compare truly like products is very tenuous.
The open web already theoretically permits this, but it only really happens in a few verticals like travel (Kayak/Expedia/etc.) and, a little bit, cars (though there's really only one entity selling each physical car).

Overall there are surprisingly few vertical websites that actually compare the same item across multiple vendors. Many sites struggle with aligning similar items that may have different features or other specifications. Google itself has a long history of blocking companies that try to do this kind of vertical search. Amazon dropped most of its actual product comparison tables several years back in favor of Bayesian recommendation engines.

There are also browser plugins from entities like Capital One and others that purport to do this kind of like-for-like shopping; as with the so-called AI agents, these require publication of prices on like products (e.g. keyed by comparable UPCs or SKUs or whatever). No matter what the technology, if vendors don't want to compete directly on price, they will find ways not to do so.
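
To make the dependency concrete, here's a hypothetical Python sketch (not any vendor's actual feed or API) of like-for-like comparison keyed on UPCs - note that any offer without the shared key simply drops out of the comparison:

    from collections import defaultdict

    # Hypothetical offer feed: (vendor, upc, price). Vendors who don't want
    # direct price competition just omit or obfuscate the shared identifier.
    offers = [
        ("VendorA", "012345678905", 19.99),
        ("VendorB", "012345678905", 17.49),
        ("VendorC", None, 16.99),  # no UPC published: can't be matched
    ]

    by_upc = defaultdict(list)
    for vendor, upc, price in offers:
        if upc:  # unkeyed offers silently fall out of the comparison
            by_upc[upc].append((vendor, price))

    for upc, vendor_prices in by_upc.items():
        vendor, price = min(vendor_prices, key=lambda vp: vp[1])
        print(f"UPC {upc}: best price ${price} at {vendor}")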

On top of that, deciding when a product is similar enough to be comparable is surprisingly subtle.

Additionally, the idea that AI can somehow shortcut the shopping process - looking at options, thinking about what you value and don't value, and arriving at a decision that leaves you satisfied - is also mostly wishful thinking. Without seeing and thinking about the options, you are at great risk of being unsatisfied. In engineering terms, shopping is requirements elicitation. And we all know where skipping requirements definition leads.

Comment Has anyone tested effort to fully specify logic (Score 1) 15

Koomen's piece suggests that a "better use of AI" would be to quickly create email filters. The thing is, many widely used email clients already have the exact functionality he describes, accessible through a GUI rather than a prompt. And while his tossed-off email filter idea is a short AI prompt, it almost certainly fails to specify elements of the logic that are needed for the filtering system to work properly.

My guess is that by the time you fully specify the necessary logic, the LLM prompt will be either less efficient or similar in efficiency. To get to that full specification, you want compact and effective ways of stating the design and logic of the full system - which is exactly what modern coding languages and design tools like UML do! And if you want predictable, secure, maintainable code, my guess is that the LLM approach will be far more work.
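
To give a feel for what "fully specify" means here, this is a hypothetical Python sketch of the kind of rule structure a GUI filter editor already captures - every field is a decision that a one-line prompt leaves unstated:

    import re
    from dataclasses import dataclass

    @dataclass
    class FilterRule:
        field: str                    # "from", "subject", "body", ...
        match: str                    # "contains", "equals", or "regex"
        pattern: str
        action: str = "move"          # "move", "delete", "flag"
        target: str = ""              # destination folder for "move"
        stop_processing: bool = True  # do later rules still run on a hit?

    def apply_rules(message, rules):
        """Apply rules in order; rule order and stop_processing both
        change the outcome - two things a casual prompt never mentions."""
        for r in rules:
            value = message.get(r.field, "")
            if r.match == "contains":
                hit = r.pattern.lower() in value.lower()
            elif r.match == "equals":
                hit = value.lower() == r.pattern.lower()
            else:  # "regex"
                hit = re.search(r.pattern, value) is not None
            if hit:
                print(f"{r.action} -> {r.target or '(no folder)'}")
                if r.stop_processing:
                    return

    # Usage:
    # rules = [FilterRule("from", "contains", "newsletter@", target="News")]
    # apply_rules({"from": "newsletter@example.com"}, rules)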

This use case seems like an excellent example to do an empirical test. Has anyone heard of someone testing the effort to correctly and completely specify known functionality using LLMs versus conventional coding practices?

I think the proper conclusion from his article is not that AI is being aimed at the wrong use cases, but that in many, many use cases LLMs offer little to no benefit over existing ways of doing things. But the venture industry is still trying to sell AI as game-changing, so I'm not surprised that he's looking for a hook to make AI seem like it could actually be a better solution.

Comment Re: Benchmarks are meaningless (Score 1) 7

I'm not sure they're cheating, but I think the significance of the benchmarks is pretty overstated.
I suspect that even though the benchmarks are supposed to test something other than information retrieval and interpolation, in practice they end up being amenable to being "solved" by information retrieval and interpolation.

But a lot of what makes us go is our ability to switch out of that mode and into other modes, like pure logic, or other modes that are typically disparaged like emotional states or interactions with others.

And of course there's the wildly narrow view of reality in terms of empirical truth as defined in some dataset of choice. Reality is a lot less measurable than that.

We don't go through our days dealing in a world of known or even latent data. Rather, truth and reality have a mix of subjective and objective dimensions that we deal with in an ongoing way.

I just don't see these LLMs being able to operate like that.

I also suspect that the messy nature of reality means that this large-dataset Boltzmann distribution thing will always involve trading off an "average" kind of knowledge against more specific knowledge and perspective.

Comment Re:Yes and also No (Score 1) 163

I have had a similar experience with code generated by pre-AI low-code platforms.
I attribute that to the code generation framework adopting a relatively generic and inflexible approach to representing the problem and associated logic in code.

I'm not surprised that AI has a similar quality, since I believe that current LLMs really don't have much internal logic, regardless of what the likes of Sam Altman say.

Comment Glad Bloomberg had the guts to say it (Score 3, Interesting) 77

Most of the ideas about how tech would improve learning were hype-filled, speculative baloney. It's a testament to how hard it is to resist this kind of bs that schools are as deep into "educational" technology as they are now.

Another negative effect that Bloomberg didn't name but that I see with my daughter and the children of my friends and family is platform fragmentation. Most classes I took 30+ years ago were based on a textbook. Whether the textbook was good, bad, or indifferent, it was coherent. You read a chapter, did some exercises, and then moved to another unit. If your teacher assigned some extra source materials, s/he would photocopy them and give them to you. Those were generally supplemental to the textbook, so if you didn't understand the extra material, you knew you could go back to the textbook and try to figure it out. And a parent trying to help could read back through the textbook and refresh their memory enough to be helpful, or could at least coordinate with the teacher to figure out where the student was getting lost.

Now that textbooks are relics, teachers pick instructional material and exercises from a dizzying array of platforms. Some of these are licensed by school systems, so students have to go through some sort of SSO thing to get to them. Others are third-party platforms that require account creation, and still others are free stuff on the internet. It is very difficult for students to keep track of all of this even when the teacher is disciplined about posting/linking all the material in the primary course management system. For parents it is essentially impossible to follow what is going on.

And on top of that students are very good at figuring out how to use their computer for non-class uses, regardless of the filters on them. My daughter often emails me or my wife multiple times in a day. This is not good for her or us - she needs to just be in school and not communicating with her parents all the time.

I could not agree more with Bloomberg's idea of getting computers out of classrooms, except for very specific uses.

Comment Re:The inevitable end of the liberal arts degree (Score 1) 241

I think what you're essentially saying is that the teacher should be interesting or innovative enough to overcome the students' desire to use an internet service to shortcut their entire learning process. That's nice in theory, but pretty damn hard in reality.

Sure, plenty of teachers in secondary and post-secondary education could stand to broaden their suite of teaching techniques. But actually doing that is very hard, and that's before all the students who do the least work come and do the most complaining about not getting the grade they wanted.

If you want to take this tack, perhaps you can talk about your own experiences motivating people to learn things they want to get a shortcut to?

Otherwise, it sure sounds a lot like Monday morning quarterbacking to me.

Comment Re:All levels of society (Score 2) 241

I guess I'm just an unreconstructed intellectual, because I can't understand how anyone could fail to see that teachers/professors give assignments so that you go through a process of learning.

The learning happens inside of your own brain, and if you don't do the learning, your brain won't have either the knowledge in the assignment or the experience of having figured out how to gain and use that knowledge. Knowing how to learn makes you useful, because you can learn new things as needed.

I guess people really either don't understand or don't believe that. It's one thing to skip some of what a teacher assigns out of lack of motivation or a time conflict, but with some understanding that, as a student, you're missing something.

The irony is that the ability to learn and think through many kinds of things is more needed than ever. LLMs are able to retrieve and compile information, but they cannot think through what makes sense. They especially cannot figure out what makes sense within an organization or take responsibility for the accuracy and utility of their results.

Now more than ever it's the learning that makes people really valuable. Pretty depressing that so many people can't see that.

Comment Re:Remaining in academia (Score 1) 13

I left academia in 2003 with a "terminal masters" and I've never once regretted it. It was clear to me even 20 years ago that:

a) There was a simple capacity issue - far more candidates are being generated than there are available slots in academia. The math of post-PhD jobs and careers is pretty simple: 15% academia, a small percentage to government (getting really small these days), and the rest to industry. That's how the jobs shake out.

b) The established academics - mostly tenured - simply could not understand this and thought that a career as a scientist was academia or bust.

I saw clearly that this was a buzzsaw and that my odds of getting a job in academia with a good salary, health benefits, and stability (much less tenure) were low. Instead there's a Hunger Games setup where people linger hungrily in postdoc and adjunct roles waiting for a shot at a job that can actually sustain them.

What is dismaying but not surprising is that this view is exactly the framing of that Nature article. This says to me that academia has done nearly nothing to take a more proactive approach to getting PhD students jobs - that it continues to have no realistic idea what jobs the students in its programs will actually do later in life, much less how to help them get set up on that path.

Comment Maybe they'll do this in the Metaverse (Score 4, Insightful) 14

Pure bollocks, as the Brits say. The basic idea, as best I could decipher after peeling away many layers of nonsense, is that instead of businesses just having Facebook pages, Facebook will turn those pages into "agentic AI" that somehow makes the businesses more awesomer.

And I guess the business owners or workers will also somehow use WhatsApp or Instagram or Facebook to ask AI to surf the web for them. "Hey WhatsApp, can you post content to my Instagram account that is not exactly like the content you made for all the other businesses that are like mine, except in some other neighborhood and run by other people?" That definitely will not lead to an explosion of AI slop. Or maybe these businesses will finally truly see the future and use this agentic AI to run their businesses in the Metaverse.

My local coffee shop's or restaurant's Facebook page can all of a sudden chat with me about varieties of coffee or post even more pictures on its Instagram feed - and this interests me as a customer why?
The place still can only take orders online if there is some sort of integration with its order system, which it either has or doesn't have regardless of agentic AI. The data on the page will remain just as stale as it usually is with respect to hours, offerings, people, etc., unless the business spends more time updating its Facebook page than it does now - agentic AI can only beg for more attention from the business owner.

This is a great example of a company solving its own problems rather than any customer's problems. A large business would almost certainly not want to entrust its external communications to some AI chatbot on its Facebook page, while an AI agent can't actually change that much about the business model or fundamentals for smaller businesses.
