
Comment Damn (Score 1) 61

My latest vaccine shots had the 6G upgrade, to take advantage of the higher-speed web access when the networks upgrade, but if they're selling those frequencies to high-power carriers, then I won't be able to walk into any area that handles AT&T or Verizon. :P

Seriously, this will totally wreck the 6G/WiFi6 specification, utterly ruin the planned 7G/WiFi7 update, and cause no end of problems for those already using WiFi6 equipment - basically, people with working gear may well find their hardware simply no longer operates, which is really NOT what any vendor or customer wants to hear. Vendors with existing gear will need to do a recall, which won't be popular, and the replacement products simply aren't going to do even a fraction as well as the customers were promised - which, again, won't go down well. And it won't be the politicians who get the blame, despite it being the politicians who are at fault.

Comment Re:Over the target (Score 1) 73

This is a good perspective. It is almost certainly true that the capabilities of LLMs will continue to advance.

I also think a crash is very likely. That's because startups aren't really about inventing new technology, they are about finding new business models for some technology. I think a lot of the stuff that people are applying LLMs to is likely not to be particularly value-generating. That's in part because of structural problems with accuracy and hallucination, but also because a lot of human work is more than just doing information transformations - it's about actions, accountability, decisions, and relationships with other people.

There are some genuine automation use cases where LLMs do and will continue to excel. But I suspect that the ROI people expect won't really be there, because the effort for a human to find and fix LLM errors will continue to be quite high.

Comment Re:why start now? (Score 2) 43

The threat of AI is making their content of no value at all. Join the club.

OK, then who will make the content that feeds the LLM?

Most of the content that has been even lightly copy-edited, much less reviewed for clarity or coherence, comes from content creators who are making enough money to cover hosting, have a few editorial employees, and maybe pay a little to contributors. Those may be news sites (don't think CNN, think of Ratchet and Wrench or Tom's Hardware) or they may be Substacks, or YouTubers, or even influencers, but somehow they're making enough money to make it worth their time.

The current business model and monopolism sucks in too many ways to count, but there is money going to content creation, and it also allows merchants to try to reach audiences.

It's really hard to see how "AI" stands up anything comparable, and that's before the bastards at OpenAI start paying for the content they stole from the rest of us.

Comment Re:Status quo has changed (Score 4, Insightful) 43

Perhaps be careful what you wish for.

The web's current advertising business model has a couple parts. A search engine shows an ad next to organic results and directs traffic to content creators who show ads (most of which happen to also be offered by the search engine company... what monopoly?).

The basic business model is that advertisers pay content producers and the platform takes a cut.

The search + display business model, together with the web making it much easier to switch between content producers (primarily magazines and newspapers), blew apart the old print media model, which was subscriptions + ads. Because of this, many publications struggled to get enough subscription revenue to keep the doors open and/or greatly consolidated. People don't want to pay for what they feel they can get for free. That's made advertising revenue paramount for most content producers, and leads to the nasty ad farms that I also detest.

The thing is that LLM search engines require content that is reasonably fresh, and the content producers have to make money somehow or they'll stop making content. Right now, LLM search engines are showing no ads whatsoever, and their responses are based on uhhh "uncompensated" content. They're also all operating at enormous losses right now, with "awesomeness" or "AGI" as the answer for how they will make money.

To replace the existing business model, the LLM search engines need to find a way to direct payments to content producers so that these people keep making content. And that's before the content producers win back payments for their "uncompensated" content. Maybe OpenAI and Claude think their fancy "reasoning agents" can synthesize the content and cut out the content producers. There may be some modest opportunities to do that, but I have a hard time believing they can cut out content producers altogether - nothing I've seen suggests that LLMs can translate meatspace into digital content in any way that makes sense, much less is interesting or compelling to a human audience.

That means that LLM search engines either need to get the advertisers to pay them directly and send the money downstream to content producers (e.g. through some form of licensing). Maybe they embed the display ads into the LLM results (a la paid search). Alternately - more realistically - they need vastly larger subscription revenues to license content and still make money. That in turn requires a large proportion of the people who used to be the free users in a freemium model to become paid subscribers.

Let's make the absolutely heroic assumption that OpenAI manages to capture paid subscribers at the same rate as Netflix (~75%). Netflix's revenues are ~$40B, while Google's are $350B - an order of magnitude difference. To get anywhere near the revenues that Google makes, the average OpenAI/Claude subscriber would need to pay some 10x what a Netflix subscriber does. I find it awfully hard to see who all those people paying $100+ a month are. 85% of Prime Video subscribers are ad-supported, and Prime Video is just an extension of Amazon's modestly profitable sales business and highly profitable cloud infrastructure business.
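The back-of-the-envelope math above can be checked directly. All figures here are the rough public ballpark numbers quoted in the comment, not exact financials:

```python
# Rough ballpark figures from the argument above (assumptions, not exact numbers).
netflix_revenue = 40e9       # ~$40B annual Netflix revenue
google_revenue = 350e9       # ~$350B annual Google revenue
netflix_monthly_arpu = 12    # rough average revenue per Netflix subscriber per month

# Revenue ratio: roughly an order of magnitude.
ratio = google_revenue / netflix_revenue  # 8.75

# If an LLM vendor captured subscribers at the same rate as Netflix,
# each subscriber would need to pay roughly this much to match Google:
required_monthly = netflix_monthly_arpu * ratio
print(f"revenue ratio: {ratio:.2f}x, required ARPU: ${required_monthly:.0f}/month")
```

Which lands at roughly $105/month per subscriber - consistent with the "$100+ a month" figure above.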

And that's without DeepSeek, LLaMa and everything else on HuggingFace competing with what OpenAI and Claude are producing.

It also means you should expect LLM search engines to start inserting ads, or even monetizing placement in responses, pretty soon. But as long as the LLM response is the end of the query, it's hard to see how anyone wants to pay to be placed, or how paid content doesn't erode the idea that the LLM "summarized what the internet says".

I find it hard to see an economic path forward for what OpenAI seems to want to do, much less plausible revenues to justify the hype and valuation.

Comment Depends (Score 1) 44

On exactly what the detector is capable of detecting. If they're looking, at any point, for radio waves, then I'd start there. Do the radio waves correspond to the absorption (and therefore emission) band for any molecule or chemical bond that is likely to arise in the ice?

This is so basic that I'm thinking that if this were remotely plausible, they'd have already thought of it. This is too junior to miss. Ergo, either the detector isn't looking for radio waves (which seems the most likely, given it's a particle detector, not a radio telescope), or nothing obvious exists at that frequency (which is only a meaningful answer if, indeed, it is a radio telescope).

So, the question is, what precisely does the detector actually detect?

Comment Re:Don't forget Starlink (Score 1) 109

Back in the days of the Rainbow series, the Orange Book required that data marked as secure could not be transferred to any location or user who was (a) not authorised to access it or (b) did not have the security permissions, regardless of any other authorisation. There was an additional protocol listed in those manuals - I don't know if it was ever applied - which stated that data could not be transferred to any device or any network that did not enforce the same security rules or was not authorised to access that data.

Regardless, in more modern times, these protocols were all abolished.

Had they not been, and had all protocols been put in place and enforced, then you could install all the unsecured connections and unsecured servers you liked, without limit. It wouldn't have made the slightest difference to actual security, because the full set of protocols would have required the system as a whole to not place sensitive data on such systems.

After the Clinton email server scandal, the Manning leaks, and the Snowden leaks, I'm astonished this wasn't done. I am dubious the Clinton scandal was actually anything like as bad as the claimants said, but it doesn't really matter. If these protocols were all in place, then it would be absolutely impossible for secure data to be transferred to unsecured devices, and absolutely impossible for secure data to be copied to machines that had no "need to know", regardless of any passwords obtained and any clearance obtained.
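The Orange Book rules described above are often formalised as the Bell-LaPadula mandatory access control model: data may only flow to a destination whose label dominates the data's label, regardless of passwords or clearance obtained. A minimal sketch of that check, with illustrative names and levels (not from any real implementation):

```python
from dataclasses import dataclass

# Hierarchical clearance levels, low to high (illustrative ordering).
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP_SECRET": 3}

@dataclass(frozen=True)
class Label:
    level: str                  # hierarchical classification
    compartments: frozenset     # need-to-know categories

def can_transfer(data: Label, destination: Label) -> bool:
    """Mandatory check: data may only flow to a destination that dominates
    its label - at least as high a level, and holding every compartment
    (need-to-know) - regardless of any password or other authorisation."""
    return (LEVELS[destination.level] >= LEVELS[data.level]
            and data.compartments <= destination.compartments)

secret_doc = Label("SECRET", frozenset({"CRYPTO"}))
phone = Label("UNCLASSIFIED", frozenset())            # unsecured device
cleared_host = Label("TOP_SECRET", frozenset({"CRYPTO"}))

print(can_transfer(secret_doc, phone))         # False: data never reaches the device
print(can_transfer(secret_doc, cleared_host))  # True: destination label dominates
```

Enforced system-wide, a check like this is exactly what makes "secure data on an unsecured phone" impossible by construction rather than by policy memo.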

If people are using unsecured phones, unsecured protocols, unsecured satellite links, etc, it is not because we don't know how to enforce good policy - the documents on how to do this are old and could do with being updated, but they do in fact exist, as does software capable of enforcing those rules. It is because a choice has been made, by some idiot or other, to treat the risks and consequences as perfectly reasonable costs of doing business with companies like Microsoft, because companies like Microsoft simply aren't capable of producing systems that achieve that kind of security, and everyone knows it.

Comment Re:Honestly this is small potatoes (Score 1) 109

In and of itself, that's actually the worrying part.

In the 1930s, and even the first few years of the 1940s, a lot of normal (and relatively sane) people agreed completely with what the fascists were doing. In Marina Abramović's Rhythm 0 "endurance art" piece, normal (and relatively sane) people openly abused their right to do whatever they liked to her, at least up to the point where one tried to kill her with a gun that had been supplied as part of the installation, at which point the people realised they may have gone a little OTT.

Normal (and relatively sane) people will agree with, and support, all kinds of things most societies would regard as utterly evil, so long as (relative to some aspirational ideal) the evil is incremental, with each step in itself banal.

There are various (now-disputed) psychology experiments that attempted to study this phenomenon, but regardless of the credibility of those experiments, there's never really been much of an effort by any society to actually stop, think, and consider the possibility that maybe they're a little too willing to agree to stuff that maybe they shouldn't. People are very keen to assume that it's only other people who can fall into that trap.

Normal and sane is, sadly, as Rhythm 0 showed extremely well, not as impressive as we'd all like to think it is. The veneer of civilisation is beautiful to behold, but runs awfully thin and chips easily. Normal and sane adults are not as distant from chimpanzees as our five million years of divergence would encourage us to think. Which is rather worrying, when you get right down to it.

Comment Re:Honestly this is small potatoes (Score 0) 109

Pretty much agree. I'd also add that we don't have a clear impression of who actually did the supposed rioting; the media were too busy being shot by the National Guard to get an overly clear impression.

(We know during the BLM "riots" that a suspiciously large number of the "rioters" were later identified as white nationalists, and we know that in the British police spy scandal the spies often advocated or led actions more violent than those espoused by the groups they were in, so I'd be wary of making any assumptions in the heat of the moment as to exactly who did what, until that is clearly and definitively known. If this had been a popular uprising, I would not have expected such small-scale disturbances - the race riots of the 60s, the Rodney King riots, the British riots in Brixton or Toxteth in the 80s: these weren't the minor events we're seeing in California, which are on a very, very much smaller scale than the protest marches that have been taking place.)

This is different from the Jan 6th attempted coup, when those involved made it very clear they were indeed involved, and were very clearly affiliated with domestic terrorist groups such as the Proud Boys. Let's get some clear answers as to exactly what scale was involved and who was involved, because, yes, this has a VERY Reichstag-fire vibe to it.

Comment Re:Honestly this is small potatoes (Score 2) 109

I would have to agree. There is no obvious end-goal of developing an America that is favourable to the global economy, to Americans, or even to himself, unless we assume that he meant what he said about ending elections and becoming a national dictator. The actions favour destabilisation, fragmentation, and the furthering of the goals of anyone with the power to become a global dictator.

Exactly who is pulling the strings is, I think, not quite so important. The Chechen leader has made it clear he sees himself as a future leader of the Russian Federation, and he wouldn't be the first tyrant to try and seize absolute power in the last few years. (Remember Wagner?) We can assume that there are plenty lurking in the shadows, guiding things subtly in the hopes that Putin will slip.

Comment Re:Good but insufficient (Score 1) 71

The spec it came up with includes: which specific material is used for which specific component, additional components to handle cases where chemically or thermally incompatible materials are in proximity, what temperature regulation is needed where (and how), placement of sensors, pressure restrictions, details of computer network security, the design of the computers, network protocols, network topology, and design modifications needed to pre-existing designs - it's impressively detailed.

I've actually uploaded what it's produced to GitHub, so if this most glorious piece of what is likely engineering fiction intrigues you, I would be happy to provide a link.

Comment Good but insufficient (Score 1) 71

I've mentioned this before, but I had Gemini, ChatGPT, and Claude jointly design me an aircraft, along with its engines. The sheer intricacy and complexity of the problem is such that it can take engineers years to get to what all three AIs agree is a good design. Grok took a look at as much as it could, before running out of space, and agreed it was sound.

Basically, I gave an initial starting point (a historic aircraft) and had each in turn fix issues with the previous version, until all three agreed on correctness.
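The round-robin process described above is essentially a fixed-point iteration: keep cycling the design through reviewers until a full round produces no changes. A toy sketch of that loop, where the reviewer functions are purely hypothetical stand-ins for the models, not real APIs:

```python
def iterate_until_agreement(design, reviewers, max_rounds=20):
    """Pass the design to each reviewer in turn. A reviewer either returns
    a revised design or None to signal approval. Stop when a full round
    produces no changes - i.e. all reviewers agree on correctness."""
    for _ in range(max_rounds):
        changed = False
        for review in reviewers:
            revision = review(design)
            if revision is not None:
                design = revision
                changed = True
        if not changed:
            return design  # fixed point: everyone approved
    raise RuntimeError("no consensus within max_rounds")

# Toy stand-ins for the three models: each "fixes" a numeric design by
# nudging it toward 10, and approves (returns None) once it's there.
def make_reviewer(step):
    def review(d):
        return min(d + step, 10) if d < 10 else None
    return review

final = iterate_until_agreement(0, [make_reviewer(3), make_reviewer(2), make_reviewer(5)])
print(final)  # 10
```

The catch, of course, is the one raised below: agreement among reviewers is a fixed point, not a proof of correctness.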

This makes it a perfectly reasonable sanity check. If an engineer who knows what they're doing looks at the design and spots a problem, then AI has an intrinsic problem with complex problems, even when the complexity was iteratively produced by the AI itself.

Comment Re:Bollocks (Score 4, Interesting) 206

Natural NNs appear to use recursive methods.

What you "see" is not what your eyes observe, but rather a reconstruction assembled entirely from memories that are triggered by what your eyes observe, which is why the reconstructions often have blind spots.

Time seeming to slow down (even though experiments show that it doesn't alter response times), daydreaming, remembering, predicting, the brain's search for continuity, the episodic rather than snapshot nature of these processes, and the lack of any perceived gap during sleep - all of this is suggestive of some sort of recursion, where the output is used as a component of the next input and continuity is key.
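The "output becomes a component of the next input" idea can be sketched as a minimal recurrent update. This is purely illustrative - a toy recurrence, not a model of the brain - where the `mix` weight is an arbitrary assumption:

```python
def recurrent_step(prev_state, observation, mix=0.8):
    """One step of a toy recurrence: the new percept is mostly the previous
    reconstruction (continuity) plus a small contribution from fresh input."""
    return mix * prev_state + (1 - mix) * observation

# Feed a constant observation; the state approaches it smoothly,
# never jumping - episodic continuity rather than a series of snapshots.
state = 0.0
for obs in [1.0, 1.0, 1.0, 1.0, 1.0]:
    state = recurrent_step(state, obs)
print(round(state, 4))  # 0.6723
```

The key property is that the trajectory is continuous: each output constrains the next, which is exactly the trait the comment argues is absent from feed-forward AI.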

We know something of the manner of reconstruction - there are some excellent, if rather old, documentary series, one by James Burke and another by David Eagleman, that give elementary introductions to how these reconstructions operate and the physics that make such reconstructions necessary.

It's very safe to assume that neuroscientists would not regard these as anything better than introductions, but they are useful for looking for traits we know the brain exhibits (and why) that are wholly absent from AI.
