Comment Re:Sounds like an export tax. (Score 4, Insightful) 35

It's quaint that you think the United States is still a republic. It's a monarchy, and Trump's handlers are likely already moving to ensure that when Vance succeeds him, the Executive branch and Congress remain filled, through the use of naked force if necessary, with Republican paper tigers to complement the paper tigers in the Supreme Court, and the whole thing settles into the oligarchy the Framers always really intended. The military will largely be used to recreate American hemispheric hegemony. The National Guard and ICE will be used as foot soldiers within the US to "secure" elections.

The morons that elected that diseased, wicked and demented man have destroyed whatever the hell America was. As a Canadian, I can only hope we can withstand this hemispheric dominance and the raiding of our natural resources to feed the perverse desires of the child molesters, rapists, racists and psychopaths that have already taken control of the US.

Doubtless, I will be downvoted by the remaining MAGA crowd here. You know, the guys that pretended they refused to vote Democrat because Bernie wasn't made leader, but are to a man a pack of Brown Shirts eagerly awaiting the time when they imagine they can take part in the defenestration of American society.

Comment Re:Linus is right, but this is really not news (Score 1) 79

Win9x and Win2k (and the other NT descendants) are fundamentally different operating systems. In general, NT had a much more robust kernel, so system panics were, and remain, mainly hardware issues or, particularly in the old days, dodgy drivers (which is just another form of hardware issue). I've seen plenty of panics on *nix systems and Windows systems, and I'd say probably 90-95% were hardware failures, mainly RAM, but on a few occasions something wrong with the CPU itself or with other critical hardware like storage devices. There were quite a few very iffy IDE cards back in the day.

The other category of failure, various kinds of memory overruns, has all but disappeared now that memory management, both in silicon and in kernels, has radically improved. So I'd say these are pretty much extinct, except maybe in some edge cases where I'd argue someone is disabling protections or breaking rules to eke out some imagined extra benefit.

Comment Re:Since we know nothing about it (Score 4, Interesting) 72

We know it interacts only weakly electromagnetically, which means one of the posited mechanisms for planet formation, initial electrostatic attraction of dust particles, isn't likely to work. This means dark matter will be less "clumpy" and more diffuse, and less likely to form the denser conglomerations that could lead to stellar and planetary formation.

What this finding does suggest, if it holds true, is that some form of supersymmetry, as an extension of the Standard Model, is real. Experiments over the last 10-15 years have heavily constrained the masses and energy levels of any supersymmetry model, so it would appear that if this is the case, it's going to require returning to a model that some physicists had started to abandon.

Comment Re:Really? (Score 3, Informative) 289

understand (v):
interpret or view (something) in a particular way.
I'd love to see your proof that an LLM doesn't "understand" something.

This should be pretty simple to explain using the hangman example. Ask an LLM to play a game of hangman. It will agree, and since it "knows" the rules, it will claim to pick a word and make you guess letters. I just tried this on GPT-5 and it chose a five-letter word. I made a bunch of random guesses, some correct and some incorrect (so it's not just accepting all guesses as valid), although it didn't penalize me for the wrongly guessed letters. Eventually I had revealed the last four of the five letters, and they were "ailm". Since I couldn't figure out what the word was, I guessed random letters until I said W, and it told me I was correct and that the word was "wailm". No, that's not a word, and I asked the model whether it thought it was a word, to which it replied:

The word “wailm” does not exist in standard English dictionaries — it appears to be a non-word or typo.
I double-checked across major sources like Cambridge, Merriam-Webster, Collins, and Wiktionary, and none of them list “wailm” as a valid entry.

Why “wailm” showed up in our hangman game
- It looks like the letters you guessed happened to form a sequence that doesn’t correspond to a real English word.
- Hangman games usually rely on a predefined dictionary or word list. Since I improvised the word, it ended up being nonsensical.

(emphasis mine)

So it screwed up, in more ways than one actually, not just with the word. The whole point of a game of hangman is that you're supposed to have a limited number of guesses for the letters, but it gave me 3 guesses for the whole word and an unlimited number of guesses for letters, and it admitted to improvising a word at random. So in reality, it has no idea how the rules work or how to actually play the game, but still claims it can.
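For contrast, actually honouring those rules takes very little: commit to a word before any guess is made, and count the misses. A minimal sketch (the word list and guess limit here are my own, obviously, not anything the model uses):

import random

WORDS = ["planet", "forest", "bridge"]  # committed up front, unlike the LLM's improvised word

def play_hangman(max_wrong: int = 6) -> None:
    word = random.choice(WORDS)          # the secret is fixed before the first guess
    revealed = set()
    wrong = 0
    while wrong < max_wrong and not set(word) <= revealed:
        guess = input("Guess a letter: ").strip().lower()
        if guess in word:
            revealed.add(guess)
        else:
            wrong += 1                   # wrong guesses are counted and limited
        print(" ".join(c if c in revealed else "_" for c in word),
              f"({max_wrong - wrong} misses left)")
    print("You win!" if set(word) <= revealed else f"Out of guesses; the word was '{word}'.")

if __name__ == "__main__":
    play_hangman()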

And it doesn't end there. It then suggested that it could set up a new round with a proper word list (and I'm quoting the model here) "so the solution is guaranteed to be real?"

I said yes. This time it accepted all of my guesses as instantly correct, forming the six-letter word "Sairim", which is also not a proper English word. Quoth the LLM:

Uh oh looks like we’ve hit another snag. “Sairim” isn’t a valid English word either — it’s actually the name of a county in Xinjiang, China, and also refers to Sayram Lake, a famous scenic spot. So technically, it is a real word, but it’s a proper noun rather than a common dictionary entry.

To keep the game fun and clear, I suggest we restart with a verified English dictionary word (like “planet,” “forest,” or “bridge”) so the solution is guaranteed to be familiar.

After I said yes, it gave me another six-letter word to guess, but again accepted all of my guesses as instantly correct. I guessed first A, then S, then P, then E, and then R, and each time it congratulated me on being correct, filling out the word to be "Sapper". Yeah, on the third try it actually landed on a proper English word, but it wasn't actually playing the game in any real sense, because it clearly didn't choose any word in advance for me to guess (because it can't), but simply chose a length of six letters and then filled it out with my guesses to form some valid English word, because that's the best it can do.

This is all due to the way its memory works, and there are articles out there you can look up that go into detail about why it is this way. But the point is this: while an LLM will probably be able to give you a completely correct explanation of the rules of hangman, it cannot, due to its technical limitations, understand those rules or play the game. Even when it knows it screwed up and offers you advice on how to make it play better by giving it more context, it still fails at the task, because it doesn't actually understand it.
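A rough way to picture that memory limitation (a toy sketch, not any real LLM API): between turns, the only state the model gets back is the visible transcript, so a word it claimed to pick but never wrote down has nowhere to live, and every reply is improvised to look consistent with the text so far.

import random

def toy_hangman_bot(transcript: list[str], guess: str) -> str:
    # The transcript is all the "memory" there is. No variable anywhere holds a
    # committed secret word, so the answer is made up on the spot and only needs
    # to look plausible next to the conversation so far.
    return f"yes, '{guess}' is in the word!" if random.random() < 0.7 else f"no '{guess}'"

transcript = ["bot: I've picked a 5-letter word."]  # note: no word is actually stored
for letter in "aeiou":
    transcript.append("bot: " + toy_hangman_bot(transcript, letter))
print("\n".join(transcript))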

This is of course a slightly silly example, but that's on purpose, to highlight the point. The models summarize information from a variety of sources. Because the internet has a vast amount of information (both accurate and total BS), this can often lead to convincing and even accurate responses, or to completely hallucinated/made-up stuff, depending on where it's pulling the information from. To say that it is thinking, that is, taking all that information and being able to apply it to make correct and sensible decisions instead of just rehashing it, is not accurate, at least not now (and likely not for the foreseeable future, due to the way the models are built and trained). If it were actually able to understand the rules of hangman (something that a child can do), it would have got this right instantly.

Instead of understanding, or having the ability to tell me this is a task it cannot perform due to the way its context handling works, it simply seeks to keep the conversation going. For the same reason, if you ask an LLM to play chess, it will eventually start making illegal moves, because again, while it can explain to you what chess is and how it is played, it doesn't actually understand it, nor is it capable of playing it.
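Chess makes the same point: playing legally means tracking the full board state and refusing any move that isn't legal in that state. A minimal sketch using the third-party python-chess library (my choice for illustration, not something the model runs):

import chess  # third-party: pip install chess

board = chess.Board()
for san in ["e4", "e5", "Nf3", "Nc6", "O-O"]:   # the last move is illegal here: the f1 bishop blocks castling
    try:
        move = board.parse_san(san)             # raises ValueError if the move is illegal in this position
    except ValueError:
        print(f"{san} is illegal in this position; a real player has to refuse it")
        break
    board.push(move)
    print(f"played {san}")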

So no. LLMs do not think or understand; they're gigantic and much more complicated versions of the text autocomplete feature on phones.

Comment Re:But it's already loaded! (Score 1) 69

Without knowing precisely how Explorer is structured, it's conceivable that there are different dynamically-linked libraries and/or entry points for running the desktop and for the file explorer, in which case having explorer.exe running for the desktop doesn't, in and of itself, mean the file-browsing modules are already loaded when a browser window opens. The solution could very well be to load the libraries involved in file browsing when the desktop opens, as sketched below.
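If that guess is right, the "preloading" fix is just touching those modules early so they're already mapped when a window opens. A toy sketch (Windows-only, and the DLL names are merely examples on my part, not a claim about how Explorer is actually factored):

import ctypes  # ctypes.WinDLL wraps LoadLibrary, so this sketch is Windows-only

def preload(dll_name: str):
    try:
        return ctypes.WinDLL(dll_name)   # keeps the module mapped for later use
    except OSError:
        return None                      # not present; nothing to warm up

# warm likely file-browsing components when the desktop session starts
warmed = {name: preload(name) for name in ("shell32", "explorerframe")}
print({name: handle is not None for name, handle in warmed.items()})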

Just guessing here. There was a time when a lot more horsepower was required for GUI elements than for folder browsing, but this is 2025, and explorer.exe probably uses orders of magnitude more resources now than it did in 1995, because... well, who knows really. Probably to sell more ads and feed more data to their AI.

Comment Jesus Christ (Score 0) 69

That, on modern hardware, they have to preload a fucking file browser so that it pops up faster is just an indication of what a steaming pile of garbage MS is. They had sweet spots with Win2k-WinXP and with Win7, but their incoherent need to be a whole bunch of contradictory things (with AI!) has taken what was a rather iffy OS and UI experience to begin with and turned it into a clusterfuck of incoherence.

I do most of my day-to-day work on MacOS and Gnome, and fortunately the Terminal Services box I have to RDP into is Server 2016, but every time I have to work with Windows 11 I'm just stunned by how awful it looks and how badly it behaves.

Comment Re:No. (Score 1) 222

A government with the capacity of a large jurisdiction like California, or more particularly the US, could bankrupt someone like Musk, so I say bring it on. Within a decade Musk would have abandoned all efforts or, even better, be stone-cold broke (frankly, billionaires shouldn't exist at all, and we should tax the living fuck out of them down to their last $200 million).

We're too afraid of these modern-day Bond villains when we should be aiming every financial, and probably every real, cannon straight at them and putting them in a sense of mortal danger every minute of their waking lives, so that they literally piss themselves in terror at the thought that "we the people" might decide to wipe them out for good.

Comment Re: Hardware will be fine (Score 1) 56

Is utility in your eyes alone, or the eyes of all beholders?

I think he means utility in the economic sense, as in value for the companies. So not in the eyes of all beholders, but in the eyes of the shareholders. The major current problem for AI implementations is that while there are cases where it is useful (sometimes even highly so), it's not profitable, because the cost to develop, train and run the models vastly exceeds the amount of money the providers are getting from it. Image/video generation is a good example of this, but the same applies to all current AI implementations, including code. Ed Zitron has recently written about this, breaking down some of the numbers on costs and burn rates for the companies, for example here:

As I've written again and again, the costs of running generative AI do not make sense. Every single company offering any kind of generative AI service — outside of those offering training data and services like Turing and Surge — is, from every report I can find, losing money, and doing so in a way that heavily suggests that there's no way to improve their margins.

In fact, let me explain an example of how ridiculous everything has got, using points I'll be repeating behind the premium break.

Anysphere is a company that sells a subscription to their AI coding app Cursor, and said app predominantly uses compute from Anthropic via their models Claude Sonnet 4.1 and Opus 4.1. Per Tom Dotan at Newcomer, Cursor sends 100% of their revenue to Anthropic, who then takes that money and puts it into building out Claude Code, a competitor to Cursor. Cursor is Anthropic's largest customer. Cursor is deeply unprofitable, and was that way even before Anthropic chose to add "Service Tiers," jacking up the prices for enterprise apps like Cursor.

My gut instinct is that this is an industry-wide problem. Perplexity spent 164% of its revenue in 2024 between AWS, Anthropic and OpenAI. And one abstraction higher (as I'll get into), OpenAI spent 50% of its revenue on inference compute costs alone, and 75% of its revenue on training compute too (and ended up spending $9 billion to lose $5 billion). Yes, those numbers add up to more than 100%, that's my god damn point.

Large Language Models are too expensive, to the point that anybody funding an "AI startup" is effectively sending that money to Anthropic or OpenAI, who then immediately send that money to Amazon, Google or Microsoft, who are yet to show that they make any profit on selling it. - -

As discussed previously, OpenAI lost $5 billion and Anthropic $5.3 billion in 2024, with OpenAI expecting to lose upwards of $8 billion and Anthropic — somehow — only losing $3 billion in 2025. I have severe doubts that these numbers are realistic, with OpenAI burning at least $3 billion in cash on salaries this year alone, and Anthropic somehow burning two billion dollars less on revenue that has, if you believe its leaks, increased 500% since the beginning of the year. Though I can't say for sure, I expect OpenAI to burn at least $15 billion in compute costs this year alone, and wouldn't be surprised if its burn was $20 billion or more.

At this point, it's becoming obvious that it is not profitable to provide model inference, despite Sam Altman recently saying that OpenAI was. He no doubt is trying to play silly buggers with the concept of gross profit margins — suggesting that inference is "profitable" as long as you don't include training, staff, R&D, sales and marketing, and any other indirect costs.

I will also add that OpenAI pays a discounted rate on its compute.

In any case, we don't even have one — literally one — profitable model developer, one company that was providing these services that wasn't posting a multi-million or billion-dollar loss.

(sources for the numbers and stats can be found hyperlinked in the post itself)

That's the core problem with the current trend. Not that AI can't be useful, but that the current business models the major players are using are fundamentally broken, and no one seems to have a realistic path to profitability when you factor in how fast their costs are growing. OpenAI currently has around $1.4 trillion (not a typo: trillion, not billion) of datacenter commitments lined up over the next 8 years, and their yearly revenue (~$20 billion) is less than 2% of that total, if we believe Altman's own figures, which are probably overly optimistic, and they're nowhere close to getting enough external funding to cover those commitments.
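To put that mismatch in perspective, here's the back-of-the-envelope arithmetic using the figures above (which are themselves estimates):

commitments_total = 1.4e12   # ~$1.4T of datacenter commitments over ~8 years
years = 8
annual_revenue = 20e9        # ~$20B/year of claimed revenue

annual_commitment = commitments_total / years                # ~$175B per year
print(f"revenue vs total commitments:      {annual_revenue / commitments_total:.1%}")   # ~1.4%
print(f"revenue vs annualized commitments: {annual_revenue / annual_commitment:.1%}")   # ~11.4%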

Comment GTA got it right (Score 1) 56

This radio ad from GTA 5 sums it up perfectly:

The future is now!
The future is in the cloud!
Cloud computing!

GIRL: what's cloud computing?

Imagine a computer you share with everyone
imagine your private data spread around the world
being shared equally with everyone
it's the cloud

GIRL: I'm in the cloud!

It's utopia
nothing can possibly go wrong
imagine instead of your own computer
it's a giant one we all share together
your data is safe
it's in the cloud!

GIRL: Everyone is in the cloud!

Live life surrounded by the mists of time
with Cumulonimbus Computing!
The cloud is hard to describe
you can't see when you're in it
and when you get close, it disappears!

GIRL: Where did the cloud go?

Now when your data is damaged
you don't need to fire the IT department
you can fire the Internet!

GIRL: you're fired Internet!

We've taken the metaphor to extremes
because when you're in the cloud
the lightning won't strike
it's Cumulonimbus Computing!

GIRL: I'm really in the cloud!

You're in the clouds now!

Comment Re:With Science (Score 1) 95

Science? Really? There's a lot of soft-brained, unscientific and technophilic pseudo-religion in the article.

Let's work with the argument's load-bearing phrase, "exploration is an intrinsic part of the human spirit."

There are so many things to criticise in that single statement of bias. Suffice it to say there's a good case to be made that "provincial domesticity and tribalism are prevalent inherited traits in humans", without emotional appeals to a "spirit" not in evidence.

Comment It's 2025 (Score 5, Interesting) 71

It's 2025. We've known for a couple of decades that Win32/Win64 and Windows and its main ecosystem only work because of various hacks into the kernel that make it all run more smoothly. Even the video driver architecture basically has built-in restarts for when buffers blow up.

It's a shitty proprietary operating system which somehow, every time they try to clean it up, gets worse both under and on top of the hood. I stopped using Windows for my own personal devices four years ago, and will not go back. Ubuntu, Debian and MacOS offer cleaner UIs, and even if the software libraries are a bit smaller, at least I'm not a prisoner to endless ads.

Christ, I had to set up a Win11 laptop yesterday, and between setting up the OS and Edge I had to turn down "offers" and additional tracking functionality around seven or eight times. Actually more, because then I set up a non-privileged user profile and had to do it all again. And that was Win11 Pro. I can only imagine how much worse the Home editions are.
