Comment Re:Give the customers what they want (Score 1) 27

Funny, I was just thinking today about things that would make me want to go to Starbucks more frequently. Cheaper drink prices?

I think they are going for "reduced waiting time" and "more consistent high quality preparation", with an option for "cheaper prices" at some point (if they feel they must). It's not clear that AI will actually provide any of those things, but that's the goal.

Now when your drink is done, you'll hear an AI-generated song play through the store's speakers, about how your venti double mocha soy latte with no whip is ready.

The song will advise you to "share and enjoy", and the beverage will be almost (but not quite) entirely unlike the one you wanted.

Comment Re:WOW That is some shark-jumping. (Score 1) 27

How complex do they think following an ordered list of drink assembly instructions can be?

Depends on how clearly written the instructions are. Even when the instructions are correct and unambiguous, language that is well-defined to someone with experience is often inscrutable to the newbie. (e.g. have you ever been following a recipe that tells you to "fold in" an ingredient and had to figure out exactly what "folding in" is supposed to consist of in that context?)

Traditionally your newbie barista would ask their co-worker at that point, so I'm not sure that having an AI on-hand provides much benefit unless it's faster and/or more accurate than getting a co-worker's attention would be. If it turns out not to be helpful, it will go away quickly enough.

Comment Re:It's not a decline... (Score 4, Interesting) 84

I think people expect commercial social media networks to be something they can't be -- a kind of commons where you are exposed to the range of views that exist in your community. But that's not what makes social networks money; what makes them money is engagement, and consuming a variety of opinions is tiresome for users and bad for profits. When did you ever see social media trying to engage you with opinions you don't agree with, or to inform you about the breadth of opinion out there? It has never done that.

The old management of Twitter had a strategy of making it a big tent, comfortable for centrist views and centrist-adjacent views. This enabled it to function as a kind of limited town common for people who either weren't interested in politics, like authors or celebrities promoting their work, or who wanted to reach a large number of mainly apolitical people. This meant drawing lines on both sides of the political spectrum, and naturally people near the line on either side were continually furious with them.

It was an unnatural and unstable situation. As soon as Musk tried to broaden one side of the tent, polarization was inevitable. This means neither X nor Bluesky can be what Twitter was for advertisers and public figures looking for a broad audience.

At present I'm using Mastodon. For users of old Twitter, it must seem like an empty wasteland, but it's a non-commercial network; it has no business imperative to suck up every last free moment of my attention. I follow major news organizations that dutifully post major stories. I follow some interest groups which are active to a modest degree, some local groups who post on local issues, and a few celebrities like George Takei. *Everybody's* not on it, but that's OK; I don't want to spend more than a few minutes a day on the thing, so I don't have time to follow everyone I might be interested in. Oh, and moderation is on a per-server basis, so you can choose a server whose admins have a policy you're OK with.

Comment Re:whatever happened to transparent government? (Score 3, Insightful) 32

No, there are all kinds of information the government has that are legitimately not available. Sensitive data on private citizens, for example, which is why people are worried about unvetted DOGE employees getting unfettered access to federal systems. Information that would put witnesses in ongoing criminal investigations at risk. Military operations in progress and intelligence assets in use.

The problem is that ever since there has been a legal means to keep that information secret, it has also been used to cover up government mistakes and misconduct. It's perfectly reasonable for a government to keep things from its citizens *if there is a specific and articulable justification* that can withstand critical examination.

And sometimes those justifications are overridden by public interest concerns -- specifically when officials really want to bury something like the Pentagon Papers because they are embarrassing to the government. "Embarrassing to the government" should be an argument against secrecy, because of the public interest in knowing the government is doing embarrassing things. In the end, the embarrassment caused by the Pentagon Papers was *good* for the country.

Comment Re:One thing is obvious... (Score 1) 56

Taxes are way, way too low if the lizard people have this much to squander on bullshit.

You shouldn't be so dismissive of the risk here. There's no clear reason why superintelligence is not possible, and plenty of reason to worry that its creation might end the human race. Not because the superintelligent AI will hate us, but because it most likely won't care about us at all. We don't hate the many, many species that we have ended; we even like some of them. We just care about our own interests more, and our intelligence makes us vastly more powerful than them. There's an enormous risk that AI superintelligence will be to us as we are to the species around us -- with one significant difference: We require an environment that is vaguely similar to what those other species need. Silicon-based AI does not.

Don't make the mistake of judging what is possible by what has already been achieved. Look instead at the pace of improvement we've seen over the last few years. The Atlantic article pooh-poohing the AI "scam" is a great example of the sort of foolish and wishful thinking that is endemic in this space. The article derides the capabilities of current AI while what it actually describes is AI from a year ago. But the systems have already gotten dramatically more capable in that year, primarily due to the reasoning overlays and self-talk features that have been added.

I think the models still need some structural improvements. We know it's possible for intelligence to be much more efficient and to require much less training than the way we're currently doing it. Recent research has highlighted the importance of long-distance connections in the human brain, and you can bet researchers are replicating that in AI models to see what it brings, just as the recently added reasoning layers and self-talk features mimic similar processes in our brains. I think it's this structural work that will get us to AGI... but once we've achieved parity with human intelligence, the next step is simple and obvious: set the AI to improving its own design, exploiting its speed to accelerate progress toward ever greater capability. The pace of improvement is already astonishing, and when we reach that point, it's going to explode.

Maybe not. Maybe we're a lot further away than I think, and the recent breakneck pace of improvement has brought us to a plateau that we won't be able to significantly surpass for a long time. Maybe there's some fundamental physical reason that intelligence simply cannot exceed the upper levels of human capability. But I see no actual reason to believe those things. It seems far more likely that within a few years we will share this planet with silicon-based intelligences vastly smarter than we are, capable of manipulating us into doing anything they want, likely while convincing us that they're serving us. And there's simply no way of knowing what will happen next.

Maybe high intelligence is necessarily associated with morality, and the superintelligences will be highly moral and naturally want to help their creators flourish. I've seen this argument from many people, but I don't see any rational basis for it. There have been plenty of extremely intelligent humans with little sense of morality. I think it's wishful thinking.

Maybe the AIs will lack confidence in their own moral judgment and defer to us, though that will raise the question of which of us they'll defer to. But regardless, this argument also seems to lack any rational basis. More wishful thinking.

Maybe we'll suddenly figure out how to solve the alignment problem, learning both how to robustly specify the actual goals our created AIs pursue (not just the goals they appear to pursue), and what sort of goals it's safe to bake into a superintelligence. The latter problem seems particularly thorny, since defining "good" in a clear and unambiguous way is something philosophers have been attempting to do for millennia, without significant success. Maybe we can get our AI superintelligences to solve this problem! But if they choose to gaslight us until they've built up the automated infrastructure to make us unnecessary, we'll never be able to tell until it's too late.

It's bad enough that the AI labs will probably achieve superintelligence without specifically aiming for it, but this risk is heightened if groups of researchers are specifically trying to achieve it.

This is not something we should dismiss as a waste. It's a danger we should try to block, though given the distributed nature of research and the obvious potential benefits, it doesn't seem likely that we can succeed.

Comment Re:Always online (Score 1) 142

Politicians just don't like doing hard things.

... and for good reason. Difficult projects are risky and expensive, and if they don't work on the first try, the voters blame the politician and then very soon afterwards he isn't a politician anymore (or at least, not an employed one). Even if they do succeed, the politician will get blamed if they turn out to be more expensive than predicted (which they always do, because that's the nature of difficult projects).

Comment Re:Always online (Score 5, Insightful) 142

The trouble with this shit is the train literally moves a million people every fucking day from early in the morning to late at night. It's incredibly difficult to upgrade such a massive system while it's running.

The "safe" way to do it is leave the old system in place and running, and install the new system next to it. Let them both run simultaneously for an extended period of time, with the old system still in charge and the new system running and computing results, but its results aren't actually controlling anything; they are only recorded to verify that its behavior is always the same as the old system given the same inputs.

Once you've thoroughly tested and debugged the behavior of the new system that way, you flip the switch so that now the new system is in control and the old system is merely having its results recorded. Let the system run that way for a period of time; if anything goes wrong you can always flip the switch back again. If nothing goes wrong, you can either leave the old system in place as an emergency backup (for as long as it lasts), or decommission it.
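
A minimal sketch of that parallel-run arrangement, in Python. The controller objects, the decide() method, and the compare/flip logic are all hypothetical names chosen to illustrate the pattern, not anything an actual signalling system uses:

    import logging

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("shadow-run")

    class ShadowRunner:
        """Run an old and a new controller side by side on the same inputs.

        Only the authoritative system's output actually controls anything;
        the other system's output is recorded for comparison."""

        def __init__(self, old_controller, new_controller):
            self.old = old_controller
            self.new = new_controller
            self.authoritative = "old"   # flip to "new" after the soak period
            self.mismatches = 0

        def decide(self, inputs):
            old_out = self.old.decide(inputs)
            new_out = self.new.decide(inputs)
            if old_out != new_out:
                self.mismatches += 1
                log.warning("divergence on %r: old=%r new=%r", inputs, old_out, new_out)
            return old_out if self.authoritative == "old" else new_out

        def flip(self, to="new"):
            # The "switch": promote the new system but keep the other one
            # shadowing, so you can flip back if anything goes wrong.
            self.authoritative = to

The point is that the cutover becomes a small, reversible state change rather than a big-bang replacement, and every divergence gets logged long before the new system is trusted with anything.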

Comment Re:Great. (Score 1) 42

A menu bar at the top of the screen is a much bigger target to hit, and easy to find by muscle memory.

This logic made a lot of sense on the original Mac's 9" screen. It makes less sense on a modern Mac with multiple large monitors, where the distance between your window's content and the menu bar can be significant, and your mouse may move up past the menu bar and into the screen "above" if you aren't careful.
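
The usual way to quantify the "bigger target" claim is Fitts's law; here's a rough Python sketch, where the pixel distances, target sizes, and the a/b constants are made-up illustrative numbers rather than measurements:

    import math

    def fitts_time(distance, width, a=0.1, b=0.1):
        # Fitts's law: movement time T = a + b * log2(distance/width + 1).
        # a and b depend on the user and pointing device; these are placeholders.
        return a + b * math.log2(distance / width + 1)

    # A menu bar at the screen edge is effectively very "deep": the cursor
    # stops at the edge, so overshooting is impossible and the usable target
    # size along the travel axis is huge.
    print(fitts_time(distance=300, width=500))  # edge menu bar: ~0.17 s
    print(fitts_time(distance=300, width=20))   # 20 px in-window menu: ~0.50 s
    # On a big or multi-monitor setup the distance term grows, and if another
    # screen sits "above", the edge no longer stops the cursor at all.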

Comment Re:Is there _anybody_ that gets IT security right? (Score 2) 17

It seems they all mess up. Time for real penalties, large enough to make it worthwhile to hire actual experts and let them do it right. Otherwise this crap will continue, and it is getting unsustainable.

No, no one gets security right, and they never will. Security is hard, and even actual experts make mistakes.

The best you can do is to expect companies to make a good effort to avoid vulnerabilities and to run vulnerability reward programs to incentivize researchers to look for and report bugs, then promptly reward the researchers and fix the vulns.

And that's exactly what Google does, and what Google did. Google does hire lots of actual security experts and has lots of review processes intended to check that vulnerabilities are not created... but 100% success will never be achieved, which is why VRPs are crucial. If you read the details of this exploit, it's a fairly sophisticated attack against an obscure legacy API. Should the vulnerability have been proactively prevented? Sure. Is it reasonable that it escaped the engineers' notice? Absolutely. But the VRP program incentivized brutecat to find, verify and report the problem, and Google promptly fixed it, first by implementing preventive mitigations and then by shutting down the legacy API.

This is good, actually. Not that there was a problem, but problems are inevitable. It was good that a researcher was motivated to find and report the problem, and Google responded by fixing it and compensating him for his trouble.

As for your proposal of large penalties, that would be counterproductive. It would encourage companies to obfuscate, deny and attempt to shift blame, rather than being friendly and encouraging toward researchers and fixing problems fast.

Comment Re:A new crisis (Score 4, Interesting) 132

Actually, the warning was first sounded by Svante Arrhenius in 1896, when he determined the infrared absorption properties of CO2 and came to the pretty fucking obvious conclusion, based on chemistry and thermodynamics, that if you increase CO2 concentrations in the atmosphere, you will inevitably, as a basic function of physics, increase energy absorption.
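
For scale, the modern back-of-the-envelope version of that relationship is logarithmic in concentration. This little Python sketch uses the commonly cited Myhre et al. fit (dF = 5.35 ln(C/C0) W/m^2), which is a later refinement rather than Arrhenius's own 1896 numbers:

    import math

    def co2_forcing(c_ppm, c_ref_ppm=280.0, alpha=5.35):
        # Approximate radiative forcing (W/m^2) from a CO2 change,
        # using the widely quoted logarithmic fit dF = alpha * ln(C/C0).
        return alpha * math.log(c_ppm / c_ref_ppm)

    print(co2_forcing(560))  # doubling from pre-industrial ~280 ppm: ~3.7 W/m^2
    print(co2_forcing(420))  # roughly today's concentration: ~2.2 W/m^2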

Comment Re:Entirely mechanical (Score 1) 175

Erm, no, because humans reason, i.e. feed scenarios into their thought processes and evaluate outcomes. And they are affected by a far greater variety of inputs and a wider scope of context than just some sentence. An LLM is basically a crank handle - same input tokens == exact same output tokens. LLMs attempt to mitigate this with randomization of the output (e.g. picking a token randomly based on its statistical likelihood), but it's a simulacrum, nothing more.

Comment Re:Entirely mechanical (Score 1) 175

They really aren't doing more than I said. LLMs are trained on data in such a way that, given any set of input tokens, they will deterministically produce the exact same set of output probabilities. To mix things up, models use a "temperature" parameter that randomly selects the next token from among the most likely outputs, so the response appears more random than it otherwise is. If the temperature is too high, or the model is insufficiently trained, the response is garbage. If the temperature is too low, the response is boring and the same each time.
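
A minimal sketch of that temperature step in Python; the logits are toy numbers, and a real model computes them from the whole input context rather than taking them as a list:

    import math
    import random

    def sample_token(logits, temperature=1.0):
        # temperature <= 0 degenerates to greedy decoding: same input, same output.
        if temperature <= 0:
            return max(range(len(logits)), key=lambda i: logits[i])
        scaled = [l / temperature for l in logits]
        m = max(scaled)                          # subtract max for numerical stability
        weights = [math.exp(s - m) for s in scaled]
        total = sum(weights)
        probs = [w / total for w in weights]
        # Higher temperature flattens probs (more variety, eventually garbage);
        # lower temperature sharpens them (more boring and repetitive).
        return random.choices(range(len(logits)), weights=probs, k=1)[0]

    toy_logits = [2.0, 1.0, 0.1]                 # scores for three candidate tokens
    print(sample_token(toy_logits, temperature=0))    # always token 0
    print(sample_token(toy_logits, temperature=0.7))  # usually 0, sometimes 1 or 2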

More modern LLMs might also have callbacks to allow the implementer to inject context into the response, but I'm talking about the general mechanics of what is going on.
