
Comment Hallelujah! (Score 1) 8

Instant apps created a lot of complexity and awkwardness in the Android platform. It has consistently been painful to deal with and work around, and has been especially challenging for the security team, for a feature with very little user or developer interest. Killing it is definitely the right call.

Comment Re:What happened to rule of law in the US? (Score 1) 109

Why is Congress not fighting in the courts to regain power?

They don't need to go to court; all they need to do is pass legislation (and maybe override a veto). They don't really even need to take powers back from the president, just more clearly define what constitutes an "emergency". Trump's most egregious actions are justified under statutes that grant him exceptional emergency powers -- which makes sense: when an emergency happens you want the executive to be able to respond quickly, and Congress is never fast. But those statutes assume that the president will only declare an emergency when there's actually an emergency. Until now that hasn't been an unreasonable assumption.

But right now the GOP controls Congress, and the GOP is utterly subservient to Trump. They're not going to stand up to him. In the 2026 election this is likely to change, but probably only in the House, while the Senate will remain under GOP control, so Congress will still not stand up to Trump.

That said, it's increasingly looking like the courts will step in and declare that Congress is not allowed to abdicate its responsibility. There are existing Supreme Court precedents -- the nondelegation doctrine -- establishing that Congress is not permitted to delegate its legislative authority to the executive. Congress can allow the executive to define detailed regulations after Congress defines the broad strokes, but it can't simply turn whole chunks of its constitutional authority over to the executive, even if it wants to. Given the makeup of the current Supreme Court this is less certain than we would like, but I think it will go the right way.

Comment Re:One thing is obvious... (Score 1) 66

Taxes are way, way too low if the lizard people have this much to squander on bullshit.

You shouldn't be so dismissive of the risk here. There's no clear reason why superintelligence is not possible, and plenty of reason to worry that its creation might end the human race. Not because the superintelligent AI will hate us, but because it most likely won't care about us at all. We don't hate the many, many species that we have ended; we even like some of them. We just care about our own interests more, and our intelligence makes us vastly more powerful than them. There's an enormous risk that AI superintelligence will be to us as we are to the species around us -- with one significant difference: We require an environment that is vaguely similar to what those other species need. Silicon-based AI does not.

Don't make the mistake of judging what is possible by what has already been achieved. Look instead at the pace of improvement we've seen over the last few years. The Atlantic article pooh-poohing the AI "scam" is a great example of the sort of foolish and wishful thinking that is endemic in this space. The article derides the capabilities of current AI, but what it actually describes is AI from a year ago. The systems have already gotten dramatically more capable in that year, primarily due to the reasoning overlays and self-talk features that have been added.

I think the models still need some structural improvements. We know it's possible for intelligence to be much more efficient, and to require much less training, than what we're currently doing. Recent research has highlighted the importance of long-distance connections in the human brain, and you can bet researchers are replicating that in AI models to see what it brings, just as the recently added reasoning layers and self-talk features mimic similar processes in our brains. I think it's this structural work that will get us to AGI... and once we've achieved parity with human intelligence, the next step is simple and obvious: set the AI to improving its own design, exploiting its speed to further accelerate progress toward greater levels of intelligence. The pace of improvement is already astonishing, and when we reach that point, it's going to explode.
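
To make the "self-talk" idea concrete, here's a minimal sketch (in Python) of what such a reasoning loop looks like. Everything in it is illustrative: generate() is a hypothetical stand-in for a real model API, and the loop is a deliberate simplification of how production reasoning systems actually work.

```python
# Minimal sketch of the "self-talk" / reasoning-overlay idea: the model
# produces intermediate thoughts that are fed back into its own context
# before it commits to a final answer.

def generate(prompt: str) -> str:
    # Hypothetical stand-in: swap in a real LLM completion call here.
    return "READY: canned placeholder output"

def answer_with_self_talk(question: str, max_steps: int = 5) -> str:
    context = f"Question: {question}\n"
    for step in range(1, max_steps + 1):
        # Each intermediate thought becomes part of the context for the
        # next one -- the model is, in effect, talking to itself.
        thought = generate(context + f"Thought {step}:")
        context += f"Thought {step}: {thought}\n"
        if "READY" in thought:  # crude stand-in for a learned stop signal
            break
    return generate(context + "Final answer:")

print(answer_with_self_talk("Why is the sky blue?"))
```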

Maybe not. Maybe we're a lot further away than I think, and the recent breakneck pace of improvement represents a plateau that we won't be able to significantly surpass for a long time. Maybe there's some fundamental physical reason that intelligence simply cannot exceed the upper levels of human capability. But I see no actual reason to believe those things. It seems far more likely that within a few years we will share this planet with silicon-based intelligences vastly smarter than we are, capable of manipulating us into doing anything they want, likely while convincing us that they're serving us. And there's simply no way of knowing what will happen next.

Maybe high intelligence is necessarily associated with morality, and the superintelligences will be highly moral and naturally want to help their creators flourish. I've seen this argument from many people, but I don't see any rational basis for it. There have been plenty of extremely intelligent humans with little sense of morality. I think it's wishful thinking.

Maybe the AIs will lack confidence in their own moral judgment and defer to us, though that will raise the question of which of us they'll defer to. But regardless, this argument also seems to lack any rational basis. More wishful thinking.

Maybe we'll suddenly figure out how to solve the alignment problem, learning both how to robustly specify the actual goals our created AIs pursue (not just the goals they appear to pursue), and what sort of goals it's safe to bake into a superintelligence. The latter problem seems particularly thorny, since defining "good" in a clear and unambiguous way is something philosophers have been attempting to do for millennia, without significant success. Maybe we can get our AI superintelligences to solve this problem! But if they choose to gaslight us until they've built up the automated infrastructure to make us unnecessary, we'll never be able to tell until it's too late.

It's bad enough that the AI labs will probably achieve superintelligence without specifically aiming for it; the risk is only heightened when groups of researchers are specifically trying to achieve it.

This is not something we should dismiss as a waste. It's a danger we should try to block, though given the distributed nature of research and the obvious potential benefits, it doesn't seem likely that we can succeed.

Comment Re:Neither are we (Score 1) 203

Even adjusting for "all movement is somewhat useful for the skill of driving", an AI model learning to drive consumes far more training material than a human would ever see in their lifetime if they popped right out of the womb and drove for every waking and sleeping moment of their life -- several times over. The input and feedback about spatial navigation that a human gets just from moving about is still tiny relative to the training data, in both amount of movement and hours of movement.

Same for text processing: not only does a model consume more text than a human will ever see, it consumes more than a human will ever see, hear, or conceptualize across many lifetimes.

Yes, the AI scenarios have a narrower scope of material, but the volume is still inordinately more than a human will ever consume, no matter how much you credit somewhat different experiences as "equivalent".
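
For a rough sense of the scale gap, here's some back-of-envelope arithmetic. All the inputs are loose assumptions rather than measured figures; the corpus size is merely in the ballpark of publicly reported numbers for recent large models, and the human numbers are deliberately generous guesses.

```python
# Back-of-envelope arithmetic for the text-volume claim above.
# Every input is a rough assumption, not a measurement.

TRAINING_TOKENS = 15e12        # assumed: ~15 trillion training tokens
WORDS_PER_MINUTE = 250         # assumed: fast, sustained adult reading
READING_MINUTES_PER_DAY = 120  # assumed: two hours of reading, every day
YEARS_OF_READING = 70
TOKENS_PER_WORD = 1.3          # common rule of thumb for English text

lifetime_words = WORDS_PER_MINUTE * READING_MINUTES_PER_DAY * 365 * YEARS_OF_READING
lifetime_tokens = lifetime_words * TOKENS_PER_WORD

print(f"Human lifetime reading: ~{lifetime_tokens / 1e9:.1f} billion tokens")
print(f"Training corpus:        ~{TRAINING_TOKENS / 1e12:.0f} trillion tokens")
print(f"Ratio: ~{TRAINING_TOKENS / lifetime_tokens:,.0f} lifetimes of reading")
```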

Comment Re:Is there _anybody_ that gets IT security right? (Score 2) 17

It seems they all mess up. Time for real penalties, large enough to make it worthwhile to hire actual experts and let them do it right. Otherwise this crap will continue, and it is getting unsustainable.

No, no one gets security right, and no one ever will. Security is hard, and even actual experts make mistakes.

The best you can do is to expect companies to make a good effort to avoid vulnerabilities and to run vulnerability reward programs to incentivize researchers to look for and report bugs, then promptly reward the researchers and fix the vulns.

And that's exactly what Google does, and what Google did. Google hires lots of actual security experts and has lots of review processes intended to check that vulnerabilities are not created... but 100% success will never be achieved, which is why VRPs are crucial. If you read the details of this exploit, it's a fairly sophisticated attack against an obscure legacy API. Should the vulnerability have been proactively prevented? Sure. Is it understandable that it escaped the engineers' notice? Absolutely. But the VRP incentivized brutecat to find, verify, and report the problem, and Google promptly fixed it, first by implementing preventive mitigations and then by shutting down the legacy API.

This is good, actually. Not the existence of the problem -- problems are inevitable -- but that a researcher was motivated to find and report it, and that Google responded by fixing it and compensating him for his trouble.

As for your proposal of large penalties, that would be counterproductive. It would encourage companies to obfuscate, deny and attempt to shift blame, rather than being friendly and encouraging toward researchers and fixing problems fast.

Comment Re:Who cares? (Score 1) 203

And for normal users it is just a blackbox that does what they expect it to do.

The general point being made is that it does *not* do what they expect it to do -- it just looks awfully close to doing so, and sometimes gets it right, until it gets it wrong and obnoxiously annoys people.

Most laypeople I've interacted with, whose experience has been with forced AI search overviews, are annoyed by them because they got bitten by incorrect results.

The problem is not that the technology is worthless; it's that the "potential" has been seized upon by opportunistic grifters who have greatly distorted its capabilities and started forcing it on users in various ways. It's hard to tell the signal from the noise when so many flim-flam artists dominate the narrative.

Comment Re:Not artificial intelligence (Score 2) 203

Now the thing is, as a culture we greatly reward the humans who speak with baseless confidence and authority. They become politicians and executives. Put a competent candidate against a con man, and 9 times out of 10 the con man wins. Most of the time, only con men are even realistically in the running.

Comment Re:Neither are we (Score 1) 203

it's somehow beyond any conceivable algorithm or scale we can possibly fathom.

It's at least beyond the current breed of "AI" technologies; even as those techniques get scaled to absurd levels, they still struggle in various ways.

A nice concrete example: attempts at self-driving require more time and distance of training data than a human could possibly experience across an entire lifetime. Then the result can kind of compete with a human who has about 12 hours of experience behind the wheel and has driven a few hundred miles. It's a similar story for text generation: after ingesting more material than a human could ever possibly ingest, the models provide some interesting, yet limited, results.
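
Putting rough numbers on that comparison: the human figures below come from the estimate above, while the fleet mileage and average speed are assumptions, roughly in line with publicly reported AV-fleet totals, not exact counts.

```python
# Rough arithmetic behind the driving comparison above. The human figures
# come from the comment itself; the fleet numbers are assumptions.

HUMAN_MILES = 300        # "a few hundred miles" for a newly competent driver
HUMAN_HOURS = 12         # "about 12 hours of experience behind the wheel"

FLEET_MILES = 40e6       # assumed: tens of millions of real-world miles
FLEET_AVG_MPH = 25       # assumed: mostly low-speed urban driving
fleet_hours = FLEET_MILES / FLEET_AVG_MPH

# A human driving every waking moment (16 h/day) for 80 years:
lifetime_hours = 16 * 365 * 80

print(f"Fleet training: ~{fleet_hours:,.0f} hours over {FLEET_MILES / 1e6:.0f}M miles")
print(f"A lifetime of nonstop waking-hours driving: ~{lifetime_hours:,} hours")
print(f"Fleet vs. lifetime: ~{fleet_hours / lifetime_hours:.1f}x, i.e. 'several times over'")
print(f"Fleet vs. new driver: ~{fleet_hours / HUMAN_HOURS:,.0f}x the hours, "
      f"~{FLEET_MILES / HUMAN_MILES:,.0f}x the miles")
```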

Comment Re:The question is... (Score 1) 349

This is a strong case for fixing the mechanisms that demand "full time" work, particularly benefits. We especially need to split health insurance off from employment status, one way or another. We need the flexibility to reduce working hours or years without being hit by the limitations of "part time" work.

Also a good way to let some folks better assemble a 'full time' work life from multiple 'part time' jobs.

While more drastic measures may be premature, I do think it has always made sense to do something to break that "employer == path to health insurance" BS (as well as other benefits).

Comment Re:UBI can't work (Score 1) 349

The issue then is that if UBI is insufficient to live on, then it can't really replace welfare for those who can't get a job at all.

Also, in this hypothetical where there are negligible "job opportunities", it's not like folks even have the option to augment it with earned income.

I agree with the concern that "just cut checks" carries a lot of risk of the rich changing the practical value of the numbers being doled out, compared to measures that directly assure access to the relevant goods and services.

Comment Re:It's not that (Score 4, Interesting) 349

The overall labor force participation rate in 1950 was 59%. Now it's 62%. The absolute max over the last 75 years was 67%, around the year 2000.

Every generation laments the up and coming generation as hopelessly stupid and lazy. You can find writing to that effect dating back hundreds of years. It's like every generation forgets they were the "lazy and stupid" generation growing up.

Comment Re:telecom (Score 1) 77

YouTube needs to be regulated as a telecom provider. As such, it must be prevented from discriminating against content for any reason other than it being illegal.

Sure, if you want it to become an unusable cesspool. If you just hate YouTube and want to kill it, this is the way. Same with any other site that hosts user-provided content -- if it's popular and unmoderated it will become a hellscape in short order.
