
Comment Re:What happened to rule of law in the US? (Score 1) 107

Why is Congress not fighting in the courts to regain power?

They don't need to go to court; all they need to do is pass legislation (and maybe override a veto). They don't really even need to take powers back from the president, just more clearly define what constitutes an "emergency". Trump's most egregious actions are justified under statutes that grant him exceptional emergency powers -- which makes sense. When an emergency happens you want the executive to be able to respond quickly, and Congress is never fast. But those statutes assume that the president will only declare an emergency when there's actually an emergency. Until now that hasn't been an unreasonable assumption.

But right now the GOP controls Congress, and the GOP is utterly subservient to Trump. They're not going to stand up to him. In the 2026 election this is likely to change, but probably only in the House, while the Senate will remain under GOP control, so Congress will still not stand up to Trump.

That said, it's increasingly looking like the courts will step in and declare that Congress is not allowed to abdicate its responsibility. There are existing Supreme Court precedents that establish that Congress is not permitted to delegate its authority to the executive. Congress can allow the executive to define detailed regulations after Congress defines the broad strokes, but they can't simply turn whole chunks of their constitutional authority over to the executive, even if they want to. Given the makeup of the current Supreme Court this is less certain than we would like, but I think it will go the right way.

Comment Re: staggering levels of hysteria (Score 1) 135

According to the hurricane people, it is. I'm just using your approach.

All the global data show ABSOLUTELY no increasing trend in severity (ACE) or frequency; the bedwetting about warming and hurricanes is *all based on North Atlantic basin data*.
Funny, no? Are you agreeing then that it's ridiculous to talk about a Global Climate Change concern based on regional data alone?
Which way would you like to have it? Pick one.

https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fx.com%2FChrisMartzWX%2Fsta...
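For anyone unfamiliar with the metric being argued about: ACE (Accumulated Cyclone Energy) is, roughly, a scaled sum of squared wind speeds. A minimal sketch of the calculation in Python -- the 34 kt threshold is the usual tropical-storm cutoff, and the storm track below is invented purely for illustration:

def ace(six_hourly_max_winds_kt):
    # ACE = 1e-4 * sum of squared 6-hourly max sustained winds (knots),
    # counting only readings at tropical-storm strength or above.
    return 1e-4 * sum(v ** 2 for v in six_hourly_max_winds_kt if v >= 34)

# Invented storm track: spin-up, peak, decay.
storm = [30, 40, 55, 75, 90, 85, 60, 45, 30]
print(f"ACE = {ace(storm):.2f}")  # -> ACE = 3.12

Seasonal or basin-wide ACE is just this summed over every storm, which is why trend claims hinge on which basin's storms you include.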

Comment Re:One thing is obvious... (Score 1) 65

Taxes are way, way too low if the lizard people have this much to squander on bullshit.

You shouldn't be so dismissive of the risk here. There's no clear reason why superintelligence is not possible, and plenty of reason to worry that its creation might end the human race. Not because the superintelligent AI will hate us, but because it most likely won't care about us at all. We don't hate the many, many species that we have ended; we even like some of them. We just care about our own interests more, and our intelligence makes us vastly more powerful than them. There's an enormous risk that AI superintelligence will be to us as we are to the species around us -- with one significant difference: We require an environment that is vaguely similar to what those other species need. Silicon-based AI does not.

Don't make the mistake of judging what is possible by what has already been achieved. Look instead at the pace of improvement we've seen over the last few years. The Atlantic article pooh-poohing the AI "scam" is a great example of the sort of foolish and wishful thinking that is endemic in this space. The article derides the capabilities of current AI, but what it actually describes is AI from a year ago. The systems have already gotten dramatically more capable in that year, primarily due to the reasoning overlays and self-talk features that have been added.
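To make "self-talk" concrete: the pattern is roughly draft, self-critique, revise, all performed by the model before the user sees an answer. A minimal sketch in Python, where generate() is a hypothetical stand-in for any LLM call, not a real API:

def generate(prompt: str) -> str:
    # Hypothetical stand-in for a call to any language model.
    raise NotImplementedError("plug in a real model call here")

def answer_with_self_talk(question: str, rounds: int = 2) -> str:
    # Draft an answer with explicit step-by-step reasoning.
    draft = generate(f"Think step by step, then answer:\n{question}")
    for _ in range(rounds):
        # Ask the model to critique its own reasoning...
        critique = generate(f"Find flaws in this reasoning:\n{draft}")
        # ...then revise the draft in light of the critique.
        draft = generate(f"Question: {question}\nDraft: {draft}\n"
                         f"Critique: {critique}\nWrite an improved answer.")
    return draft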

I think the models still need some structural improvements. We know it's possible for intelligence to be much more efficient, and to require much less training, than what we're currently doing. Recent research has highlighted the importance of long-distance connections in the human brain, and you can bet researchers are replicating that in AI models to see what it brings, just as the recently added reasoning layers and self-talk features mimic similar processes in our brains. I think it's this structural work that will get us to AGI... but once we've achieved parity with human intelligence, the next step is simple and obvious: set the AI to improving its own design, exploiting its speed to accelerate progress toward still greater capability. The pace of improvement is already astonishing, and when we reach that point it's going to explode.
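To put a toy model behind the "explosion" intuition (every number here is an assumption of mine, not a prediction): suppose each AI generation is a fixed factor more capable than the last, and the time to design the next generation shrinks in proportion to the designer's capability. Capability then diverges while total elapsed time converges:

# Toy model of recursive self-improvement; GAIN and BASE_YEARS are
# invented for illustration, not estimates.
GAIN = 1.5        # assumed capability multiplier per generation
BASE_YEARS = 2.0  # assumed time for generation 1 to design generation 2

capability, elapsed = 1.0, 0.0
for gen in range(1, 11):
    elapsed += BASE_YEARS / capability  # faster designers finish sooner
    capability *= GAIN
    print(f"gen {gen:2d}: {capability:6.1f}x capability, "
          f"{elapsed:4.2f} years elapsed")

The design times form a geometric series that sums to a finite limit (6 years with these numbers), which is the mathematical shape of the takeoff argument; real progress obviously need not follow it.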

Maybe not. Maybe we're a lot further away than I think, and the recent breakneck pace of improvement represents a plateau that we won't be able to significantly surpass for a long time. Maybe there's some fundamental physical reason that intelligence simply cannot exceed the upper levels of human capability. But I see no actual reason to believe those things. It seems far more likely that within a few years we will share this planet with silicon-based intelligences vastly smarter than we are, capable of manipulating us into doing anything they want, likely while convincing us that they're serving us. And there's simply no way of knowing what will happen next.

Maybe high intelligence is necessarily associated with morality, and the superintelligences will be highly moral and naturally want to help their creators flourish. I've seen this argument from many people, but I don't see any rational basis for it. There have been plenty of extremely intelligent humans with little sense of morality. I think it's wishful thinking.

Maybe the AIs will lack confidence in their own moral judgment and defer to us, though that will raise the question of which of us they'll defer to. But regardless, this argument also seems to lack any rational basis. More wishful thinking.

Maybe we'll suddenly figure out how to solve the alignment problem, learning both how to robustly specify the actual goals our created AIs pursue (not just the goals they appear to pursue), and what sort of goals it's safe to bake into a superintelligence. The latter problem seems particularly thorny, since defining "good" in a clear and unambiguous way is something philosophers have been attempting to do for millennia, without significant success. Maybe we can get our AI superintelligences to solve this problem! But if they choose to gaslight us until they've built up the automated infrastructure to make us unnecessary, we'll never be able to tell until it's too late.

It's bad enough that the AI labs will probably stumble into superintelligence without specifically aiming for it; the risk is only heightened when groups of researchers deliberately pursue it.

This is not something we should dismiss as a waste. It's a danger we should try to block, though given the distributed nature of research and the obvious potential benefits, it doesn't seem likely that we can succeed.
