Comment Re:Well, what a surprise. (Score 1) 120

we can just be patient and wait for him to inevitably fade like a fart in the wind

He is just a symptom, and a recent one at that. Him dying will not solve anything, as the infrastructure which brought him to us is still there, waiting to foist someone else upon us. This isn't about him; he is just a useless fool who is easy to manipulate.

Comment Re:94% of Trump's cases lose in lower courts (Score 1) 120

Do you think you'll still feel that the presidency is above the law, allowed to override or ignore Congress, when a Democrat is in office? I think the theory of the all-powerful executive that Trump is pushing is clearly unconstitutional, but let's suppose it becomes the law of the land. Do you actually think you'll like that outcome?

Comment Re:94% of Trump's cases lose in lower courts (Score 1) 120

And 94% of them are overturned in favor of Trump when they get to the Supreme court, usually on the shadow docket with absolutely no reason given.

This really isn't accurate. Yes, SCOTUS has stayed a lot of injunctions, but I think most of the rulings on the merits -- where they actually do have to give some plausible reasoning -- will go the other way. I think the Roberts court wants to give Trump his way on basically everything, and I think they'll employ a lot of very twisty logic to justify what they can, issuing a lot of bad rulings along the way, but most of his actions are so wrong that they'll ultimately have to shut them down.

Comment Re:Nowhere near AGI (Score 3, Interesting) 94

we are basically still as far away from AGI as we ever were

Nonsense.

No one knows how far we are from AGI, and anyone who tells you they do is either deluded or lying. It's impossible to know until either (a) we achieve it or (b) we have a sufficiently well-developed theory of intelligence that we can explain it. And, actually, even knowing whether we've built AGI is difficult without the explanatory theory, because without the theory we can't even define what AGI is.

We might be decades away, or we might have already done it and just not noticed yet.

About the only thing you can say for certain is that there is no logical reason to believe that we won't build AGI eventually. Unguided evolution, which is just random variation and competitive selection, achieved it. Our own knowledge creation processes are also variation and selection, but because they operate at an abstract level, without the need to modify a physical genotype and wait for phenotypic expression and outcomes, they run many orders of magnitude faster. So we will succeed at creating AGI unless we collectively decide not to, and get very serious about enforcing a ban on AI research.
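To make "variation and selection" concrete, here is a minimal sketch of that loop (Python; the toy fitness function, population size, and mutation rate are arbitrary illustrations, not a claim about how AGI will actually be built):

import random

def fitness(x: float) -> float:
    """Toy objective: values closer to 42 are 'fitter'."""
    return -abs(x - 42.0)

population = [random.uniform(0, 100) for _ in range(20)]

for generation in range(200):
    # Variation: each member produces a slightly mutated offspring.
    offspring = [x + random.gauss(0, 1.0) for x in population]
    # Competitive selection: only the fittest half of the combined pool survives.
    pool = population + offspring
    population = sorted(pool, key=fitness, reverse=True)[:20]

print(f"best after 200 generations: {max(population, key=fitness):.2f}")  # converges near 42

The same two steps, run over abstract conjectures instead of genomes, is the point being made about human knowledge creation running faster than biological evolution.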

There similarly is no reason to believe that AI won't become superintelligent. Silicon-based intelligence has obvious advantages over the much less capable substrate that evolution cobbled together, and even if that weren't the case, we would just devise better substrates. So the only logical argument against superintelligence is that there is some law of physics that dictates an upper bound on intelligence, and that peak human intelligence has already reached it. Even if there is such a limit and we're at it, we should absolutely expect our AIs to reach the same level BUT be orders of magnitude faster than we are, thanks to better miniaturization and faster signal propagation. Imagine the smartest people in the world, but able to think and communicate 1000 times faster. Could we even distinguish that from superhuman intelligence? And it seems far more likely that there is no upper bound on intelligence at all.

The author of TFA may be right that some people are using discussion of AGI and ASI as a way to amass political power now, but that doesn't change the underlying reality that AGI and ASI are almost certainly coming, even if we have absolutely no idea when. Personally, I think it's more likely that the author is uncomfortable thinking about the implications of the arrival of AGI and ASI and prefers to retreat into political theories that keep humans in the pre-eminent position, maintaining the comfortable view that we only have to be concerned about what humans do to each other.

Comment Re:Nowhere near AGI (Score 3, Interesting) 94

Welcome to the same story with AI since its inception. The same nonsense has been spouted since the '60s, and even before then.

"If only we had more processing/storage/nodes/money/training data/time/scale, I'm *sure* this statistical blackbox will magically become intelligent through some unexplained mechanism never once observed in all of existence."

It's always been the same. It's literally a superstition that has dogged AI and hindered AI research for decades: that there's some kind of "intelligence critical mass" beyond which a system unavoidably collapses into intelligence.

Well... now we know that's bollocks, finally.

Because we've never thrown so much money and so many resources at it, never had the whole of the planet using it, funding it, and training it, and never before hit the point where we'd RUN OUT of training data, and now all potential new training data is itself corrupted by... AI output.

All that nonsense might FINALLY be laid to rest within the next few years, and people will be much more reluctant to try the same bullshit again after it has cost us TRILLIONS this time around.

Now, maybe, just maybe, academics in the AI field can actually start to study... intelligence. With a view to developing... an artificial analogue to it. Rather than just bashing on statistical black boxes as if they're going to become the next messiah.

It's always been this way, but with any luck this generation of AI will kill all that bullshit once and for all.

Comment Re:Robot vacuum cleaners - meh (Score 1) 65

A real vacuum cleaner just about maxes out a standard residential 120 V, 15 A circuit, as anyone who remembers the incandescent bulb era can attest. A circuit with a few lamps shared with a vacuum cleaner could easily end with you flipping a breaker or replacing a blown fuse.

When you look at the absolutely tiny lithium ion pack these robo-vacs come with, ...

Sitting on my kitchen table right now is a drone pack. It's 57.5 Wh, smaller than the batteries of most modern Roombas. It's rated at 50C, so it can output up to 2.9 kW, and there are even higher-rated packs available than that. Lithium-ion cells can handle some truly high power outputs; it's *energy*, not *power*, that is their limitation. Run a pack at 50C and it'll be empty in a bit over a minute. That said, on hard floor surfaces there is absolutely no reason you should be drawing more than 300-400 W or so, and you can get by with well under that. High power is for things like shag carpeting. Also, the head matters more than the power (though power of course contributes); for a hard floor, for example, a fluffy roller head is ideal.
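For anyone who wants to check the arithmetic, here's a quick sketch (Python; the 57.5 Wh and 50C figures are the pack above, the wall-circuit number is just 120 V x 15 A from the parent comment, and the 350 W hard-floor draw is an assumed load, not a measurement):

def max_continuous_power_w(capacity_wh: float, c_rate: float) -> float:
    """Peak continuous output in watts: capacity (Wh) times C-rate."""
    return capacity_wh * c_rate

def runtime_minutes(capacity_wh: float, draw_w: float) -> float:
    """How long the pack lasts at a constant draw, ignoring losses."""
    return capacity_wh / draw_w * 60

pack_wh = 57.5                 # the drone pack from the comment
c_rate = 50                    # its 50C rating
wall_circuit_w = 120 * 15      # ~1800 W, the most a corded vacuum can pull

print(max_continuous_power_w(pack_wh, c_rate))   # 2875.0 W, i.e. ~2.9 kW, more than the wall circuit
print(runtime_minutes(pack_wh, 2875))            # ~1.2 minutes at a full 50C draw
print(runtime_minutes(pack_wh, 350))             # ~9.9 minutes at an assumed 350 W hard-floor draw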

Comment Re:Rejected the AMZN Aquisition? (Score 2) 65

Facts. I had a Roomba for years, but as I live in Europe, it was getting increasingly hard to deal with modern features (like the self-emptying base, which needs 120V power). I reluctantly switched to a Roborock when my power converter died, and just, wow, they're light years ahead of iRobot. I think iRobot has been coasting on its name for a while now.

Comment Re:It's all fun now, but ... (Score 1) 153

About a year ago, a friend of mine demoed Model 3 self-driving to me. In a 10-minute city drive, he had to intervene once to keep us from hitting another vehicle, and a second time to stop from running down a pedestrian.

That's unusually bad, even for a year ago, but FSD has improved enormously since then. There were some huge updates around ~August that made it go from "Workable, but you have to watch it like a hawk" to "Really quite good, though still needs light supervision". I use FSD all the time and almost never have to intervene. It even passes the wife test now, meaning she uses it nearly all of the time, too, and I'd have said that would never happen.

OTOH, I used Waymo all last week for commuting around the Bay Area, and it was nearly flawless. There was one time it seemed to get confused because there was an emergency and sirens were coming from multiple directions, but none of the emergency vehicles could be seen. Apparently Waymo uses external microphones to listen for sirens. Anyway, it kind of stopped partway through a left turn through an intersection. It wasn't dangerous; all the human drivers were also slowing or stopping while trying to figure out where the emergency vehicles were, but it would have been better to continue through the intersection and then pull over. After about five seconds of hesitation, the Waymo did exactly that, but I'd have done it without the hesitation.

Comment Re:It's all fun now, but ... (Score 1) 153

An ICE doesn't come with a huge price tag after 8 years.

Neither does an EV. After 8 years an EV's battery pack will have degraded a little; perhaps it'll only have 85-90% of the range that it had when new (the 8-year warranty generally guarantees 80%). But the degradation curve is actually front-loaded; you lose the largest amount of range in the first year, less in the second, and so on. By the time it's 20 years old it will probably only have 75% of the range it had new. At 30 years, 70%, and so on.
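To make "front-loaded" concrete, here's a toy curve (Python; the exponent and scale factor are numbers picked to roughly match the figures above, not measured data from any real pack):

def remaining_capacity(years: float, k: float = 0.036, p: float = 0.6) -> float:
    """Toy front-loaded fade model: fraction of original range left after `years`."""
    return 1.0 - k * years ** p

for years in (1, 2, 8, 20, 30):
    print(f"year {years:2d}: {remaining_capacity(years):.0%} of original range")
# year  1: 96%   -- the biggest single-year drop happens up front
# year  2: 95%
# year  8: 87%   -- comfortably above the typical 80% warranty floor
# year 20: 78%
# year 30: 72%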

Barring a manufacturing problem or catastrophic event, an EV battery should continue functioning long after an ICEV would need an engine replacement. The ICEV will maintain roughly its original range until it fails, while the EV will lose a little range over time, but the EV will last longer.
