Comment Re:He might still be alive (Score 2) 101

When you mentioned "third partner" who cashed out early, I thought for a minute you were going to be talking about Ronald Wayne - what a life of bad decisions he made ;)

For those not familiar:

He got 10% of the original Apple stock (drew the first Apple Logo, made the partnership documents, wrote the Apple I manual, etc).
Twelve days later, he sold it for $800.
Okay, but he could still try to claim rights in court... nah, a year later he signed a contract with the company to forfeit any potential future claims against the company for $1500.
Okay, well, it's not like he had an opportunity to rethink... nah, Jobs and Wozniak spent two years trying to get him back, to no avail.
Okay, but he still had, like, memorabilia he could hawk from the early days, like his signed contract. Nah, he sold that for $500 in 2016.
And that contract later sold for $1.6 million.
Okay, well, I'm sure he went on to do great things... nah, he ended up running a tiny postage stamp shop.
Which he ended up having to move into his Florida home because of repeated break-ins.
Which he then had to sell after an inside-job heist bankrupted him.

Comment Re:He might still be alive (Score 5, Informative) 101

Jobs committed suicide-by-woo. He didn't "turn away from traditional therapy because it can't keep up with rapidly advancing metastasis", he turned away from treatment for a perfectly treatable form of cancer for nine months to try things like a vegan diet, acupuncture and herbal remedies, and that killed him.

Steve Jobs had an islet cell neuroendocrine tumor. It's much less aggressive than typical pancreatic adenocarcinoma; the five-year survival rate is 95% with surgical intervention. Jobs was specifically told that he had one of the 5% of pancreatic cancers "that can be cured", and there was no evidence at the time of his diagnosis that it had spread. Jobs instead turned to woo. Eight months later, there were signs on CT scans that his cancer had grown and possibly spread, and when he finally underwent surgery it was confirmed that there were now secondary tumors on his liver. His odds of five-year survival at that point were 23%. And he did not roll that 23%.

Jobs himself regretted his decision to delay conventional medical intervention.

Comment Re: Your mouse is a microphone (Score 1) 37

I did some proof of concept tests with both Pointer Lock and PointerEvents, but both failed because you don't get *any* data if you're not moving the mouse, and only get (heavily rounded) datapoints when you do move the mouse. You'd need raw access to data coming from the mouse, before even the mouse driver, to do what they did.

You *might* be able to pull off a statistical attack, collecting noise in the fluctuations of movement positions and timing in the data you receive when the mouse *is* moving. But I can't see how that could possibly have the fidelity to recover audio, except for *maybe* really deep bass. And again, it'd only apply for when the mouse is actually moving.
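To get a feel for why the rounding kills the signal, here's a rough back-of-the-envelope simulation. This is a toy model of my own, not anything from the actual paper: the report rate, hand-motion speed, and the sub-pixel vibration amplitude are all made-up numbers for illustration.

```python
import math
import random

random.seed(0)

SAMPLE_RATE = 125          # assumed mouse report rate, Hz
AUDIO_FREQ = 40            # deep-bass test tone, Hz
AUDIO_AMPLITUDE_PX = 0.02  # hypothetical audio-induced vibration, pixels
N = 1000

# True cursor path: smooth-ish hand motion plus a tiny audio-induced vibration.
true_pos = []
x = 0.0
for i in range(N):
    t = i / SAMPLE_RATE
    x += 3.0 + random.gauss(0, 0.5)  # hand moving ~3 px per report
    vib = AUDIO_AMPLITUDE_PX * math.sin(2 * math.pi * AUDIO_FREQ * t)
    true_pos.append(x + vib)

# What a browser pointermove handler actually sees: integer pixel positions.
reported = [round(p) for p in true_pos]

# The error introduced by rounding swamps the vibration we'd want to recover.
quant_noise = [r - p for r, p in zip(reported, true_pos)]
rms = lambda xs: math.sqrt(sum(v * v for v in xs) / len(xs))
noise_rms = rms(quant_noise)                     # ~0.29 px (uniform rounding noise)
signal_rms = AUDIO_AMPLITUDE_PX / math.sqrt(2)   # ~0.014 px (RMS of the sine)

print(f"signal RMS: {signal_rms:.4f} px, quantization-noise RMS: {noise_rms:.4f} px")
print(f"SNR: {20 * math.log10(signal_rms / noise_rms):.1f} dB")
```

With these (invented) numbers the quantization noise sits well over 20 dB above the hypothetical audio signal, and that's before any smoothing or acceleration in the driver stack, and only while the mouse is moving at all.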

Neat attack, but not really practical in the browser.

Comment Re:"very hard not to shop at Amazon" (Score 1) 115

I think the question was not Amazon vs Walmart but Amazon vs other online shops that also deliver to your doorstep, and do not cost you much more time.

That's still a lot more effort, especially since you have to vet each one to figure out if they provide good customer service in the event something goes wrong, and to be confident they won't steal and sell your credit card number (yeah, you aren't liable for the fraud, but getting a new card is a huge PITA). What could make this work well is the existence of a few online shopping aggregators that combine searching across all of the online stores and centralize payment. The problem is that in order to compete with Amazon any such alternatives would have to have enormous scale, which makes it a very difficult space to enter. Google tried with Google Shopping, but regulators immediately jumped in to stop them.

FWIW, my strategy is that for inexpensive stuff I just buy on Amazon, period, spending a little time to look for cheaper/better options than the "Amazon recommended". For pricier stuff, where it's worth spending a few minutes, I search on Amazon and also on Google, and if I find cheaper non-Amazon options I spend some time evaluating the different sites, unless they happen to be sites I've already bought from. For really expensive stuff I use other search engines and recommendation sites... and then almost always end up buying on Amazon anyway, because on those products pricing tends to be consistent, it's a lot of money, and if something goes wrong I trust Amazon to make me whole.

Comment Re: Cheerful Apocalyptic (Score 1) 132

"Being a human" is in group/out group justification, again rooted in tribalism.

Yep. So what? All species have evolved to fight for survival, because any species that doesn't is likely to cease to exist. I'm human and want my species to survive. Should I instead want my species to be eaten by wolves, or ASIs?

The problem is that there is a portion of our species that is not interested in humanity's survival. Those people are an existential threat to the rest of us. That doesn't mean we need to exterminate them, but it does suggest that we shouldn't help them carry out their plans.

Comment Re:Cheerful Apocalyptic (Score 1) 132

Being a human, I'm against humans losing such a competition. The best way to avoid it is to ensure that we're on the same side.

Unfortunately, those building the AIs appear more interested in domination than friendship. The trick here is that it's important that AIs *want* to do the things that are favorable to humanity. (Basic goals cannot be logically chosen. The analogy is "axioms".)

The problem with the "trick" is that we (a) don't know how to set goals or "wants" for the AI systems we build, nor do we (b) know what goals or wants we could or should safely set if we did know how to set them.

The combination of (a) and (b) is what's known in the AI world as the Alignment Problem (i.e. aligning AI interests with human interests), and it's completely unsolved.

Comment Re:Subject (Score 1) 132

[...] consciousness in the universe will be superior if AIs supplant us.

Possibly. Now prove it. Since you're asking the human species to ritualistically sacrifice itself for the progression of intelligent machines, that shouldn't be asking too much.

I think you also need to prove that humans supplanting other less-intelligent species is good. Maybe the universe would be better off if we hadn't dominated the Earth and killed off so many species.

(Note that I think both arguments are silly. I'm just pointing out that if you're asking for proof that AI is better than humanity, you should also be asking for proof that humanity is better than non-humanity, whether AI or not. My own take is that humanity, like every other species, selfishly fights for its own survival. There's no morality in it, there's no such thing as making the universe better or worse off.)

Comment Re:What scares me is Venezuela (Score 1) 132

Seizing land is a counterproductive and foolish solution to that problem. Basically the whole world uses a different solution, which works pretty well: property taxes (though land-value taxes would probably be better). You just keep raising the taxes until leaving land idle becomes a money-losing proposition. The only way that doesn't work is if ownership of farmland is truly monopoly-dominated so there is no competition, in which case you might have to resort to trust-busting.

This is exactly why we have property taxes, to ensure that most property is put to productive use.

Yes, mass starvation is worse than land seizure, but land seizure is just about the worst possible solution to the problem, as evidenced by what has happened to Venezuela's economy since then. Seizure and collective ownership is guaranteed to produce horribly inefficient operations which might prevent outright starvation but will leave the populace on the edge of it. Seizure and redistribution to private ownership is slightly less bad, but will redistribute the land mostly to people who don't know how to use it effectively.

What would have worked much, much better would be actions that served to restore competition among farmers, starting with making sure they were all paying fair property taxes that were high enough to disincentivize leaving farmland fallow.

Comment Re:It's a purely economic decision. (Score 1) 132

What he means is "let's call it 'competition', so when AI is powerful enough to be our soldiers, weapons and lowly workers, we don't have to share whatever's being produced with the other 8 bn or so suckers; we'll just claim 'AI won in fair competition' and leave everyone else to starve".

Of course this isn't about replacing all of humanity with AI. Just the part that isn't made up of billionaires, and has to work for billionaires instead.

It's just a variation of Social Darwinism.

Why would superintelligent AIs obey the billionaires?

If you think it's because they'd be programmed to do it, you don't understand how we currently design and build AI. We don't program it to do anything. We train it until it responds the way we want it to, but we have no way of knowing if it's just fooling us. We can't actually define goals for the systems, and we can't introspect them to tell what actual goals they have derived from their training sets.

Note, BTW, that the above is only one half of the problem called "AI alignment". In order to make sure AI will serve humanity (or a small segment of humanity; it's exactly the same problem either way) you need to be able to do two things. First, you need to be able to set the AI's goals, in a way that sticks. Second, you need to figure out what goal you can set that will achieve the subservience you want. The difficulty of setting a "safe" goal for a powerful being is well illustrated in old tales about genies and wishes, but modern philosophers have taken a hard, systematic look at this problem, and so far no one has come up with a safe goal, not one; there's always some way it could go horribly wrong.

Comment Re:Bullshit (Score 1) 64

I think this brings the absurdity of the situation into a little more crystal clear of a picture.

It really doesn't do anything to illuminate the situation. It would if the ISS had been built to generate or consume a lot of power, but it wasn't; quite the opposite. It's designed to provide just enough power to run life support and some experiments.

It may, of course, be true that orbital gigawatt data centers are a lot more than twenty years away, but the comparison with ISS doesn't tell us anything.

Comment That's just dumb (Score 2) 43

The whole point of AI is that it's supposed to be able to adapt to us, allowing us to give it direction in natural language and expect it to deal correctly with our ambiguities. While it's true that current-generation AI does require a learning curve, it's improving very rapidly, so anything you learn about how to use it today will be obsolete next year. "Prompt engineering" shouldn't ultimately be a thing at all, and if AI development stalls out at some point so that it actually is a thing people have to do a decade from now, it will not be what it is today.

It makes sense to learn how to work around the idiosyncrasies and limitations of today's AI tools if you can use them to accomplish useful work today, but there's no point in learning those things in order to use the tools of 2035.
