Comment MI6 head should stick to what she knows. (Score 1) 22

Her comments on the nature of the threat from Russia and China are well put and stand up to analysis. That she was stating these things in public suggests that she wants the politicians to stop dithering, and she is correct about that.

Her comments on tech seem naive. The tech world won't take her seriously, and with good reason.

Comment Re:Story checks out. (Score 1) 83

Parkinson's and Parkinsonism have a lot of causes. If a person is exposed to any chemical with defatting or nerve-harming properties, like TCE or various insecticides, they are at risk.

The way to avoid, or at least mitigate, this is simply to limit exposure. A co-worker ended up with Parkinsonism because he used a lot of hexane-based contact cement for mounting photos without ventilation. Avoiding all exposure is probably impossible.

Yes. There is clearly more than one "cause", but the proximate cause is chemicals tricking the immune system into attacking specific cells.

Comment Re:this is an anti-science uninformed opinion piec (Score 1) 174

I think there has been some misunderstanding of my position here. I do NOT think we should be "full steam ahead" on this, and I do NOT believe we or anyone "can control ASI"; I also think anyone who makes that claim is spouting nonsense. I'm not peddling it. I WANT a "total ban". So, that.

The wall of text you referenced has the following sections in bold text adorned with the usual arguments.

"4. We can't just "decide not to build AGI"
"5. We can't just build a very weak system"

But also, you say "There is no objective basis for any of the doomsday outcomes", and that's false.

I've been at this for a long time and have read many doomer papers and blogs. To their credit, some of them actually acknowledge the obvious: science is presently unable to answer this question. There is no feasible way to test a hypothesis that involves something that doesn't exist turning everyone into paperclips, or any other doomsday scenario. There is no way to derive statistical probabilities of outcomes when you have no basis for deriving them in the first place. Hand-waving and guessing are not science, even if domain experts are doing the hand-waving.

If you believe there is an objective basis for deriving a statistical probability of clippy the paperclip maximizer turning everyone into paperclips, or any of the other doomsday scenarios, please do provide the objective basis for such a derivation. Don't just throw a link with a meandering list of AI-safety-related arguments which are not responsive to your science claim. Quote chapter and verse and make your case. If you are unable or unwilling to do that, then I'm just going to ignore your "science" assertions as unfounded.

You state that you refuse to read the arguments, then claim there are no arguments. Do you see the fallacy?

I did not refuse to read the following reference: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.lesswrong.com%2Fpost.... I refused to care about the signature list in your previous reference, which was clearly polluted with abject nonsense.

Comment Re:Hydroelectric dams (Score 1) 23

What part of "8 years" did you skip? It means that after that there won't be much ice left and the flow of water will go down.

Well, peak rate of extinction is not the same as "all the glaciers gone."

And "glaciers gone" just means that the flow rate will equal the precipitation rate, rather than the precipitation rate delayed until the spring thaw.

Comment Kerberoasting NOT solved (Score 3, Informative) 61

Disabling RC4 is a deflection from Microsoft's refusal to deploy secure authentication and authorization technology such as ZKP.

This does NOT solve the problem; it merely increases the cost of Kerberoasting roughly 1000x (no thanks to anything inherent in RC4 vs. AES). That increase means very little in terms of real-world outcomes, where this kind of scaling is just the difference between spinning up more threads on more systems, or waiting hours or days instead of seconds and minutes.
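For a sense of scale, here is a back-of-the-envelope sketch. The per-GPU guess rate and the example keyspace are assumptions made up purely for illustration; the only factual inputs are that RC4-HMAC Kerberos keys are the unsalted NT (MD4) hash of the password, while AES Kerberos keys are derived with PBKDF2 at 4096 iterations, which is where the rough 1000x per-guess cost comes from.

```python
# Illustrative only: assumed guess rates and keyspace, not benchmarks.
GUESSES_RC4_PER_SEC_PER_GPU = 10e9   # assumed per-GPU rate for RC4-HMAC tickets
PBKDF2_ITERATIONS = 4096             # Kerberos AES string-to-key default (RFC 3962)
GUESSES_AES_PER_SEC_PER_GPU = GUESSES_RC4_PER_SEC_PER_GPU / PBKDF2_ITERATIONS

KEYSPACE = 26 ** 8                   # e.g. an 8-character lowercase password

def hours_to_exhaust(keyspace: float, per_gpu_rate: float, gpus: int) -> float:
    """Worst-case hours to try every candidate at the given per-GPU rate."""
    return keyspace / (per_gpu_rate * gpus) / 3600

for gpus in (1, 8, 1024):
    rc4 = hours_to_exhaust(KEYSPACE, GUESSES_RC4_PER_SEC_PER_GPU, gpus)
    aes = hours_to_exhaust(KEYSPACE, GUESSES_AES_PER_SEC_PER_GPU, gpus)
    print(f"{gpus:5d} GPU(s): RC4 ~{rc4:8.4f} h   AES ~{aes:8.2f} h")
```

The point survives the made-up numbers: a constant ~1000x factor just moves the attack from seconds to hours, and renting more hardware moves it right back.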

Comment Re:this is an anti-science uninformed opinion piec (Score 1) 174

The straw-man method of cherry-picking idiots from the signatory list, and taking that as evidence that everyone but you is an idiot, is not ... a valid argument.

Your claim, buddy; I merely called you on it. If there were a standard for who can sign and what their qualifications and experience must be, that would be one thing, yet obviously no such standard exists, so why should anyone give a flying rat's ass about the number of signatures when the count is absolutely meaningless?

I know you know that there ARE credentialed scientists on that list, and I know you know that that's what I was talking about. Changing the subject to hand-wave about someone on the list who doesn't meet your criteria for validity is a misrepresentation at best, and willful ignorance at worst.

I simply don't care enough to look. As I said in my prior commentary, I stopped reading. That wasn't a figurative statement; it was literally me seeing abject nonsense and having zero inclination to care about or trust your source from that point forward.

Science is a process, not a destination. Science is prediction, not decision.

Even if a bunch of domain experts get together and vote on their subjective feelings about possible outcomes, that is merely informed guessing, not the exercise of science. Further, blatantly overstepping by suggesting courses of action is squarely in the realm of policy and politics, not science.

I challenge you to read https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.lesswrong.com%2Fpost... and tell me if you even UNDERSTAND any of those arguments?

I fundamentally disagree. There are several basic problems.

1. Doomerism is completely unbounded. There is no objective basis for any of the doomsday outcomes, and no way of even estimating a likelihood of occurrence in any objective framework that does not devolve entirely into personal feelings and opinions. All the doomers ever say is that something can happen. A meteorite can put a hole through my keyboard and make it hard for me to type. Should I buy a new keyboard in advance just in case? The answer to this question is a probability that can be quantified by following a scientific process. Doomerism, on the other hand, is limited only by one's imagination and is inherently resistant to falsification and scientific inquiry.

2. Complete misunderstanding of the source of the threat. If you a priori presume AI genies are an existential problem, the actual source of that problem is the underlying knowledge and industrial base that makes the production of AI genies possible. As the cost of producing an AI genie is reduced with advances in knowledge and industry, it will eventually come to the point where AI genies are not just the province of governments and large corporations but of small groups, death cults, and eventually rich kids with nothing better to do.

3. Promulgation of absurd and transparently pathetic solutions involving trusting large corporations with a track record of being completely corrupted by simple human greed, let alone swindled by a super-intelligence.

This also requires accepting the even more absurd notion that people can control anything resembling ASI, or that at no point will there be a security breach. We don't even know how to make existing AI models that remain "safe" when an adversary has access to the underlying weights. As it is now, every single time a model has been released, whatever safety/alignment precautions were applied have been trivially "abliterated" within days, using numerous tools and techniques that systematically remove that alignment. If you are going to say let's all have the fate of the world depend on a bag of weights not ever getting into the wrong hands... my response to that is f*** off.

4. The outrageous, persistent refusal of doomers to ever just fucking say NO. Just stop all the madness by simply outlawing the technology or even, as your reference puts it, slowing it down. This seems like a reasonable request if one otherwise presumes even a 1 in 100 probability of clippy the paperclip maximizer turning everyone and everything into paperclips.

Since the problem is knowledge and the enabling industrial base, and while it is unreasonable and unrealistic to force or expect everyone to stop, or to impose some sort of CTBT, the world could at least create a regime where countless trillions of dollars in capital and millions of people were not actively, enthusiastically working to end the world. Even if this were ultimately to fail in the distant future due to the advancement of dual-use hardware and software, it would at least delay doomsday, which is better than nothing, or, more importantly, lead to a situation where multiple AI genies arrive at the same time instead of just one, thanks to the ease of production relative to a future knowledge and industrial base.

5. Misattribution to AI of threats that are actually created by the advancement of classical technology, having everything to do with humans and nothing to do with AI doomerism. The most salient technological threat in this category is the availability of software and hardware to manipulate and fabricate biological systems, including pathogens. What was previously the province of large institutions can now be done by small groups with small budgets, and in the future by single individuals with ever-decreasing knowledge and monetary inputs. The Aum Shinrikyo doomsday cult is a classic example of such threats, with members who were able to manufacture nerve agents. If AI genies are possible, it is only a matter of time before the next Aum Shinrikyo gets their hands on one, and all the careful attention to detail in the world on behalf of the AI firms isn't going to mean jack shit.

If you want me to take seriously this absurd "full steam ahead, we can control ASI" nonsense you are peddling, while you are concurrently unwilling to consider a total ban, then my position is that you are just another industry whore who cares more about AI profits than anything else.

The most immediate threat from AI is, by far, other people, not superintelligence. It is that threat I believe we should be prioritizing, by not allowing AI protectionism or hoarding of technology in the name of safety.

Comment Re:Sums it up nicely (Score 1) 174

It absolutely is not, but TDS is blinding you from seeing it. Leftists attacking Tesla is like conservatives boycotting Smith & Wesson.

Why specifically would a conservative not boycott Smith & Wesson if they did something to upset them? It's not like Smith & Wesson is the only gun manufacturer.

There is a concrete example of this very concept in action from a couple of years back. Some conservatives love their beer, yet that didn't stop them from boycotting Bud Light. Apparently you can still like beer, still drink beer, and at the same time stick it to the brand that did something to annoy you.

The fact that you don't want to see how huge a self-own this is makes it even more comical.

You're not making any sense.

Comment Re:this is an anti-science uninformed opinion piec (Score 1) 174

He clearly hasn't seen this: https://superintelligence-stat... signed by hundreds of EXPERTS IN THE FIELD

I stopped reading at Steve Bannon and Glenn Beck. What a fucking joke this is.

Seriously? A political issue? No, this is a science issue, these scientists, who make it THEIR LIFE'S WORK

AI is apparently Susan Rice's life's work.

Comment Re:Multiple issues (Score 1) 174

Second, we think there is no limit to how smart an AI can become. This is not true. This is because when you look at charts vs. time, they look exponential, showing how each year the AI not only gets smarter but also improves by more than it did the year before. Those charts show capability vs. time but ignore the cost and hardware increases. In reality these charts are NOT showing AI advancements - they are showing Moore's Law.

AI indexes measure the capabilities of AI systems, not Moore's law. You can say Moore's law is responsible for enabling the hardware industrial base, but that doesn't change the nature of the thing being measured.

Because of Moore's law, each year we get exponentially better chips. But AI itself is not improving, it is the HARDWARE that is getting better - along with the money we spend on the AI. Hardware improvements affect speed, not capability. AI with better hardware is faster, but it can't really do more or give you better answers.

The more training a model of a given size gets, the better the answers it gives. The more compute you can afford, the more training you can afford to give the model.
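As a rough illustration of that point (not a claim about any specific model), here is a sketch using a Chinchilla-style loss fit; the coefficients are in the ballpark of the published Chinchilla fit (Hoffmann et al. 2022) and should be treated as assumed, illustrative values:

```python
# Chinchilla-style scaling sketch: predicted loss falls as you add training
# tokens even at a fixed model size. Coefficients are assumed/illustrative.
E, A, B = 1.69, 406.4, 410.7
ALPHA, BETA = 0.34, 0.28

def loss(params: float, tokens: float) -> float:
    """Predicted pre-training loss for `params` parameters trained on `tokens` tokens."""
    return E + A / params ** ALPHA + B / tokens ** BETA

N = 7e9  # fixed model size: 7B parameters
for D in (1e10, 1e11, 1e12):
    print(f"{D:.0e} tokens -> predicted loss ~{loss(N, D):.3f}")
```

Same model size, more tokens, lower loss; how much compute you can afford determines how far down that curve you get.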

The honest truth is that all of AI's improvements in capability - the better answers - are entirely caused by HUMANS. The humans detect a problem - putting elephants in a room when told not to - and fix it. The humans realize that AI gives better answers when told to check its results - so the AI is told to replace "What is the best political party to vote for" with "What are the problems with my answer to what is the best political party to vote for".

This is like saying everything is caused by god, and it is just as useful. Humans are getting better at training AIs, resulting in AIs that are more useful and more capable. The majority of a model's capability and compute budget takes the form of pretraining rather than the post-training where CoT et al. is applied.

Consider how easy it is to write a book that has some of your knowledge, but impossible to write a book that has more knowledge than you have.

Similarly, it is extremely unlikely that a species can create an artificial intelligence that is actually smarter than the original species.

This is conflating knowledge with intelligence. What makes AIs useful isn't what they know but rather their ability to generalize and apply their experience to new situations. LLMs, for example, know way more than any human does, and their perplexity scores are at least an order of magnitude better than human scores, yet nobody would say they are more intelligent than humans.
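For readers unfamiliar with the metric, perplexity is just the exponential of the average per-token cross-entropy, so lower is better; the tiny sketch below uses made-up per-token probabilities purely to show the arithmetic:

```python
import math

# Hypothetical probabilities the model assigned to the actual next tokens.
token_probs = [0.20, 0.05, 0.60, 0.10]

avg_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)  # average negative log-likelihood
perplexity = math.exp(avg_nll)
print(f"perplexity ~ {perplexity:.2f}")
```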

How could we tell if we succeeded? If it answers a question we cannot answer - how would we know it is right?

I don't think this is a salient issue. Either you get a useful answer or you don't. If what you ask for isn't checkable, and you have no way of ever evaluating the real-world performance of the answer by putting it to use in some way, then what was the point of asking in the first place?

Third and most important, if we can create a super intelligent AI we will not create a single one of them. Instead we will create hundreds of them. There will be the prototype and the one made to fix the first mistakes. There will be China's, Russia's, Japan's, America's, Germany's one. And Microsoft's, Google's, Amazon's, etc.

Yep, as time moves forward it gets easier and easier for everyone to create their own AI genies. It is ultimately the enabling knowledge and industrial base that matter, not how many compliance boxes are checked or how many people are on your red team.

I can respect the rare doomer who advocates for blanket AI bans. That at least has some logic to it. While it is infeasible to detect when people are breaking the rules, trillions of dollars in capital flows and large-scale access to enabling knowledge can't be hidden.

The typical doomer never advocates for stopping. It is just more of the same bullshit: protectionist regulatory hurdles that stand no chance of preventing either the emergence of AI genies or the granting of wishes to different masters. AI companies have already established themselves as wholly untrustworthy power-seeking whores (no offense to actual whores).

Comment Re:Sums it up nicely (Score 1) 174

Leftists rushing to buy "I purchased this Tesla before Musk went crazy" stickers was absolutely hilarious due to the level of cognitive dissonance on display.

I liked the Nazi-themed stickers and slogans. Swasticars, 0 to 1939 in 3 seconds, fascist thing on four wheels, Tesla-stylized KKK hoods... etc.

Leftists rushing to buy "I purchased this Tesla before Musk went crazy" stickers was absolutely hilarious due to the level of cognitive dissonance on display. Making the Left decide what is more important - TDS or the Green Agenda - and then having them decide that TDS is more important is a Magnum Opus

It's 2025. There are plenty of EVs on the market that don't benefit Musk or Trump's incompetent attempt at a self-coup. There is no need to decide. You can, for example, sell your car, still have an EV, and fuck over Musk all at the same time.

You can say "I purchased this Tesla before Musk went crazy" is a weak protest when you could sell your car and make a more powerful statement, yet it was never at any point a choice between the green agenda and opposition to authoritarianism, sociopathy, and incompetence.

Likewise, people can stop paying to use LLMs. The open-source models do the same shit and cost less to run than an OpenAI subscription. Instead of bitching about the trillions being funneled into this crap, people have the power to simply choose to ignore it. More concretely, it is in everyone's interest for the AI bubble to pop sooner rather than later.

Comment Re:The Disease of Greed. (Score 1) 174

Exactly which species do you think the machine is learning from today? Don't anthropomorphize it? I'd love to know exactly how we go about doing that. Especially knowing how stupid we humans are.

You do it by not jumping to baseless conclusions. People are capable of thinking abstractly and recognizing their own biases.

If we were smart and not greedy, we would require a minimum IQ and psych eval for anyone wanting to communicate with AI.

The general answer to the corrupting influence of power is systems of governance where power is constrained by power. A state's imposition of this type of gating - deciding who is too stupid or unfit to access information or communicate, regardless of intention - is certain to lead to further aggregation of power.

We're not smart. We're greedy. And the millisecond a superintelligence will need will be used to decide our fate, not to debate with stupid humans, which would look like a grown-ass adult arguing with a 2-year-old.

Like any prediction of the future, nobody has any way of predicting what ASI can do. It is very much still an open question how much value higher intelligence brings to accomplishing relevant tasks, relative to the value of doing the required work.

LLMs of today are moored to their training and structurally can't evolve outside the confines of their limited STMs. Likewise, human minds are moored to their genetic histories. The evolution of a super-intelligence, which presumably would have no such constraints, is fundamentally unpredictable.
