The straw-man method of cherry-picking idiots from the signatory list and taking that as evidence that everyone but you is an idiot is not... a valid argument.
Your claim, buddy; I merely called you on it. If there were a standard for who can sign, what their qualifications and experience must be, that would be one thing. Obviously no such standard exists, so why should anyone give a flying rat's ass about the number of signatures when the count is absolutely meaningless?
I know you know that there ARE credentialed scientists on that list, and I know you know that that's what I was talking about. Changing the subject to hand-wave about someone on the list who doesn't meet your criteria for validity is a misrepresentation at best, and willful ignorance at worst.
I simply don't care enough to look. As I said in my prior commentary, I stopped reading. That wasn't a figurative statement: I literally saw abject nonsense and had zero inclination to care about or trust your source from that point forward.
Science is a process, not a destination. Science is prediction, not decision.
Even if a bunch of domain experts get together and vote on subjective feelings about possible outcomes, that is merely informed guessing, not the exercise of science. Going further and suggesting courses of action is blatant overstepping, squarely in the realm of policy and politics, not science.
I fundamentally disagree. There are several basic problems.
1. Doomerism is completely unbounded. There is no objective basis for any of the doomsday outcomes, and no way even to estimate a likelihood of occurrence within any objective framework that does not devolve entirely into personal feelings and opinions. All the doomers ever say is that something can happen. A meteorite can put a hole through my keyboard and make it hard for me to type. Should I buy a new keyboard in advance just in case? The answer to that question is a probability that can be quantified by following a scientific process (see the toy calculation after this list). Doomerism, on the other hand, is limited only by one's imagination and is inherently resistant to falsification and scientific inquiry.
2. Complete misunderstanding of the source of the threat. If you presume a priori that AI genies are an existential problem, the actual source of that problem is the underlying knowledge and industrial base that makes producing AI genies possible. As the cost of producing an AI genie falls with advances in knowledge and industry, it will eventually reach the point where AI genies are not just the province of governments and large corporations but of small groups, death cults, and eventually rich kids with nothing better to do.
3. Promulgation of absurd and transparently pathetic solutions that involve trusting large corporations with a track record of being completely corrupted by simple human greed, let alone swindled by a superintelligence.
This also requires accepting the even more absurd notion that people can control anything resembling ASI, or that at no point will there be a security breach. We don't even know how to build existing AI models in a way that is "safe" when an adversary has access to the underlying weights. As it stands, every single time a model has been released, whatever safety/alignment precautions were applied have been trivially "abliterated" within days, using any number of tools and techniques that systematically strip the alignment out (the sketch after this list shows the basic idea). If you are going to say let's all bet the fate of the world on a bag of weights never getting into the wrong hands... my response to that is f*** off.
4. The outrageous, persistent refusal of doomers to ever just fucking say NO. Just stop all the madness by outlawing the technology outright, or even, as your reference puts it, slowing it down. This seems like a reasonable request if one presumes even a 1-in-100 probability that, otherwise, Clippy the paperclip maximizer turns everyone and everything into paperclips.
Since the problem is knowledge and the enabling industrial base, and while it is unreasonable and unrealistic to force or expect everyone to stop, or to impose some sort of CTBT, the world could at least create a regime where countless trillions of dollars in capital and millions of people were not actively, enthusiastically working to end the world. Even if this ultimately failed in the distant future due to the advancement of dual-use hardware and software, it would at least delay doomsday, which is better than nothing. More importantly, it could lead to a situation where multiple AI genies arrive at the same time instead of just one, owing to the ease of production relative to a future knowledge and industrial base.
5. Misattribution to AI of threats that are actually created by the advancement of classical technology, having everything to do with humans and nothing to do with AI doomerism. The most salient technological threat in this category is the availability of software and hardware to manipulate and fabricate biological systems, including pathogens. What was previously the province of large institutions can now be done by small groups on small budgets, and in the future by single individuals, with ever-decreasing knowledge and monetary inputs. The Aum Shinrikyo doomsday cult is a classic example of such threats: its members were able to manufacture nerve agents. If AI genies are possible, it is only a matter of time before the next Aum Shinrikyo gets their hands on one, and all the careful attention to detail in the world on behalf of the AI firms isn't going to mean jack shit.
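To make the keyboard example from point 1 concrete, here is a toy expected-value check in Python. Every number in it is a made-up illustrative assumption, not real meteorite-flux data; the point is only that this kind of risk reduces to mechanical arithmetic once you can defend a probability, which is exactly what the doomsday scenarios never supply.

```python
# Toy expected-value check for "should I buy a spare keyboard in advance?"
# All numbers below are illustrative assumptions, not measured data.

p_strike_per_year = 1e-12  # assumed annual chance a meteorite holes my keyboard
keyboard_price = 100.0     # assumed replacement cost, in dollars
horizon_years = 10         # how long I plan to keep typing

# Expected loss over the horizon if I do nothing and replace on failure.
expected_loss = p_strike_per_year * horizon_years * keyboard_price

# Buying a spare today costs the full price up front.
cost_of_spare = keyboard_price

print(f"expected loss from meteorites: ${expected_loss:.10f}")
print(f"cost of a spare bought today:  ${cost_of_spare:.2f}")
print("buy a spare in advance?", expected_loss > cost_of_spare)
```

Swap in any strike probability you can actually defend and the answer falls out mechanically. The doomsday scenarios offer no number to swap in.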
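On the "abliteration" claim in point 3: as publicly described (e.g. the "refusal direction" line of work), the core of it is plain linear algebra: estimate a single direction in activation space associated with refusals, then project it out of the weights. A minimal NumPy sketch with random placeholder matrices, assuming the direction has already been estimated from mean activations; nothing here reflects any particular model's actual layout.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 512
W = rng.standard_normal((d_model, d_model))  # stand-in for one weight matrix

# Assume these are mean activations collected on refused vs. complied prompts.
mean_refused = rng.standard_normal(d_model)
mean_complied = rng.standard_normal(d_model)

# The "refusal direction" is the normalized difference of means.
r = mean_refused - mean_complied
r /= np.linalg.norm(r)

# Project the direction out of the matrix's output: W' = (I - r r^T) W,
# so W' @ x can no longer write anything along r.
W_ablated = W - np.outer(r, r) @ W

# The ablated weights now produce outputs orthogonal to r.
x = rng.standard_normal(d_model)
print(np.dot(r, W_ablated @ x))  # ~0, up to floating-point noise
```

Repeat that over every matrix that writes into the residual stream and the refusal behavior is largely gone. That is the point: once the weights are in hand, the alignment layer is a removable feature, not a lock.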
If you want me to take seriously this absurd full-steam-ahead, we-can-control-ASI nonsense you are peddling, while you are concurrently unwilling to consider a total ban, then my position is that you are just another industry whore who cares more about AI profits than anything else.
The most immediate threat from AI is, by far, other people, not superintelligence. It is that threat I believe we should prioritize, by not allowing AI protectionism or the hoarding of technology in the name of safety.