
Comment Re:But when you run it on your own machine (Score 1) 32

Sure... But isn't that "bias" and "misinformation" inherent to literally all models, wherever they come from: East or West, China, the US, the EU?

This is the nature of LLMs in contrast to algorithms: unpredictability, the inability to verify an answer through consistent reasoning, and obfuscated guardrail mechanisms. This is why LLMs should never be used to make "choices" that need to be rationally argued within specific contexts (legal, healthcare, employment, etc.).

In short, solutions generated by an LLM should legally only be considered "opinions".

Comment Re:Problem solved! (Score 1) 255

(1) it prescribes gun ownership under a "well-regulated militia", these days that means a state's National Guard.

As far as I know, a "well-regulated militia" can't fall under the auspices of a federal government. The National Guard does fall under joint state/federal command, so it can't be the militia prescribed in the 2nd Amendment.

(2) the guns the founders were talking about were muzzle-loading deals where you couldn't fire 30 rounds in 2 seconds.

It's about "arms", meaning arms sufficient to organize a coherent and concerted weaponized defensive/offensive action, either individually or collectively. The amendment surely didn't mean to position flintlock muzzle-loaders against the automatic weapons currently carried by criminals, an invading military force, or even a tyrannical federal government. It's an extension of the right to defend life and liberty with whatever is available to serve as a weapon, and to actually bear those weapons.

Comment Re:A glaring problem with the lab leak theory (Score 1) 303

Seems to me that you don't really understand the word "weapon". A weapon is just a means to an end within an adversarial context. Number of deaths, time from deployment until death, or even death as a primary goal aren't among its defining properties.

As such, you're setting up a straw man. You create a specific goal for a "biological weapon" based on maximizing the death count, and then you point out that COVID would be a piss-poor candidate based on your own definition of a biological weapon...

It's perfectly possible to see COVID as a "weapon" with completely different objectives, but you seem to discount that completely.

Comment I must be wrong. (Score 2) 68

And from then on we will be forever more in competition with a representation of our collective self, chastising ourselves if we dare dream about going against our ghost in the mirror.

Human inventiveness grinding to a standstill, the curated representation becomes the only reality, knowledge becomes external, without use and without redemption. Judge, jury and ultimately our own executioners as we burn down the last freedom we only ever truly had, the personal experiences and ideas in ourselves, the illusion of uniqueness and privacy of thought to enjoy that lie.

We aren't being called upon anymore. There is no heaven, there is no hell, there are no intentions, there is only the road. Or at least I think there is, until there isn't and then there never was, I guess there never was, I must be wrong,

I can't be right.

Comment Re:This is a mistake (Score 1) 94

Yes, "on top", and only in rare cases like child abuse.

Don't forget about tax laws, some extraterritorial drug laws, laws against doing business with certain countries or their representatives, extraterritorial data privacy laws, extraterritorial prostitution laws, extraterritorial child labor laws, etc. There are also a bunch of laws that can apply to civilians before they leave for Mars which are passive (some risk laws, passport laws, etc.). E.g., is the Mars base US territory? Does it recognize diplomatic conventions? Anyway, my point is that it wouldn't be difficult for countries, states, or power blocs to create new extraterritorial laws that would apply.

Comment Re:This is a mistake (Score 1) 94

SpaceX has never proposed trips with no way of return - on the contrary, they've proposed free return tickets included with your outgoing ticket. But that was not my point.

A German in Antarctica falls under German law.

No, he does not. He falls under the law of the local jurisdiction.

Actually, this is not 100% true: a lot of (all?) countries have extraterritorial laws that act upon their citizens/companies abroad, on top of the local jurisdiction.

Comment Re:Shouldn't be legal (Score 1) 127

A refusal of service need not be classifiable down to the exact wording of the rules. If it did, then a person could wear a scarf and no shirt, and claim the scarf was a short shirt, and demand service in a 'no shirt no service' restaurant. The restaurant would presumably have to come up with an exact minimum length of shirt which qualifies, and that's just plain silly.

"Classifiable" was meant as being able to name a requirement without digressing into a discussion about its inherent properties. E.g., a "no shirt no service" restaurant actually says "SHIRT" and not "some garments we'll arbitrarily classify as not befitting our restaurant". In case of a dispute, I guess ultimately a third party or a court can judge what constitutes a shirt.

The rules need to be intelligible and consistent, yes. Facebook's rules are. They refuse "content that is hate speech, threatening or pornographic; incites violence; or contains nudity or graphic or gratuitous violence." There is nothing unintelligible here.

Facebook's STATED rules are indeed intelligible and consistent. However, the crux of my post was that a NN or DL AI system has no idea what these rules are and wouldn't be able to explain or argue why it classified something one way or another. Even worse, even the trainers of the system wouldn't be able to explain why something was classified a certain way. The best it can do is give an inter-class certainty (https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fcodewords.recurse.com%2Fissues%2Ffive%2Fwhy-do-neural-networks-think-a-panda-is-a-vulture). The additional problem here is that when a classification problem arises, it won't be about what humans would call edge cases or judgment calls. It rather amounts to a total freakout.
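To make the "inter-class certainty" point concrete: such a certainty is typically just a softmax over the network's raw output scores, which says how the classes compare to each other, not why any class applies. A minimal sketch, assuming hypothetical class names and invented logit values (nothing here is Facebook's actual system):

```python
import math

def softmax(logits):
    """Convert raw network outputs (logits) into class probabilities."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for classes ["acceptable", "hate speech", "nudity"].
logits = [2.0, 0.5, -1.0]
probs = softmax(logits)
print(probs)  # a probability per class -- confidence, not justification
```

The output is only a relative ranking of the classes; there is no trace of a rule or an argument in it, which is exactly the problem.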

Consistency is another matter, and if you can show they deliberately permit certain examples while excluding others then you have something here. They do NOT need to have their AI catch all possible examples of violations to be considered consistent or intelligible, nor do they need to publish their algorithms. If they're making a good-faith attempt, good for them. It's only when they maliciously apply their rules inconsistently that a problem arises.

I assumed it was clear that "refusal of service" would rather come down to the AI NOT allowing things that should be perfectly acceptable according to Facebook's stated rules, rather than the reverse. I'm not sure whether good faith extends to an AI agent, but I don't think so. In theory (IANAL), if it can be proven that the AI refuses service outside of Facebook's stated rules WITHOUT even being able to explain or argue WHY, then one must assume intent, or at least bad faith/negligence, on the part of the AI system's owner(?). Again, I'm more concerned about the legal implications within the brave new AI world than about the actual free-speech-limiting FB blabla... The same arguments can be made for other AI systems, like self-driving cars. There is no intelligence, opinion, or human judgment that can mount a "good faith defense" or even be "forgiven" or "understood". An AI also can't claim insanity...

Comment Shouldn't be legal (Score 1) 127

If my Googling concerning "The Right to Refuse Service" laws in the US is correct, it is not legal to refuse service outside of the law (anti-discrimination laws on multiple levels), or arbitrarily or inconsistently. Focusing on the latter two, this means that any refusal of service must be "classifiable"; in other words, there must be a set of lawful "refusal rules" that CAN be adhered to BEFORE requesting the service.

As far as I understand neural networks and deep learning, that requirement isn't met by this Facebook system. There is no certainty, based on human-intelligible rules, that service will or won't be granted. The rules stated by Facebook aren't actually the rules that govern the AI making the decision to grant or deny service. The actual rules (weights) that govern that system are unknown; it doesn't really "know" the rules, it performs a function that amounts more to "like this" within "this margin". Neither the "like this" nor the "margin" is human-intelligible.

Before people start saying 99.9% etc., please remind yourself that, in the eyes of the law, there is a big difference between a "human making an error in judgment" and an "unaccountable AI that freaks out without actually knowing why". Technically the same argument holds for the possible illegality of self-driving cars with a NN or DL AI system. The system can't tell me WHY a certain action is OK or NOT OK. But that is a different story.
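The "weights aren't human-intelligible rules" point can be shown even at toy scale. Below is a hedged sketch, a two-feature logistic classifier trained on completely invented data (the feature names, labels, and learning rate are assumptions for illustration, not anything Facebook uses). The learned "rules" come out as a handful of floating-point numbers that name no policy and justify nothing:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Invented "content features": [all_caps_ratio, link_count] -> 1 = reject, 0 = allow.
data = [([0.9, 3.0], 1), ([0.8, 2.0], 1), ([0.1, 0.0], 0), ([0.2, 1.0], 0)]

random.seed(0)
w = [random.random(), random.random()]
b = 0.0
lr = 0.5
for _ in range(2000):  # plain stochastic gradient descent
    for x, y in data:
        p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        err = p - y
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b    -= lr * err

# The entire learned "rulebook" is three opaque floats:
print(w, b)
decision = sigmoid(w[0] * 0.85 + w[1] * 2.5 + b)  # score for a new post
print(decision > 0.5)
```

Even with only two features and four examples, the weights are just numbers that encode "like this, within this margin"; scale that up to millions of weights and asking the system WHY it refused service becomes meaningless.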
