Import tax (Score 1)
25% Import tax on foreign made movies incoming.
Sure... But isn't that "bias" and "misinformation" inherent to literally all models, wherever they come from: East, West, China, US, EU?
This is the nature of LLMs in contrast to algorithms: "unpredictability", "inability to verify the answer" through consistent reasoning, and obfuscated "guardrails and guardrail mechanisms". This is why LLMs should never, ever be used to make "choices" that need to be rationally argued within specific contexts (legal, healthcare, employment, etc.).
In short, solutions generated by an LLM should legally only be considered "opinions".
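To make the contrast with algorithms concrete, here is a minimal toy sketch in Python (the "logits" and the loan-style rule are invented for illustration and have nothing to do with any real LLM): a rule gives the same, explainable answer every time, while sampling from a probability distribution, as LLMs do per token, can return different answers for the same input.

```python
import math
import random

def deterministic_rule(income, threshold=50_000):
    """A rule-based decision: same input always yields the same, auditable answer."""
    return "approve" if income >= threshold else "deny"

def llm_like_sampler(logits, temperature=1.0, rng=None):
    """Toy stand-in for LLM token sampling: picks an answer with probability
    proportional to exp(logit / temperature), so repeated calls can disagree."""
    rng = rng or random.Random()
    weights = {k: math.exp(v / temperature) for k, v in logits.items()}
    r = rng.random() * sum(weights.values())
    for answer, w in weights.items():
        r -= w
        if r <= 0:
            return answer
    return answer  # fallback for floating-point rounding

# The rule is reproducible:
assert deterministic_rule(60_000) == deterministic_rule(60_000) == "approve"

# The sampler is not: over 20 differently seeded runs on the SAME input,
# both answers show up.
logits = {"approve": 1.0, "deny": 0.8}
answers = {llm_like_sampler(logits, rng=random.Random(seed)) for seed in range(20)}
```

The point of the sketch: there is no derivation to audit behind the sampled answer, only a draw from a distribution, which is exactly why such output should count as an "opinion" rather than an argued decision.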
(1) it prescribes gun ownership under a "well-regulated militia", these days that means a state's National Guard.
Afaik a "well-regulated militia" can't fall under the auspices of a federal government; the National Guard does fall under joint state/federal command, so it can't be the militia as prescribed in the 2nd Amendment.
(2) the guns the founders were talking about were muzzle-loading deals where you couldn't fire 30 rounds in 2 seconds.
It's about "arms", meaning sufficient to organize a coherent and concerted armed defensive/offensive action, either individually or collectively. The amendment surely didn't mean to position flintlock muzzle loaders against automatic weapons as currently carried by criminals, an invading military force, or even a tyrannical federal government. It's an extension of the right to defend life and liberty with whatever is available to serve as a weapon, and to actually bear those weapons.
Seems to me that you don't really understand the word "weapon". A weapon is just a means to an end within an adversarial context. Number of deaths, time from deployment till death, or even death as a primary goal aren't among its defining properties.
As such you're setting up a straw man. You create a specific goal for a "biological weapon" based on maximization of death count, and then you point out that COVID would be a piss-poor candidate based on your own biological-weapon definition...
It's perfectly possible to see COVID as a "weapon" with completely different objectives but you seem to discount that completely.
And from then on we will be forever more in competition with a representation of our collective self, chastising ourselves if we dare dream about going against our ghost in the mirror.
Human inventiveness grinding to a standstill, the curated representation becomes the only reality, knowledge becomes external, without use and without redemption. Judge, jury and ultimately our own executioners as we burn down the last freedom we only ever truly had, the personal experiences and ideas in ourselves, the illusion of uniqueness and privacy of thought to enjoy that lie.
We aren't being called upon anymore. There is no heaven, there is no hell, there are no intentions, there is only the road. Or at least I think there is, until there isn't and then there never was, I guess there never was, I must be wrong,
I can't be right.
Yes, "on top", and only in rare cases like child abuse.
Don't forget about tax laws, some extraterritorial drug laws, no-business-with-certain-countries-or-their-representatives laws, extraterritorial data privacy laws, extraterritorial prostitution laws, extraterritorial child labor laws, etc. There are also a bunch of laws that can apply to civilians before they leave for Mars which are passive (some risk laws, passport laws, etc.). E.g., is the Mars base US territory? Does it recognize diplomatic conventions? Anyway, my point is that it wouldn't be difficult for countries/states or power blocs to create new extraterritorial laws that would apply.
SpaceX has never proposed that there would be no way of return - on the contrary, they've proposed free return tickets included with your outgoing ticket. But that wasn't my point.
A German in Antarctica falls under German law.
No, he does not. He falls under the law of the local jurisdiction.
Actually this is not 100% true; many (all?) countries have extraterritorial laws that act upon their citizens/companies abroad, on top of the local jurisdiction.
A refusal of service need not be classifiable down to the exact wording of the rules. If it did, then a person could wear a scarf and no shirt, and claim the scarf was a short shirt, and demand service in a 'no shirt no service' restaurant. The restaurant would presumably have to come up with an exact minimum length of shirt which qualifies, and that's just plain silly.
Classifiable was meant as being able to name a requirement without digressing into a discussion about its inherent properties. E.g., a "no shirt no service" restaurant actually says "SHIRT", not "some garments we'll arbitrarily classify as not befitting our restaurant". In case of a dispute, I guess ultimately a third party or a court can judge what constitutes a shirt.
The rules need to be intelligible and consistent, yes. Facebook's rules are. They refuse "content that is hate speech, threatening or pornographic; incites violence; or contains nudity or graphic or gratuitous violence." There is nothing unintelligible here.
Facebook's STATED rules are indeed intelligible and consistent; however, the crux of my post was that a NN or DL AI system has no idea what these rules are, and wouldn't be able to explain or argue why it classified something one way or another. Even worse, even the trainers of the system wouldn't be able to explain why something was classified a certain way. The best it can do is give an inter-class certainty (https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fcodewords.recurse.com%2Fissues%2Ffive%2Fwhy-do-neural-networks-think-a-panda-is-a-vulture). The additional problem here is that if a classification problem arises, it won't be about what humans would define as edge cases or judgement calls. It rather amounts to a total freakout.
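A rough illustration of what "inter-class certainty" amounts to (the labels and logit values below are invented, not from any real moderation system): the final layer of a classifier yields only a probability per class, and that probability vector is the model's entire "answer". There is nothing in it that argues why the input scored that way.

```python
import math

def softmax(scores):
    """Convert raw network outputs (logits) into class probabilities."""
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw outputs of a content-moderation net for one post:
labels = ["acceptable", "hate_speech", "nudity"]
logits = [2.1, 2.0, -1.0]

probs = softmax(logits)                 # roughly [0.51, 0.46, 0.02]
verdict = labels[probs.index(max(probs))]
# The near-tie between the top two classes is all the "reasoning" we get:
# no rule is cited, and a tiny perturbation of the input can flip the verdict.
```

Note the design point: the probabilities sum to 1 by construction, so even a wildly confused model still produces a confident-looking distribution; confidence is not an explanation.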
Consistency is another matter, and if you can show they deliberately permit certain examples while excluding others then you have something here. They do NOT need to have their AI catch all possible examples of violations to be considered consistent or intelligible, nor do they need to publish their algorithms. If they're making a good-faith attempt, good for them. It's only when they maliciously apply their rules inconsistently that a problem arises.
I assumed it was clear that "refusal of service" would rather come down to the AI NOT allowing things that should be perfectly acceptable according to Facebook's stated rules, rather than the reverse. Not sure if good faith extends to an AI agent, but I don't think so. In theory (IANAL), if it can be proven that the AI refuses service outside of Facebook's stated rules WITHOUT even being able to explain/argue WHY, then one must assume intent, or at least bad faith / negligence, from the AI system's owner (?). Again, I'm more concerned about legal implications within the brave new AI world than the actual free-speech-limiting FB blabla... The same arguments can be made for other AI systems like self-driving cars. There is no intelligence, opinion or human judgement that can mount a "good faith defense" or even be "forgiven" or "understood". An AI also can't claim insanity...
When it is not necessary to make a decision, it is necessary not to make a decision.