
Comment I guess I don't understand tariffs. (Score 1) 333

I always thought tariffs were imposed on broad classes of goods and on countries of origin or destination, not on specific companies. How is it a tariff when it is aimed at one particular company and product?
That sounds more like a fine or penalty. Does the president get to impose fines and penalties on specific companies without any process at all?
In that case, does he get to impose them on companies engaged in interstate commerce?
Somehow I think we in the U.S. are going to have to learn how black markets work, like all the other authoritarian regimes around the world.

Comment Re:"user friendliness" (Score 1) 286

Once upon a time I was running an earlyish version of Mac OS X, and having run into similarly spelled names in a Linux directory (xf86config and XF86Config?), I checked the box for case-sensitive HFS+ when installing. Later I needed to install Adobe Creative Suite, and because the stupid installer was not case-consistent when copying and modifying files, the install absolutely would not work.
This was definitely a problem for the end user. Even though no files in the same directory differed only by case, scripts referring to those files could not function when the scripts themselves were inconsistent.
I have always made it a habit, regardless of OS, to think in a case-sensitive way and to avoid case-only differences in the filenames I create.
The fact that actual programmers at Adobe got this wrong, when programmers in most languages have to be case-sensitive with the names of variables and other language constructs, gave me yet another reason to avoid Adobe whenever possible.

Comment Well.. (Score 1) 286

In a world trending toward nationalism and other us-vs.-them-isms, case insensitivity fits right in. Just get rid of all those weird characters: if you limit the OS to original ASCII, then case insensitivity is really not that hard. A small set of non-printables, a nice simple alphabet with no diacritics, nothing new to learn. It also makes it harder for all those weirdos in the rest of the world to actually use the system to communicate, so it's a win-win.
Oh, and if you can't taste the sarcasm here, I will say it out loud: Linus is right.

Comment Re:bias against bias is bias (Score 1) 396

Letting publicly available generative AI process rebuttals from user input would turn into a shouting match between vocal extremists on every possible contentious topic. To attempt a non-political example, both the NY Yankees and the Boston Red Sox would have armies of fans attempting to sway the AI on the question "who is the best baseball team?"
There would be attempts on the scale of the LOIC to sway the LLM's output.
For any actually contentious idea, individual humans are incapable of being unbiased, and LLMs have no method of ranking the reliability of input that is independent of the humans who programmed them. The result is that there is currently no method of creating a bias-free LLM.
Before the internet, humans had local, regional, national, racial, and ethnic biases that were more or less confined to the people they interacted with. Books could spread globally, but a book does not normally sway a significant number of people, and a book expressing a strong opinion is as likely to be used as an example of how wrong the author is as it is to change minds.
Now that the internet is global, people directly interact with like-minded people regardless of location. Direct interaction can change minds. If 1000 people in the world hold a specific opinion, without the internet they will probably never even find each other. With the internet they can, and together they have enough volume to bring other people into the fold.
It is both the greatest and the worst thing we have created. Letting the few people in the world with a specific hobby or skill find each other allows them all to progress, and to attract others. It doesn't really matter whether that skill is baking sourdough bread or building suicide bomb vests.
Into this cacophony of ideas we throw a computer program that absorbs any data it is given, breaks it all down into tokens, and then starts cataloging patterns of those tokens, something like the toy sketch below.
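To make "tokenize and catalog patterns" concrete, here is a deliberately tiny Python sketch. The whitespace tokenizer and bigram counter are stand-ins I made up for the real subword tokenizers and billions of learned weights, not how any production LLM actually works, but the point survives: whatever dominates the ingested text dominates the catalog.

```python
from collections import Counter

def tokenize(text):
    # Toy stand-in for a real subword tokenizer: lowercase and split on whitespace.
    return text.lower().split()

def catalog_patterns(corpus):
    # Count adjacent token pairs (bigrams) -- a crude stand-in for the statistical
    # patterns a real model encodes in its learned weights.
    counts = Counter()
    for document in corpus:
        tokens = tokenize(document)
        counts.update(zip(tokens, tokens[1:]))
    return counts

# Whatever opinions dominate the ingested text dominate the catalog:
corpus = [
    "our team is the best team",
    "our team is the best team ever",
    "that team is overrated",
]
print(catalog_patterns(corpus).most_common(3))
```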
Given the tendency for the loudest humans to be the ones at the extremes of any opinion, the internet, which is now the largest repository of human communication, is inherently biased toward the edges of any topic.
There are no websites dedicated to the most average cars of the 1990s; there are plenty dedicated to the best or the worst cars of the 1990s, yet most of the world drove average cars in the 1990s. Where is an LLM going to get actual opinions about the average cars? They aren't there.
As a human being I can evaluate an average car because I can build my own quantitative model based on my interaction with my car, but that model is entirely my model, and entirely different from anyone else's in the world.
An LLM cannot have first-person experience and it cannot form an opinion. It simply recognizes patterns of tokens and builds new arrangements of those tokens using ranking algorithms that start with the original programming and are shaped by the quantity and quality of the ingested patterns, and even the definition of quality is based on the algorithms of the humans who created the original program.
Without self-awareness and creativity an LLM cannot even learn to evaluate bias, let alone eliminate it. Bias prevention turns into a ruleset that prevents the LLM from saying certain things, a ruleset built by humans, who are inherently biased.
I will believe we can create an unbiased LLM as soon as you find a book of fiction that does not have intrinsic bias.

Comment Humans are biased (Score 2) 396

Human output is biased. An AI can't really understand the concept of bias, because it isn't really understanding anything. The output of an AI naturally leans toward the most consistent majority of the data it has ingested, so to make a bias-free AI one would have to feed it either an unbiased pool of data (good luck finding that) or a pool of data that is equally biased on all sides of all issues, defining "equally" as some way in which the differing biases carry very similar weights in the output; a rough sketch of what that could mean mechanically is below.
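For illustration only: a minimal Python sketch that reweights a corpus so every bias label contributes the same total weight, however many documents carry it. The documents and the "side_a"/"side_b" labels are invented, and the labels are assumed to come from a human, which is exactly where the scheme falls apart.

```python
from collections import defaultdict

def equalize_weights(documents):
    # documents: list of (text, bias_label) pairs, where bias_label is a
    # hypothetical tag such as "side_a" or "side_b" assigned by a human.
    # Give each label the same total weight, no matter how many documents carry it.
    by_label = defaultdict(list)
    for text, label in documents:
        by_label[label].append(text)
    weighted = []
    for label, texts in by_label.items():
        weight = 1.0 / len(texts)          # each label sums to 1.0 overall
        weighted.extend((text, weight) for text in texts)
    return weighted

docs = [
    ("team A is clearly the best", "side_a"),
    ("nobody beats team A", "side_a"),
    ("team A is overrated", "side_b"),
]
for text, weight in equalize_weights(docs):
    print(f"{weight:.2f}  {text}")
```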
The problem is that we have great difficulty seeing our own individual biases, and no chance at all of quantifying biases even in our own culture, let alone a culture different from ours.
Getting an AI to generate output does not free us from the responsibility of critical thinking. Assuming an AI has had sufficient input to let us believe its output represents sufficient research is also irresponsible. This means that, at the current time, the output of generative AI is not qualified to be the basis of an opinion, and is definitely inadequate to justify a decision.
Since the heart of AI in all its forms is pattern recognition, non-generative AI has made great advances in many fields, from medical diagnosis to arc-fault circuit breakers, but I am afraid the huge emphasis on generative AI is stealing brains from the other, more easily targeted uses that really can help people now.

Comment If you want an AI that knows only what you know (Score 4, Insightful) 192

There are a few pieces to an AI: the code that ingests the data and tokenizes it, the code that interacts with the user, and the actual data fed into the tokenizer. The first and second are more like what we traditionally call software, and they are available in open source versions. The third is the problem piece. If you managed to textualize everything you know and feed it into an LLM, the LLM would only know what you know, and that would not be very useful unless you just wanted it to remember your second cousin's birthday and remind you about it. The minute you start feeding it text or images you didn't create, you venture into the ethical and legal morass that is currently churning all over the world around the big LLMs.
That huge pool of tokens is what makes an LLM useful; it really is the LLM. The code that can be ethically shared just creates or interacts with it. Yes, you can own books, and by historical precedent you have every right to physically manipulate a book in any way you like. You do not have the right to copy it, and that is the heart of the controversy. Many authors, artists, and creators claim that ingesting a book into an LLM creates a copy of that book, while the people making LLMs (and the corporations who see the potential for $BILLIONS$) say they are just deriving metadata, or that ingestion does not constitute a copy because the text is not stored but tokenized, and that LLMs will not regurgitate verbatim the data on which they are trained.
Of course creative prompts seem to show that they will indeed regurgitate verbatim.
The current state of this controversy makes it very difficult to guarantee that the training set of a useful LLM was actually all public domain or otherwise legally ingestible, and therefore releasing an LLM under an open source license might get you sued.
Of course, this legal back and forth is how we discover the need for new law, and it will eventually lead to various governments or legislative bodies making laws that define the borders of what can and can't be fed to an LLM without licensing. These laws will vary by location and by the perceived values of the bodies making them, which will probably create "LLM friendly" locations where the AI companies go to lower ingestion costs, which will then lead to another wave of lawsuits, this time by authors et al. attempting to prevent access to those LLMs from regions with stricter laws, much as we have seen in the audio/video realm.
Basically, AI in the AGI sense is really not something an individual of normal means can do, in much the same way that an individual of normal means cannot build a mass production factory; the resources required are just too big.
AI in the classic sense, not the prompt-driven generative sense, is something an individual can play with. It is fundamentally pattern recognition, and it is already applied invisibly in many parts of life.
For me, a really fun example is the arc-fault circuit breaker required in much of the US in new electrical installations. It actually "listens" to noise on the electrical line and compares it to a signature library to determine whether it is an accidental arc or just the normal operation of a device that arcs, like a brushed motor or a relay.
The first generation of these devices produced so many false positives that they rapidly gained a reputation for uselessness, but as the signature libraries improved and the pattern matching algorithms evolved they got better and better. This is AI, something along the lines of the toy sketch below. It has nothing to do with general intelligence or conversation; it is a very specific realm of pattern matching, and it does it better and faster than any person could.
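Purely as illustration, here is a toy version of that signature matching in Python. The "noise profiles" and the distance threshold are numbers I invented; real breakers work on analog current waveforms with far richer features and proprietary algorithms, but growing the benign-signature library is exactly how the false-positive rate comes down.

```python
import math

# Hypothetical normalized noise profiles for known benign arcing sources.
SIGNATURES = {
    "brushed_motor": [0.9, 0.7, 0.8, 0.6, 0.9, 0.7],
    "relay_click":   [1.0, 0.1, 0.0, 0.0, 0.0, 0.0],
}
THRESHOLD = 0.35  # how far a sample may sit from every benign signature before we trip

def rms_distance(a, b):
    # Root-mean-square difference between two equal-length profiles.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def should_trip(sample):
    # Trip only if the observed noise does not resemble any known benign source.
    return all(rms_distance(sample, sig) > THRESHOLD for sig in SIGNATURES.values())

print(should_trip([0.9, 0.7, 0.8, 0.6, 0.9, 0.7]))  # motor-like noise -> False
print(should_trip([0.2, 0.9, 0.1, 0.9, 0.2, 0.9]))  # erratic arcing   -> True
```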
Because it is an industrial control device it is not recognized as AI; it is just another little black box that controls something. It doesn't even look as impressive as a PID process controller, which, though it can appear smarter, is really not AI at all, just a basic calculator with real-world inputs.

Comment Re:Awesome! (Score 1) 138

I have not bought myself a computer (well, not counting a few RasPis) in well over a decade. For all of my purposes 10 year old hardware is more than sufficient.
Not bragging on Linux; these machines would be sufficient for the things I do even running Windows XP. But Microsoft not only pushes hardware specs, it unnecessarily changes the "version" of its OS to push people into believing there has been a significant change rather than small evolutionary steps.
There is no actual technical reason they needed to discontinue XP, or Win2K for that matter; they just feel there is a marketing need to draw lines between versions, for the same reason that car companies change body trim every single year: to leverage some cultural desire we have for the "newest".
I quit running Windows by choice in the late 90s, had a few years of Mac OS in there until they buried the Unix roots too deeply for convenience, and have lived on dumpster-dive computers ever since. I figure that if I had kept "up to date" in the Windows world over that time, I would have spent between 10 and 20K buying computers I didn't actually need to do the stuff I wanted to do.
I will admit, though, that I kinda miss the SGI Indigo2 I was running until about 2005, but an actually unsupported OS eventually turns a computer into a significant admin time sink, so I switched to the BSDs and Linux on PC hardware, depending on the task.

Comment Re:Kernel level networking in servers (Score 4, Insightful) 65

I guess I understood this differently. This is not the layer 1-3 stuff that happens in hardware; this is the application layer stuff, where userland gets the data from the network stack. At first glance it sounds like, when an interrupt occurs and the fetching of data starts, it switches to a mode where it just keeps fetching data until the network stack runs out, at which point it reverts to interrupt-driven mode; roughly the shape of the loop sketched below.
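In Python rather than kernel C, and based only on my reading of the patch description, here is the shape of the idea; the queue, the handler, and the polling budget are all stand-ins of my own, not anything from the actual patch.

```python
import queue
import time

# Toy model of the tradeoff: service the first packet on an "interrupt", then
# stay in a polling mode draining the queue until it is empty or a budget
# expires, then go back to waiting.
rx_queue = queue.Queue()       # stands in for the driver/network stack buffer
POLL_BUDGET = 0.001            # seconds we allow ourselves to keep polling

def handle(packet):
    pass                       # hand the data up to the application

def rx_loop():
    while True:
        handle(rx_queue.get())                 # blocking get: the interrupt-driven entry
        deadline = time.monotonic() + POLL_BUDGET
        while time.monotonic() < deadline:     # polling mode: keep fetching
            try:
                handle(rx_queue.get_nowait())
            except queue.Empty:
                break                          # queue drained -> back to waiting
```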
Sort of the same philosophy of optimization as keeping http connections alive and reusing them rather than tearing them down and building new ones with every request.
Of course I may have misunderstood it in my superficial glance at the linked patch description, I am not a kernel hacker.
It sounds to me like optimization by reducing redundant overhead, which is a great idea as long as the overhead you are reducing isn't necessary to prevent some other issue, like starving other processes of resources, and it sounds like this patch has implemented timeouts to take care of that.

Comment Because it is easy for AI to identify gun parts (Score 2) 225

At its core a gun is a spring-release mechanism, and such mechanisms are so common and varied in modern devices that any attempt to prevent home-designed firearms is laughably impossible.
In the 70s gang bangers were making .22 zip guns out of car antennas and door latches.
Even though guns have a deep mythos, at least in the USA, they are really simple machines.
To attempt to outlaw the knowledge and tools to make a functional firearm would require regulating power drills and files and pieces of metal, as well as attaching security monitoring to anyone who took metal shop in high school, and definitely anyone with college level machine tool training.
Controlling the manufacture of firearms at the home built level is not a practical goal of any government, nor is it achievable in any developed economy. The only barrier for an individual is the level of effort required to obtain the skills necessary. With the easy availability of low priced machine tools the financial barrier doesn't really exist.

Comment It's a shame (Score 2) 75

Amid the flood of bogus and faked papers hitting many journals, these guys (who have not been implicated in any bad science) couldn't be bothered to buy the tools they used.
Yeah, this is pricey fluid dynamics modeling software, but if that is the software needed to do the job, then money to buy it is just another piece of the funding you need to obtain to do your research.
Interestingly, Flow-3D offers free licenses for academic research. I am not sure how stringent the application process is, but even without that, licensing appears to be in the single-digit thousands per year, which is a heck of a lot cheaper than a lot of technical software.
Not sure who these guys were, or how brilliant their science is, but they definitely need to get someone to track the nuts and bolts of their operation and do some basic asset tracking.

Comment Re:I doubt there is any way to prevent this (Score 1) 348

Even in the case of a 3D print service, a small metal part without context is not a "gun part" in any recognizable form. The pieces of a firing mechanism are just metal pieces that perform functions used in lots of mechanical devices. Because people use 3D print services for prototyping and design work, some of which is secret simply because it is R&D and the developers don't want their IP leaked, I don't think it would be practical to make customers provide documentation of intended use before they can get parts printed.
It is already a crime in the US to convert a firearm to automatic fire, and to sell others the means to do so, but that doesn't stop a guy with a milling machine from doing it. Laws attempting to proactively prevent the steps required to break another law are always problematic, because they tend to accidentally cause problems for non-lawbreakers who get caught by poorly defined prohibitions.
Prevention laws in general are problematic in a governmental system designed to punish after the fact, especially for private acts that are merely preparatory to a crime. How does the law determine, before the burglary occurs, whether the ladder was purchased to fix a roof or to break into a second-story window?
How does the law determine whether a small metal spring catch was designed to control overextension in a hobby robot arm or to hold back the interrupter in a semiauto firearm, before the gun gets used to commit nefarious acts?
At some point we have to decide as a society: are we going to make laws that prohibit actions and enforce them after they are broken, or are we going to default to deny, making everything illegal unless it is specifically allowed by law?
In one direction we have heinous crimes but people can mostly do what they want to, in the other direction people can't do what they want to, but we probably still have heinous crimes.
Humans are gonna human.

Comment Is it just me, or is this kinda useless data? (Score 4, Insightful) 157

I actually went to the source of this data because I was intrigued that Win8 grew in market share, which made me wonder whether this was just a percentage of Windows installations. I really don't think a significant number of people are installing Win8, so maybe the total market share of Windows as a whole fell? But I can't find actual numbers.
You can download the "data" for the various charts they publish, but it is just more percentages; there are no raw numbers anywhere.
So based on that chart, Win8 is trending better than Win11. That makes no sense at all, but when the "data" they let you download is just the numbers in the chart, not the survey counts those numbers were derived from, it really is just noise; a made-up example of the ambiguity is below.
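With numbers I invented entirely, just to show why percentages without raw counts can't distinguish "Win8 is growing" from "Windows is shrinking around it":

```python
# Entirely made-up survey counts: Win8's share *of Windows* can rise while its
# install base and Windows as a whole both shrink.
month1 = {"win11": 700, "win10": 250, "win8": 50}
month2 = {"win11": 500, "win10": 200, "win8": 45}

for label, counts in (("month 1", month1), ("month 2", month2)):
    total = sum(counts.values())
    share = 100 * counts["win8"] / total
    print(f"{label}: Win8 = {counts['win8']} installs, {share:.1f}% of Windows")
# Win8 drops from 50 to 45 installs, yet its "market share" climbs from 5.0% to 6.0%.
```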
Wasn't this site supposed to be "News for Nerds, Stuff that Matters"? This was just eye candy, with all the weight and significance of a hollow chocolate bunny.

Comment Re:From Former Forklift Operator: (Score 1) 22

The little electric forklifts that run around inside Sam's Club weigh 8,100 pounds, significantly heavier than a full-size SUV.
The leverage needed to lift the weight would make them heavy by itself, but to be stable enough to accelerate, decelerate, and turn with a load at the top of the mast they need to be really heavy; the rough moment balance below shows why.
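A back-of-the-envelope static moment balance, with arm lengths and a load I guessed purely for illustration (only the 8,100 lb curb weight comes from the comment above); braking and turning with a raised load eat into this margin, which is why the machines carry so much counterweight.

```python
# Rough, made-up numbers: the load sits ahead of the front axle, so the truck's
# own weight behind the axle has to out-lever it or the machine tips forward.
load_lbs = 3000          # pallet weight (guess)
load_arm_ft = 2.0        # load center ahead of the front axle (guess)
truck_lbs = 8100         # curb weight of the lift truck
truck_arm_ft = 1.5       # truck's center of gravity behind the front axle (guess)

tipping_moment = load_lbs * load_arm_ft       # 6000 ft-lb trying to tip it
restoring_moment = truck_lbs * truck_arm_ft   # 12150 ft-lb holding it down

print(f"stability margin: {restoring_moment / tipping_moment:.1f}x")  # ~2.0x
```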
