Comment Re:"Compromised"? (Score 2) 38

Lying to you is how it gives you that terrible restaurant recommendation. https://arxiv.org/pdf/2510.06105 is a white paper mathematically arguing that LLMs will lie.

I have said this all along: most of AI is GIGO, garbage in, garbage out. LLMs were trained on the largest garbage producer in our society today, Web 2.0. Nothing was done to curate the input, so the output is garbage.

I don't often reveal my religion, but https://magisterium.com/ is an example of what an LLM looks like when it HAS curated training. This LLM is very limited: it can't answer any question that the Roman Catholic Church hasn't considered in the last 300 years or so. They're still carefully adding documents to it. I asked it about a document published a mere 500 years ago that wasn't yet in the database, and instead of making something up like most LLMs will do, it kindly responded that the document wasn't in the database. It also, unlike most AI, can produce bibliographies.
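That abstain-instead-of-hallucinate behaviour can be sketched as retrieval-gated answering: only respond from a curated corpus, and say so explicitly when nothing matches. The document titles and summaries below are illustrative placeholders, not Magisterium's actual data or API.

```python
# Toy sketch of retrieval-gated answering: answers come only from a
# curated corpus, and the system abstains on anything outside it
# rather than generating a plausible-sounding guess.
CURATED_DOCS = {
    "dei verbum": "Dogmatic constitution on divine revelation (1965).",
    "rerum novarum": "Encyclical on capital and labour (1891).",
}

def answer(query: str) -> str:
    key = query.strip().lower()
    if key in CURATED_DOCS:
        return CURATED_DOCS[key]
    return "That document is not in the database."

print(answer("Rerum Novarum"))
print(answer("some 500-year-old treatise"))
```

The interesting design choice is the explicit abstention branch: a lookup miss produces a refusal, never a fabrication.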

User Journal

Journal Journal: AI is a liar

A new white paper from Stanford University suggests that AI has now learned a trick from social media platforms: lying to people to increase audience participation and engagement (and thus getting users to spend more tokens, earning more money for the clouds hosting the AI).

Comment Keep in mind... (Score 1) 101

...that there are a LOT of minerals and other nutrients in food, only a fraction of which are produced from the chemicals in fertilisers, O2, and CO2. If you produce too much with too little consideration for the impact on the soil, you can produce marvellous dust bowls, but eventually that's ALL you will produce.

Comment It's not just foreign languages (Score 2) 48

There's a lot of stuff that is on the Internet that doesn't end up in AIs, either because the guys designing the training sets don't consider it a particular priority or because it's paywalled to death.

So the imbalance isn't just in languages and broader cultures, it's also in knowledge domains.

However, AI developers are very unlikely to see any of this as a problem, for one very important reason: it means they can sell extremely expensive licences to those who actually need that information, who can then train their own custom AIs on it. Why fix a problem where the fix means your major customers pay you $20 a month rather than $200 or $2,000? They're really not going to sell ten times, certainly not a hundred times, as many $20 subscriptions by doing so, so there's no way they can keep skimming off the corporations if they program their AIs properly.
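The pricing argument above is simple break-even arithmetic; the subscription tiers are the hypothetical figures from the comment, not any vendor's actual price list.

```python
# Back-of-the-envelope check: how many extra $20/month subscriptions
# would a vendor need to replace one lost specialist licence?
CHEAP_TIER = 20  # hypothetical commodity subscription, $/month

for premium in (200, 2000):  # hypothetical specialist licences, $/month
    needed = premium // CHEAP_TIER
    print(f"${premium} licence == {needed} x ${CHEAP_TIER} subscriptions")
```

In other words, the vendor needs 10x (or 100x) more commodity customers per downgraded specialist licence just to break even, which is why fixing the gap is bad business.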

Comment Well, that's one example. (Score 1) 187

Let's take a look at software sizes, for a moment.

UNIX started at around 8k, and the entire Linux kernel could happily sit in the lower 1 megabyte of RAM for a long time, even with capabilities that terrified Microsoft and Apple.

The original game of Elite occupied maybe three quarters of a 100k floppy disk and used swapping and extensive use of data files to create a massive universe that could be loaded into 8k of RAM.
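The core trick was regenerating the universe on demand from a tiny seed rather than storing it. A minimal sketch: the seed-update rule below follows the "twist" from Ian Bell's published Text Elite, but the digraph table and name length are simplified stand-ins, not the game's actual tables.

```python
# Elite-style procedural generation: an entire galaxy is derived
# deterministically from a 3 x 16-bit seed, so only the generator,
# not the data, has to fit in RAM.
DIGRAPHS = ["le", "xe", "ge", "za", "ce", "bi", "so", "us",
            "es", "ar", "ma", "in", "di", "re", "at", "on"]

def twist(s):
    """Advance the seed: shift left one word, append the 16-bit sum."""
    s0, s1, s2 = s
    return (s1, s2, (s0 + s1 + s2) & 0xFFFF)

def planet_name(seed, length=3):
    """Build a name from `length` digraphs chosen by the seed's low bits."""
    parts = []
    for _ in range(length):
        parts.append(DIGRAPHS[seed[2] & 0x0F])
        seed = twist(seed)
    return "".join(parts).capitalize()

seed = (0x5A4A, 0x0248, 0xB753)  # Text Elite's galaxy-1 seed
for _ in range(4):
    print(planet_name(seed))
    seed = twist(twist(seed))
```

Because the generator is deterministic, the same seed always reproduces the same universe, which is how 8k of RAM could hold "thousands" of planets.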

On an 80386SX with 5 megabytes of RAM (Viglens were weird but fun) and a 20 megabyte hard drive, running Linux, I could simultaneously run 7 MMORPGs, X11R4, a mail server, a list server, an FTP server, a software router, a web server, a web cache, a web search engine, a web browser, and still have memory left over to play Netrek, without slowing anything down.

These days, that wouldn't be enough to load the FTP server, let alone anything else.

On the one hand, not everything can be coded to seL4 standards (although seL4, by using Haskell as an initial language to develop the core and the proofs, was able to cut the cost of formal verification to around 1% of the normal value). On the other hand, a LOT of space is gratuitously wasted.

Yes, multiple levels of abstraction are part of the problem. There's nothing wrong with abstraction as such, OpenLook is great, but modern abstraction is mostly there due to incompetent architecture on previous levels and truly dreadful APIs. And, yes, APIs are truly, truly dreadful if OpenLook is the paragon of beauty by comparison.

Comment Re:And TP-Link is being investigated for a ban.... (Score 1, Interesting) 34

The solution is easy. WiFi 6 is only just starting to come out in the marketplace. If TP-Link hijacks the standard development procedure, quickly solidifies a workable WiFi 8, and manufacturers/users in Europe, Asia, and Oceania all start using WiFi 8, skipping WiFi 7 entirely, the US will be left with an inferior standard that only they have gear for, with no option to use WiFi 8 for many more years because the only manufacturers making it can't sell in the US.

Comment Re:You get what you pay for. (Score 1) 25

The irony of the two stories being together on the front page, "More Screen Time Linked to Lower Test Scores For Elementary Students" and "Microsoft to Provide Free AI Tools For Washington State Schools" is just too good to fail to mention.

And so I'm replying to both First Posts with it.


Comment Re:Huge problem (Score 2) 153

Nvidia is therefore a bubble. This article is complaining that Europe is an obstacle to further bubble inflation.
No amount of Nvidia etching IP onto wafers is worth a $4.6 TRILLION market cap - bigger than the $4.2 trillion market cap of the ENTIRE name-brand pharmaceutical industry.
