
Comment Re:Telling you about risks isn't a marketing pitch (Score 1) 44

If these companies really believed that what they are building is going to be massively detrimental to society (even if it may also have some benefits), then they would not be developing it at breakneck speed - they'd be lobbying government to regulate against it, especially since OpenAI and Anthropic were both supposedly founded to help mitigate the risks of AI. Of course OpenAI/Altman sold out long ago, but isn't Anthropic still claiming to be a "public benefit" company?

This spiel of "Better watch out because we're building something that will fuck you in 2 years" is a bit like saying "Better build a nuclear defense because I'm gonna nuke you in 2 years". They want to come across as the good guys giving a heads-up to anyone who will listen, but they are in fact the bad guys accelerating the doom scenario they are "warning" about.

It's interesting to see China taking a different approach at the state level, emphasizing building the "brakes" for disruptive AI before building disruptive AI itself (which gives a different view of the capabilities of current SOTA AI, also available from DeepSeek).

While the American AI companies might genuinely be warning us that they are about to destroy 50% of American jobs, I think CNN may have the right take here - this is salesman talk - these AI CEOs are trying to scare corporate America into buying their product to avoid being left behind, as well as perhaps attract investment.

Comment Re:How to tell if there is a real advance with AI? (Score 1) 30

> Gemini provided me an image of an empty room with text "no elephant" and additional texts "doorway too narrow" "room too small".

Bad answer really - too many assumptions. Why assume the room shouldn't be able to contain an elephant, as opposed to simply not having one in it (as requested)? And why assume it's a particular kind of elephant (real vs toy/etc) that is being referred to?

Without any context, a simple empty room would seem the best answer.

Comment Re:Nah - it's not as simple as that (Score 1) 238

I think most people drawing social security would be happy enough to get back what they have personally paid in during their working years, preferably inflation adjusted, even if not with the investment gains that this enforced retirement "investing" scheme nominally promised.

Of course the government squandered all the money retirees paid into the system, but that's not their fault - the government is still on the hook to provide the benefit they paid for, and of course their only source of money is taxes or the printing press.

Comment Re:Modal is good (Score 1) 105

OK, but what's the conflict with modeless? It's really just a design choice, but most editors since vi, e.g. emacs, have used a modeless design because it's more intuitive.

There are plenty of non-printable key modifiers such as ctrl, alt, meta/windows to use if you want to use alpha keys for commands as well as function keys.

Comment Re:250 kB (Score 4, Interesting) 105

Indeed ... back in 1982 I wrote the editor for Acorn Computer's ISO-Pascal system, in 4KB, including regex find/replace. Written in 6502 assembler.

In fact, our whole Pascal system, including compiler, virtual machine interpreter, runtime libraries (incl. floating point, heap, I/O, etc), plus the editor, only took 32KB (2 x 16KB ROMs, only one mapped into the address space at a time).

Comment Let's give credit where it's due (Score 1) 51

> Sutskever "the brain behind the large language models that helped build ChatGPT" ...

Well, no ...

LLMs, i.e. today's AI, are all based on the Transformer architecture, designed by Jakob Uszkoreit, Noam Shazeer et al, at Google.

Sutskever, sitting at OpenAI, decided to play with what Google (Uszkoreit, Shazeer) had designed, intrigued to see how much better it would get as it was scaled up.

ChatGPT - the first actually usable LLM - came about by the addition of RLHF, turning a purely statistical generator into one that was reasonable to interact with - following instructions, answering questions, etc. RLHF seems to have been invented by Alec Radford, Dario Amodei (now of Anthropic), etc - Sutskever's name isn't even on the paper ("Fine-Tuning Language Models from Human Preferences").

Comment I'd have to guess there's a downside (Score 1) 78

There's a reason why after millions of years of evolution we still on average need a lot more than 3 hours sleep ...

Of course we're continually exploring new genetic variations, so people exist with all sorts of less usual traits, whether regarded as positive or negative, but the reason these are unusual is that none of these genetic variations are outperforming the majority baseline they are competing with.

Have you ever seen a super high IQ (200+) person being interviewed ... and if so, would you want to be genetically modified to have this "advantageous" trait?! I'm sure there are a few high IQ folk who are more or less normal, just as there are a few people over 7' tall who are healthy, but there's a reason we're not all like that!!

Comment No (Score 1) 105

Inevitably the answer to any article title posed as a question is "no".

One day perhaps, in the distant future, if/when we have truly animal-like AI with emotions and feelings for others, capable of learning, etc - but for predict-next-word functions? Perhaps we should assign personhood to sort functions too?

Just because an LLM outputs human-sounding text (well duh, it's a next-word predictor) doesn't make it any more like a person than the cat command when you do "cat mythoughts.txt".

Comment Re:Finally (Score 1) 163

Yeah, even with an EV with longer range I'd get range anxiety if relying on finding a charger along a longer trip... Until charging becomes quicker, and chargers more ubiquitous, it seems the best case for EVs is something that range-wise is good enough for your daily usage and can then be charged overnight at home.

Too bad the base model is a pickup vs "SUV" with second row of seats, but it seems a pretty decent spec for the price point for a US model (although I'd prefer a $25K BYD if it was available here, and without tariffs that double the price).

Comment A lame duck still quacks (Score 1) 99

Google seems to have deployed a quick fix to avoid these, but Gemini is still splainin.

That's a great expression! "A lame duck still quacks" means that someone who is in a weakened or powerless position, especially because they are leaving office or losing influence, can still make their opinions or demands known.

Think of a lame duck: it's injured and can't move around easily, but it can still make noise. Similarly, even if someone's authority is diminished, they can still speak up and be heard.

Here's a breakdown of the meaning:

        Lame duck: This refers to someone whose power or effectiveness is limited because they are in their final period of office after a successor has been elected or appointed. They are seen as having less influence because their time is ending.
        Still quacks: This part emphasizes that despite their weakened position, they can still voice their opinions, make demands, or exert whatever remaining influence they have.

So, the expression highlights the fact that even those who are seemingly on their way out or have lost significant power are not necessarily silent or completely ineffective. Their voice can still carry some weight.

Comment Re:Continuous Adaptation (Score 2) 110

All intelligence is prediction. Crystalized intelligence is essentially one-step prediction - you've seen/done it before and remembered, so you can do it again. Fluid intelligence, aka reasoning, is multi-step prediction - chained what-if "tree search" (if one chain/branch of reasoning fails, abandon it and try another).

The key to both types of reasoning is pattern recognition - the more patterns you've experienced (= more experience), the more capable you will be. Reasoning also depends on reasoning/logic patterns (what next step does this problem suggest), which is again something that benefits from experience.

So, no, you're not born with maximum fluid intelligence - you can gain fluid intelligence (reasoning capability) by practice and experience if you choose to.
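The chained what-if "tree search" idea above can be sketched as a toy backtracking search. The problem here is entirely hypothetical (find a sequence of +3 / *2 steps that turns a start number into a target), chosen only to show the branch-abandon-retry loop:

```python
# Toy backtracking search: multi-step "what-if" reasoning as tree search.
# Each recursive call is one predicted step; a branch that overshoots the
# target is abandoned (returns None) and the caller tries the next branch.

def search(state, target, path, max_depth=10):
    """Return a list of steps reaching target, or None if this branch fails."""
    if state == target:
        return path
    if state > target or max_depth == 0:
        return None  # dead end: abandon this chain of reasoning and backtrack
    for name, step in (("+3", lambda x: x + 3), ("*2", lambda x: x * 2)):
        result = search(step(state), target, path + [name], max_depth - 1)
        if result is not None:
            return result  # this chain of what-ifs worked
    return None  # every branch from here failed

print(search(1, 14, []))  # 1 -> 4 -> 7 -> 14
```

Crystalized intelligence, in this picture, is just a cached one-step lookup ("I've seen this exact state before"); the search only kicks in when no remembered answer applies.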
