Comment Re:This was known, the interesting part is... (Score 1) 36

Yeah - only 5% of ChatGPT users pay for it.

OpenAI's focus, and revenue, comes mostly from ChatGPT, as opposed to Anthropic's, which comes mostly from API use (coding + business use).

API use seems more scalable than interactive ChatGPT use, which is limited by the number of humans on the planet, with users willing to pay evidently being a tiny percentage of that. OpenAI already has 800M weekly users, so you are not going to see a 10x increase from there unless they start selling to aliens.

API use has almost no limit since it's driven by computer use, not humans, and other than trial accounts, almost all API usage is paid for.
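
As an aside, here's a minimal sketch of what that metered, machine-driven API use looks like in practice (Python, using the OpenAI SDK; the model name, prompt and inputs are just placeholders, not anything from the story) - a script can fire off thousands of these calls with no human in the loop, and each one is billed per token:

# Minimal sketch of programmatic (API) usage; model name and inputs are placeholders.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

documents = ["first document text", "second document text"]  # placeholder inputs
for doc in documents:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": "Summarize this document: " + doc}],
    )
    print(response.choices[0].message.content)  # every call here is metered and paid for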

Comment Re:This was known, the interesting part is... (Score 1) 36

Most of the talent at OpenAI ends up leaving... typically for Anthropic. Of course, if you pay enough you'll get employees, but it seems that's more despite the management than because of it.

AFAIK only 3 of the 11 original founders are still there (Altman, Brockman & Wojciech Zaremba).

Comment Re:Translation (Score 1) 75

Well, no... that's not what he meant, even if those are some of the outcomes of the spread of AI.

Huang is clearly trying to boost Nvidia's fortunes, and his own, by appealing to this supposed AI race with China in order to get US policy changed to his benefit.

He's obviously positioning a US win in this "race" as a desirable outcome, so he's going to focus on whatever aspects are seen as positives by the US lawmakers he is trying to influence.

Comment Re:TutTut Chicken Little (Score 3, Informative) 75

Actually, Nvidia NOT selling to China (either because China doesn't want to buy, which is what they are now saying, or because of US restrictions) is going to ACCELERATE Chinese AI chip development out of necessity (even if that necessity is imposed by the Chinese government as a strategic push toward self-reliance).

The Chinese power advantage is real: huge amounts of hydroelectric as well as solar. Trump is shooting the US in the foot here by hampering solar, and the AI companies themselves are talking about putting AI datacenters in space(!), which doesn't sound exactly cost-competitive!

Comment Re:I believe in Commander Dong (Score 2) 29

> commander Chen Dong and crewmates Chen Zhongrui and Wang Jie

Please stop making jokes - this is a serious matter.

Commander Dong and his crewmates Me Hungrui and Wang Pee just want to get home for obvious reasons.

It reminds me of the plane crew Sum Ting Wong, Wi Tu Lo, Ho Lee Fuk and Bang Ding Ow. :)

https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...

Comment Re:But how did they taste? (Score 1) 9

Yeah, dinos ruled the earth for hundreds of millions of years... then basically evolved into chicken nuggets.

Maybe Homo sapiens will have a similar fate: due to online life and lack of physical activity, we'll evolve into a nice limbless brain snack for some future species to factory farm?

Comment Re:Self-accelerating decomposition (Score 5, Insightful) 96

Except the dark fiber was eventually used since it's generic and doesn't have any expiration date.

A datacenter full of rapidly obsolete kit, running on an uneconomical power source, may just sit unused unless the building shell can be repurposed for something else.

Comment Re:what in the actual F is this "story" (Score 1) 49

Well, the story is that Anthropic's business model is BETTER than OpenAI's, which I'd tend to agree with, and that does seem somewhat newsworthy given that their valuations suggest the opposite.

OpenAI seems to be emphasizing their chatbot, ChatGPT, whose revenue is limited by the number of people on the planet and how much they are willing to pay for it.

Anthropic is emphasizing coding, now agentic coding, and API use, which are all much more token-hungry and scalable use cases.

Comment Bad spin (Score 1) 25

No doubt those behind this "rebranding" of AI data centers as "AI factories" think it will improve public perception, but I'm guessing it will backfire.

If AI takes off to the extent the promoters think it will, and eliminates large numbers of jobs, or has other negative effects such as driving up electricity prices, or just causes societal disruption and enshittification, then branding these buildings as "where the enshittification comes from" seems like a bad idea.

Comment Of course (Score 1) 60

> Should Workers Start Learning to Work With AI?

Well, duh, yes... AI is here to stay, and will only get better and more useful, even if it remains a very "jagged" capability: great for some things and entirely useless for others.

Part of the skill is therefore to understand and learn what AI (I assume we're mostly talking about LLMs) is useful for, and where it should be avoided. Many companies are currently making huge mistakes by trying to apply it in areas it is not suited to... basically wishful thinking. It'd be like trying to launch a transatlantic airline passenger service after having seen the Wright brothers' first flight, when in fact you needed to wait another 40 years or so for the tech to mature before that was possible.

But, sure, there are use cases for AI today that make sense, and people should be learning what those are, and how to use it.

Comment Re:Tools have always augmented work (Score 1) 60

> Search and Summarize, Generate content

> Producing content has never been a problem, internet is full of it.

LLMs are not good at de novo content generation (vs. summarization etc.) in general, but one exception is coding. They are good at generating boilerplate and code for simple repetitive tasks, or vibe-coded prototype and throwaway code where quality doesn't matter, and they can also be used for more serious software development if used appropriately: not "write me a program to do X" vibe coding, but as a tool for each step of the development process, from refining requirements to brainstorming architectures, planning development, etc. (see the sketch below).
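
As a rough sketch of that "tool for each step" idea (Python, using the Anthropic SDK; the model name and prompt text are placeholders I've made up for illustration), one small step - refining requirements - might look like this, rather than one giant "write me the program" prompt:

# Sketch only: one step of the process (requirements refinement), not one-shot vibe coding.
# Model name and prompt text are illustrative placeholders.
import anthropic

client = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY from the environment

draft_requirement = "Users should be able to export their reports."  # placeholder input
response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model name
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": "List the ambiguities and edge cases in this requirement: " + draft_requirement,
    }],
)
print(response.content[0].text)  # a human reviews this, then moves on to architecture, planning, etc.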

As you say, LLMs are just a tool, not about to automate away anyone's job other than in the most extreme cases of completely mindless, repetitive tasks, like call-center support jobs where the worker is just reading from a script.

One day (10-20 years or more away) we will have human-level AGI, which may or may not have anything to do with LLMs, and then everyone's job, including all of management up to and including the CEO, could be automated, but I doubt that is the way the future will actually play out.
