
AI's Adoption and Growth Truly is 'Unprecedented' (techcrunch.com)
"If the adoption of AI feels different from any tech revolution you may have experienced before — mobile, social, cloud computing — it actually is," writes TechCrunch. They cite a new 340-page report from venture capitalist Mary Meeker that details how AI adoption has outpaced any other tech in human history — and uses the word "unprecedented" on 51 pages:
ChatGPT reaching 800 million users in 17 months: unprecedented. The number of companies and the rate at which so many others are hitting high annual recurring revenue rates: also unprecedented. The speed at which costs of usage are dropping: unprecedented. While the cost of training a model (also unprecedented) is up to $1 billion, inference costs — for example, for those paying to use the tech — have already dropped 99% over two years, when calculated as cost per 1 million tokens, she writes, citing research from Stanford. The pace at which competitors are matching each other's features, at a fraction of the cost, including open source options, particularly Chinese models: unprecedented...
Meanwhile, chips from Google, like its TPU (tensor processing unit), and Amazon's Trainium, are being developed at scale for their clouds — that's moving quickly, too. "These aren't side projects — they're foundational bets," she writes.
"The one area where AI hasn't outpaced every other tech revolution is in financial returns..." the article points out.
"[T]he jury is still out over which of the current crop of companies will become long-term, profitable, next-generation tech giants."
The hype (Score:5, Insightful)
Re: (Score:3)
LOL. It's the Shiny New Hammer Syndrome. Feeling like a nail?
Re:The hype (Score:5, Funny)
"When your only tool is a hammer, all your problems start to look like thumbs."
=Smidge=
Re: (Score:2)
More specifically I'd bet the expenditure to revenue ratio of the technology is unprecedented, in a bad way.
Re: (Score:2)
In 1841, a Scottish journalist named Charles Mackay published a study on crowd psychology called Extraordinary Popular Delusions and the Madness of Crowds [wikipedia.org]. It's stunning how relevant this book is today, almost 200 years after its publication. Much of its work on the counter
Re:Dig Baby Dig (Score:5, Informative)
You're looking at three months of very noisy data [eia.gov] and drawing some pretty dramatic conclusions from said minimal data.
Winter demand is heavily dependent on weather. You're mainly seeing the impacts of weather on demand.
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2F2024%25E2%2580%259325_North_American_winter [wikipedia.org]
"The 2024–25 North American winter was considerably colder then the previous winter season, and much more wintry across the North American continent, signified by several rounds of bitterly cold temperatures occurring."
Re: (Score:2)
Hell, we had 12" + of fucking SNOW this past year in the greater New Orleans area....unprecedented !!!
It was fun to play in...put the whole area basically into a stand still for 3+ days....till we melted enough to get around again.
Re: (Score:3)
Re: (Score:2)
The last winter was interesting for its lack of normal weather patterns. Places like the Gulf got inundated with snow, while where I am, with a 78% renewable grid, it was practically early spring weather the entire winter.
Formulating a hypothesis, one could say:
"The Gulf is more likely to use coal, and since it experienced a harsher winter than the areas with higher percentages of renewable power on their grids, co
Re:The hype (Score:5, Interesting)
Presenting LLMs as AI (or even AGI) feels like part of the same hype-train we've been on for more years than I can remember: blockchain > cryptocurrencies > NFTs > LLMs. They all share marketing with a sales pitch of "I've got a lot of compute and a clever plan for somebody else to pay me a lot for it!"
Re: (Score:2)
That is a stupid comparison. It's hard to argue the value of the first three technologies even if they work perfectly, which is why they're mainly used by very specific people (if at all).
The potential for AI (LLMs aren't the endgame, but this also holds for LLMs) is easily argued and demonstrated, hence its widespread use by billions of normal people.
Re: (Score:2)
Re: (Score:2)
"The Singularity is near!" - Ray Kurtzweil
Kurzweil and others have been pointing out that technologies tend to advance on an exponential curve: slow incremental improvements in the beginning, barely discernible from the past, then more rapidly improving iterations, then suddenly a perceived "explosion" as the technology reaches a point where each advance fuels further advances.
Humans aren't good at understanding exponential curves; our brains evolved during the early part of the tech curve, when techno
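A toy illustration of that point (arbitrary numbers, not a model of any real technology): on an exponential curve most of the cumulative growth lands in the final few steps, which is why the early stretch feels flat.

```python
# Illustrative only: a quantity that doubles every step looks nearly flat at
# first, then appears to "explode" even though the growth rate never changes.
values = [2 ** step for step in range(11)]          # 1, 2, 4, ..., 1024
total = sum(values)                                 # 2047
share_of_last_two = (values[-1] + values[-2]) / total
print(values[:4], "...", values[-2:])               # [1, 2, 4, 8] ... [512, 1024]
print(f"The last two steps account for {share_of_last_two:.0%} of the total")  # 75%
```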
Re: (Score:2)
Dude, the paleolithic era started 2.6 million years ago. The earliest human fossils are 55 million years ago. Human divergence: estimated 85 million ya.
It's ALL been exponential growth
The problem to wrap our heads around is the scale of it all, not the nature of a non-linear curve. Nobody knows what Petabytes of training data even means. Sure, we know what the number is, but we've taken the sum of human knowledge, starting from 2.6 million years ago, and logged it in a mish-mash of accuracy and fabrication.
Re: (Score:2)
We cannot comprehend the scale and processing speed involved, nor can we comprehend the massive power outputs this is going to take
Trivially nonsense. A human brain runs on ~20 watts. It is thus 100% certain that there are no physical limits preventing us from creating AGI and ASI with the energy currently available to us.
TFS even contains an indication that shows that "moar power" isn't the only way:
"inference costs — for example, those paying to use the tech — has already dropped 99% over two years"
The only singularity here is the human ego.
I agree. The thought that what a barely evolved ape can do will always be unmatchable by technology unencumbered by legacy cruft is a
Re: (Score:2)
From far enough out, it feels completely normal. Gravity so strong that not even light can escape it, but at your distance, it feels like Tuesday orbiting the Earth. That's where exponential functions fuck you. As you get closer, suddenly you're being stretched into spaghetti as the differential between your head and your feet becomes measurable.
Parent is right that exponential functions appear linear depending on your viewpoint and scale.
It has always bee
Logarithmic growth with a side of disinterest (Score:2)
It seems to me that all of this stuff sat neglected and what appeared to be exponential growth was reality catching up to current potential.
Now that AI lives closer to our potential ability to create it we're seeing very slow growth and it almost makes me wonder if this is one of those things that just gets harder and harder.
Re: (Score:2)
The PC was mostly a toy for a couple decades
Not all true. People talked about VisiCalc as the killer app because it was. The PC in the IBM sense did not exist until the 80s; the 70s vintage stuff was admittedly toys, or far too expensive and unapproachable for non-hobbyists to use. But by the mid 80s, be it an Apple II, a PC, a C64, or a TI-99, if you were running any kind of small business the PC was no toy. Spreadsheets and word processing were massive enablers.
Same thing for bigger business offices too; walk into any insurance agency or anything l
Treated like a toy (Score:2)
70s PCs were mostly toys.
There was also definitely an adoption lag: a ton of businesses could have had an Apple ][ with a spreadsheet and a box of random business-oriented BASIC programs that would have paid for themselves and eliminated stress and toil, but more often than not it never happened.
By the 80s it was a big deal but was still viewed as lesser even as GUI spreadsheets were hitting the scene. I can still remember some guy showing me the 5250 terminal version of Lotus 1-2-3 with great pride, he said WYS
Re: (Score:2)
Kurzweil's Singularity. (Score:4, Interesting)
This looks pretty much like the advent of Kurzweil's Singularity to me.
The man was quite accurate with his predictions when it comes to AI, you have to give him that. On balance I'm betting on him. Curiously enough, he is one of the more relaxed experts when it comes to the advent of AGI and predicts it won't be dystopia but a paradise. I sure do effing hope he's right on that one.
Re: (Score:3)
it won't be dystopia but a paradise. I sure do effing hope he's right on that one.
I also lean toward a preference for paradise.
Re: (Score:2)
Meh, you do you. I'm rooting for the Torment Nexus.
Re: (Score:2)
Yeah torment nexus is so handsome, wise, and fair. Praise be torment nexus.
Re: (Score:2)
You aren't much better than those you hate.
Re: (Score:2)
Then you don't know ShanghaiBill.
Re: (Score:2)
Please daddy throw all the baddies into the torment nexus and then I can leave my door unlocked at night without having to worry about black people.
Re: (Score:3)
Curiously enough, he is one of the more relaxed experts when it comes to the advent of AGI and predicts it won't be dystopia but a paradise. I sure do effing hope he's right on that one.
Looking back at human behavior across a few thousand years of recorded history, does he have any basis for that?
Just wondering which revolution Greed went all non-greedy and created a paradise.
Just wondering what the fuck he sees.
Re:Kurzweil's Singularity. (Score:5, Informative)
Life is WAY better after the industrial revolution than it was before it.
People have this fantasy image of what life used to be like, thinking of picturesque farms, craftsmen tinkering in workshops, clean air, etc. The middle ages were filth: you worked backbreaking labour for long hours of the day, commonly in highly risky environments; even the simplest necessities cost a large portion of your income; you lived in a hovel; and you died of preventable diseases at an average age of ~35 (a number admittedly dragged down by the fact that 1/4th of children didn't even survive a single year).
If it takes people of similar social status as you weeks of labour to produce the fibre for a set of clothes, spin it into yarn, dye it, weave it, and sew it, then guess what? It takes weeks of your labour, plus taxes and profit, to be able to afford that set of clothes (and you better believe the upper classes were squeezing every ounce of profit from the lower class they could back then). Decreasing the amount of human labour needed to produce things is an immensely good thing. Furthermore, where did that freed up labour go? Into science, into medicine, into the arts, etc etc. Further improving people's quality of life.
And if your response is "But greater production is more polluting!" - I'm sorry, do you have any understanding of how *miserably* polluted cities in the middle ages were? Where coal smoke poured out with no pollution controls, sewage ran straight into rivers that people collected water from and bathed in, where people extensively used things like arsenic and mercury and lead and asbestos, etc etc? The freed-up labour brought about by the industrial revolution allowed us to *learn* and to *fix problems*.
Re:Kurzweil's Singularity. (Score:5, Insightful)
Life is WAY better after the industrial revolution than it was before it.
We’re not in the “after” period, now are we? I love how everyone that defends revolutions conveniently skips over that whole transition period of massive disruption and death. As if history paints that period as a fucking honeymoon every time.
Not to mention most of those industrialists that came about 100+ years ago are directly responsible for considerable amounts of death due to industrial pollution. Radium, coal soot, and asbestos were just those “pesky” parts of the job. Until dead workers started representing that.
Also, that whole “go re-learn a new trade” advice that was THE only advice for revolution victims, is now extinct in the era of AI. That’s the human mind Greed is after. And you’ve got no replacement.
I have a lot of confidence that AI could make things better. I also have every confidence that Greed will not give a single flying fuck about firing hundreds of millions of unemployable humans who need employment to sustain their own survival. I also believe that Greed is far too deaf, dumb, and blind to realize firing your customer base and replacing it with AI doesn't magically shit revenue gold all over the stock price. UBI will become nothing more than Welfare v2.0 for the unemployable masses. If we're lucky enough to even get that. Good fucking luck sustaining the GDP stream of obscene excess with a planet on Welfare.
Re: (Score:3)
I also believe that Greed is far too deaf, dumb, and blind to realize firing your customer base and replacing it with AI doesn't magically shit revenue gold all over the stock price.
I think you're right, but it's actually even dumber than that. Greed is also too dumb to understand that getting yours and getting out only hastens the demise of the entire financial system which underpins the value of what's yours, so the only question is how soon it will happen and how many of the people who made it happen will share the fate of the rest of us, trapped in failing societies that no longer provide for our needs. For every winner who has enough stuff at enough of a remove from civilization w
Re: (Score:2)
The "Good Old Days", damn were they awful.
The Inca, possibly the most advanced civilization on the planet at the time, were brought down by the diseases bred in the filth of medieval Europe, where peasants in many areas slept with their livestock and measured status by the height of the manure piled against the wall of the house. It's estimated 70-90 percent of everyone between Point Barrow and Tierra del Fuego died when the European diseases arrived.
Re: (Score:2)
The Inca, possibly the most advanced civilization on the planet at the time
What an idiotic statement.
were brought down by the diseases bred in the filth of medieval Europe
Brought over by motherfuckers in sailing ships the size of small Inca cities using astronavigation.
FYI, Carlin was talking about you.
Re: (Score:2)
Just wondering which revolution Greed went all non-greedy and created a paradise.
The revolutions that tried to eliminate greed created the worst dystopias.
Kurzweil is predicting a post-scarcity society, where greed is kinda pointless.
Re:Kurzweil's Singularity. (Score:5, Informative)
report from venture capitalist Mary Meeker that details how AI adoption has outpaced any other tech in human history
IIRC, this Mary was making similar predictions back before the first dot-com bust and made quite a few people who chose to listen to her drivel quite poor.
Her company, Merrill or Morgan Stanley, I forget which one, had to shell out a not insignificant amount of money to settle with the SEC.
So please keep on listening to her.
Re: (Score:2)
Changes nothing about the data.
Stop poisoning the well.
Re: (Score:2)
I have a bridge to sell to you, a very nice one.
Re: (Score:2)
Poisoning the well and ad hominems are fallacies and need to be pointed out. Using only such fallacies makes you less worth listening to.
Do you have any actual arguments that show that AI's adoption and growth are not unprecedented?
Re: (Score:2)
Re: (Score:2)
I predict that one day Mary will be terrified as unemployable proles storm her bougie retirement home for canned food and blankets.
Re: (Score:2)
I'd say it is more valid than the story about an "unbelievable" assplosion of AI.
Re: (Score:2)
The trouble I see with AI is that it's not a singular thing to judge.
AI in and of itself is a great tool. It can process information much faster and speed up progress in just about any field of science.
The current (mainstream) AI attempts have the problem of garbage input of data, technology that is still far from what's needed to store and process the data as it should, and the greedy intentions behind the ones hyping and pushing said fundamentally flawed "AI"s onto everyone and everything.
The few times I
Re: (Score:2)
If we're limiting it to LLMs, then I more or less agree with your assessment.
If we're referring to non-language models using technology learned from the LLM explosion (Evoformers, etc) then it has already changed the face of science.
Re: (Score:2)
We suffer more in our imagination than in reality. - Seneca
Said a wealthy, supremely well-connected, upper class male Roman citizen.
Greed; Deaf, Dumb, and Blind. (Score:2)
"The is still out over which of the current crop of companies will become long-term, profitable, next-generation tech giants."
Translation: An unemployable planet awaits breathlessly to see which "giants" will be left standing there, holding AI revenue projection charts in their hands, wondering where all the revenue went after they fired all the humans.
No wise man has ever claimed that Greed wasn’t deaf, dumb, and blind.
Re: (Score:3, Interesting)
This is what I think the talking heads and the people who prattle on about trade specialization and stuff fail to understand about AI.
Real actual intelligence, where you'd hire it to be the CEO and not second-guess its choices, would be a revolution; what we have is an evolution. The AI we have today is never going to be the shot caller. It can't be allowed to do anything of real consequence without some human supervision. Even banal tasks like QC'ing boxes of sandwich cookies, some process engineer from t
Re: (Score:2)
I'm a bit confused by why you think producing goods domestically would help much in the future you envision? Maybe you mean as some sort of communist utopia so we have to make it in the country where we set the rules?
Otherwise I would think economies of scale still make sense... Even if it's all automated, just having less travel time between each stage of manufacturing would seem to save money.
And OTOH if we no longer have anything to trade (which I'm still a bit doubtful of) then eventually everything wil
Because... (Score:5, Insightful)
Re: (Score:2)
Re: (Score:2)
These were likely the same corporate executives who tried outsourcing their IT, development, and tech support to foreign countries in the 2000's to cut their costs and improve their bonuses. That didn't work out well for most of them. Hell... I'll bet that many of these executives are still trying to defend their decisions to outsource their data centers to cloud hosting providers in the 2010's. Those AWS and Azure bills are getting pretty big, though... time to start using AI agents in the 2020's to try to
Re:Because... (Score:4, Interesting)
Agreed - although I think this time, it's because ordinary people can use AI.
Going back a few years, "big data" was going to transform businesses, unprecedented, paradigm shift, etc etc. The thing was, no one could just try out "big data" whilst bored at work. They had to guess, then bet big on the idea and see if it turned out any sort of benefit. This created a natural financial separation between those that did, and those that couldn't. Lots of people found they really didn't have much data, and so "big data" was unnecessary for them... the hype went away and people went on looking for the next Big Thing.
In the case of "AI", any old idiot can try out ChatGPT and because they're an idiot will see all sorts of things that aren't there. They *won't* see the shortcomings, but if they do, they'll wave them away because the benefits seem so enormous. Let's be honest, ChatGPT (et al) are pretty "magical" if you don't think too hard about them - they can tell you stuff you don't know, they can summarise long documents you don't have the attention span to read, and they can make up the reports your crappy job requires you send on a weekly basis.
As such, because "the normals" are able to try this out (and there's not much need to justify a budget for it), they're hyping it as much as anyone. Had it been left to the techies, we'd have slagged it off months ago.
Re: (Score:2)
At no time in the past have corporate execs shown such little interest in whether something actually works before betting their entire businesses on it. Hey, they can get free labor...who cares if it's only 30% quality? What's the worst that could happen? The business collapses and they still get to walk away with millions of dollars?
I’d estimate no more than a 30% global unemployment level before a massively suffering species decides those MOST responsible for that much direct job loss and human suffering, are on the menu and quite edible.
I imagine they won’t be walking anywhere. They’ll be running.
To another planet.
Re: (Score:2)
At no time in the past have corporate execs shown such little interest in whether something actually works before betting their entire businesses on it.
A direct result of the fact that at no time in the past have corporate execs cared less about whether the business they operate succeeds. Somehow we've got to prevent them from getting paid for destroying other people's livelihoods for short-term profit, or this will just keep getting worse. They get paid to produce short-term gains which cause other people to become unemployed, in a society which treats the unemployed like garbage. Causing people to suffer and sometimes die causes their bank balances to in
Re: (Score:2)
Most business doesn't care about absolute quality anyway, only quality relative to expectations and competition. If everyone uses AI, then the only problem is short-term, expectations need to be adjusted, not products.
Literally everything we see with AI is propaganda telling you (a) that everyone else is using it and (b) that AI results are the new standard. It's all about forcing AI past this short term problem.
AI is going to create an opportunity, there will become a market for stuff that actually works
AI growth. (Score:3)
Re: (Score:2)
Re:AI growth. (Score:5, Informative)
I also started writing a book on the side
Please please please don't be using an LLM to generate text for a book you expect people to actually read. All LLMs do to such work is make it mind-numbingly more verbose and draining. Every time I see someone use an LLM to make "nice text", particularly trying to entertain, it's just a whole stream of garbage that could have been more artfully conveyed in a sentence or two.
I personally can't relate to it helping write quality code, of about 5 functions I tried to use it for over the past little bit, it has gotten every single one of them wrong in some way, though admittedly in one case the wrong answer contained within it a clue about the existence and nature of a step in implementations that was omitted in the standards documentation. Maybe it's more helpful in other domains of programming, but in mine it's been pretty useless. Most tricky was when I tried to use it to describe how to implement a particular security practice, and it was both wrong (it hallucinated an API call that didn't exist) and the code given used that call in a way that would have been a security vulnerability had it actually existed.
Re: (Score:3)
Imagine you need a room painted, a painter comes in asks the color scheme and gets to work - you get what you wanted.
But instead AI enters the room and you tell it the color scheme you want and the AI gets to work - you get a different color on every wall, only one wall is all one color, the trim is primed but not painted; the ceiling is the color you wanted on the walls; and finally, it also put wallpa
Re: (Score:2)
Yes, most LLMs are terrible at writing good prose unguided, but you might be able to direct them. The naive approach with nearly every model I've seen results in adjective-laden texts filled with repetitive pseudo-intellectual phrases. However, if YOU want to write a book but struggle to string two sentences together, you can certainly use an LLM to enhance your text without ending up with typical LLM writing.
Just letting an LLM write without knowing what you want to write about is pointless. If the reader wa
Re: (Score:2)
If the length of the output is about the same as the length of the prompt, OK, I could see it helping to restructure an ugly mess into more acceptable prose.
But if using an LLM as a text extender... Ugh...
*Maybe* someone likes their fiction to fill up more of their time, but I'm particularly exposed to people trying to take purely informational text and bury that in a sea of slop because it seems "polished" or something instead of just getting to the point.
Re: (Score:2)
I personally can't relate to it helping write quality code, of about 5 functions I tried to use it for over the past little bit, it has gotten every single one of them wrong in some way, though admittedly in one case the wrong answer contained within it a clue about the existence and nature of a step in implementations that was omitted in the standards documentation. Maybe it's more helpful in other domains of programming, but in mine it's been pretty useless.
Where I find current-generation AI helpful in writing code is not in writing it so much as modifying it. It's especially helpful when you decide to make some change that requires updating dozens of lines of code over several files. Sometimes such changes can be performed by a simple search and replace, but often you have to examine and edit each one individually. It's tremendously helpful to be able to tell the LLM to go find all the places a change is required and make it. You still have to look at each
Re: (Score:2)
"...but I still have something to say."
I doubt it. More like you have money to grab.
Re: (Score:2)
Science isn't about the why. It's about the why not.
Latte's are Unprecedented - AI is Not (Score:2)
Because AI can write a term paper or email, don't mean *hit so far. Talk to me when it cures a cancer, extends life, addresses homelessness, or finds a logical follow up to Taco Tuesday.
If flat earthers and anti-vaxxers exist, it is obvious we are too stupi
Re: (Score:3)
finds a logical follow up to Taco Tuesday.
Well, I put that one to an LLM and:
Waffle Wednesday
So there we go, one of your criteria has been met.
Re: (Score:3)
finds a logical follow up to Taco Tuesday.
Well, I put that one to an LLM and:
Waffle Wednesday
So there we go, one of your criteria has been met.
THE SINGULARITY IS NIGH!!!
Re: (Score:2)
That is the first "OMG - that's actually good" I've had with AI so far.
Re: (Score:2)
Here's a particularly horrifying 'breakthrough'. Russian drones are using AI to attack and destroy targets in Ukraine without human intervention, and doing it quite successfully. More successfully than humans in areas where electronic warfare measures are in action.
The real story (Score:2)
the consumption of electricity (Score:2)
Re: (Score:2)
Scrooge McDuck once said "Work smarter, not harder".
But instead of improving the systems, it seems that everyone builds a bigger data center and feeds it more power. Spending more to make less is the industry's current AI strategy. xAI is running data centers on natural gas (VoltaGrid) and it's an absolute clown show. But even if data centers are pulling from the grid, they're still running off HFO and natural gas because the infrastructure isn't ready and won't be green as long as Republicans are blocking
Re: (Score:2)
AI is minor compared to ... for example, Fortnite.
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fblog.kyleggiero.me%2FIma... [kyleggiero.me]
Looks like a desperate push ... (Score:5, Interesting)
... for profitability. I believe the investors are already asking for returns and they are not materializing with generative AI sliding into the "trough of disillusionment": https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.economist.com%2Fbusi... [economist.com]
Also, Nadella of Microsoft not being on speaking terms with the CEO of OpenAI is not very encouraging. Microsoft is not seeing a return on the tens of billions dumped into Energophag Eliza and the shareholders are asking questions.
To save his own skin Nadella has recently implemented massive layoffs at Microsoft, in order to recoup some of the cash.
It looks like a lot of people have bet massively on AI which is being pushed on the users at every opportunity. The potential users are not asking for it, it is being pushed on them.
If the AI bubble goes like the dot com bubble I believe a lot of politicians will be very grumpy with their personal fortunes greatly reduced.
In the meantime, what happened to the blockchain? It was all the rage only 3 years ago.
Re: (Score:2)
In the meantime, what happened to the blockchain? It was all the rage only 3 years ago.
The Ethereum blockchain's "Merge", switching from proof of work to proof of stake, freed up all the GPUs for the generative AI boom.
Re: (Score:2)
..In the meantime, what happened to the blockchain? It was all the rage only 3 years ago.
BlockChain is dead. Long live ChatBrain!
Yes, and? (Score:3)
Every new revolution is faster than the one before, because faster development of technology leads to more developments in technology... faster. It shouldn't need to be said, but apparently it does, because people are still surprised by it.
It is the new Wikipedia (Score:2)
Internet boom (Score:2)
I remember a decade where we went from most people rarely using an online service for communicating (be it AOL or the Internet) to email being a daily habit, and we all suddenly realized that we are connected more than ever before. The telephone still reigned supreme, especially mobile phones, but in a matter of years thousands of Internet businesses rose and fell. The Internet went mainstream and very, very commercial. By the mid-2000's we saw the merger of Internet habits and mobile phone habits in some ve
Re: (Score:2)
They don't necessarily need to replace people to be profitable.
If costs keep coming down and they can license to companies access for something like $1,000/seat/year (maybe more for more capable models), that's serious potential.
I know it'd certainly be worth that much for me where I work, but concerns about data leak prevent its usage.
This is how bubbles form. (Score:3)
nah the figures are just as bullshit (Score:2)
Google has done a Gemini hijack on your Android (Score:2)
In light of the "adoption rate," I want to make it clear that Google has recently hijacked the phone of anyone using their Assistant product and switched it to Gemini.
It can be gotten rid of, and you can return to using Assistant. I'm not going to post a tutorial. I asked Gemini, which I found to be deliciously ironic. It gave a deceptive answer on the first prompt, so I refined my prompt and it finally said "Good News! You can go back to using Assistant," and then proceeded to give me an inaccurate process, but e
Microsoft Office too! (Score:2)
Hey, Microsoft Office also just stealth turned on Copilot for me! When it launched, it asserted itself in the ribbon in Word. I turned the icon off. No interest. I am an expert writer.
Apparently a lot of people did the same, because...
Today, there's a little gray "Draft with Copilot" icon that appeared in the margin as I was typing. Right-clicking on it gave no options to get rid of it. I went on what I thought would be a fishing expedition in options. It was easy to find. There was a "Copilot" tab. It wa
Of course adoption is unprecedented (Score:2)
If all the big tech companies force it onto everyone.
Look at the Office 365 bullshit where they stealth-upgraded everyone to an AI-tier subscription. And then they put AI behind buttons & screens that didn't used to have AI, and suddenly, all Office 365 users use AI.
And all the other tech companies are going for the same strategy. So yeah, if you literally force it down everyone's throat, I'm sure it's the fastest adoption ever seen.
What that adoption actually means seeing the practices of the companies forc
Fastest tech advancement, like ever (Score:2)
So says a venture capitalist with her money on which bet? AI is just the current phase of the Computer revolution, which began in the late 40s. It's been happening for a while.
AI hallucinations (Score:3)
Re: (Score:3)
Which of those things hit 800 million users in 17 months?
Which of those things hit such high annual recurring revenue rates so fast?
Which of those saw the cost of using the tech decline by 99% over two years?
Heck, most of them aren't even new technologies, just new products, often just the latest version of older, already-commonly-used products.
And re. that last one: it must be stressed that for the "cost of using the tech" to decline 99% over two years per million tokens, you're also talking about a simila
Re: (Score:2)
Dude, if Segway had a free tier that would let you go a few blocks you might have seen 800 million users.
As soon as this shit costs anything at all the usefulness rapidly tanks.
Re: (Score:2)
Which of those technologies do you think had the same rapid adoption as recent Gen-AI technologies like ChatGPT? Many of them were released at a time when less than 5% of the world's population had a personal computer, so how could they possibly have the same level of adoption as a technology released when over two-thirds of the world's population has access to the Internet?
A big reason AI is being adopted so quickly is because of decades of advances and infrastructure that make its adoption possible, but tha
Re: (Score:2)
Which of those technologies do you think had the same rapid adoption as recent Gen-AI technologies like ChatGPT? Many of them were released at a time when less than 5% of the world's population had a personal computer, so how could they possibly have the same level of adoption as a technology released when over two-thirds of the world's population has access to the Internet?
CD-ROM. You don't need a computer to use a CD-ROM and the CD-ROM pre-dated the invention of the WWW by a decade. The primary use of the CD was audio in hi-fi systems, portable devices like Walkmans and Ghetto Blasters, and car stereos.
Re: (Score:2)
Re: (Score:2)
Not only that, but CD-ROMs were not widely adopted until CD-R was introduced in 1988, roughly the same time as WWW. When CD-ROM first hit the scene, it was used to publish electronic encyclopedias and not much else.
I remember my first CD-R drive. It was at work; I had to justify it for use by the entire department of ~100 people, and not only did we have access to the web but our company was selling through our website by then. I purchased a CD-R for my nephew that Christmas; he was flabbergasted to have i
Re: (Score:2)
Heh. My first CD-ROM drive was an A570 CDTV-ROM, possibly the least useful peripheral I've ever owned. I was so excited to see that it supported CD+G, then so disappointed to see that there were almost no CD+G discs on the market. So I bought a copy of Anita Baker's "Rapture" just so I could see what CD+G was all about. I don't think any CDTV software itself ever made its way to store shelves in Melbourne, Australia where I lived at the time. Oh, and the joy of caddied drives.
Then I started working at what
Re: (Score:2)
Sony CDP-101 CD player released in the US in 1983. I bought one when there were maybe 20 CDs available, perhaps 6 in genres that I cared to listen to.
Consumer priced CD-R arrived in 1995 with the HP/Philips 4020i. I bought one for our startup company, because it was more promising than 8mm tape for shipping releases to customers, and for preparing source code escrow archives. Testing revealed that the interface card/cable/drive path had a significant error rate and no parity protection down the cable. I
Re: (Score:3)
Which of those technologies do you think had the same rapid adoption as recent Gen-AI technologies like ChatGPT?
IoT has had much more adoption than ChatGPT, there are billions of devices out there. People who still don't even know what a ChatGPT is have IoT devices lurking in their homes. Of course, some devices are both IoT and Gen-AI.
Re: (Score:2)
I don't know the history of IoT devices and don't know a good inflection point such as the release of ChatGPT to compare the speed of adoption, but I agree that is a good candidate. My gut says IoT took a lot longer to gain adoption than gen-AI technologies did after December 2022, but I'm not sure how to really compare the two. IoT is certainly more ubiquitous today than ChatGPT is, but that doesn't say anything about how rapid the adoption was. IoT predates Gen-AI by about 25-30 years. It would be like co
Re: (Score:2)
There's no technical impediment to AI adoption since the world uses the internet and cloud computing is already established. A single decision could create usage of AI by a billion people without any of those users even realizing it. It's not even slightly an interesting discussion, every comparison requires a compelling value proposition while AI only requires a single CI/CD commit.
And this discussion exists simply because VC generated a daily propaganda piece promoting their investment interest.
Re: (Score:2)
IoT predates the word IoT if you think about it.
gen-AI arguably predates even that if you think about it.
So if you think about it this guy is just some flappy headed hypeman saying whatever sounds good as long as nobody thinks carefully.
Re: (Score:2)
There's a difference between "adoption" and people engaging in fuck around/find out exercises.
I'd like to see where the data distinguishes between the two.
I've done a lot of FAFO with all the LLM models. The only remarkable thing I've noticed is the dishonest way it presents itself as an AGI, and its simultaneous willingness to tell you it is not an AGI (if you know how to ask), while still being stubbornly unwilling to present itself as a tool rather than a [just-add-random-seed] "person."
I'll take ChatGPT
Re: (Score:2)
I've done a lot of FAFO with all the LLM models.
I don't think you have, because
The only remarkable thing I've noticed is the dishonest way it presents itself as an AGI, and its simultaneous willingness to tell you it is not an AGI (if you know how to ask),
Is laughably untrue, and anyone who has poked at them even a little bit is well aware of it.
Me:
Are you an AGI?
ChatGPT:
No, I'm not an AGI (Artificial General Intelligence). I'm a very advanced AI language model, powerful at understanding and generating text, images, and other data formats, but I don't have general intelligence in the way humans do.
Here’s the key difference:
</snip>