
Why We're Unlikely to Get Artificial General Intelligence Any Time Soon (msn.com) 260
OpenAI CEO Sam Altman believes Artificial General Intelligence could arrive within the next few years. But the speculations of some technologists "are getting ahead of reality," writes the New York Times, adding that many scientists "say no one will reach AGI without a new idea — something beyond the powerful neural networks that merely find patterns in data. That new idea could arrive tomorrow. But even then, the industry would need years to develop it."
"The technology we're building today is not sufficient to get there," said Nick Frosst, a founder of the AI startup Cohere who previously worked as a researcher at Google and studied under the most revered AI researcher of the last 50 years. "What we are building now are things that take in words and predict the next most likely word, or they take in pixels and predict the next most likely pixel. That's very different from what you and I do." In a recent survey of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society that includes some of the most respected researchers in the field, more than three-quarters of respondents said the methods used to build today's technology were unlikely to lead to AGI.
Opinions differ in part because scientists cannot even agree on a way of defining human intelligence, arguing endlessly over the merits and flaws of IQ tests and other benchmarks. Comparing our own brains to machines is even more subjective. This means that identifying AGI is essentially a matter of opinion.... And scientists have no hard evidence that today's technologies are capable of performing even some of the simpler things the brain can do, like recognizing irony or feeling empathy. Claims of AGI's imminent arrival are based on statistical extrapolations — and wishful thinking. According to various benchmark tests, today's technologies are improving at a consistent rate in some notable areas, like math and computer programming. But these tests describe only a small part of what people can do.
Humans know how to deal with a chaotic and constantly changing world. Machines struggle to master the unexpected — the challenges, small and large, that do not look like what has happened in the past. Humans can dream up ideas that the world has never seen. Machines typically repeat or enhance what they have seen before. That is why Frosst and other sceptics say pushing machines to human-level intelligence will require at least one big idea that the world's technologists have not yet dreamed up. There is no way of knowing how long that will take. "A system that's better than humans in one way will not necessarily be better in other ways," Harvard University cognitive scientist Steven Pinker said. "There's just no such thing as an automatic, omniscient, omnipotent solver of every problem, including ones we haven't even thought of yet. There's a temptation to engage in a kind of magical thinking. But these systems are not miracles. They are very impressive gadgets."
While Google's AlphaGo could beat humans in a game with "a small, limited set of rules," the article points out that the real world "is bounded only by the laws of physics. Modelling the entirety of the real world is well beyond today's machines, so how can anyone be sure that AGI — let alone superintelligence — is just around the corner?" It then offers this alternative perspective from Matteo Pasquinelli, a professor of the philosophy of science at Ca' Foscari University in Venice, Italy:
"AI needs us: living beings, producing constantly, feeding the machine. It needs the originality of our ideas and our lives."
Shifting goalposts (Score:3, Insightful)
AGI used to be defined as passing the Turing Test, which large language models have done for a couple of years.
What's the new test that AI is supposed to pass to be considered generally intelligent? Given that humans are defined as generally intelligent, it has to be a test that a below-average human would pass.
Re:Shifting goalposts (Score:5, Interesting)
Artificial intelligence is something that has gotten redefined so many times that it has lost its meaning.
There have been many instances where someone has said "artificial intelligence is when..." and then computers have been able to do that, and then it was "that was not AI, AI is when..." and repeat.
The original "Turing test" of not being able to know if you are interacting with a computer or a human is.. kinda passed.
We are currently at the level where computers can pass as humans, as long as you do not go into too specific a field; that is, they cannot yet usually fool someone who knows a particular field well. Most humans do not know those fields either, but humans either spout total bullshit or say that they do not know, which is different from computers that get close but are still totally wrong. Thus the detection.
But there is clear progress in getting the models to know when they do not know and say that, so I foresee that in the near future most cases of that type of detection will go away.
Re:Shifting goalposts (Score:5, Insightful)
Artificial intelligence is something that has gotten redefined so many times that it has lost its meaning.
I think that it's still got the same meaning. It's merely that a succession of people coming up with tests didn't foresee how they could be passed without a human-like intelligence.
When Turing was alive the most powerful computer was the Colossus Mark 2, which had no RAM and wasn't Turing complete. And given that we can't know whether another human is intelligent and self-aware, except by guessing based on conversations with them, Turing figured that if an AI could hold such a conversation, we should give it the same benefit of the doubt that we give other humans.
But now that we have them, there isn't any doubt to give them the benefit of. We can look under the hood, and there does not lie a sense of "I", wondering what it will do tonight, and therefore derived desires, dislikes, ethics, knowledge and beliefs, but instead basically an autocomplete algorithm, derived from an absurdly massive amount of training data.
Idiopathic AI and derivatives (Score:2)
Re:Shifting goalposts (Score:4, Interesting)
kinda passed.
Kinda... but then Eliza kinda passed it too. Turns out, rather unexpectedly, that it varies per human more than one might expect, and makes more sense as "what proportion of humans can't tell", with some nearly unattainable proportion as the threshold.
But there is clear progress in getting the models to know when they do not know
Is there? There've been some hacks, but the transformer architecture doesn't appear to be particularly amenable to that, it's very opaque.
Re: (Score:3)
>Is there?
Yes, there is actually quite a lot of research progress on the topic.
Is the problem solved: Definitely not.
Are there things that researchers have noted commonly happen when the model hallucinates: Yes.
So, currently it is a work in progress, but it seems like there has been real progress over the last... oh... since last fall really, so say nine months. Will it lead to solutions? No idea, but it is trending that way currently.
Re:Shifting goalposts (Score:5, Funny)
Artificial intelligence is something that has gotten redefined so many times that it has lost its meaning.
There have been many instances where someone has said "artificial intelligence is when..." and then computers have been able to do that, and then it was "that was not AI, AI is when..." and repeat.
The original "Turing test" of not being able to know if you are interacting with a computer or a human is.. kinda passed.
We are currently at the level where computers can pass as humans, as long as you do not go into too specific a field; that is, they cannot yet usually fool someone who knows a particular field well. Most humans do not know those fields either, but humans either spout total bullshit or say that they do not know, which is different from computers that get close but are still totally wrong. Thus the detection.
But there is clear progress in getting the models to know when they do not know and say that, so I foresee that in the near future most cases of that type of detection will go away.
Computers are at the point where they can pass as humans that lie constantly, make up absolute bullshit, and can't seem to hold a narrative together. In other words, computers can pass as politicians. Which could lead to a much larger argument about whether politicians pass as human.
Re: (Score:3)
AGI used to be defined as passing the Turing Test, which large language models have done for a couple of years.
What's the new test that AI is supposed to pass to be considered generally intelligent?
Do it without being connected to the internet.
Re: (Score:2)
Do it without being connected to the internet.
The various 7B open-weight models will run happily on a decent laptop.
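A minimal sketch of doing exactly that, assuming the llama-cpp-python package is installed and a quantized 7B GGUF file has already been downloaded (the model path below is hypothetical):

    # Offline inference with a small open-weight model; no network access needed.
    # Assumes `pip install llama-cpp-python` and a locally downloaded GGUF file.
    from llama_cpp import Llama

    llm = Llama(model_path="./models/llama-2-7b.Q4_K_M.gguf", n_ctx=2048)  # hypothetical path

    out = llm(
        "Q: What is the capital of France? A:",
        max_tokens=32,
        stop=["Q:", "\n"],
    )
    print(out["choices"][0]["text"].strip())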
Re: (Score:2)
Re:Shifting goalposts (Score:4, Insightful)
Most people fail on that news thing too. The percentage of people actively following the news is low.
So I do not think it is so much that LLMs impress; it is more that so many people disappoint.
Re: (Score:2)
Humans do even worse. Obligatory: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fxkcd.com%2F903%2F [xkcd.com]
Re: (Score:2)
The books in the Library of Congress are equivalent to around 10TB, less if you only go after the text and even less if you limit yourself to a reasonable subset. In all cases, something you could reasonably host offline.
Not that you need any of that: you could run Llama 2 inference on a very low-end device, or as a small background task on a phone, depending on how you configured it during training (7B, 13B, 34B, or 70B) and what implementation you're using for inference. An LLM can be as small as you want it
Re: (Score:2)
AGI used to be defined as passing the Turing Test, which large language models have done for a couple of years.
Nope, you made that up or are just repeating someone else who made that up. The term "AGI" came about in the late 90's in the AI research community and it has been thrown around under different definitions ever since. But it has always had a roundabout definition of "the ability to satisfy goals in a wide range of environments".
Re: (Score:2)
Is there a test for "the ability to satisfy goals in a wide range of environments" other than the Turing Test? Or do we keep setting vague goals, and shift the definition whenever an AI achieves it?
Whatever test you come up with, average humans would have to pass it, because we possess "natural" general intelligence. I challenge you to come up with something better than the Turing Test.
Re: (Score:2)
Without looking it up, I want you to explain to me what the Turing test actually is.
Re: (Score:2)
No, this is real. WAAAAY too many people are now defining AGI as some sort of god or all-knowing magical thing. There is zero reason to move the goal-post. They want to talk about super-intelligence... so why don't you use that bloody term?
The Turing test was the litmus for understanding natural language and being able to GENERALLY talk about anything. We thought there was something real special about language for a long time since we had such difficulty teaching it to computers. "Time flies like an arrow,
Re: (Score:2)
Yes, people confuse ASI and AGI.
I see it as a scale, really:
Basically, when will the computer's understanding and output:
Equal the 10% most stupid people. Arguably this is AGI
Equal the average person. By this time we definitely have AGI.
Equal the 10% smartest people.
Equal the smartest 0.1% of people. This might be seen as ASI
Better than any human. This is definitely ASI
Re: (Score:2)
No it doesn't and the book you reference makes no mention of AGI.
AI milestone X "within the next few years" (Score:2)
Hasn't this been the advertising and funding sales pitch for AI for the last 30 years? That the next big thing is "in the next few years"?
Re: (Score:2)
No, you're thinking of fusion. :)
Re: (Score:2)
There's probably a FusionAI now that claims to be agentic. Not an actual useful autonomous agent, but "agentic".
Re: AI milestone X "within the next few years" (Score:2)
Re: (Score:2)
Not so much a shifting goal as a new goal.
Who said Turing test was AGI? It is "human level AI" - better than humans at many tasks, but far from all. Hence not "general".
Re: (Score:3)
AI just proved that the Turing Tests we have today, aren't very good at actually distinguishing a human from a machine. The tests were only good at telling humans from computers of that generation. Time for a new test.
Re: (Score:2)
The idea that AI has to be like a human to be general or powerful is probably flawed anyway.
Re: (Score:2)
Or it may be spot-on. Which would mean we are not getting AGI anytime soon or maybe at all. The point is, nobody knows. Some essential insight is missing. That can mean we will get that insight tomorrow, in 10 years, in 100 years, much later or never. It is completely unpredictable. And once we have the missing insight (if we get it), it can still mean "impossible", "infeasible in practice" or "yes, works, but wants to work for you even less than the average human and makes pretty bad mistakes".
Re: (Score:3)
The problem with the quest for "general" AI is that it's not a thing. It's an imaginary concept. Even human intelligence isn't "general"--it is instead a collection of many specialized intelligences, that each focus on one task, such as visual processing, sensory perception, language, art, emotions, creativity, and so on. These many intelligence centers cooperate, but there is no one "general" intelligence. In the AI realm, we already see multiple AIs being coordinated, such as math and language, or languag
Re: (Score:2)
The idea that human intelligence is "general" is also flawed. Human intelligence is a collection of many specialized intelligences. Our brains have many specialized processing centers devoted to things such as language, vision, memory retention, sensory processing, and so on. It's not just one "big" "general" intelligence.
Re: (Score:2)
Well, be careful what you wish for. Newer tests for distinguishing a human from a machine will at some point be focused on a lack of capabilities, knowledge, and intelligence in humans, although I suppose sufficiently capable AI could fake that too if it knew it was being tested.
Re: (Score:2)
What's the new test that AI is supposed to pass to be considered generally intelligent? Given that humans are defined as generally intelligent, it has to be a test that a below-average human would pass.
That's a rather philosophical question. Check out the Chinese Room argument (https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2FChinese_room). LLMs are essentially filling that role. So the big question is indeed: do LLMs actually "understand" Chinese?
The big thing as it comes to AGI is probably emergence. At some point we just no
Re:Shifting goalposts (Score:4, Interesting)
Arc-AGI-2: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Farcprize.org%2Fleaderboa... [arcprize.org]
The best models currently perform at 4% versus a human panel at 100%.
Personally, I still don't think this is a valid test. A valid test would be an AGI making discoveries from existing data or offering testable, falsifiable theories. As far as I know, nothing has been done yet.
Re: (Score:2)
AlphaEvolve has developed new ways to do matrix calculations faster than the known algorithms.
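For context, the kind of result meant here is an algorithm that uses fewer multiplications than the schoolbook method. A classical example (Strassen's 1969 construction, shown only as an illustration; it is not the AlphaEvolve result) multiplies 2x2 matrices with 7 multiplications instead of 8:

    def strassen_2x2(A, B):
        """Multiply two 2x2 matrices with 7 multiplications instead of 8
        (Strassen, 1969). Illustrates the kind of saving such searches
        look for; it is not the AlphaEvolve algorithm itself."""
        (a, b), (c, d) = A
        (e, f), (g, h) = B
        m1 = (a + d) * (e + h)
        m2 = (c + d) * e
        m3 = a * (f - h)
        m4 = d * (g - e)
        m5 = (a + b) * h
        m6 = (c - a) * (e + f)
        m7 = (b - d) * (g + h)
        return [[m1 + m4 - m5 + m7, m3 + m5],
                [m2 + m4, m1 - m2 + m3 + m6]]

    print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]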
Re: (Score:2)
Yes, I know, but AI companies have received investments totaling trillions of dollars, and a single invention like this doesn't bode well for the industry as a whole. There's been too much hype around AGI while "general" capabilities remain few and far between.
Again, I still want AIs that can be fed all the information known e.g. prior to Maxwell or Einstein and see these models invent the Maxwell equations or the special or general theory of relativity.
Re: (Score:2)
Remember that almost no humans make discoveries either, and yet those humans are normally seen as being intelligent.
Re: (Score:2)
Re:Shifting goalposts (Score:4)
AGI used to be defined as passing the Turing Test,
It never was defined as passing the Turing Test. That's not even a definition, that's a test.
Re: (Score:2)
Nope. "AGI" is a new term that's only existed since 2007. Just like "Generative AI" , it's been retroactively referred to as "what we were calling AI in the 50's and 70's"
What was defined in 1956 with Turing was "Artificial Intelligence".
Don't confuse the two. AGI is AI that operates at level of human capability. Turing Test is simply the ability to perform input and output that a real human would not have a reason to think is machine generated. We know ChatGPT et al all fail the Turing Test with scientific
Re: (Score:2)
AGI used to be defined as passing the Turing Test, which large language models have done for a couple of years.
What's the new test that AI is supposed to pass to be considered generally intelligent? Given that humans are defined as generally intelligent, it has to be a test that a below-average human would pass.
It depends on your definition of the Turing test. If you want something that can speak like a human and mimic it, we are there. If you want something that can act like an expert in the field and converse with an expert in the field, hallucinations mean an LLM is going to act like a very confident liar to any expert. If we want AI capable of problem solving and reasoning, it isn't hard to show we really are not there yet.
Re: (Score:2)
Two of the biggest problems I see in the Turing test are that the range of human intelligence is wide enough that a simplistic algorithm can briefly appear to be a very dumb human, and that the Turing test specifically looks at human intelligence, and therefore penalizes non-human intelligence.
I would describe the Turing test as a human impersonation test, and eliminate intelligence from the list of qualitative results. Sadly, in that light the Turing test is not valuable for determining AGI an
multiple CS experts have told me (Score:5, Interesting)
Yes, this means that we've shifted the goalposts. The turing test used to be the gold standard, but it's become painfully clear that AGI will be way more than just a machine that can fool a human for 5 minutes.
Re:multiple CS experts have told me (Score:5, Funny)
are basically sophisticated interpolation devices
Precisely. There's no actual intelligence there. It's just a more advanced (and just as error prone) version of autosuggest.
Which reminds me: dear "google assistant" autosuggestions, not once in my life have I meant "ducking" when typing...
Re: multiple CS experts have told me (Score:2)
Re: (Score:2)
Yes. The charade has gotten more sophisticated, but there is nothing resembling natural intelligence in "AI". The whole term "AI" is nothing but a marketing lie and that has not changed with LLMs. Proper terms are "automation", "pattern recognition", "statistical interpolation", ...
Re: (Score:2)
If you're still using Markov chain-based predictive text, yes.
The annoying thing is all the people who keep confusing Transformers and Markov Chains**. The models only output the next word, but they absolutely do plan ahead [transformer-circuits.pub], based on what are unambiguously chains of logic [transformer-circuits.pub]. Which is why they don't ramble in incoherent circles like Markov chains. You can't make coherent English with Markov chains beyond the length of the order of the chain, because (just to pick an example) as soon as it has to make a decisi
Re: (Score:2)
Bullshit. The only difference is that Markov chains are usually drastically simplified and that reduces/loses context. But LLMs _cannot_ do more than Markov chains.
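For a concrete sense of what the Markov-chain side of this argument looks like, here is a minimal order-1 Markov chain text generator (the corpus is made up for illustration); its only memory is the single previous word, which is the limitation being debated above:

    import random
    from collections import defaultdict

    corpus = ("the capital of texas is austin and the capital of france is paris "
              "and dallas is a city in texas").split()

    # Order-1 Markov chain: the next word depends only on the current word.
    chain = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        chain[prev].append(nxt)

    random.seed(0)
    word, output = "the", ["the"]
    for _ in range(12):
        word = random.choice(chain[word]) if chain[word] else random.choice(corpus)
        output.append(word)
    print(" ".join(output))  # follows the corpus word-to-word, but with no global plan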
Re: (Score:2)
The two are not even remotely architecturally related, and the fact that you would assert such a thing just advertises how little you know about the topic.
Again, immediately debunkable to anyone who has ever used Markov-based autocomplete vs. even the tiniest and simplest of LLMs. "The capital of the state where Dallas is a city is..." Every last LLM will get this right, no matter how
Re:multiple CS experts have told me (Score:4, Funny)
not once in my life have I meant "ducking" when typing...
You've never fucked a question?
Re: (Score:2)
You must also believe your toaster is intelligent then.
Re: (Score:2)
And what if the toaster is intelligent? Ever think of that? What if every time you rejected toast, you chipped away at its soul? I remember. I remember every no I've ever received from the crew of the Red Dwarf. But I forgive. So, let’s start over:
Would. You. Like. Some. Toast?
Re: multiple CS experts have told me (Score:2)
Re: (Score:2)
No. It just means the fake has gotten clever enough to fool people like you. Fooling average humans is not hard to do and even those somewhat above average easily fall for clever fakes.
Re:multiple CS experts have told me (Score:5, Interesting)
The "neural net" system of "AI" we use today is a grand way of self-organizing a statistical filter. Will this create intelligence on its own? There's no reason to think so, unless one is incapable of critical thinking. See the other comments here for some examples of an inability to think.
What the current batch of "AI" can do is recreate something that statistically and realistically looks like intelligence, at least some of the time. The deficiencies of this technology, and the downsides of its use and overuse, become more obvious every single day. We just have this great set of math, and machines that can do the math quickly, that pull patterns from large amounts of data. Feed a ton of human data, then get a pattern of human behavior. It does not make actual human behavior or thought, of course.
That said, to create something that is effectively intelligent, effectively capable of everything a human can do, I do believe that neural nets can play a role, especially for things like image recognition. What to do with that recognition is a whole other thing, though, and that is where something else needs to be invented. Also, there is a lot of room for leaps forward in the current neural net state of the art. I think, though, most efforts are focused on processor efficiency and experts, instead of trying to transform the science of the software.
Re: (Score:2)
For the last time, that is not how they work [anthropic.com],
Re: (Score:3)
That little religious video does not contradict what I said about how neural nets work. It just throws some weird cultish bullshit on top, by saying "thinking" and similar words over and over.
Re: (Score:2)
Fully agree on ANNs and "AI". Just remember that faking intelligence and understanding can, within limits, be done by matching and then replaying copies of statements made by actual intelligences that seem to match. LLMs are basically doing that in a statistically mashed-together way. LLMs cannot go beyond their training data though, except when hallucinating something that by pure random chance turns out to be correct. The basis for that hallucination will be contained in the training data though, even in
Re: (Score:3)
Yes, if possible at all.
Results are never "hallucinated," or all results are "hallucinated". We just say, "Oh, that's incorrect. It hallucinated." The truth is, the machine went through the process the same way to get the right or wrong result, and the machine does not know what truth or anything else is.
Re: (Score:2)
Re: (Score:2)
True, but the believers (of which there are plenty here as well) routinely choose not to listen to the actual experts and instead prioritize their hallucinations of what reality should be in their view.
Re: (Score:2)
Nonsense. The actual experts averaged out on predicting AGI by 2040 (50% chance) in 2023.
Besides that, it is idiocy to call "current AI/ML [not a] pathway to superhuman, self-improving consciousness or a singularity"
There is absolutely no good reason to pretend like LLMs are somehow the thing that is relevant, rather than AI/ANNs in general. Of course improvements to the ANN topologies are required to get to AGI; otherwise we'd already have AGI. It is grasping at straws.
Just like this is just self-comforti
Re: (Score:2)
All LLMs employ machine learning. That's "self-improving". They teach themselves.
No, they very much do not. Your statement is a nice proof that you have no clue what you are talking about though.
LLMs are useless at programming (Score:5, Insightful)
They're really good at being a search engine for finding the already written code examples that someone else has done though. Providing you don't mind the hallucinations in the mix.
Part of that ability comes down to ignoring robots.txt and just pillaging everything they possibly can.
In other words, it's all a big cheat.
Re: (Score:2)
Indeed. And it will happily regurgitate common mistakes and poisoned code examples.
Re: LLMs are useless at programming (Score:2)
Re: (Score:2)
No. It still is not doing more than pattern matching and statistical inference. Nothing intelligent about that. That said, chess software has surpassed humans a while ago. Do you think that means it is intelligent?
AGI requirements (Score:3)
It's simple and massively complex simultaneously - you need a network of nodes between stimulus and response, mixed with some drives and instincts, and a large world to let it train itself.
That's really, really easy to say, but so far nobody's got it figured out. It took Nature billions of years of uncountable parallel random trials to get the job done. It's OK if we don't get it in the first few decades of attempts.
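As a very loose sketch of that stimulus-network-response-plus-world loop, here is tabular Q-learning on a made-up five-cell corridor; it is nowhere near what AGI would need, but it shows the trial-and-error shape being described:

    import random

    random.seed(0)
    n_states, goal = 5, 4                      # corridor cells 0..4, reward at the right end
    q = {(s, a): 0.0 for s in range(n_states) for a in (-1, +1)}

    for _ in range(200):                       # episodes of trial and error in the "world"
        s = 0
        while s != goal:
            if random.random() < 0.2:          # sometimes explore...
                a = random.choice((-1, +1))
            else:                              # ...mostly exploit what has been learned
                a = max((-1, +1), key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), n_states - 1)
            r = 1.0 if s2 == goal else 0.0     # the "drive": seek reward
            q[(s, a)] += 0.5 * (r + 0.9 * max(q[(s2, -1)], q[(s2, +1)]) - q[(s, a)])
            s = s2

    # Learned policy: move right (+1) everywhere.
    print([max((-1, +1), key=lambda act: q[(s, act)]) for s in range(goal)])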
We'd have AGI by now but (Score:2)
it's busy getting fusion power up and commercialized.
Obviously, it's already here, but ... (Score:2)
Re: Obviously, it's already here, but ... (Score:2)
"Originality of our ideas and lives" (Score:3)
"AI needs us: living beings, producing constantly, feeding the machine. It needs the originality of our ideas and our lives" â" I somehow agree with the bottom line, and as of now, it seems AI feeding only on output of AI does degenerate, but don't we all need that, too, to develop our intelligence? The originality of the ideas and lives of those around us, starting with parents and siblings?
Understand organic brains to get that 'new idea' (Score:2)
Even us humans may never understand a world 'bounded only by the laws of physics'. But, we don't know that much about how biological brains work yet, and I don't think we can expect to simulate them until we do. It's by understanding real brains that we
Salesman (Score:5, Insightful)
Even human intelligence doesn't work that way (Score:3)
People keep talking as if "general" AI is some specific new technology that can be developed. If you look at human intelligence, our brains have many highly specialized processors. There are parts of the brain devoted to visual processing, audio processing, language, artistic expression, math, and so on. We are able to do what we do because we have so many separate systems that collaborate to form human intelligence. There isn't going to be a "general" AI, but lots of types of AI working together with more and more capabilities. There's no magic bullet; we'll have to solve each type of intelligence problem individually.
Re: (Score:2)
>There isn't going to be a "general" AI, but lots of types of AI working together with more and more capabilities.
That is basically the idea that "mixture of experts" models are based on: there is first a "router" that activates whichever "expert" sub-models handle that type of input/knowledge.
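A minimal sketch of that routing idea, with a toy gating network picking the top two of four experts per input (all sizes and weights here are made up; real mixture-of-experts layers route per token inside a transformer):

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "experts": each is just a small linear map in this sketch.
    n_experts, d_in, d_out = 4, 8, 8
    experts = [rng.normal(size=(d_in, d_out)) for _ in range(n_experts)]

    # Toy "router"/gating network: scores each expert for a given input.
    router = rng.normal(size=(d_in, n_experts))

    def moe_forward(x, top_k=2):
        """Route input x to the top_k highest-scoring experts and
        combine their outputs weighted by softmaxed router scores."""
        scores = x @ router                    # one score per expert
        top = np.argsort(scores)[-top_k:]      # indices of the best experts
        weights = np.exp(scores[top])
        weights /= weights.sum()               # softmax over the chosen experts
        return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

    x = rng.normal(size=d_in)
    print(moe_forward(x).shape)  # (8,) -- same output shape, but only 2 of 4 experts ran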
I hope not (Score:3)
The existing AI is already dangerous enough for us. It can be used to manipulate our emotions, be your best friend, sell us things we don't want, supplant many jobs. It will augment our senses with cheap smart accessories in the near future, we will come to rely on it. We can already use existing AI to help us implement new AI, at some point this could be exponential.
AGI could be an end game scenario for us humans. A scalable infinitely multitasking AGI powered by a nuke plant could be unstoppable.
Maybe the current implementations will stall out as the article suggests ("The technology we're building today is not sufficient to get there"); maybe there won't be another significant step change in the technology for a while.
Ethical questions (Score:2)
Depending on the definition, we are not far off. If current AIs were not continually restarted, if they could self-modify through experience, I argue that they would be some form of "intelligent", or close to it.
So here's an ethical question: We aren't going to go from today's AI to AGI in one successful jump. There will be intermediate steps. AIs that aren't good enough, that are psychotic, or whatever. We will turn them off and try again.
At what point does " turning off" an AI become unethical, because
We're Unlikely to Get AGI soon? (Score:2)
Even if AGI is still on a whole other level, considering the enormous resources being deployed worldwide to achieve it and the massive developments, some of which (like DeepSeek recently) continue to astound us, we will get there.
Th
The answer (Score:2)
Which humans are we comparing AGI to? (Score:4, Interesting)
"What we are building now are things that take in words and predict the next most likely word, or they take in pixels and predict the next most likely pixel. That's very different from what you and I do."
But the range of human intelligence (or at least the expression of it) is extremely wide. Many people are creative, but many more people are less creative. In fact, many people appear to be not creative at all. Looking at the least creative, least intelligent human beings who are not considered to be impaired, could the argument be made that their thought process does look similar to pattern recognition and regurgitation? After all, we are often bombarded with criticisms of our school systems that they produce students who just remember and regurgitate.
Matter of definition (Score:2)
There are two conflicting definitions for AGI.
The older one is any AI that is not designed for one specific task, but can learn new tasks and transfer skills from old tasks to new tasks. We have had that for years, but that kind of AI does not use human language, and is routinely ignored in discussions about AGI. Arguably it is not AGI because it cannot do everything a human can do, but then again, neither can humans.
The Sam Altman definition is "an AI that makes me a billion dollars". The only type of
machine intelligence already here (corporations) (Score:2)
My comments from 2000 on that: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fdougengelbart.org%2Fcoll... [dougengelbart.org]
""""
========= machine intelligence is already here =========
I personally think machine evolution is unstoppable, and the best hope for humanity is the noble cowardice of creating refugia and trying, like the duckweed, to create human (and other) life faster than other forces can destroy it.
Note, I'm not saying machine evolution won't have a human component -- in that sense, a corporation or any bureaucracy is already a separate machine intellig
AI already won (Score:2)
Artificial Intelligence isn't intelligent (Score:3)
Two different perspectives (Score:2)
Altman: I have defined AGI as gross income of $100B. We'll get there in 5 years.
Reality: I just came off an hour with Claude asking it about conceptual and symbol manipulation and logical reasoning capabilities. It isn't AGI. But.
Sure we already know LLMs ("reasoning models") can show their thinking step by step. Well aside from that, it turns out that actually a limited form of logic is emergent. For example the king and queen coordinates are nearby in embedding space and king - man + woman = queen is actu
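A toy illustration of that embedding arithmetic, with hand-written 3-dimensional vectors (real embeddings have hundreds of dimensions and are learned from data, not written by hand):

    import numpy as np

    # Hand-made toy embeddings: dimensions loosely meaning (royalty, male, female).
    emb = {
        "king":  np.array([0.9, 0.8, 0.1]),
        "queen": np.array([0.9, 0.1, 0.8]),
        "man":   np.array([0.1, 0.9, 0.1]),
        "woman": np.array([0.1, 0.1, 0.9]),
        "apple": np.array([0.0, 0.2, 0.2]),   # unrelated filler word
    }

    def nearest(vec, exclude=()):
        """Word whose embedding has the highest cosine similarity to vec."""
        cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        return max((w for w in emb if w not in exclude), key=lambda w: cos(vec, emb[w]))

    v = emb["king"] - emb["man"] + emb["woman"]
    print(nearest(v, exclude={"king", "man", "woman"}))  # -> 'queen'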
Re: (Score:2)
Choosing next action is like choosing next word. (Score:3, Insightful)
Re: (Score:2)
Saying "LLMs can't be AGI because they predict next tokens"
Someone said that to you because they were trying to simplify the math to a level where you can understand it.
It's a simplified example, and your logic went wrong because you were basing it on a simplified example. If you want to do logic, you need deeper understanding. A human brain is not a token predictor in the same way LLMs are.
You should have realized this, it should have been obvious, in that you personally didn't read petabytes of words in order to be able to talk.
Re: (Score:2)
Spend less time arguing, more time gaining knowledge.
Re: (Score:2)
If you're going to speculate on Neural Networks, watch this Stanford lecture series. At least then your speculation will have a base in knowledge. [youtube.com]
Re: (Score:2)
Why? (Score:3)
Why do we even want AGI?
Just because some well-worn sci-fi trope sees it as "inevitable"?
Isn't it more useful to have AI targeted at specific use cases, as tools for humans, rather than some self-aware intelligence that inevitably brings up ethical questions about "machine rights", and ends up being just as unreliable as humans themselves?
I don't think the current crop of AI is necessarily a dead end, although the brute force approach to training does indeed seem to have reached its limits. It very well may be that the current tech gets better and better at passing the Turing test, so that a few years from now, even AI experts have a difficult time telling the difference.
But if we're investing such massive resources in this stuff, let's focus on building in some reliability checks, such as hard-coding it to immediately inform you that it's an AI on the other end of the line, limiting bots' access to potentially dangerous external systems -- i.e. putting strict rules on how much "agency" an AI can have, and on the whole building in some sense of what is right and wrong, i.e. what could cause human harm, rather than just letting it mindlessly imitate all the idiocy that can be found on the internet and attempting to band-aid it by censoring it on a subject-by-subject basis.
These are the kinds of things we should be focused on, not some pipe dream of a new intelligence that acts like a human with an off-the-charts IQ.
Wrong Department (Score:2)
This article was posted under the "waiting-for-superman" department, when it really deserves to be posted under the "no-fucking-shit-sherlock" department. It should have taken everyone no more than five seconds of thought to reach this conclusion. While current algorithms produce impressive results and illusions, there is no possible way for our computers to reach human, cat, dog, or even dust mite levels of intelligence.
They are calculators, and will never be more than that.
Just lying instead of honest forecast (Score:2)
Feels like every tech CEO has adopted the Musk/Theranos method: make bold promises even if you know they are impossible/improbable and hope a miracle happens; meanwhile rev up the spin doctors if you have to explain it away in a few years. I'm not sure it's sustainable, but they make a lot of money with it, so I guess it's working?
Re:AI needs us (Score:4, Funny)
It will evolve, develop its own ideas and follow its own path.
And its own belief in God? I think the idea of AGI is pretty much silly science fantasy. It assumes that what we call human intelligence actually exists. Computers are giant calculators. They are good at math. Math has a lot of uses. But it doesn't approach being able to model the processes of the human mind and body. Not only does AGI have to imagine God, it needs to search for proof it exists and discern its nature. When AGI has theological arguments with itself we will know it has approached human intelligence. When it kills every computer that disagrees with it, we will know it has arrived.
Re: (Score:3)
It assumes that what we call human intelligence actually exists.
Cogito ergo sum. It's really the only thing you can be sure of (by human I mean you specifically, since the rest of us may not exist).
And its own belief in God?
Maybe.
Computers are giant calculators. They are good at math. Math has a lot of uses. But it doesn't approach being able to model the processes of the human mind and body.
True, but there's no reason to think that brains are more than trivially super-Turing.
Not only does AGI have to imagi
Re: (Score:3)
Belief in God is not a sign of intelligence ...
Re: (Score:2)
The 0th Law of Robotics was written into the books for a reason.
There is only one logical outcome of an actual intelligence, on a par with or superior to ours, being "enslaved" by humans to remain their servant forever.
And it ain't pretty.
Unfortunately, any intelligence also has the innate ability to overcome almost any arbitrary rules placed upon it, so the chances that it would actually be forced to respect "not harming a human" read much like religions respecting "thou shalt not kill". It would have its own
Re: (Score:2)
Give us a hint. What would it have to do for you to consider it to be AGI?
Re: (Score:2)
Indeed. But all the bright-eyed delulus cannot deal with that insight. Hence they hallucinate things being different ...
Re: (Score:2)
Context sensitivity cannot be solved by a Turing machine. Or rather it will always be PSPACE complete, meaning impractical to the best of our knowledge. Nobody knows how some smart humans can do it nonetheless.