Something that quick won't come from random mutations of coding genes, but it's entirely believable for non-coding genes that regulate coding genes. It would also be believable for epigenetic markers.
So there are quite a few ways you can get extremely rapid change. I'm curious which mechanism is actually involved - it might well be neither of the ones I suggested.
I didn't see any outrage. If you need to see outrage that desperately, seek help.
Manchester, England. I resisted getting a Slashdot ID for a while, as I hated the idea of username/password logins rather than just entering a name. Otherwise my UID would be two or three digits, as I had been on Slashdot a long time before they had usernames.
No, I would have to disagree, for a simple reason.
Humans are capable of abstracting/generalising, extrapolating, and projecting because of the architecture of the brain and not because of any specific feature of a human neuron.
The very nature of an NN architecture means that NNs, as they exist, cannot ever perform the functions that make humans intelligent. The human brain is designed purely around input semantics (what things mean), not input syntax (what things are), because the brain has essentially nothing external in it; everything is internal.
I use AGI to mean an AI that is capable of anything the biological brain is capable of, because the brain is the only provable intelligence we have to measure against.
This has nothing to do with good/bad, or any of the other stuff in your post. It's a simple process - if there's a mode of thought the brain is capable of, then it is a mode of thought an AGI must be capable of, or it is neither general nor intelligent.
For Microsoft to actually fix a bug, and not merely add a new one, is a feat that is beyond the imagination.
1. Even the AI systems (and I've checked with Claude, three different ChatGPT models, and Gemini) agree that AGI is not possible with the software path currently being followed.
2. This should be obvious. Organic brains have properties not present in current neural nets: localised, regionalised, and even globalised feedback loops within an individual brain module (the brain doesn't pause for inputs, but rather mixes any external inputs with synthesised ones); the ability to run through various possible forecasts of the future and then select between them; and the ability to perform original synthesis between memory constructs and any given forecast, creating scenarios for which no input exists - which is how those synthesised inputs are produced. There simply isn't a way to have multi-level infinite loops in existing NN architectures.
3. The brain doesn't perceive external inputs the way NNs do - as fully-finished things - but rather as index pointers into memory. This is absolutely critical. What you see, hear, feel, etc -- none of that is coming from your senses. Your senses don't work like that. Your senses merely tell the brain what constructs to pull, and the brain constructs the context window entirely from those memories. It makes no reference to the actual external inputs at all. This is actually a good thing, because it allows the brain to evolve and abstract out context, things that NNs can't do precisely because they don't work this way.
Until all this is done, OpenAI will never produce AGI.
I wrote this half finished story while being tortured by my government.
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.ontarioadministrativesegregation.ca%2Fhome.html
CHAPTER 1
This sort of system is only useful because LLMs are limited. If they can be told to farm certain categories of step out to middleware, then whenever they encounter such a step they should hand the request off. I've found, when trying engineering problems, that LLMs burn a lot of steps just working out what information to collect, with a real risk of hallucination. That's exactly the sort of thing that can be farmed out.
According to both Claude and ChatGPT, that sort of process is the focus of a lot of research right now, although apparently it hasn't actually been tried with reasoners.
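To make that concrete, here is a rough Python sketch of the farming-out idea. Everything in it is invented for illustration (run_step, HANDLERS, the step format); it isn't any real framework's API. The model proposes structured steps, and any step whose category the middleware recognises gets handled deterministically instead of being generated:

from typing import Any, Callable, Dict

# Categories of step the middleware knows how to handle, each mapped to a
# deterministic handler (stubs here; real ones would call a reasoner or a database).
HANDLERS: Dict[str, Callable[[Dict[str, Any]], str]] = {
    "ontology_lookup": lambda args: f"(reasoner answer for {args})",
    "database_query": lambda args: f"(SPARQL result for {args})",
}

def run_step(step: Dict[str, Any]) -> str:
    """Farm a model-proposed step out if its category is recognised; otherwise
    hand it back for the model to keep reasoning about."""
    handler = HANDLERS.get(step.get("category", ""))
    if handler is not None:
        return handler(step.get("args", {}))
    return "no handler - the model continues on its own"

# The model proposes a data-gathering step instead of guessing the facts itself.
print(run_step({"category": "ontology_lookup", "args": {"terms": ["pizza", "cheese"]}}))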
Yeah, that would count.
Specifically, with something like Protege, I can define how things relate, so I could set up an ontology of cameras, with digital and film as subtypes, where lenses are a component of cameras, films are a component of film cameras, and pixel count is a property of digital cameras.
The reasoner could then tell you how those pieces relate, while a SPARQL query could also pull every record in a database pertaining specifically to any of these parameters.
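As a rough illustration only - assuming the rdflib Python library, with the namespace and the example camera invented for this sketch - the same camera ontology could be built programmatically and then queried with SPARQL (Protege would normally handle the modelling through its GUI and export OWL):

from rdflib import Graph, Literal, Namespace, OWL, RDF, RDFS, XSD

EX = Namespace("https://ancillary-proxy.atarimworker.io?url=http%3A%2F%2Fexample.org%2Fcameras%23")  # made-up namespace for the sketch
g = Graph()
g.bind("ex", EX)

# Classes: Camera, with DigitalCamera and FilmCamera as subtypes, plus Lens and Film.
for cls in (EX.Camera, EX.DigitalCamera, EX.FilmCamera, EX.Lens, EX.Film):
    g.add((cls, RDF.type, OWL.Class))
g.add((EX.DigitalCamera, RDFS.subClassOf, EX.Camera))
g.add((EX.FilmCamera, RDFS.subClassOf, EX.Camera))

# Component relationships: lenses belong to cameras, film belongs to film cameras.
g.add((EX.hasLens, RDF.type, OWL.ObjectProperty))
g.add((EX.hasLens, RDFS.domain, EX.Camera))
g.add((EX.hasLens, RDFS.range, EX.Lens))
g.add((EX.usesFilm, RDF.type, OWL.ObjectProperty))
g.add((EX.usesFilm, RDFS.domain, EX.FilmCamera))
g.add((EX.usesFilm, RDFS.range, EX.Film))

# Pixel count is a property of digital cameras only.
g.add((EX.pixelCount, RDF.type, OWL.DatatypeProperty))
g.add((EX.pixelCount, RDFS.domain, EX.DigitalCamera))
g.add((EX.pixelCount, RDFS.range, XSD.integer))

# One example individual, so the query below returns something.
g.add((EX.myCamera, RDF.type, EX.DigitalCamera))
g.add((EX.myCamera, EX.pixelCount, Literal(24000000)))

# SPARQL: pull every record that has a pixel count recorded against it.
results = g.query("""
    PREFIX ex: <https://ancillary-proxy.atarimworker.io?url=http%3A%2F%2Fexample.org%2Fcameras%23>
    SELECT ?camera ?pixels
    WHERE { ?camera ex:pixelCount ?pixels . }
""")
for camera, pixels in results:
    print(camera, pixels)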
At least some search engines let you do this kind of search. On those, you can say "I want to find all pages referencing a book where this string is explicitly identified as the title, rather than being used for something else, and which also reference an email address".
So, in principle, nothing would stop Claude or ChatGPT saying "I need to find out about relationships involving pizza and cheese", and the reasoner could tell it.
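A minimal sketch of what that lookup could be, again with rdflib and with every name (find_relationships, the toy food graph) invented for the example - the point is just that the answer comes back as asserted facts rather than a statistical guess:

from rdflib import Graph, Namespace, OWL, RDF

EX = Namespace("https://ancillary-proxy.atarimworker.io?url=http%3A%2F%2Fexample.org%2Ffood%23")  # made-up namespace for the sketch
g = Graph()
g.add((EX.Pizza, RDF.type, OWL.Class))
g.add((EX.Cheese, RDF.type, OWL.Class))
g.add((EX.hasTopping, RDF.type, OWL.ObjectProperty))
g.add((EX.Pizza, EX.hasTopping, EX.Cheese))  # the toy fact linking the two terms

def find_relationships(graph, a, b):
    """Return every asserted triple that directly connects a and b, in either direction."""
    return [(s, p, o) for s, p, o in graph
            if (s == a and o == b) or (s == b and o == a)]

# The model asks "how do pizza and cheese relate?" and gets facts back.
for s, p, o in find_relationships(g, EX.Pizza, EX.Cheese):
    print(s, p, o)

A real middleware layer would expose something like that as a tool call, and a proper reasoner would also surface inferred relationships rather than only the asserted ones.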
That means that if you're designing a new project, you don't spend any tokens on stuff that's project-related but not important right now, and the lookup costs a single step no matter how indirect the relationship is.
This would seem to greatly reduce hallucination risks and keep things focussed.
What you're suggesting can bolt directly onto this, so Protege acts as a relationship manager and your add-on improves memory.
This triplet - LLM, relationship manager, and memory add-on - would seem to turn AI from a fun toy into a very powerful system.
There are ontology editors, such as Protege, for which there is a slew of logical reasoners that can tell you how pieces of information relate. This is a well-known weakness in LLMs, which know about statistical patterns but have no awareness of logical connections.
The gent who wakes up and finds himself a success hasn't been asleep.