
Comment Hmmm. (Score 1) 52

Something that quick won't be from random mutations of coding genes, but it's entirely believable for genes that aren't considered coding but which control coding genes. It would also be believable for epigenetic markers.

So there are quite a few ways you can get extremely rapid change. I'm curious as to which mechanism is used here - it might not be either of the ones I suggested.

Comment Re:Few thoughts. (Score 1) 51

Manchester, England. I resisted getting a Slashdot ID for a while, as I hated the idea of username/password logins rather than just entering a name. Otherwise my UID would be two or three digits, as I had been on Slashdot a long time before they had usernames.

Comment Re:Few thoughts. (Score 1) 51

No, I would have to disagree, for a simple reason.

Humans are capable of abstracting/generalising, extrapolating, and projecting because of the architecture of the brain and not because of any specific feature of a human neuron.

The very nature of an NN architecture means that NNs, as they exist, cannot ever perform the functions that make humans intelligent. The human brain is designed purely around input semantics (what things mean), not input syntax (what things are), because the brain has essentially nothing external in it; everything is internal.

Comment Re:Few thoughts. (Score 1) 51

I use AGI to mean an AI that is capable of anything the biological brain is capable of, because the brain is the only proven intelligence we have to measure against.

This has nothing to do with good/bad, or any of the other stuff in your post. It's a simple test - if there's a mode of thought the brain can do, then it is a mode of thought an AGI must do, or it is neither general nor intelligent.

Comment Few thoughts. (Score 4, Informative) 51

1. Even the AI systems (and I've checked with Claude, three different ChatGPT models, and Gemini) agree that AGI is not possible with the software path currently being followed.

2. This should be obvious. Organic brains have properties not present in current neural nets: localised, regionalised, and even globalised feedback loops within an individual module of the brain (the brain doesn't pause for inputs, but rather mixes any external inputs with synthesised inputs); the ability to run through various possible forecasts of the future and select among them; and the ability to perform original synthesis between memory constructs and any given forecast, creating scenarios for which no input exists - which is how those synthesised inputs are produced. There simply isn't a way to have multi-level infinite loops in existing NN architectures.

3. The brain doesn't perceive external inputs the way NNs do - as fully-finished things - but rather as index pointers into memory. This is absolutely critical. What you see, hear, feel, etc -- none of that is coming from your senses. Your senses don't work like that. Your senses merely tell the brain what constructs to pull, and the brain constructs the context window entirely from those memories. It makes no reference to the actual external inputs at all. This is actually a good thing, because it allows the brain to evolve and abstract out context, things that NNs can't do precisely because they don't work this way.

Until all this is done, OpenAI will never produce AGI.

Comment Re:RAG (Score 1) 5

This sort of system is only useful because LLMs are limited. If they can be told to farm certain categories of step out to middleware, then when they encounter such a step, they should farm out the request. I've found, when trying engineering problems, that LLMs consume a lot of steps working out what to collect, with a risk of hallucination. That's exactly the sort of thing that can be farmed out.

According to both Claude and ChatGPT, that sort of process is the focus of a lot of research, right now, although apparently it's not actually been tried with reasoners.
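
As a rough illustration of what I mean by farming out - just a Python sketch, with every handler name made up; in practice the category would come from the model's own tool/function-calling output rather than being hard-coded:

from typing import Callable, Dict

# Registry mapping a category of step to a deterministic middleware handler.
MIDDLEWARE: Dict[str, Callable[[str], str]] = {}

def middleware(category: str):
    """Register a handler for one category of step."""
    def register(fn: Callable[[str], str]) -> Callable[[str], str]:
        MIDDLEWARE[category] = fn
        return fn
    return register

@middleware("unit_conversion")
def convert_units(request: str) -> str:
    # Stand-in for a real lookup; deterministic, so no hallucination risk.
    return "9.81 m/s^2 = 32.17 ft/s^2"

@middleware("material_property")
def material_property(request: str) -> str:
    return "Yield strength of 6061-T6 aluminium: roughly 276 MPa"

def run_step(category: str, request: str) -> str:
    """Farm the step out if a handler exists; otherwise leave it to the LLM."""
    handler = MIDDLEWARE.get(category)
    if handler is not None:
        return handler(request)
    return "[no middleware for '%s'; LLM handles: %s]" % (category, request)

if __name__ == "__main__":
    print(run_step("material_property", "yield strength of 6061-T6"))
    print(run_step("free_body_diagram", "sketch the loads on the beam"))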

Comment Re:RAG (Score 1) 5

Yeah, that would count.

Specifically, with something like Protege, I can define how things relate, so I could set up an ontology of cameras, with digital and film as subtypes, where lenses are a component of cameras, films are a component of film cameras, and pixel count is a property of digital cameras.

The reasoner could then tell you about how the bits relate, but a SPARQL search could also search for all records in a database pertaining specifically to any of these parameters.
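
In rough Python (rdflib standing in for Protege/OWL, and every class and property name made up for the sketch), that ontology and query could look something like:

from rdflib import Graph, Namespace, RDF, RDFS, Literal

EX = Namespace("http://example.org/cameras#")
g = Graph()
g.bind("ex", EX)

# Digital and film cameras as subtypes of Camera.
g.add((EX.DigitalCamera, RDFS.subClassOf, EX.Camera))
g.add((EX.FilmCamera, RDFS.subClassOf, EX.Camera))

# Lenses are components of cameras; film is a component of film cameras.
g.add((EX.Camera, EX.hasComponent, EX.Lens))
g.add((EX.FilmCamera, EX.hasComponent, EX.Film))

# A couple of individuals; pixel count is a property of digital cameras.
g.add((EX.ExampleMirrorless, RDF.type, EX.DigitalCamera))
g.add((EX.ExampleMirrorless, EX.pixelCount, Literal(24500000)))
g.add((EX.ExampleSLR, RDF.type, EX.FilmCamera))

# SPARQL: every digital camera and its pixel count.
q = """
PREFIX ex: <http://example.org/cameras#>
SELECT ?camera ?pixels WHERE {
    ?camera a ex:DigitalCamera ;
            ex:pixelCount ?pixels .
}
"""
for row in g.query(q):
    print(row.camera, row.pixels)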

At least some search engines let you do this kind of search. On those, you can specifically say "I want to find all pages referencing a book where this is specifically identified as the title, it isn't referencing something else, and that specifically references an email address".

So, in principle, nothing would stop Claude or ChatGPT saying "I need to find out about relationships involving pizza and cheese", and the reasoner could tell it.

That means if you're designing a new project, you don't use any tokens for stuff that's project-related but not important right now, and you use one step no matter how indirect the relationship.

This would seem to greatly reduce hallucination risks and keep stuff focussed.

What you're suggesting can bolt directly onto this, so Protege acts as a relationship manager and your add-on improves memory.

This triplet would seem to turn AI from a fun toy into a very powerful system.
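
Something along these lines is what I have in mind for the triplet - a graph standing in for the reasoner answering relationship questions, a memory add-on handling recall, and the LLM only getting the open-ended steps (again, a sketch with made-up names):

from rdflib import Graph, Namespace

EX = Namespace("http://example.org/food#")
graph = Graph()
graph.add((EX.Pizza, EX.hasTopping, EX.Cheese))
graph.add((EX.Cheese, EX.madeFrom, EX.Milk))

memory = {}  # stand-in for the memory add-on

def relationships(term: str):
    """One-step answer to 'how does X relate to things?' - no tokens spent exploring."""
    t = term.lower()
    return [(str(s), str(p), str(o))
            for s, p, o in graph
            if t in str(s).lower() or t in str(o).lower()]

def remember(key: str, value: str) -> None:
    memory[key] = value

if __name__ == "__main__":
    remember("project", "pizza ontology demo")
    print(relationships("pizza"))   # relationship manager (the reasoner's job)
    print(memory["project"])        # the memory add-on's job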

User Journal

Journal: Question: Can you use Semantic Reasoners with LLMs? 5

There are ontology editors, such as Protege, with a slew of logical reasoners that can tell you how information relates. That is a well-known weakness of LLMs, which know about statistical patterns but have no awareness of logical connections.

Comment Huh. (Score 3) 40

Why are they monitoring syscalls?

The correct solution is surely to use the Linux Kernel Security Module mechanism, as you can then monitor system functions, regardless of how they are accessed. All system functions, not just the ones that have provision for tracepoints.

For something like security software, you want the greatest flexibility for the least effort, and Linux allows you to do just that.

Because it's fine-grained, security companies can then pick and choose what to regard or disregard, giving them plenty of scope for varying the level of detail. And because the LSM allows services to be denied, there's an easy way for the software to stop certain attacks.

But I guess that the obvious and most functional approach would mean that the vendors would have to write a worthwhile product.

Comment Re:They are going from 4.5 to 4.1? (Score 1) 13

Since R1 has good reasoning, but no real breadth, and is open source, the logical thing would be to modify R1 to pre-digest inputs and create an optimised input to 4.1. The logic there would be that people generally won't provide prompts ideally suited to how LLMs work, so LLM processing will always be worse than it could be.

R1 should, however, be ample for preprocessing inputs to make them more LLM-friendly.
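
As a sketch of that two-stage idea (assuming OpenAI-compatible endpoints for both models; the model ids and URL below are placeholders, not real deployment details):

from openai import OpenAI

# Placeholder endpoints/model ids - e.g. a locally served R1 plus the hosted 4.1.
preprocessor = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")
main_model = OpenAI()  # API key taken from the environment

def predigest(raw_prompt: str) -> str:
    """Ask the reasoning model to restructure the prompt, not to answer it."""
    resp = preprocessor.chat.completions.create(
        model="deepseek-r1",  # placeholder model id
        messages=[
            {"role": "system",
             "content": "Rewrite the user's request as a precise, well-structured prompt. Do not answer it."},
            {"role": "user", "content": raw_prompt},
        ],
    )
    return resp.choices[0].message.content

def answer(raw_prompt: str) -> str:
    optimised = predigest(raw_prompt)
    resp = main_model.chat.completions.create(
        model="gpt-4.1",  # placeholder model id
        messages=[{"role": "user", "content": optimised}],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(answer("uhh how do i stop my 3m steel beam bending too much"))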
