
Journal jd's Journal: Question: Can you use Semantic Reasoners with LLMs?
There are ontology editors, such as Protege, that come with a slew of logical reasoners able to tell you how pieces of information relate. This is a well-known weakness in LLMs, which capture statistical patterns but have no awareness of logical connections.
Is there a way to couple the two, so that if you have a complex set of ideas, you could provide the ontological network, plus some of the things reasoned from it, to supplement the prompt and give the LLM the information it can't extract on its own?
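As a rough illustration of what "supplementing the prompt" could look like: flatten a handful of reasoner-derived triples into plain text and put them ahead of the question. The triples, property names, and the `build_prompt` helper here are all made up for the sketch, not any real API.

```python
# Hypothetical sketch: prepend reasoner-derived facts to a question so
# the LLM receives logical connections it can't derive itself.
inferred = [
    ("DigitalCamera", "subClassOf", "Camera"),
    ("FilmCamera", "subClassOf", "Camera"),
    ("FilmCamera", "hasComponent", "Film"),
    ("Camera", "hasComponent", "Lens"),
]

def build_prompt(question, facts):
    """Flatten (subject, predicate, object) triples into a context block."""
    lines = ["Known facts (from an ontology reasoner):"]
    lines += [f"- {s} {p} {o}" for s, p, o in facts]
    lines += ["", "Using only the facts above where relevant:", question]
    return "\n".join(lines)

prompt = build_prompt("Do film cameras have lenses?", inferred)
print(prompt)
```

The point is only that the reasoner, not the model, supplies the entailments; the LLM then works from given facts rather than statistical guesses.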
RAG (Score:1)
Re: (Score:2)
Yeah, that would count.
Specifically, with something like Protege, I can define how things relate: I could set up an ontology of cameras, with digital and film as subtypes, where lenses are a component of cameras, film is a component of film cameras, and pixel count is a property of digital cameras.
The reasoner could then tell you how the pieces relate, while a SPARQL query could also search a database for all records pertaining specifically to any of these parameters.
At least some search engines
Re: (Score:1)
This is where you're going wrong. An LLM doesn't output questions like this unless prompted to, at which point you're performing multiple LLM queries for every task, and you're going to have to customize those queries, automatically or otherwise. Pretty soon you're really in the weeds. LLMs are inherently limited in this way.
Re: (Score:2)
This. And I've even tried to encourage the chatbots to ask questions. (And answer briefly. And to heck with the fake polite speech.)
Re: (Score:2)
This sort of system is only useful because LLMs are limited. If they can be told to farm certain categories of step out to middleware, then whenever they encounter such a step, they should hand off the request. I've found, trying engineering problems, that LLMs burn a lot of steps figuring out what to collect, with a risk of hallucination. That's exactly the sort of thing that can be farmed out.
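The farming-out idea above could be sketched as a simple tool loop: the model is told to emit a structured request whenever it hits a step needing logical lookup, and the host runs that step against a reasoner or SPARQL store instead of letting the model guess. Everything here is hypothetical scaffolding; `call_llm` and `run_sparql` stand in for a real model API and a real endpoint, and are stubbed below just to exercise the loop.

```python
import json

def answer(question, call_llm, run_sparql, max_rounds=5):
    """Loop the LLM; farm any {"tool": "sparql", ...} reply out to the
    reasoner and feed the rows back into the conversation."""
    messages = [
        {"role": "system", "content":
         'If you need ontology facts, reply ONLY with JSON: '
         '{"tool": "sparql", "query": "..."}'},
        {"role": "user", "content": question},
    ]
    for _ in range(max_rounds):
        reply = call_llm(messages)
        try:
            request = json.loads(reply)
        except ValueError:
            return reply  # plain text: the model answered directly
        if request.get("tool") == "sparql":
            rows = run_sparql(request["query"])  # the farmed-out step
            messages.append({"role": "assistant", "content": reply})
            messages.append({"role": "user",
                             "content": "Tool result: " + json.dumps(rows)})
        else:
            return reply
    return "gave up: too many tool rounds"

# Stub round-trip, so the loop runs without a real model: first call
# requests a lookup, second call answers using the returned rows.
_script = iter([
    '{"tool": "sparql", "query": '
    '"SELECT ?c WHERE { ex:FilmCamera ex:hasComponent ?c }"}',
    "Film cameras need film and a lens.",
])
result = answer(
    "What components does a film camera have?",
    call_llm=lambda messages: next(_script),
    run_sparql=lambda q: ["ex:Film", "ex:Lens"],
)
print(result)
```

The design choice here is that the model never has to be right about the ontology, only about when to ask; the hallucination-prone collection step runs in deterministic middleware.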
According to both Claude and ChatGPT, that sort of process is the focus of a lot of research right now, although a