jd's Journal: Question: Can you use Semantic Reasoners with LLMs

There are ontology editors, such as Protege, that come with a slew of logical reasoners capable of telling you how pieces of information relate. This is a well-known weakness in LLMs, which learn statistical patterns but have no awareness of logical connections.

Is there a way to couple the two, so that if you have a complex set of ideas, you could provide the ontological network, plus some of the facts reasoned from it, to supplement the prompt and give the LLM information it cannot extract on its own?
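
A minimal sketch of the coupling I have in mind, assuming rdflib as the ontology store; the "reasoner" here is just a hand-rolled transitive closure over rdfs:subClassOf, and the resulting prompt string is what you'd hand to whatever LLM you use:

    from rdflib import Graph, Namespace, RDFS

    EX = Namespace("http://example.org/")
    g = Graph()
    g.bind("ex", EX)
    g.add((EX.DigitalCamera, RDFS.subClassOf, EX.Camera))
    g.add((EX.DSLR, RDFS.subClassOf, EX.DigitalCamera))

    # Poor man's reasoner: transitive closure of rdfs:subClassOf.
    base = set(g.subject_objects(RDFS.subClassOf))
    inferred = set(base)
    changed = True
    while changed:
        changed = False
        for a, b in list(inferred):
            for c, d in list(inferred):
                if b == c and (a, d) not in inferred:
                    inferred.add((a, d))
                    changed = True

    # Keep only the reasoned, non-asserted links: the part the LLM can't see.
    facts = "\n".join(f"{a.n3(g.namespace_manager)} is a kind of {b.n3(g.namespace_manager)}"
                      for a, b in inferred - base)
    prompt = ("Background facts, derived by a logical reasoner:\n" + facts +
              "\n\nQuestion: what kind of thing is a DSLR?")
    # `prompt` now carries the inference (ex:DSLR is a kind of ex:Camera) to the LLM.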


Comments:
  • I'm planning to implement retrieval-augmented generation: a program using an LLM, augmented in a way that should mitigate in-context forgetting. Does that count? I'm going to use a fuzzy vector search to find related terms and feed those to the LLM along with the rest of my text.
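
    Something like this minimal sketch is what I mean, assuming sentence-transformers for the embeddings (any embedding model would do) and leaving the LLM call itself out:

        import numpy as np
        from sentence_transformers import SentenceTransformer

        model = SentenceTransformer("all-MiniLM-L6-v2")
        chunks = [
            "Lenses are a component of every camera.",
            "Film is a component of film cameras only.",
            "Pixel count is a property of digital cameras.",
        ]
        chunk_vecs = model.encode(chunks, normalize_embeddings=True)

        def retrieve(query, k=2):
            # Fuzzy vector search: cosine similarity is a dot product on unit vectors.
            q = model.encode([query], normalize_embeddings=True)[0]
            scores = chunk_vecs @ q
            return [chunks[i] for i in np.argsort(-scores)[:k]]

        context = "\n".join(retrieve("What do digital cameras have?"))
        prompt = f"Use only this context:\n{context}\n\nQuestion: What do digital cameras have?"
        # `prompt` goes to the LLM along with the rest of the text.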
    • by jd ( 1658 )

      Yeah, that would count.

      Specifically, with something like Protege, I can define how things relate, so I could set up an ontology of cameras, with digital and film as subtypes, where lenses are a component of cameras, films are a component of film cameras, and pixel count is a property of digital cameras.

      The reasoner could then tell you how the bits relate, and a SPARQL query could also search for all records in a database pertaining specifically to any of those parameters.
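
      For instance, a sketch of that camera ontology and the SPARQL side, assuming rdflib (Protege would export the same structure as OWL/RDF):

          from rdflib import Graph, Literal, Namespace, RDF, RDFS

          EX = Namespace("http://example.org/camera#")
          g = Graph()
          g.bind("ex", EX)

          # Digital and film are subtypes of camera.
          g.add((EX.DigitalCamera, RDFS.subClassOf, EX.Camera))
          g.add((EX.FilmCamera, RDFS.subClassOf, EX.Camera))
          # Lenses are a component of cameras; film of film cameras only.
          g.add((EX.Camera, EX.hasComponent, EX.Lens))
          g.add((EX.FilmCamera, EX.hasComponent, EX.Film))
          # Pixel count is a property of digital cameras.
          g.add((EX.MyCamera, RDF.type, EX.DigitalCamera))
          g.add((EX.MyCamera, EX.pixelCount, Literal(24000000)))

          # SPARQL with a property path: everything that is, transitively, a kind of camera.
          q = """SELECT ?thing ?type WHERE {
              ?thing a ?type .
              ?type rdfs:subClassOf* ex:Camera .
          }"""
          for row in g.query(q, initNs={"ex": EX, "rdfs": RDFS}):
              print(row.thing, row.type)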

      At least some search engines

      • > in principle, nothing would stop Claude or ChatGPT saying "I need to find out about relationships involving pizza and cheese", and the reasoner could tell it.

        This is where you're going wrong. An LLM doesn't output questions like this unless prompted to, at which point you're performing multiple LLM queries for every task, and you're going to have to customize those queries, automatically or otherwise. Pretty soon you're really in the weeds. LLMs are inherently limited in this way.
        • by shanen ( 462549 )

          This. And I've even tried to encourage the chatbots to ask questions. (And answer briefly. And to heck with the fake polite speech.)

        • by jd ( 1658 )

          This sort of system is only useful because LLMs are limited. If they can be told to farm certain categories of step out to middleware, then whenever they encounter such a step they can hand the request off. I've found, when trying engineering problems, that LLMs burn a lot of steps working out what to collect, with a risk of hallucination along the way. That's exactly the sort of thing that can be farmed out.
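
          A sketch of the farming-out loop I mean, with the LLM call stubbed (call_llm is hypothetical, and the NEED_RELATIONS tag is a convention you'd have to prompt the model into using):

              import re

              def call_llm(prompt):
                  # Hypothetical stand-in for whatever chat API you use.
                  raise NotImplementedError

              def query_reasoner(a, b):
                  # Stand-in for a reasoner/SPARQL lookup of how a relates to b.
                  return f"{a} hasComponent {b}" if (a, b) == ("pizza", "cheese") else "unknown"

              def answer(question, max_hops=3):
                  prompt = ("When you need a relationship, emit exactly "
                            "NEED_RELATIONS(x, y) and stop.\n" + question)
                  reply = call_llm(prompt)
                  for _ in range(max_hops):
                      m = re.search(r"NEED_RELATIONS\((\w+),\s*(\w+)\)", reply)
                      if not m:
                          return reply  # the model answered directly
                      # Farm the step out to middleware instead of letting the LLM guess.
                      fact = query_reasoner(m.group(1), m.group(2))
                      prompt += f"\nReasoner says: {fact}\nContinue."
                      reply = call_llm(prompt)
                  return reply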

          According to both Claude and ChatGPT, that sort of process is the focus of a lot of research, right now, although a
