Yeah, that would count.
Specifically, with something like Protege, I can define how things relate, so I could set up an ontology of cameras, with digital and film as subtypes, where lenses are a component of cameras, films are a component of film cameras, and pixel count is a property of digital cameras.
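As a rough sketch of that ontology (plain Python standing in for OWL/RDF, with made-up class and predicate names), the camera setup above boils down to a handful of subject-predicate-object triples plus a tiny bit of reasoning over the subclass links:

```python
# Toy triple store sketching the camera ontology described above.
# Predicates loosely mirror OWL: "subClassOf", "hasComponent", "hasProperty".
CAMERA_TRIPLES = [
    ("DigitalCamera", "subClassOf", "Camera"),
    ("FilmCamera", "subClassOf", "Camera"),
    ("Camera", "hasComponent", "Lens"),
    ("FilmCamera", "hasComponent", "Film"),
    ("DigitalCamera", "hasProperty", "PixelCount"),
]

def superclasses(cls, triples=CAMERA_TRIPLES):
    """Naive reasoner step: walk subClassOf links transitively."""
    found = set()
    frontier = [cls]
    while frontier:
        current = frontier.pop()
        for s, p, o in triples:
            if s == current and p == "subClassOf" and o not in found:
                found.add(o)
                frontier.append(o)
    return found

def components(cls, triples=CAMERA_TRIPLES):
    """A subclass inherits components from its superclasses."""
    classes = {cls} | superclasses(cls, triples)
    return {o for s, p, o in triples if p == "hasComponent" and s in classes}
```

So `components("FilmCamera")` gives back both the film and the lens, because a film camera inherits the lens from the general Camera class; that inheritance step is the kind of thing the reasoner handles for you.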
The reasoner could then tell you how those pieces relate, while a SPARQL query could also pull every record in a database that matches any of those specific parameters.
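Under the hood, a SPARQL query is essentially matching triple patterns where some positions are variables. A minimal stdlib sketch of that matching step (toy data, not a real SPARQL engine):

```python
# Minimal sketch of SPARQL-style triple-pattern matching: positions
# starting with "?" are variables, and each matching triple yields a
# {variable: value} binding, like a one-pattern WHERE clause.
TRIPLES = [
    ("DigitalCamera", "subClassOf", "Camera"),
    ("FilmCamera", "subClassOf", "Camera"),
    ("DigitalCamera", "hasProperty", "PixelCount"),
]

def match(pattern, triples=TRIPLES):
    """Return variable bindings for every triple matching the pattern."""
    results = []
    for triple in triples:
        binding = {}
        for want, got in zip(pattern, triple):
            if want.startswith("?"):
                binding[want] = got
            elif want != got:
                break  # constant position didn't match, skip this triple
        else:
            results.append(binding)
    return results
```

For example, `match(("?cls", "subClassOf", "Camera"))` binds `?cls` to each camera subtype in turn, which is the shape of "find me everything that stands in this specific relation".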
At least some search engines already support this kind of query: you can say "find me every page referencing a book where this string is specifically identified as the title, not used in some other role, and where the page also references an email address".
So, in principle, nothing would stop Claude or ChatGPT saying "I need to find out about relationships involving pizza and cheese", and the reasoner could tell it.
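As a sketch of that tool-style lookup (toy data, hypothetical function name), the model's "how do pizza and cheese relate?" request could be answered with a single query that returns only the relevant triples rather than dumping the whole graph:

```python
# Toy relation lookup an LLM tool call could hit: return every stored
# triple that mentions any of the requested terms.
FOOD_TRIPLES = [
    ("Pizza", "hasTopping", "Cheese"),
    ("Cheese", "madeFrom", "Milk"),
    ("Pizza", "hasBase", "Dough"),
]

def relations_involving(*terms, triples=FOOD_TRIPLES):
    """One-step answer to 'how do these things relate?'"""
    wanted = set(terms)
    return [t for t in triples if wanted & {t[0], t[2]}]
```

Here `relations_involving("Cheese")` returns just the two triples that mention cheese, which is exactly the narrow, relationship-only answer the model needs.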
That means if you're designing a new project, you spend no tokens on material that's project-related but not relevant right now, and retrieving a relationship takes a single query step no matter how indirect it is.
This would seem to greatly reduce hallucination risk and keep things focussed.
What you're suggesting can bolt directly onto this, so Protege acts as a relationship manager and your add-on improves memory.
That trio (the LLM, Protege as reasoner, and your memory add-on) would seem to turn AI from a fun toy into a very powerful system.