Home
Welcome to the InferGPT wiki!
Why InferGPT? For three reasons:
Language models are great at predicting the next token, exactly as they are designed to do. The issue, compared to how humans converse, is that when one person asks something of another, we very rarely just spew out a response. Instead, we usually ask a question back. For example, if I ask you for a film recommendation and you know me well, you would think: "Chris loves Marvel, and I know there's been a recent film released", so you would ask: "Have you seen the latest Ant-Man?"
Alternatively, if you didn't know me well, you would ask something like: "What genre of films do you like?"
We believe knowledge graphs are the solution to this issue: they let the system understand the user's current profile and ask questions based on the missing context needed to solve their problem. The graph can also store conversations, context and new information as time goes on, always remaining contextually up to date.
Graphs are great at this sort of task. They infer fast and carry deep context in their edges. Most excitingly, they also:
- Act as super-vector stores via Neo4j's Cypher language, offering better performance than cosine-similarity methods.
- Make great recommendation models - graphs could even start to predict what you want to do next!
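The profile-and-missing-context idea above can be sketched in a few lines of plain Python (no Neo4j, just an adjacency list); the names `Chris`, `Marvel` and `Ant-Man 3` are illustrative placeholders from the film example, not part of InferGPT's actual data model:

```python
# Minimal sketch: a user profile stored as a graph lets an agent either
# ask a targeted question or fall back to a clarifying one when context
# is missing. Node/edge names here are hypothetical examples.

# Graph as adjacency lists: node -> list of (relationship, node) edges.
graph = {
    "Chris": [("LIKES", "Marvel")],
    "Marvel": [("RELEASED", "Ant-Man 3")],
}

def recommend_film(user: str) -> str:
    """Walk the graph: user -LIKES-> studio -RELEASED-> film."""
    for rel, liked in graph.get(user, []):
        if rel != "LIKES":
            continue
        for rel2, film in graph.get(liked, []):
            if rel2 == "RELEASED":
                # Enough context: ask a targeted question.
                return f"Have you seen the latest {film}?"
    # Missing context: ask a clarifying question instead.
    return "What genre of films do you like?"

print(recommend_film("Chris"))  # known profile: targeted question
print(recommend_film("Alex"))   # unknown user: clarifying question
```

In a real deployment the same traversal would be a one-line Cypher `MATCH` pattern against Neo4j rather than a Python loop; the point is that the edges themselves encode which question to ask next.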
For a deeper dive, I highly recommend the Neo4j Going Meta YouTube series.
Most agent architectures and research are great for proofs of concept, but none are suitable for production environments. The multi-agent architecture we're proposing addresses multiple production concerns: scalability, enablement of smaller models, explainability and modularisation.
That is to say, today's agents have no ability to learn, adapt or evolve with their environment. InferGPT proposes a solution to this problem.