An artificial intelligence support assistant for VegaProtocol.
Explore More About VegaProtocol »
AskVega is a custom knowledge-base large language model (LLM) application built with OpenAI tools on the GPT-3.5 architecture. It is designed to process and generate human-like responses to questions relating to VegaProtocol. It has been trained on large chunks of data and can understand and respond to a wide range of topics relating to the protocol, making it useful for tasks such as support assistance.
( back to top )
The entire project can be summarised in a few components:
- Frontend
  - Responsible for rendering a friendly interface for interaction
- Backend
  - Responsible for:
    - Knowledge indexing
    - Question and answer engineering
The indexer takes an array of URLs as input. It crawls each URL, extracting the HTML source code and PDF files, then collects all the nested URLs on that page; if a URL is a relative path, it builds an absolute path using the root domain being crawled at that moment. It then extracts all text from the page and stores it in memory as documents, with the source URL as metadata. At the end of the crawling process, these documents are retrieved from memory and split into chunks of no more than 1,000 characters, as specified by the OpenAI embedding specs. An embedding is created from each chunk and stored in a vector database for fast retrieval.
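The URL-resolution and chunking steps above can be sketched roughly in Python. This is a minimal illustration, not the project's actual indexer code; the function names and the document shape are assumptions for the sake of the example.

```python
from urllib.parse import urljoin


def resolve_url(root: str, href: str) -> str:
    """Build an absolute URL from a possibly relative href,
    using the root domain being crawled at that moment."""
    return urljoin(root, href)


def chunk_text(text: str, max_chars: int = 1000) -> list[str]:
    """Split extracted page text into chunks of at most max_chars
    characters, matching the 1k limit the indexer targets."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]


def to_documents(page_text: str, source_url: str) -> list[dict]:
    """Wrap each chunk as a document carrying its source URL as
    metadata, ready to be embedded and stored in the vector database."""
    return [
        {"text": chunk, "metadata": {"source": source_url}}
        for chunk in chunk_text(page_text)
    ]
```

Each document keeps its source URL, so answers can later cite where the text came from.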
When a user sends a message, the request is received as text input and passed through the question prompts for refinement, after which it is converted into a vector embedding and compared for similarity against our vector database. If a match is found, OpenAI uses it to build a conversational response.
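The similarity comparison described above can be sketched as a cosine-similarity lookup over the stored embeddings. This is an illustrative sketch, not the project's actual retrieval code; real deployments delegate this search to the vector database.

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def top_match(query_vec: list[float], docs: list[dict]) -> dict:
    """Return the stored document whose embedding is most similar to
    the (already embedded) user question."""
    return max(docs, key=lambda d: cosine_similarity(query_vec, d["embedding"]))
```

The text of the best-matching document is then handed to the LLM as context for composing the conversational answer.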
To get a local copy up and running
Ensure you have the latest version of the following installed:
- Nodejs
- NPM or Yarn
- Python
- Virtualenv
To get started:

1. Get an API key at OpenAi

2. Clone this repository

   ```sh
   git clone https://github.com/vega-builders-club/askvega.git
   ```

3. Install UI dependencies

   ```sh
   # From the root folder terminal run
   cd interface
   yarn install
   yarn dev
   ```

4. Install server dependencies

   ```sh
   # On a new terminal run
   app/scripts/install
   ```

5. Update `askvega-config` with your OpenAi key

6. Run the indexer to build embeddings for our custom knowledge base

   ```sh
   app/scripts/index
   ```

7. Start the application

   ```sh
   app/scripts/run
   ```

**NOTE:** You might need to make the scripts executable, for example:

```sh
chmod u+x scripts/install
```
( back to top )
The project ships with a minimal user interface built with Next.js, which interacts with the backend's WebSocket endpoint. In a local environment the UI will typically open on port 3000 and the backend server on port 80 by default.
- Prototype
- Deploy and test on live server
- Integrate the API with discord
- Fine-tune the prompts further
- Clean up the backend knowledge base indexer to be more performant, modular, and readable
If you would like to propose a new feature or an amendment, please open an issue.
( back to top )
Feel like contributing? Awesome! We have a contributing guide to help you get started.
VegaProtocol lives in a separate repository. If you're interested in contributing to the core protocol, check out VegaProtocol.
( back to top )