This is a fork of https://github.com/assafelovic/gpt-researcher with some modifications to make it work with local Ollama models.
- Clone this repository
- Install the requirements
- Copy the `.env.example` file to `.env` and fill in the fields accordingly
- Make sure the Ollama model is already downloaded and the Ollama server is running
- The example uses `mixtral:8x7b-instruct-v0.1-q6_K`
- For embeddings I'm using llama3, so unless you change `gpt_researcher/memory/embeddings.py` to use something else, you will need to run `ollama pull llama3`
- Modify the `query` variable in the `main.py` file with the prompt you want to use for your research
- Run the `main.py` file
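The steps above can be run end to end roughly as follows. This is a sketch, not part of the repository: it assumes `pip`, a `requirements.txt` in the repository root, and an Ollama installation with `ollama` on your PATH; the clone URL shown is the upstream project, so substitute this fork's URL.

```shell
# Clone the repository (substitute this fork's URL for the upstream one)
git clone https://github.com/assafelovic/gpt-researcher
cd gpt-researcher

# Install the Python requirements (assumes requirements.txt at the repo root)
pip install -r requirements.txt

# Create your local configuration from the template, then edit .env
# and fill in the fields accordingly
cp .env.example .env

# Download the models and make sure the Ollama server is running
ollama pull mixtral:8x7b-instruct-v0.1-q6_K
ollama pull llama3    # needed unless you change gpt_researcher/memory/embeddings.py
ollama serve &        # skip if the server is already running

# After editing the `query` variable in main.py, run it
python main.py
```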