Talos is meant to be a powerful interface for easily building automated workflows that use large language models (LLMs) to parse, extract, summarize, and otherwise transform different types of content, then push the results to other APIs, databases, and services.
Think Zapier but with language model magic 🪄.
[Demo video: the summarization step is sped up (~1 minute).]

Example workflows:

- Fetch this Wikipedia page.
- Extract who the page is about, give a summary, and extract any topics discussed.
- Put into a template.

- Read this Yelp review.
- Extract the sentiment and give me a list of complaints and/or praises.
- Put into a report template.
- Email it to me.

- Read through this PDF file.
- Create a bullet point summary of the entire book.
- Generate key takeaways from the book.
  - For each takeaway, elaborate.
- Combine into a template.
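A workflow like the ones above is a sequence of steps where each step's output feeds the next. As a rough sketch of that idea (the names and step signatures here are illustrative, not Talos's actual API), such a pipeline could be composed like this:

```typescript
// Hypothetical sketch: each step transforms a piece of content.
// None of these names come from Talos itself.
type Step = (input: string) => string;

// Compose a list of steps into a single workflow function.
function workflow(...steps: Step[]): Step {
  return (input) => steps.reduce((acc, step) => step(acc), input);
}

// Toy stand-ins for an "extract" step and a "template" step.
const extractSentiment: Step = (text) =>
  text.includes("great") ? "positive" : "negative";
const intoTemplate: Step = (sentiment) =>
  `Report: the review sentiment is ${sentiment}.`;

const reviewReport = workflow(extractSentiment, intoTemplate);
console.log(reviewReport("The food was great!"));
// → "Report: the review sentiment is positive."
```

In practice the interesting steps (summarize, extract topics) would call an LLM instead of simple string checks, but the composition pattern is the same.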
To run Talos locally, install the dependencies and start the front-end:
```shell
# Copy the environment variables
> cp .env.template .env.local

# Install dependencies
> npm install

# Start the front-end
> npm run start
```
The UI will be available at http://localhost:3000/playground.
Now we need to start the backend.
memex is a self-hosted LLM backend & memory store that exposes basic LLM functionality as a RESTful API.
To get started, you will need to either download a local LLM model or add your OpenAI API key to the .env.template file.
```shell
> git clone https://github.com/spyglass-search/memex
> cd memex
> docker-compose up
```
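Once memex is running, the LLM functionality it exposes over REST can be exercised with a plain HTTP call. The endpoint path, port, and payload shape below are assumptions for illustration only, not memex's documented API — check the memex repository for the real routes:

```typescript
// Hypothetical sketch: "/api/summarize", the port, and the request/response
// fields are assumptions, not memex's documented API.
interface HttpRequest {
  url: string;
  method: "POST";
  headers: Record<string, string>;
  body: string;
}

// Build a request for a hypothetical summarization endpoint.
function buildSummarizeRequest(baseUrl: string, text: string): HttpRequest {
  return {
    url: `${baseUrl}/api/summarize`, // assumed path
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }),
  };
}

// Sending it only works once the backend is actually running:
async function summarize(text: string): Promise<string> {
  const req = buildSummarizeRequest("http://localhost:8080", text); // assumed port
  const res = await fetch(req.url, {
    method: req.method,
    headers: req.headers,
    body: req.body,
  });
  const json = await res.json();
  return json.summary; // assumed response field
}
```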