# retell-custom-llm-python-demo

This is a sample demo repo showing how to plug your own LLM into Retell.

This repo currently uses the OpenAI endpoint. Feel free to contribute to make this demo more realistic.
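For orientation, here is a minimal sketch of what that OpenAI call might look like with the current `openai` Python SDK. The model name, system prompt, and function name are illustrative placeholders, not necessarily what this repo ships:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def stream_reply(transcript: list[dict]):
    # Ask the model for a reply and yield it chunk by chunk, so partial
    # text can be forwarded over the websocket as it arrives.
    stream = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "system", "content": "You are a helpful voice agent."}]
        + transcript,
        stream=True,
    )
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            yield delta
```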

## Steps to run on localhost

1. Install dependencies:

   ```bash
   pip3 install -r requirements.txt
   ```

2. Fill out the API keys in `.env` (an example is sketched after this list).

3. In another terminal, use ngrok to expose this port to the public network:

   ```bash
   ngrok http 8080
   ```

4. Start the websocket server:

   ```bash
   uvicorn app.server:app --reload --port=8080
   ```
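Step 2 refers to a `.env` file in the repo root. The exact variable names depend on the code; `OPENAI_API_KEY` is the standard name the OpenAI SDK reads, and the rest should be treated as placeholders:

```
OPENAI_API_KEY=sk-...
RETELL_API_KEY=your-retell-key
```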

You should see a forwarding address like `https://dc14-2601-645-c57f-8670-9986-5662-2c9a-adbd.ngrok-free.app`. Take the hostname `dc14-2601-645-c57f-8670-9986-5662-2c9a-adbd.ngrok-free.app`, prepend `wss://`, and append `/llm-websocket` (the route set up in the code to handle the LLM websocket connection) to form the URL to enter in the dashboard when creating a new agent. The agent you create should then connect to your localhost.
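On the server side, that route is a websocket endpoint, presumably defined with FastAPI (which the `uvicorn app.server:app` invocation suggests). A simplified sketch, with Retell's actual message schema elided and `build_reply` standing in as a hypothetical helper for the LLM call above:

```python
from fastapi import FastAPI, WebSocket

app = FastAPI()

@app.websocket("/llm-websocket")
async def llm_websocket(ws: WebSocket):
    await ws.accept()
    # Retell sends conversation updates as JSON over this socket;
    # the server replies with generated text on the same socket.
    async for message in ws.iter_json():
        reply = build_reply(message)  # hypothetical helper calling your LLM
        await ws.send_json(reply)
```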

The custom LLM URL would look like `wss://dc14-2601-645-c57f-8670-9986-5662-2c9a-adbd.ngrok-free.app/llm-websocket`.

## Run in prod

To run in production, you will probably want to customize the LLM logic, host the code on a cloud provider, and use that host's public address when creating the agent.
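For example, a typical cloud deployment would drop `--reload` and bind to all interfaces (both standard uvicorn flags), then use that machine's public address in the custom LLM URL:

```bash
uvicorn app.server:app --host 0.0.0.0 --port 8080
```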