This repository contains examples of how to build LLM (Large Language Model) Function Calling serverless functions with the YoMo framework.
YoMo supports multiple LLM providers, such as Ollama, Mistral, Llama, Azure OpenAI, and Cloudflare AI Gateway. You can choose the one you want to use; details can be found in Doc: LLM Providers and Doc: Configuration.
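The provider is selected in the zipper's configuration file. The sketch below is illustrative only; the authoritative key names and options are in Doc: Configuration and may differ across YoMo versions:

```yaml
# Illustrative zipper config for the LLM bridge (verify key names
# against Doc: Configuration for your YoMo version).
name: llm-zipper
host: 0.0.0.0
port: 9000

bridge:
  ai:
    server:
      addr: 0.0.0.0:8000   # OpenAI-compatible endpoint exposed by the bridge
      provider: openai     # which provider block below is active

    providers:
      openai:
        api_key: <YOUR_API_KEY>
        model: gpt-4o
      ollama:
        api_endpoint: http://localhost:11434/v1
```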
- tool-get-utc-time: Get the UTC time by city name (see the sketch after this list).
- tool-currency-converter: Currency calculator backed by a 3rd-party API.
- tool-get-weather: Get weather information by city name via a 3rd-party API.
- tool-timezone-calculator: Calculate the timezone for a specific time.
- tool-get-ip-and-latency: Get the IP address and latency of a given website name, like "Nike" or "Amazon", using the `ping` command.
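As a rough sketch of what one of these tools looks like, here is a minimal version in the spirit of tool-get-utc-time. It follows the `Description`/`InputSchema`/`Handler` pattern used by YoMo's LLM function-calling serverless examples; the exact exports and `serverless.Context` method names may vary between YoMo versions, so treat the tool folders in this repository as authoritative:

```go
package main

import (
	"fmt"
	"time"

	"github.com/yomorun/yomo/serverless"
)

// Parameter describes the arguments the LLM extracts from the user prompt;
// the jsonschema tags become the function-calling argument schema.
type Parameter struct {
	TimezoneID string `json:"timezone_id" jsonschema:"description=The IANA timezone id of the city, e.g. Asia/Singapore"`
}

// Description tells the LLM what this tool does, so it can decide when to call it.
func Description() string {
	return "Get the current time of a city by its IANA timezone id, together with the UTC time."
}

// InputSchema exposes Parameter as the tool's argument schema.
func InputSchema() any {
	return &Parameter{}
}

// DataTags declares the data tag this function observes.
// 0x62 is an arbitrary value chosen for this sketch.
func DataTags() []uint32 {
	return []uint32{0x62}
}

// Handler is invoked by the YoMo LLM Bridge when the model calls this tool.
func Handler(ctx serverless.Context) {
	var p Parameter
	if err := ctx.ReadLLMArguments(&p); err != nil {
		ctx.WriteLLMResult(fmt.Sprintf("failed to parse arguments: %v", err))
		return
	}

	loc, err := time.LoadLocation(p.TimezoneID)
	if err != nil {
		ctx.WriteLLMResult(fmt.Sprintf("unknown timezone id: %s", p.TimezoneID))
		return
	}

	now := time.Now()
	ctx.WriteLLMResult(fmt.Sprintf("local time in %s is %s, UTC time is %s",
		p.TimezoneID, now.In(loc).Format(time.RFC3339), now.UTC().Format(time.RFC3339)))
}
```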
Check Docs: Self Hosting for details on how to deploy the YoMo LLM Bridge and Function Calling serverless functions on your own infrastructure. Furthermore, if your AI agents become popular with users all over the world, consider deploying to multiple regions to improve LLM response speed. Check Docs: Geo-distributed System for instructions on making your AI applications faster and more reliable.
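A typical self-hosted flow is to start a zipper with the LLM bridge enabled, then attach one or more tools to it. A hedged sketch, assuming the YoMo CLI is installed and using the hypothetical file names from the config example above (see Docs: Self Hosting for the exact commands):

```sh
# Start the YoMo zipper with the LLM bridge, using the config sketched above
# (config.yaml is a hypothetical file name).
yomo serve -c config.yaml

# In another terminal, attach a tool to the running zipper.
cd tool-get-utc-time
yomo run app.go
```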