An easy-to-use, simple, and lightweight interface for running your local LLMs, for everyone.
- Generate (see the API sketch after this list)
  - Model selection
  - Temperature
  - Image input for llama3.2-vision:latest
- Model Management
  - View and delete models
  - Pull new models
- Local server status
- Dark mode
- Include context: full or selection
- Clear chat history
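
The features above map onto a handful of endpoints in Ollama's documented HTTP API. The TypeScript sketch below shows roughly how generation (with a chosen model, a temperature, and optional image input for a vision model) and basic model management could be wired up; the function names are illustrative, and this is a minimal example under those assumptions, not the exact code used by this project.

```ts
// Minimal sketch against the Ollama HTTP API at its default address.
const OLLAMA = "http://localhost:11434";

// Generate: pick a model, set the sampling temperature, and optionally
// attach base64-encoded images for a model like llama3.2-vision:latest.
async function generate(
  model: string,
  prompt: string,
  temperature = 0.7,
  images?: string[], // base64-encoded image data, no "data:" prefix
): Promise<string> {
  const res = await fetch(`${OLLAMA}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model,
      prompt,
      stream: false,            // return one JSON object instead of a stream
      options: { temperature }, // sampling temperature
      ...(images ? { images } : {}),
    }),
  });
  const data = await res.json();
  return data.response;
}

// Model management: list installed models, delete one by name.
async function listModels(): Promise<string[]> {
  const res = await fetch(`${OLLAMA}/api/tags`);
  const data = await res.json();
  return data.models.map((m: { name: string }) => m.name);
}

async function deleteModel(model: string): Promise<void> {
  // Note: older Ollama versions expect the field "name" instead of "model".
  await fetch(`${OLLAMA}/api/delete`, {
    method: "DELETE",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model }),
  });
}
```

For example, `await generate("llama3.2-vision:latest", "Describe this image", 0.2, [imgBase64])` would be one way to exercise the image-input path. Pulling a new model works the same way via a POST to `/api/pull`.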
- `0-basic`: Basic proof of concept of ollama-gui
- `chat`: The main project
- `persona-studio`: Build and manage your own personas
- `everychat`: Your AI chat multiverse
- Ollama setup: install the Ollama app for Mac (you can download a model now, or just proceed and use the GUI)
- Quit the Ollama app (check the icon in your menu bar).
- Open a terminal and run `ollama serve`. Keep that terminal window open.
- Check http://localhost:11434/; it should say "Ollama is running" (a programmatic check is sketched after these steps).
- Download the repo and open `web/chat/index.html` in your browser.
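
Since the server's root endpoint answers with the plain text "Ollama is running", the GUI's local server status indicator can be implemented as a single request. A minimal sketch (the function name is illustrative):

```ts
// Status probe: the Ollama root endpoint replies with the plain text
// "Ollama is running" when the server is up.
async function isOllamaRunning(url = "http://localhost:11434/"): Promise<boolean> {
  try {
    const res = await fetch(url);
    return res.ok && (await res.text()).includes("Ollama is running");
  } catch {
    return false; // connection refused: the server is not running
  }
}
```

Note that a page opened straight from the local filesystem may need Ollama's `OLLAMA_ORIGINS` environment variable adjusted before the browser is allowed to call the API.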