- 🤖 Gollama: Ollama in your terminal, Your Offline AI Copilot 🦙
Gollama is a delightful tool that brings Ollama, your offline conversational AI companion, directly into your terminal. It provides a fun and interactive way to generate responses from various models without needing internet connectivity. Whether you're brainstorming ideas, exploring creative writing, or just looking for inspiration, Gollama is here to assist you.
- Chat TUI with History: Gollama now provides a chat-like TUI experience with a history of previous conversations. Conversations are saved locally in a SQLite database so you can continue them later.
- Interactive Interface: Enjoy a seamless user experience with an intuitive interface powered by Bubble Tea.
- Customizable Prompts: Tailor your prompts to get precisely the responses you need.
- Multiple Models: Choose from a variety of models to generate responses that suit your requirements.
- Visual Feedback: Stay engaged with visual cues like spinners and formatted output.
- Multimodal Support: Gollama now supports multimodal models like Llava.
- Model Installation & Management: Easily install and manage models using the Ollamanager library, which is directly integrated with Gollama. Refer to the Ollama Model Management section for more details.
- Ollama installed on your system, or an Ollama API server accessible from your machine (default: `http://localhost:11434`, optionally configurable using the `OLLAMA_HOST` environment variable; refer to the official Ollama Go SDK docs for further information).
- At least one model installed on your Ollama server. You can install models using the `ollama pull <model-name>` command. To find a list of all available models, check the Ollama Library. You can also use the `ollama list` command to list all locally installed models.
You can install Gollama using one of the following methods:
Grab the latest release from the releases page and extract the archive to a location of your choice.
Note
Prerequisite: Go installed on your system.
You can also install Gollama using the `go install` command:
go install github.com/gaurav-gosain/gollama@latest
You can pull the latest docker image from the GitHub Docker Container Registry and run it using the following command:
docker run --net=host -it --rm ghcr.io/gaurav-gosain/gollama:latest
- Run the executable: `gollama` (or `/path/to/gollama`)
- Follow the on-screen instructions to interact with Gollama.
Note
Running Gollama with the `-h` flag will display the list of available flags.
-v, --version         Print the version of Gollama
-m, --manage          Manage installed Ollama models (update/delete installed models)
-i, --install         Install an Ollama model (download and install a model)
-r, --monitor         Monitor the status of running Ollama models
    --model string    Model to use for generation
    --prompt string   Prompt to use for generation
    --images strings  Paths to the image files to attach (png/jpg/jpeg), comma separated
Warning
Responses from multimodal LLMs are slower than those from text-only models (the delay also depends on the size of the attached image).
| Key | Description |
|---|---|
| ↑/k | Up |
| ↓/j | Down |
| →/l/pgdn | Next page |
| ←/h/pgup | Previous page |
| g/home | Go to start |
| G/end | Go to end |
| enter | Select chat |
| q | Quit |
| d | Delete chat |
| ctrl+n | New chat |
| ? | Toggle extended help |
| Key | Description |
|---|---|
| ctrl+up/k | Move view up |
| ctrl+down/j | Move view down |
| ctrl+u | Half page up |
| ctrl+d | Half page down |
| ctrl+p | Previous message |
| ctrl+n | Next message |
| ctrl+y | Copy last response |
| alt+y | Copy highlighted message |
| ctrl+o | Toggle image picker |
| ctrl+x | Remove attachment |
| ctrl+h | Toggle help |
| ctrl+c | Exit chat |
Note
The `ctrl+o` keybinding only works if the selected model is multimodal.
Note
The management screens can be chained together.
For example, using the flags `-imr` will run Ollamanager with tabs for installing, managing, and monitoring models.
The following keybindings are common to all model management screens:
| Key | Description |
|---|---|
| ? | Toggle help menu |
| ↑/k | Move up |
| ↓/j | Move down |
| ←/h | Move left |
| →/l | Move right |
| enter | Pick selected item |
| / | Filter/fuzzy find items |
| esc | Clear filter |
| q/ctrl+c | Quit |
| n/tab | Switch to the next tab |
| p/shift+tab | Switch to the previous tab |
Note
The following keybindings are specific to the Manage Models screen/tab:
| Key | Description |
|---|---|
| u | Update selected model |
| d | Delete selected model |
Note
Gollama uses the Ollamanager library to manage models. It provides a convenient way to install, update, and delete models.
echo "Once upon a time" | gollama --model="llama3.1" --prompt="prompt goes here"
gollama --model="llama3.1" --prompt="prompt goes here" < input.txt
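In piped mode, anything on stdin is read and combined with the value of the `--prompt` flag before being sent to the model. A rough stdlib-only sketch of that behavior (illustrative only, not Gollama's actual implementation; `buildPrompt` is a hypothetical helper):

```go
package main

import (
	"fmt"
	"io"
	"os"
	"strings"
)

// buildPrompt joins the --prompt flag value with any piped stdin input,
// mirroring how a CLI might assemble the final request text.
func buildPrompt(promptFlag, piped string) string {
	piped = strings.TrimSpace(piped)
	if piped == "" {
		return promptFlag
	}
	return promptFlag + "\n\n" + piped
}

func main() {
	piped := ""
	// Only read stdin when it is a pipe or file, not an interactive terminal.
	if stat, err := os.Stdin.Stat(); err == nil && (stat.Mode()&os.ModeCharDevice) == 0 {
		data, _ := io.ReadAll(os.Stdin)
		piped = string(data)
	}
	fmt.Println(buildPrompt("prompt goes here", piped))
}
```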
Important
Not all models support image input; check that the selected model is multimodal.
gollama --model="llava:latest" \
--prompt="prompt goes here" \
--images="path/to/image.png"
Warning
The `--model` and `--prompt` flags are mandatory for CLI mode. The `--images` flag is optional.
You can also run Gollama locally using docker:
- Clone the repository: `git clone https://github.com/Gaurav-Gosain/gollama.git`
- Navigate to the project directory: `cd gollama`
- Build the docker image: `docker build -t gollama .`
- Run the docker image: `docker run --net=host -it gollama`
Note
The above commands build the docker image with the tag `gollama`. You can replace `gollama` with any tag of your choice.
If you prefer to build from source, follow these steps:
- Clone the repository: `git clone https://github.com/Gaurav-Gosain/gollama.git`
- Navigate to the project directory: `cd gollama`
- Build the executable: `go build`
Gollama relies on the following third-party packages:
- ollama: The official Go SDK for ollama.
- ollamanager: A Go library for installing, managing and monitoring ollama models.
- bubbletea: A library for building terminal applications using the Model-Update-View pattern.
- glamour: A markdown rendering library for the terminal.
- huh: A library for building terminal-based forms.
- lipgloss: A library for styling text output in the terminal.
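The Model-Update-View pattern that Bubble Tea is built around can be illustrated with a tiny stdlib-only sketch: all state lives in a model, `Update` maps an incoming event to the next state, and `View` renders the state as a string. (This mimics the shape of Bubble Tea's interfaces rather than using the library itself; the counter example is hypothetical.)

```go
package main

import "fmt"

// msg is an event delivered to the model (a key press, in this sketch).
type msg string

// counterModel holds all UI state. Update returns the next state and
// View renders it, echoing the shape of Bubble Tea's Model interface.
type counterModel struct{ count int }

func (m counterModel) Update(message msg) counterModel {
	switch message {
	case "k":
		m.count++
	case "j":
		m.count--
	}
	return m
}

func (m counterModel) View() string {
	return fmt.Sprintf("count: %d (k to increment, j to decrement)", m.count)
}

func main() {
	m := counterModel{}
	// Feed a few simulated key presses through the update loop.
	for _, key := range []msg{"k", "k", "j"} {
		m = m.Update(key)
	}
	fmt.Println(m.View())
}
```

Because state transitions are pure functions of (model, message), the UI logic is easy to test without a terminal attached.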
- Implement piped mode for automated usage.
- Add ability to copy responses to clipboard.
- GitHub Actions for automated releases.
- Add support for downloading models directly from Ollama using the REST API.
- Add support for extracting and copying codeblocks from the generated responses.
- Add CLI options to interact with the database and perform operations like:
- Deleting chats
- Creating a new chat
- Listing chats
- Continuing a chat from the CLI
Contributions are welcome! Whether you want to add new features, fix bugs, or improve documentation, feel free to open a pull request.
This project is licensed under the MIT License - see the LICENSE file for details.