A powerful stand-alone AI application bundle
Belullama is a comprehensive AI application that bundles Ollama, Open WebUI, and Automatic1111 (Stable Diffusion WebUI) into a single, easy-to-use package. It allows you to create and manage conversational AI applications and generate images with minimal setup.
Belullama provides a complete solution for running large language models and image-generation models on your local machine: Ollama runs the LLMs, Open WebUI provides the user-friendly interface, and Automatic1111 handles Stable Diffusion image generation.
- All-in-One AI Platform: Belullama integrates Ollama, Open WebUI, and Automatic1111 (Stable Diffusion WebUI) in a single package.
- Easy Setup: The stand-alone version comes with a simple installer script for quick deployment.
- Conversational AI: Create and manage chatbots and conversational AI applications with ease (see the scripted example after this list).
- Image Generation: Generate images using Stable Diffusion models through the Automatic1111 WebUI.
- User-Friendly Interface: Open WebUI provides an intuitive interface for interacting with language models.
- Offline Operation: Run entirely offline, ensuring data privacy and security.
- Extensibility: Customize and extend functionalities to meet your specific requirements.
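As a quick illustration of the conversational side: once Belullama is running, the bundled Ollama server can be queried directly from the command line. This is a minimal sketch that assumes Ollama is listening on its default port (11434) and that a model such as `llama3` has already been pulled; adjust both to match your setup.

```bash
# Minimal completion request against the bundled Ollama server.
# Assumes the default Ollama port (11434) and that the "llama3"
# model has already been pulled (e.g. via Open WebUI's model manager).
curl -s http://localhost:11434/api/generate \
  -d '{
    "model": "llama3",
    "prompt": "Explain what a large language model is in one sentence.",
    "stream": false
  }'
```

This is the same API that Open WebUI talks to behind the scenes, so anything you can do in the interface can also be scripted.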
To install the stand-alone version of Belullama, which includes Ollama, Open WebUI, and Automatic1111, use the following command:
```bash
curl -s https://raw.githubusercontent.com/ai-joe-git/Belullama/main/belullama_installer.sh | sudo bash
```
This script will set up all components and configure them to work together seamlessly.
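After the installer finishes, a quick sanity check can confirm everything is up. The commands below are a sketch that assumes the installer deploys the components as Docker containers and that Ollama listens on its default port (11434); container names and ports may differ on your system.

```bash
# List running containers to confirm the stack came up
# (exact container names depend on the installer).
sudo docker ps

# Check that the Ollama API answers on its default port.
curl -s http://localhost:11434/api/tags
```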
If you prefer to install Belullama as a CasaOS app, follow these steps:
- Access your CasaOS server through your web browser.
- Click the "+" button and select "Install a customized app".
- Download the Docker configuration file from the Belullama repository (a rough sketch of what it wires together follows this list).
- In the CasaOS interface, click "Install" and follow the prompts to complete the installation.
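To give a rough idea of what the app definition wires together, the sketch below shows approximately equivalent `docker run` commands for two of the three components, using their well-known public images. It is illustrative only: ports, volume names, and environment variables are assumptions, the Automatic1111 service is omitted because its container image varies by distribution, and the file shipped in the Belullama repository remains the authoritative definition.

```bash
# Illustrative only -- the app definition in the Belullama repository
# is authoritative. Ports and volume names here are assumptions.

# Ollama: the LLM runtime, serving its API on port 11434.
docker run -d --name ollama \
  -p 11434:11434 \
  -v ollama:/root/.ollama \
  ollama/ollama

# Open WebUI: the chat front end, pointed at the Ollama API.
docker run -d --name open-webui \
  -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -v open-webui:/app/backend/data \
  ghcr.io/open-webui/open-webui:main
```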
We're excited to announce that we're actively working on an NVIDIA GPU-compatible version of Belullama! This upcoming release will allow users with NVIDIA graphics cards to leverage their GPU power for significantly faster processing and improved performance.
As we're in the final stages of development, we're looking for beta testers to help ensure the NVIDIA version works flawlessly across different setups. If you have an NVIDIA GPU and would like to contribute to the project as a beta tester, please try the GPU-supported version:
To install the GPU version of Belullama, which includes Ollama, Open WebUI, and Automatic1111, use the following command:
```bash
curl -s https://raw.githubusercontent.com/ai-joe-git/Belullama/main/belullama_installer_gpu.sh | sudo bash
```
This script will set up all components and configure them to work together seamlessly.
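Independent of Belullama, it is worth confirming that Docker can see your GPU before relying on the GPU build. The check below is the standard NVIDIA Container Toolkit smoke test; the CUDA image tag is only an example.

```bash
# Confirm the NVIDIA driver is installed on the host.
nvidia-smi

# Confirm containers can access the GPU (requires the NVIDIA
# Container Toolkit; the CUDA image tag is just an example).
sudo docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```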
While we don't have a fixed release date yet, we're aiming to launch the NVIDIA-compatible version very soon. Stay tuned to this repository for updates!
Please note that the current stable version of Belullama is CPU-based. If you're eager to start using Belullama right away, you can still enjoy its features on your CPU.
We appreciate your patience and support as we work to make Belullama even more powerful and accessible. Thank you for being part of our community!
After installation, you can start using Belullama:
- Access Open WebUI through your web browser (the URL will be provided after installation).
- Use the interface to interact with language models, create chatbots, or generate text.
- To access Stable Diffusion WebUI, use the provided URL for Automatic1111.
- Follow the on-screen instructions to generate images or fine-tune models (a scripted example follows this list).
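Beyond the browser, Automatic1111 also exposes a REST API when it is launched with the `--api` flag. Whether Belullama enables that flag is an assumption here; if it does, a generation can be scripted as in the sketch below, which assumes the default port 7860.

```bash
# Request one image from the Automatic1111 API (assumes the WebUI was
# started with --api and listens on the default port 7860).
curl -s http://localhost:7860/sdapi/v1/txt2img \
  -H "Content-Type: application/json" \
  -d '{"prompt": "a watercolor lighthouse at dawn", "steps": 20}' \
  -o response.json

# The response contains base64-encoded images; decode the first one.
python3 -c "import json, base64; open('out.png','wb').write(base64.b64decode(json.load(open('response.json'))['images'][0]))"
```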
For detailed usage instructions, please refer to the documentation in the Belullama repository.
Contributions to Belullama are welcome! If you have ideas, bug reports, or feature requests, please open an issue in the repository. Pull requests for code improvements or new features are also appreciated.
Belullama is released under the MIT License. See the LICENSE file in the repository for details.