- Dropdown menu for quickly switching between different models.
- Large number of extensions (built-in and user-contributed), including Coqui TTS for realistic voice output, Whisper STT for voice input, translation, multimodal pipelines, vector databases, Stable Diffusion integration, and a lot more. See the wiki and the extensions directory for details.
- Precise chat templates for instruction-following models, including Llama-2-chat, Alpaca, Vicuna, and Mistral.
- LoRA: train new LoRAs with your own data, and load/unload LoRAs on the fly for generation.
- Transformers library integration: load models in 4-bit or 8-bit precision through bitsandbytes, use llama.cpp with transformers samplers (llamacpp_HF loader), and run CPU inference in 32-bit precision using PyTorch.
- OpenAI-compatible API server with Chat and Completions endpoints -- see the examples.
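The OpenAI-compatible API above can be exercised with a minimal client sketch. This is an assumption-laden example, not the project's official client: it assumes the server was started with the `--api` flag and listens on the default `http://127.0.0.1:5000`, and the `mode` field is a webui-specific extension that plain OpenAI clients can omit; check the wiki if your setup differs.

```python
# Minimal sketch of calling text-generation-webui's OpenAI-compatible
# Chat Completions endpoint. Assumes the default API address
# http://127.0.0.1:5000 (start the server with --api); adjust as needed.
import json
import urllib.request

API_BASE = "http://127.0.0.1:5000/v1"  # assumed default; may differ

def build_chat_request(prompt, mode="chat"):
    """Build the URL and JSON body for a /v1/chat/completions call."""
    url = f"{API_BASE}/chat/completions"
    body = {
        "messages": [{"role": "user", "content": prompt}],
        "mode": mode,       # webui-specific field; plain OpenAI clients omit it
        "max_tokens": 200,
    }
    return url, body

def chat(prompt):
    """POST the request and return the assistant's reply text."""
    url, body = build_chat_request(prompt)
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.loads(resp.read())
    return data["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(chat("Hello!"))
```

Because the endpoint follows the OpenAI wire format, the official `openai` Python client can also be pointed at it by setting its `base_url` to the local address.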
### Description

A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), and Llama models.

### Official Website

https://github.com/oobabooga/text-generation-webui

### Documentation link

https://github.com/oobabooga/text-generation-webui/wiki

### Last application release & date

snapshot-2024-02-25

### Application license

AGPL-3.0

### Source code repository link

https://github.com/oobabooga/text-generation-webui

### Docker image link

https://github.com/oobabooga/text-generation-webui/tree/main/docker

### Others

No response

### Please confirm the following
Gathering crowds
Hey folks!
Please upvote ⬆️ this discussion to show your interest in this request!
Thanks ⛺