This repository has been archived by the owner on May 31, 2023. It is now read-only.
Releases · ChaoticByte/Eucalyptus-Chat
Version 4.3
- Redesigned the chat history
- Renamed the vicuna-v0 and vicuna-v1.1 profiles
- Updated the README
Version 4.2
- Bumped llama-cpp-python[server] from 0.1.54 to 0.1.56
- Changed the profile format (see the sketch after this list)
- Added support for Vicuna v1.1
- Added support for Manticore Chat
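For illustration, here is a rough sketch of what a Vicuna v1.1 profile could contain in the new format. This is a hypothetical Python snippet, not the repository's actual profile definition; only the Vicuna v1.1 prompt conventions (USER:/ASSISTANT: turns and the stock system prompt) are standard, all field names are made up for this example.

```python
# Hypothetical sketch of a chat profile for Vicuna v1.1. Field names are
# illustrative; the real Eucalyptus-Chat profile format is defined in the repo.
vicuna_v1_1 = {
    "name": "vicuna-v1.1",
    "system_prompt": (
        "A chat between a curious user and an artificial intelligence assistant. "
        "The assistant gives helpful, detailed, and polite answers to the user's questions."
    ),
    "user_prefix": "USER: ",
    "assistant_prefix": "ASSISTANT: ",
    "stop_sequences": ["USER:"],
}
```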
Version 4.1
- Bugfix: Preserve whitespace in messages (#10)
- Minor code improvements
Version 4.0
BREAKING! Bumped llama-cpp-python[server] from 0.1.50 to 0.1.54
This (again) requires re-quantized models. The new format is ggml v3. See ggerganov/llama.cpp#1508
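For anyone migrating older models, a rough sketch of the re-quantization workflow is shown below. It assumes a llama.cpp checkout recent enough to produce ggml v3 files (see the linked PR), original FP16 weights on disk, and illustrative paths; the exact converter invocation and output file names depend on the llama.cpp version.

```python
# Rough sketch: re-create a ggml v3 quantized model from the original weights
# using llama.cpp's tooling. Paths and output file names are illustrative.
import subprocess

# 1. Convert the original weights to a ggml FP16 file with llama.cpp's converter.
subprocess.run(["python3", "convert.py", "models/7B/"], cwd="llama.cpp", check=True)

# 2. Quantize the FP16 file (q4_0 shown here) with the quantize tool built from
#    the same checkout, producing a ggml v3 file usable with 0.1.54+.
subprocess.run(
    ["./quantize", "models/7B/ggml-model-f16.bin", "models/7B/ggml-model-q4_0.bin", "q4_0"],
    cwd="llama.cpp",
    check=True,
)
```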
Version 3.1
- Added a toggle button for the sidebar
- Implemented a responsive design (fixes #4)
- Made further minor improvements to the frontend
Version 3.0
Changed the frontend to support other LLMs and added support for Vicuna.
Version 2.1
Added more parameters to the sidebar: top_k, repeat_penalty, presence_penalty, frequency_penalty
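To show where these sidebar values end up, here is a minimal, hypothetical example of a completion request to a local llama-cpp-python[server] instance. This is not Eucalyptus-Chat's actual client code; field names follow the server's OpenAI-compatible API, and exact support depends on the server version.

```python
# Minimal sketch: send the sidebar sampling parameters to a local
# llama-cpp-python[server] instance via its OpenAI-compatible completions API.
import requests

payload = {
    "prompt": "### Human: Hello!\n### Assistant:",  # illustrative prompt
    "max_tokens": 128,
    "temperature": 0.8,
    "top_k": 40,               # sidebar: top_k
    "repeat_penalty": 1.1,     # sidebar: repeat_penalty
    "presence_penalty": 0.0,   # sidebar: presence_penalty
    "frequency_penalty": 0.0,  # sidebar: frequency_penalty
}

response = requests.post("http://127.0.0.1:8000/v1/completions", json=payload, timeout=120)
print(response.json()["choices"][0]["text"])
```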
Version 2.0
(Breaking) Bumped llama-cpp-python[server] from 0.1.48 to 0.1.50
You may have to re-quantize your models; see ggerganov/llama.cpp#1405
Version 1.1
Minor style improvements and fixes
Version 1.0
Pinned all pip dependencies in requirements.txt and added dependabot …