This repository has been archived by the owner on May 31, 2023. It is now read-only.

Releases: ChaoticByte/Eucalyptus-Chat

Version 4.3

30 May 21:25
  • Redesigned the chat history
  • Renamed the vicuna-v0 and vicuna-v1.1 profiles
  • Updated the README

Version 4.2

30 May 20:02
  • Bumped llama-cpp-python[server] from 0.1.54 to 0.1.56
  • Changed the profile format (see the sketch after this list)
  • Added support for Vicuna v1.1
  • Added support for Manticore Chat
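
For illustration only, a profile in the new format might bundle a model's prompt template with its stop sequences. The field names below are assumptions made for this sketch, not the project's actual schema; the prompt template itself follows the well-known Vicuna v1.1 convention.

    # Hypothetical profile sketch (field names are assumptions,
    # not the project's actual schema).
    vicuna_v1_1_profile = {
        "name": "vicuna-v1.1",
        "system_prompt": (
            "A chat between a curious user and an artificial intelligence "
            "assistant. The assistant gives helpful, detailed, and polite "
            "answers to the user's questions."
        ),
        "user_prefix": "USER: ",            # prepended to user messages
        "assistant_prefix": "ASSISTANT: ",  # prompts the model's turn
        "stop": ["USER:", "</s>"],          # sequences that end generation
    }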

Version 4.1

25 May 19:40
  • Bugfix: Preserve whitespace in messages (#10)
  • Minor code improvements

Version 4.0

24 May 19:10

BREAKING! Bumped llama-cpp-python[server] from 0.1.50 to 0.1.54

This (again) requires re-quantized models. The new format is ggml v3. See ggerganov/llama.cpp#1508
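
As a rough sketch of that migration, assuming a local llama.cpp checkout with its convert.py script and a freshly built quantize binary (all paths below are assumptions):

    # Re-quantize a model into the new ggml v3 format using llama.cpp
    # tooling; LLAMA_CPP and MODEL_DIR are assumed example paths.
    import subprocess
    from pathlib import Path

    LLAMA_CPP = Path.home() / "llama.cpp"  # assumed checkout location
    MODEL_DIR = Path.home() / "models/7B"  # assumed model location

    # 1. Convert the original weights to an f16 ggml file.
    subprocess.run(
        ["python3", str(LLAMA_CPP / "convert.py"), str(MODEL_DIR)],
        check=True,
    )

    # 2. Quantize with a binary built from the bumped llama.cpp,
    #    which writes the ggml v3 format.
    subprocess.run(
        [
            str(LLAMA_CPP / "quantize"),
            str(MODEL_DIR / "ggml-model-f16.bin"),
            str(MODEL_DIR / "ggml-model-q4_0.bin"),
            "q4_0",  # target quantization type
        ],
        check=True,
    )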

Version 3.1

18 May 22:25
  • Added a toggle button for the sidebar
  • Implemented a responsive design (fixes #4)
  • Made more minor improvements to the frontend

Version 3.0

18 May 14:44

Changed the frontend to support other LLMs and added support for Vicuna.

Version 2.1

18 May 10:02

Added more parameters to the sidebar: top_k, repeat_penalty, presence_penalty, frequency_penalty
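
As a minimal sketch of how these parameters reach the backend, assuming the llama-cpp-python[server] OpenAI-compatible completions endpoint on localhost:8000 (host, port, prompt, and parameter values are assumptions):

    # Send a completion request carrying the sidebar's sampling parameters.
    import json
    import urllib.request

    payload = {
        "prompt": "### Human: Hello!\n### Assistant:",
        "max_tokens": 128,
        "top_k": 40,               # sample only from the 40 likeliest tokens
        "repeat_penalty": 1.1,     # penalize recently repeated tokens
        "presence_penalty": 0.0,   # penalize tokens that already appeared
        "frequency_penalty": 0.0,  # scale penalty by how often they appeared
    }
    request = urllib.request.Request(
        "http://localhost:8000/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        print(json.load(response)["choices"][0]["text"])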

Version 2.0

18 May 09:22

(Breaking) Bumped llama-cpp-python[server] from 0.1.48 to 0.1.50
You may have to re-quantize your models; see ggerganov/llama.cpp#1405

Version 1.1

18 May 09:21

Minor style improvements and fixes

Version 1.0

18 May 07:58

Pinned all pip dependencies in requirements.txt and added dependabot …