This repository has been archived by the owner on May 31, 2023. It is now read-only.

Version 4.0

@ChaoticByte released this 24 May 19:10
060d522

BREAKING! Bumped llama-cpp-python[server] from 0.1.50 to 0.1.54

This (again) requires re-quantized models: the new file format is ggml v3. See ggerganov/llama.cpp#1508
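
Since 0.1.54 expects ggml v3 model files, older quantized models will no longer load. A minimal sketch of a version gate you might add to your own startup code (the helper names here are hypothetical, not part of this project or of llama-cpp-python):

```python
from typing import Tuple

# First llama-cpp-python[server] release that expects ggml v3 models
# (per this release note; assumption, verify against your install).
MIN_VERSION = "0.1.54"

def parse_version(v: str) -> Tuple[int, ...]:
    """Parse a dotted version string like '0.1.54' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def expects_ggml_v3(installed: str) -> bool:
    """Return True if the installed version expects re-quantized (ggml v3) models."""
    return parse_version(installed) >= parse_version(MIN_VERSION)
```

For example, `expects_ggml_v3("0.1.50")` is `False` while `expects_ggml_v3("0.1.54")` is `True`, so a wrapper could warn users to re-quantize before attempting to load an old model file.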