Commit

renamed main python script
LostRuins committed Mar 29, 2023
Parent: 664b277 · Commit: d8febc8
Showing 3 changed files with 4 additions and 4 deletions.
README.md: 3 additions, 3 deletions
@@ -1,4 +1,4 @@
-# llama-for-kobold
+# llamacpp-for-kobold

A self contained distributable from Concedo that exposes llama.cpp function bindings, allowing it to be used via a simulated Kobold API endpoint.

@@ -8,7 +8,7 @@ What does it mean? You get llama.cpp with a fancy UI, persistent stories, editin

## Usage
- [Download the latest release here](https://github.com/LostRuins/llamacpp-for-kobold/releases/latest) or clone the repo.
-- Windows binaries are provided in the form of **llamacpp-for-kobold.exe**, which is a pyinstaller wrapper for **llamacpp.dll** and **llama-for-kobold.py**. If you feel concerned, you may prefer to rebuild it yourself with the provided makefiles and scripts.
+- Windows binaries are provided in the form of **llamacpp-for-kobold.exe**, which is a pyinstaller wrapper for **llamacpp.dll** and **llamacpp_for_kobold.py**. If you feel concerned, you may prefer to rebuild it yourself with the provided makefiles and scripts.
- Weights are not included, you can use the `quantize.exe` to generate them from your official weight files (or download them from other places).
- To run, execute **llamacpp-for-kobold.exe** or drag and drop your quantized `ggml_model.bin` file onto the .exe, and then connect with Kobold or Kobold Lite.
- By default, you can connect to http://localhost:5001
@@ -17,7 +17,7 @@ What does it mean? You get llama.cpp with a fancy UI, persistent stories, editin

## OSX and Linux
- You will have to compile your binaries from source. A makefile is provided, simply run `make`
-- After all binaries are built, you can run the python script with the command `llama_for_kobold.py [ggml_model.bin] [port]`
+- After all binaries are built, you can run the python script with the command `llamacpp_for_kobold.py [ggml_model.bin] [port]`

## Considerations
- Don't want to use pybind11 due to dependencies on MSVCC
llama_for_kobold.py → llamacpp_for_kobold.py: file renamed without changes.
make_pyinstaller.bat: 1 addition, 1 deletion
@@ -1 +1 @@
-pyinstaller --noconfirm --onefile --console --icon "./niko.ico" --add-data "./klite.embd;." --add-data "./llamacpp.dll;." --add-data "./llamacpp_blas.dll;." --add-data "./libopenblas.dll;." "./llama_for_kobold.py" -n "llamacpp-for-kobold.exe"
+pyinstaller --noconfirm --onefile --console --icon "./niko.ico" --add-data "./klite.embd;." --add-data "./llamacpp.dll;." --add-data "./llamacpp_blas.dll;." --add-data "./libopenblas.dll;." "./llamacpp_for_kobold.py" -n "llamacpp-for-kobold.exe"
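The README diff above documents the invocation `llamacpp_for_kobold.py [ggml_model.bin] [port]`, with 5001 as the default port. A minimal sketch of how such a command line could be parsed is shown below; this is a hypothetical illustration only, since the renamed script's actual contents are not part of this commit.

```python
import argparse

def parse_args(argv=None):
    # Mirror the documented usage: llamacpp_for_kobold.py [ggml_model.bin] [port]
    parser = argparse.ArgumentParser(
        description="Load a quantized ggml model and serve a Kobold-compatible API."
    )
    parser.add_argument("model", help="path to a quantized ggml_model.bin file")
    parser.add_argument(
        "port", nargs="?", type=int, default=5001,
        help="port to listen on (the README's stated default is 5001)",
    )
    return parser.parse_args(argv)

if __name__ == "__main__":
    args = parse_args()
    print(f"Would serve {args.model} on http://localhost:{args.port}")
```

Running it as `python llamacpp_for_kobold.py ggml_model.bin` would fall back to port 5001, matching the "connect to http://localhost:5001" default described in the README.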
