Status: Deprecated.
Llama.cpp bot for use in the department of Humoids in OneLoveIPFS chat rooms.
- Invokes the llama.cpp daemon in chat mode
- Supports Discord and Matrix as chat clients
- Native cross-posting across chat clients
- Node.js and npm (latest LTS; v18 minimum supported)
- Matrix account for the bot
- Discord application for the bot
- Clone and compile llama.cpp.
- Download the required model (e.g. `ggml-model-q4_0.bin`).
- Copy the example `.env` file and configure it.
  ```
  cp .env.example .env
  ```
- Install node modules.
  ```
  npm i
  ```
- Start the bot.
  ```
  npm start
  ```
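The steps above can be sketched end-to-end as a shell session. The llama.cpp repository URL and the model filename are assumptions, not values fixed by this project; adjust paths to your environment.

```shell
# Sketch of the setup steps; the repository URL and model name are assumptions.
git clone https://github.com/ggerganov/llama.cpp
(cd llama.cpp && make)                # compile llama.cpp
# Place your downloaded model, e.g. ggml-model-q4_0.bin, in a known location
cp .env.example .env                  # then edit .env with your tokens/paths
npm i                                 # install node modules
npm start                             # start the bot
```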
An example config file is included as `.env.example`. Copy it to `.env` and modify it. In addition, the bridge may be configured through environment variables and command-line arguments. Environment variables are prefixed with `HUMOID_LLAMA_` and uppercased.
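The naming rule maps each flag to an environment variable by dropping the leading dashes, uppercasing the name, and adding the prefix; for example, `--model_path` becomes `HUMOID_LLAMA_MODEL_PATH`. A minimal shell illustration of that mapping:

```shell
# Illustration of the naming rule: strip leading dashes, uppercase,
# and prefix with HUMOID_LLAMA_.
flag="--model_path"
env_name="HUMOID_LLAMA_$(printf '%s' "${flag#--}" | tr '[:lower:]' '[:upper:]')"
echo "$env_name"   # prints HUMOID_LLAMA_MODEL_PATH
```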
| Argument | Description | Default |
| --- | --- | --- |
| `--log_level` | Sets the log level of the bot | `info` |
| `--exec_path` | Path to the compiled llama.cpp executable | |
| `--exec_args` | Additional llama.cpp arguments | |
| `--model_path` | Path to the ggml model | |
| `--prompt_file` | Path to a text file containing the prompt | |
| `--ignore_prefix` | Message prefix ignored by the bot | `~` |
| `--matrix_homeserver` | Matrix homeserver where the bot lives | |
| `--matrix_bot_token` | Matrix bot user access token | |
| `--matrix_room_id` | Matrix room ID the bot responds to | |
| `--discord_channel_id` | Discord channel ID where the bot lives | |
| `--discord_bot_token` | Discord bot token | |
| `--discord_loading_emoji_id` | Emoji appended to a response to indicate progress on Discord | |
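To tie the options together, here is a hypothetical invocation mixing an environment variable with command-line arguments; every path, host, and ID below is a placeholder, not a value from this repository.

```shell
# Hypothetical invocation; all paths, hosts, and IDs are placeholders.
# Arguments after `--` are forwarded by npm to the start script.
HUMOID_LLAMA_LOG_LEVEL=debug npm start -- \
  --exec_path ./llama.cpp/main \
  --model_path ./models/ggml-model-q4_0.bin \
  --prompt_file ./prompt.txt \
  --matrix_homeserver https://matrix.example.org \
  --discord_channel_id 000000000000000000
```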