docs: Add a warning around the experimental nature of ollama, and add update instructions

MohamedBassem committed Mar 27, 2024
1 parent 731ed3a · commit 5cbce67
Showing 2 changed files with 20 additions and 1 deletion.
16 changes: 15 additions & 1 deletion docs/docs/02-installation.md
@@ -31,6 +31,8 @@ MEILI_MASTER_KEY=another_random_string

You **should** change the random strings. You can use `openssl rand -base64 36` to generate the random strings.

Using `HOARDER_VERSION=release` will pull the latest stable version. You might want to pin the version instead to control the upgrades (e.g. `HOARDER_VERSION=0.10.0`). Check the latest versions [here](https://github.com/MohamedBassem/hoarder-app/pkgs/container/hoarder-web).
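As a sketch, the version pin and a random string can be seeded from the shell. `0.10.0` is just an example pin, and the `MEILI_MASTER_KEY` line stands in for every secret in your `.env` (generate each of them the same way):

```shell
# Sketch: write a pinned version and a freshly generated secret into .env.
# 0.10.0 is an example pin; replace it with the version you want to run.
MEILI_MASTER_KEY="$(openssl rand -base64 36)"
printf 'HOARDER_VERSION=%s\nMEILI_MASTER_KEY=%s\n' "0.10.0" "$MEILI_MASTER_KEY" > .env
```

Note that `> .env` overwrites the file; use `>>` if you are adding to an existing one.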

Persistent storage and the wiring between the different services is already taken care of in the docker compose file.

Keep in mind that every time you change the `.env` file, you'll need to re-run `docker compose up`.
@@ -47,14 +49,17 @@ To enable automatic tagging, you'll need to configure OpenAI. This is optional t
Learn more about the costs of using OpenAI [here](/openai).

<details>
<summary>If you want to use Ollama (https://ollama.com/) instead for local inference.</summary>
<summary>[EXPERIMENTAL] If you want to use Ollama (https://ollama.com/) instead for local inference.</summary>

**Note:** The quality of the tags you'll get will depend on the quality of the model you choose. Running local models is a recent addition and not as battle-tested as using OpenAI, so proceed with care (and potentially expect some inference failures).

- Make sure Ollama is running.
- Set the `OLLAMA_BASE_URL` env variable to the address of the Ollama API.
- Set `INFERENCE_TEXT_MODEL` to the model you want to use for text inference in Ollama (for example: `llama2`).
- Set `INFERENCE_IMAGE_MODEL` to the model you want to use for image inference in Ollama (for example: `llava`).
- Make sure that you `ollama pull`-ed the models that you want to use.
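In `.env`, the steps above might end up looking like this (the base URL assumes Ollama's default local address, and the model names are just the examples from the list):

```shell
# Sketch: Ollama settings appended to .env. http://localhost:11434 is
# Ollama's default address; adjust it if you run Ollama elsewhere.
cat >> .env <<'EOF'
OLLAMA_BASE_URL=http://localhost:11434
INFERENCE_TEXT_MODEL=llama2
INFERENCE_IMAGE_MODEL=llava
EOF
```

Remember to `ollama pull llama2` and `ollama pull llava` before starting the service.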


</details>

### 5. Start the service
@@ -64,3 +69,12 @@ Start the service by running:
```
docker compose up -d
```

Then visit `http://localhost:3000` and you should be greeted with the Sign In page.
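If you'd rather verify from the shell before opening a browser, a quick probe might look like this (assumes `curl` is installed):

```shell
# Retry for up to ~60 seconds while the containers finish starting.
for _ in $(seq 1 30); do
  curl -fsS http://localhost:3000 >/dev/null 2>&1 && { echo "Hoarder is up"; break; }
  sleep 2
done
```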


## Updating

How you update Hoarder depends on what you used for the `HOARDER_VERSION` env variable.
- If you pinned the app to a specific version, bump the version and re-run `docker compose up -d`. This should pull the new version for you.
- If you used `HOARDER_VERSION=release`, you'll need to force docker to pull the latest version by running `docker compose up --pull always -d`.
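As a command-line sketch of those two paths (`0.11.0` stands in for whatever newer version you are bumping to, and the `sed` invocation is GNU-style):

```shell
# Pinned version: bump the pin in .env, then recreate the containers.
sed -i 's/^HOARDER_VERSION=.*/HOARDER_VERSION=0.11.0/' .env
docker compose up -d

# HOARDER_VERSION=release: force docker to pull the latest image.
docker compose up --pull always -d
```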
5 changes: 5 additions & 0 deletions docs/docs/03-configuration.md
@@ -17,6 +17,11 @@ The app is mainly configured by environment variables. All the used environment

Either `OPENAI_API_KEY` or `OLLAMA_BASE_URL` need to be set for automatic tagging to be enabled. Otherwise, automatic tagging will be skipped.

:::warning
- The quality of the tags you'll get will depend on the quality of the model you choose.
- Running local models is a recent addition and not as battle-tested as using OpenAI, so proceed with care (and potentially expect some inference failures).
:::

| Name | Required | Default | Description |
| --------------------- | -------- | -------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| OPENAI_API_KEY        | No       | Not set              | The OpenAI key used for automatic tagging. More on that [here](/openai).                                                                                                                         |
