🦙 doc update: llama3 (#2470)
* docs: update breaking_changes.md

* docs: update ai_endpoints.md -> llama3 for Ollama and groq

* librechat.yaml: update groq models

* Update breaking_changes.md

logs location

* Update breaking_changes.md

---------

Co-authored-by: Danny Avila <danny@librechat.ai>
fuegovic and danny-avila authored Apr 20, 2024
1 parent e6310c8 commit 4196a86
Showing 3 changed files with 50 additions and 6 deletions.
docs/general_info/breaking_changes.md (12 additions, 4 deletions)
```diff
@@ -11,13 +11,21 @@ weight: -10
 Certain changes in the updates may impact cookies, leading to unexpected behaviors if not cleared properly.
 
 ---
+## v0.7.1+
+
+!!! failure "Error Messages (UI)"
+    ![image](https://github.com/danny-avila/LibreChat/assets/32828263/0ab27798-5515-49b4-ac29-e4ad83d73d7c)
+
+    Client-facing error messages now display this warning asking to contact the admin. For the full error consult the console logs or the additional logs located in `./logs`
+
+!!! warning "🪵 Logs Location"
+
+    - The full logs are now in `./logs` (they are still in `./api/logs` for local, non-docker installations)
+
 ## v0.7.0+
 
-!!! info "🔍 Google Search Plugin"
+!!! warning "🔍 Google Search Plugin"
 
 - **[Google Search Plugin](../features/plugins/google_search.md)**: Changed the environment variable for this plugin from `GOOGLE_API_KEY` to `GOOGLE_SEARCH_API_KEY` due to a conflict with the Google Generative AI library pulling this variable automatically. If you are using this plugin, please update your `.env` file accordingly.
 
 !!! info "🗃️ RAG API (Chat with Files)"
```
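In practice the plugin rename is a one-line `.env` change; a hypothetical before/after (the value is a placeholder, not from this commit):

```shell
# .env
# Old name, now claimed automatically by the Google Generative AI library:
# GOOGLE_API_KEY=your-api-key
# New name read by the Google Search plugin as of v0.7.0+:
GOOGLE_SEARCH_API_KEY=your-api-key
```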
docs/install/configuration/ai_endpoints.md (31 additions, 1 deletion)

```diff
@@ -64,9 +64,11 @@
     baseURL: "https://api.groq.com/openai/v1/"
     models:
       default: [
+        "llama3-70b-8192",
+        "llama3-8b-8192",
         "llama2-70b-4096",
         "mixtral-8x7b-32768",
-        "gemma-7b-it"
+        "gemma-7b-it",
         ]
       fetch: false
     titleConvo: true
```

Further down (`@@ -374,3 +376,31 @@`), the same file gains a tip after the existing Ollama example:

!!! tip "Ollama -> llama3"

    To stop llama3 from generating endlessly, add this `addParams` block to the config:

    ```yaml
    - name: "Ollama"
      apiKey: "ollama"
      baseURL: "http://host.docker.internal:11434/v1/"
      models:
        default: [
          "llama3"
        ]
        fetch: false # fetching list of models is not supported
      titleConvo: true
      titleModel: "llama3"
      summarize: false
      summaryModel: "llama3"
      forcePrompt: false
      modelDisplayLabel: "Ollama"
      addParams:
        "stop": [
          "<|start_header_id|>",
          "<|end_header_id|>",
          "<|eot_id|>",
          "<|reserved_special_token"
        ]
    ```
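The `stop` list works because the server truncates a completion at the first occurrence of any listed sequence, so llama3's special header/end-of-turn tokens never leak into the reply and generation halts. A minimal illustrative sketch of that truncation logic (not LibreChat or Ollama code):

```python
def apply_stop(text: str, stop: list[str]) -> str:
    """Truncate text at the earliest occurrence of any stop sequence."""
    cut = len(text)
    for seq in stop:
        idx = text.find(seq)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

# The same stop sequences the addParams block injects for llama3.
stops = [
    "<|start_header_id|>",
    "<|end_header_id|>",
    "<|eot_id|>",
    "<|reserved_special_token",
]

raw = "Hello there!<|eot_id|><|start_header_id|>assistant<|end_header_id|>"
print(apply_stop(raw, stops))  # → Hello there!
```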
librechat.example.yaml (7 additions, 1 deletion)
```diff
@@ -50,7 +50,13 @@ endpoints:
       apiKey: '${GROQ_API_KEY}'
       baseURL: 'https://api.groq.com/openai/v1/'
       models:
-        default: ['llama2-70b-4096', 'mixtral-8x7b-32768', 'gemma-7b-it']
+        default: [
+          "llama3-70b-8192",
+          "llama3-8b-8192",
+          "llama2-70b-4096",
+          "mixtral-8x7b-32768",
+          "gemma-7b-it",
+          ]
         fetch: false
       titleConvo: true
       titleModel: 'mixtral-8x7b-32768'
```
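The `'${GROQ_API_KEY}'` value is an environment-variable reference, resolved from the `.env` file when LibreChat starts; for example (placeholder key, not a real credential):

```shell
# .env
GROQ_API_KEY=gsk_xxxxxxxxxxxxxxxx  # placeholder, substitute your own Groq key
```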
