📖 docs: Local Qwen for Ollama (lobehub#1723)
* 📖 docs: Local Qwen for Ollama (lobehub#1622)

* 📝 docs: update docs

* 📝 docs: update docs

* 📝 docs: update docs

---------

Co-authored-by: Maple Gao <esanisa@gmail.com>
arvinxx and MapleEve authored Mar 25, 2024
1 parent b4e21dc commit 99c47da
Showing 2 changed files with 92 additions and 0 deletions.
47 changes: 47 additions & 0 deletions docs/usage/providers/ollama/qwen.mdx
---
title: Using the Local Qwen Model
image: https://github.com/lobehub/lobe-chat/assets/28616219/689e19ea-3003-4e15-a1d5-b7343d5ba898
---

# Using the Local Qwen Model

<Image alt={'Using Qwen in LobeChat'} cover src={'https://github.com/lobehub/lobe-chat/assets/28616219/7a5fd01a-9fed-49c1-93a3-422269213f19'} />

[Qwen](https://github.com/QwenLM/Qwen1.5) is an open-source large language model (LLM) from Alibaba Cloud. Officially described as a continuously evolving AI model, it achieves more accurate Chinese-language understanding through an expanded training corpus.

<Video src="https://github.com/lobehub/lobe-chat/assets/28616219/31e5f625-8dc4-4a5f-a5fd-d28d0457782d" />

Now, through LobeChat's integration with [Ollama](https://ollama.com/), you can easily use Qwen in LobeChat. This document will guide you through using a locally deployed version of Qwen in LobeChat:

<Steps>
### Local Installation of Ollama

First, you need to install Ollama. For the installation process, please refer to the [Ollama Usage Document](/en/usage/providers/ollama).
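On Linux, Ollama's published install script is a quick way to set it up; afterwards you can confirm the CLI and the local server are reachable. This is a minimal sketch: the script URL and default port 11434 are taken from Ollama's own site and may change over time.

```shell
# Install Ollama on Linux (macOS and Windows have installers on ollama.com)
curl -fsSL https://ollama.com/install.sh | sh

# Confirm the CLI is on the PATH
ollama --version

# The local server listens on port 11434 by default;
# the root endpoint replies when the service is up
curl -s http://127.0.0.1:11434/
```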

### Pull the Qwen Model Locally with Ollama

After installing Ollama, you can pull the Qwen model with the following command, using the 14b model as an example:

```bash
ollama pull qwen:14b
```

<Callout type={'info'}>
The local version of Qwen provides different model sizes to choose from. Please refer to the
[Qwen's Ollama integration page](https://ollama.com/library/qwen) to understand how to choose the
model size.
</Callout>
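For example, the tag after the colon selects the parameter count. The specific tag names below follow Ollama's library page for Qwen at the time of writing and may change, so check that page before pulling:

```shell
# Smaller variants download faster and need less memory
ollama pull qwen:4b

# Larger variants need substantially more RAM/VRAM
ollama pull qwen:72b

# Show every model already downloaded locally
ollama list
```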
<Image alt={'Use Ollama Pull Qwen Model'} height={473} inStep src={'https://github.com/lobehub/lobe-chat/assets/1845053/fe34fdfe-c2e4-4d6a-84d7-4ebc61b2516a'} />

### Select the Qwen Model

On the LobeChat conversation page, open the model selection panel and select the Qwen model.

<Image alt={'Choose Qwen Model'} height={430} inStep src={'https://github.com/lobehub/lobe-chat/assets/28616219/e0608cca-f62f-414a-bc55-28a61ba21f14'} />

<Callout type={'info'}>
If you do not see the Ollama provider in the model selection panel, please refer to [Integration with Ollama](/en/self-hosting/examples/ollama) to learn how to enable the Ollama provider in LobeChat.
</Callout>
</Steps>

Next, you can have a conversation with the local Qwen model in LobeChat.
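Beyond the LobeChat UI, you can sanity-check the pulled model directly against Ollama's local REST API. The `/api/generate` endpoint and the `"stream": false` option (which returns one JSON response instead of a token stream) follow Ollama's API documentation:

```shell
# Ask the local Qwen model for a single, non-streamed completion
curl -s http://127.0.0.1:11434/api/generate -d '{
  "model": "qwen:14b",
  "prompt": "Briefly introduce yourself.",
  "stream": false
}'
```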
45 changes: 45 additions & 0 deletions docs/usage/providers/ollama/qwen.zh-CN.mdx
---
title: Using the Local Qwen (Tongyi Qianwen) Model
image: https://github.com/lobehub/lobe-chat/assets/28616219/689e19ea-3003-4e15-a1d5-b7343d5ba898
---

# Using the Local Qwen (Tongyi Qianwen) Model

<Image alt={'Using Qwen in LobeChat'} cover src={'https://github.com/lobehub/lobe-chat/assets/28616219/7a5fd01a-9fed-49c1-93a3-422269213f19'} />

[Qwen (Tongyi Qianwen)](https://github.com/QwenLM/Qwen1.5) is an open-source large language model (LLM) from Alibaba Cloud. Officially described as a continuously evolving AI model, it achieves more accurate Chinese-language understanding through an expanded training corpus.

<Video src="https://github.com/lobehub/lobe-chat/assets/28616219/31e5f625-8dc4-4a5f-a5fd-d28d0457782d" />

Now, through LobeChat's integration with [Ollama](https://ollama.com/), you can easily use Qwen in LobeChat.

This document will guide you through using a locally deployed version of Qwen in LobeChat:

<Steps>
### Install Ollama Locally

First, you need to install Ollama. For the installation process, please refer to the [Ollama usage documentation](/zh/usage/providers/ollama).

### Pull the Qwen Model Locally with Ollama

After installing Ollama, you can pull the Qwen model with the following command, using the 14b model as an example:

```bash
ollama pull qwen:14b
```
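Once the pull finishes, a quick way to confirm the model is usable is to list local models and run a one-off prompt from the terminal (both subcommands are standard Ollama CLI usage):

```shell
# The downloaded model should appear in this list
ollama list

# Run a single test prompt against the local model
ollama run qwen:14b "Hello! Please introduce yourself briefly."
```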

<Image alt={'Pull the Qwen Model with Ollama'} height={473} inStep src={'https://github.com/lobehub/lobe-chat/assets/1845053/fe34fdfe-c2e4-4d6a-84d7-4ebc61b2516a'} />

### Select the Qwen Model

On the conversation page, open the model selection panel and select the Qwen model.

<Image alt={'Select the Qwen Model in the Model Selection Panel'} height={430} inStep src={'https://github.com/lobehub/lobe-chat/assets/28616219/e0608cca-f62f-414a-bc55-28a61ba21f14'} />

<Callout type={'info'}>
  If you do not see the Ollama provider in the model selection panel, please refer to [Integration
  with Ollama](/zh/self-hosting/examples/ollama) to learn how to enable the Ollama provider in
  LobeChat.
</Callout>
</Steps>

Next, you can have a conversation with the local Qwen model in LobeChat.
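If you also want to call the local model from your own scripts, recent Ollama versions expose an OpenAI-compatible chat endpoint. The path below follows Ollama's documentation, but treat it as an assumption if your Ollama version predates this feature:

```shell
# Chat with the local Qwen model via Ollama's OpenAI-compatible API
curl -s http://127.0.0.1:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen:14b",
    "messages": [{ "role": "user", "content": "Say hello in one sentence." }]
  }'
```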
