The 'Interactive chat based on DialoGPT model using Intel® Extension for PyTorch* Quantization' sample demonstrates how to create an interactive chat based on the pre-trained DialoGPT model and how to add Intel® Extension for PyTorch* quantization to it.
Area | Description |
---|---|
What you will learn | How to create an interactive chat and add INT8 dynamic quantization from Intel® Extension for PyTorch* (IPEX) |
Time to complete | 10 minutes |
Category | Concepts and Functionality |
The Intel® Extension for PyTorch* extends PyTorch* with optimizations for an extra performance boost on Intel® hardware. While most of the optimizations will be included in future PyTorch* releases, the extension delivers up-to-date features and optimizations for PyTorch on Intel® hardware. For example, newer optimizations include AVX-512 Vector Neural Network Instructions (AVX512 VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX).
This sample shows how to create an interactive chat based on the pre-trained DialoGPT model from HuggingFace and how to add INT8 dynamic quantization to it. The Intel® Extension for PyTorch* gives users the ability to speed up operations on processors with the INT8 data format and specialized computer instructions. The INT8 data format uses a quarter of the bit width of floating-point 32 (FP32), lowering the amount of memory needed and the execution time, with minimal to no accuracy loss.
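As an illustration of that flow, the sketch below loads a DialoGPT checkpoint from HuggingFace and applies IPEX INT8 dynamic quantization to it. The checkpoint name, the prompt, and the generation settings are assumptions made for illustration only; the sample's notebook and script contain the complete code.

```python
# Minimal sketch (assumptions: checkpoint name, prompt, and generation settings
# are illustrative; see the sample notebook/script for the complete code).
import torch
import intel_extension_for_pytorch as ipex
from intel_extension_for_pytorch.quantization import prepare, convert
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-large")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-large")
model.eval()

# Dynamic quantization: weights are converted to INT8 ahead of time,
# activations are quantized on the fly at inference time.
qconfig = ipex.quantization.default_dynamic_qconfig
example_inputs = tokenizer("Hello! How are you?", return_tensors="pt")["input_ids"]

prepared_model = prepare(model, qconfig, example_inputs=example_inputs)
int8_model = convert(prepared_model)

with torch.no_grad():
    output_ids = int8_model.generate(example_inputs, max_length=64,
                                     pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```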
Optimized for | Description |
---|---|
OS | Ubuntu* 20.04 or newer |
Hardware | Intel® Xeon® Scalable Processor family |
Software | Intel® Extension for PyTorch* |
You will need to download and install the following toolkits, tools, and components to use the sample.
- Intel® AI Analytics Toolkit (AI Kit)

  You can get the AI Kit from Intel® oneAPI Toolkits. See Get Started with the Intel® AI Analytics Toolkit for Linux* for AI Kit installation information and post-installation steps and scripts.

- Jupyter Notebook

  Install using PIP: pip install notebook

  Alternatively, see Installing Jupyter for detailed installation instructions.
If you run the sample on Intel® DevCloud, the necessary tools and components are already installed in the environment, and you do not need to install additional components. See Intel® DevCloud for oneAPI for information.
This code sample implements an interactive chat based on the pre-trained DialoGPT model and quantizes it using Intel® Extension for PyTorch*.
The sample tutorial contains one Jupyter Notebook and a Python script. You can use either.
Notebook | Description |
---|---|
IntelPytorch_Interactive_Chat_Quantization.ipynb | Performs chat creation with IPEX quantization and provides an interface for interactions in the Jupyter Notebook. |
Script | Description |
---|---|
IntelPytorch_Interactive_Chat_Quantization.py | Performs chat creation with IPEX quantization and provides simple interactions based on prepared input. |
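For orientation, multi-turn chat with DialoGPT typically follows the pattern described in the HuggingFace model card: each user turn is appended to the running chat history before generation. The sketch below illustrates that pattern; the variable names, turn count, and checkpoint are assumptions for illustration, not a copy of the notebook or script.

```python
# Illustrative multi-turn chat loop (assumption: follows the standard
# HuggingFace DialoGPT usage pattern, not the sample's exact code).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-large")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-large")
model.eval()

chat_history_ids = None
for _ in range(5):  # five chat turns for the example
    user_text = input(">> You: ")
    # Encode the user input and append the end-of-sequence token.
    new_input_ids = tokenizer.encode(user_text + tokenizer.eos_token,
                                     return_tensors="pt")
    # Append the new user input to the running chat history.
    bot_input_ids = (new_input_ids if chat_history_ids is None
                     else torch.cat([chat_history_ids, new_input_ids], dim=-1))
    # Generate a response conditioned on the whole conversation so far.
    chat_history_ids = model.generate(bot_input_ids, max_length=1000,
                                      pad_token_id=tokenizer.eos_token_id)
    response = tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0],
                                skip_special_tokens=True)
    print(f"DialoGPT: {response}")
```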
When working with the command-line interface (CLI), you should configure the oneAPI toolkits using environment variables. Set up your CLI environment by sourcing the setvars script every time you open a new terminal window. This practice ensures that your compiler, libraries, and tools are ready for development.
Run the 'Interactive chat based on DialoGPT model using Intel® Extension for PyTorch* Quantization' Sample
Note: If you have not already done so, set up your CLI environment by sourcing the setvars script in the root of your oneAPI installation.

Linux*:
- For system wide installations: . /opt/intel/oneapi/setvars.sh
- For private installations: . ~/intel/oneapi/setvars.sh
- For non-POSIX shells, like csh, use the following command: bash -c 'source <install-dir>/setvars.sh ; exec csh'
For more information on configuring environment variables, see Use the setvars Script with Linux* or macOS*.
- Activate the Conda environment.
  conda activate pytorch
- Activate the Conda environment without root access (Optional).
  By default, the AI Kit is installed in the /opt/intel/oneapi folder and requires root privileges to manage it. You can choose to activate the Conda environment without root access. To bypass root access to manage your Conda environment, clone and activate your desired Conda environment using commands similar to the following:
  conda create --name user_pytorch --clone pytorch
  conda activate user_pytorch
- Change to the sample directory.
- Launch Jupyter Notebook.
jupyter notebook --ip=0.0.0.0 --port 8888 --allow-root
- Follow the instructions to open the URL with the token in your browser.
- Locate and select the Notebook.
IntelPytorch_Interactive_Chat_Quantization.ipynb
- Change your Jupyter Notebook kernel to PyTorch (AI kit).
- Run every cell in the Notebook in sequence.
To run the script instead of the Notebook:
- Change to the sample directory.
- Run the script.
python IntelPytorch_Interactive_Chat_Quantization.py < input.txt
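The script reads its chat turns from standard input. Judging from the example output further below, input.txt supplies one user message per line, along the lines of the following (the actual file ships with the sample):

```text
Hello! How are you?
I am good!
Can you go to the cinema today?
What movie would you like to see?
Ok, see you at cinema! Bye!
```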
Run the 'Interactive chat based on DialoGPT model using Intel® Extension for PyTorch* Quantization' Sample on Intel® DevCloud
- If you do not already have an account, request an Intel® DevCloud account at Create an Intel® DevCloud Account.
- On a Linux* system, open a terminal.
- SSH into Intel® DevCloud.
  ssh DevCloud
  Note: You can find information about configuring your Linux system and connecting to Intel DevCloud at Intel® DevCloud for oneAPI Get Started.
- Follow the instructions to open the URL with the token in your browser.
- Locate and select the Notebook.
  IntelPytorch_Interactive_Chat_Quantization.ipynb
- Change the kernel to PyTorch (AI kit).
- Run every cell in the Notebook in sequence.
If you receive an error message, troubleshoot the problem using the Diagnostics Utility for Intel® oneAPI Toolkits. The diagnostic utility provides configuration and system checks to help find missing dependencies, permissions errors, and other issues. See the Diagnostics Utility for Intel® oneAPI Toolkits User Guide for more information on using the utility.
Example output:

Loading model...
Quantization in progress...
>> You: Hello! How are you?
DialoGPT: Great and you?
>> You: I am good!
DialoGPT: Well good!
>> You: Can you go to the cinema today?
DialoGPT: Of course I can!
>> You: What movie would you like to see?
DialoGPT: Can you pick out the movies that aren't in english?
>> You: Ok, see you at cinema! Bye!
DialoGPT: See ya!
Inference with FP32
Loading model...
Warmup...
Inference...
Inference with Dynamic INT8
Loading model...
Quantization in progress...
Warmup...
Inference...
[CODE_SAMPLE_COMPLETED_SUCCESFULLY]
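The 'Inference with FP32' and 'Inference with Dynamic INT8' phases above follow a common warm-up-then-measure pattern. The sketch below shows one way such a comparison can be timed; the benchmark helper, warm-up counts, and generation settings are illustrative assumptions, not the sample's exact code.

```python
# Illustrative timing comparison (assumption: helper name, warm-up count, and
# generation settings are examples, not the sample's exact code).
import time
import torch

def benchmark(model, input_ids, tokenizer, warmup=2, runs=5):
    # Warm-up runs let caches and lazy initializations settle before timing.
    with torch.no_grad():
        for _ in range(warmup):
            model.generate(input_ids, max_length=64,
                           pad_token_id=tokenizer.eos_token_id)
        start = time.time()
        for _ in range(runs):
            model.generate(input_ids, max_length=64,
                           pad_token_id=tokenizer.eos_token_id)
    return (time.time() - start) / runs

# Example usage, assuming fp32_model and int8_model were created as shown earlier:
# fp32_time = benchmark(fp32_model, example_inputs, tokenizer)
# int8_time = benchmark(int8_model, example_inputs, tokenizer)
# print(f"FP32: {fp32_time:.2f}s  INT8: {int8_time:.2f}s per generation")
```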
Code samples are licensed under the MIT license. See License.txt for details.
Third party program Licenses can be found here: third-party-programs.txt