TensorRT-LLM 0.7.1 Release

@kaiyux released this on 27 Dec 01:59

Hi,

We are very pleased to announce the 0.7.1 version of TensorRT-LLM. It has been an intense effort, and we hope that it will enable you to easily deploy GPU-based inference for state-of-the-art LLMs. We want TensorRT-LLM to help you run those LLMs very fast.

This update includes:

  • Models
    • BART and mBART support in encoder-decoder models
    • FairSeq Neural Machine Translation (NMT) family
    • Mixtral-8x7B model
      • Support weight loading for the HuggingFace Mixtral model
    • OpenAI Whisper
    • Mixture of Experts support (toy routing sketch after this list)
    • MPT - Int4 AWQ / SmoothQuant support
    • Baichuan FP8 quantization support
  • Features
    • [Preview] Speculative decoding (draft-and-verify sketch after this list)
    • Add Python binding for GptManager
    • Add a Python class ModelRunnerCpp that wraps the C++ gptSession (usage sketch after this list)
    • System prompt caching
    • Enable split-k for weight-only cutlass kernels
    • FP8 KV cache support for XQA kernel
    • New Python builder API and trtllm-build command (already applied to blip2 and OPT)
    • Support StoppingCriteria and LogitsProcessor in the Python generate API (thanks to @zhang-ge-hao; sketch after this list)
    • fMHA support for chunked attention and paged kv cache
  • Bug fixes
    • Fix tokenizer usage in quantize.py #288, thanks to @0xymoro
    • Fix LLaMA with LoRA error #637
    • Fix LLaMA GPTQ failure #580
    • Fix Python binding for InferenceRequest issue #528
    • Fix CodeLlama SQ accuracy issue #453
    • Minor bug fixes
  • Performance
    • MMHA optimization for MQA and GQA
    • LoRA optimization: cutlass grouped gemm
    • Optimize Hopper warp specialized kernels
    • Optimize AllReduce for parallel attention on Falcon and GPT-J
    • Enable split-k for weight-only cutlass kernel when SM>=75
  • Documentation improvements
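
Below are a few illustrative sketches for items called out above; they are conceptual aids, not the shipped implementation. First, Mixture of Experts: a toy top-2 routing layer in plain NumPy. The function names and shapes are hypothetical; only the routing idea (router scores, top-k expert selection, weighted combination of expert outputs) matches what MoE layers such as Mixtral-8x7B's compute, where each token activates only two of eight experts.

```python
# Toy MoE routing sketch in NumPy -- hypothetical names, not the
# TensorRT-LLM implementation, which runs fused GPU kernels.
import numpy as np

def moe_layer(x, gate_w, expert_ws, top_k=2):
    """x: [tokens, d_model]; gate_w: [d_model, n_experts];
    expert_ws: list of (w1, w2) weight pairs, one FFN per expert."""
    logits = x @ gate_w                            # router score per expert
    top = np.argsort(logits, axis=-1)[:, -top_k:]  # top-k expert ids per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = top[t]
        w = np.exp(logits[t, sel] - logits[t, sel].max())
        w /= w.sum()                               # softmax over selected experts only
        for weight, e in zip(w, sel):
            w1, w2 = expert_ws[e]
            h = np.maximum(x[t] @ w1, 0.0)         # expert FFN (ReLU)
            out[t] += weight * (h @ w2)
    return out

# Tiny smoke test: 4 tokens, d_model=8, 8 experts, hidden size 16.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
gate_w = rng.standard_normal((8, 8))
experts = [(rng.standard_normal((8, 16)), rng.standard_normal((16, 8)))
           for _ in range(8)]
print(moe_layer(x, gate_w, experts).shape)  # (4, 8)
```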
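
The speculative decoding preview builds on the standard draft-then-verify scheme: a small draft model proposes a few tokens cheaply, and the large target model verifies them, accepting a prefix of the proposals via a rejection test. Here is a minimal pure-Python sketch of that acceptance loop, with `draft_next` and `target_probs` as hypothetical stand-ins for the two models:

```python
# Draft-then-verify acceptance loop -- an illustration of the general
# technique, not the TensorRT-LLM API.
import random

def speculate(prefix, draft_next, target_probs, n_draft=4):
    """Draft n_draft tokens cheaply, then verify them against the
    target model, accepting a prefix of the proposals."""
    ctx = list(prefix)
    proposed = []
    for _ in range(n_draft):
        tok, q = draft_next(ctx)              # draft token and its draft prob q(tok)
        proposed.append((tok, q))
        ctx.append(tok)
    accepted, ctx = [], list(prefix)
    for tok, q in proposed:
        p = target_probs(ctx).get(tok, 0.0)   # target prob p(tok) in this context
        if random.random() < min(1.0, p / q): # accept with prob min(1, p/q)
            accepted.append(tok)
            ctx.append(tok)
        else:
            break  # on rejection the full algorithm resamples that position
                   # from an adjusted target distribution; omitted for brevity
    return accepted

# Toy usage: the draft always proposes token 7 with prob 0.5;
# the target assigns it prob 0.45, so each proposal is kept ~90% of the time.
print(speculate([1, 2], lambda ctx: (7, 0.5), lambda ctx: {7: 0.45}, n_draft=3))
```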
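
For ModelRunnerCpp, a minimal usage sketch is below. The engine path and token ids are placeholders, and exact import paths and argument names may differ by version, so treat it as illustrative rather than authoritative:

```python
# Minimal sketch of driving a built engine through ModelRunnerCpp.
# Placeholder path and ids; argument names may vary between versions.
import torch
from tensorrt_llm.runtime import ModelRunnerCpp

# Load a previously built engine directory (placeholder path).
runner = ModelRunnerCpp.from_dir(engine_dir="/path/to/engine_dir")

# One tokenized prompt; the ids and the end/pad ids below stand in for
# whatever your tokenizer produces.
input_ids = [torch.tensor([1, 7, 42, 99], dtype=torch.int32)]
output_ids = runner.generate(
    batch_input_ids=input_ids,
    max_new_tokens=32,
    end_id=2,
    pad_id=2,
)
print(output_ids)  # decode with your tokenizer to recover text
```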
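
Finally, a sketch of custom StoppingCriteria and LogitsProcessor hooks for the Python generate API. The `__call__` signatures shown assume the HuggingFace-style convention this feature mirrors and may not match the shipped interface exactly; consult the examples in the repository for the precise arguments:

```python
# Custom stopping/logit hooks -- signatures are an assumption based on the
# HuggingFace-style convention; check the shipped examples for exact args.
import torch
from tensorrt_llm.runtime import LogitsProcessor, StoppingCriteria

class StopOnToken(StoppingCriteria):
    """Stop as soon as a chosen token id appears in the output."""
    def __init__(self, stop_id: int):
        self.stop_id = stop_id

    def __call__(self, step, output_ids, *args, **kwargs) -> bool:
        return bool((output_ids == self.stop_id).any())

class BanToken(LogitsProcessor):
    """Force one vocabulary id to never be sampled."""
    def __init__(self, banned_id: int):
        self.banned_id = banned_id

    def __call__(self, step, input_ids, scores, *args, **kwargs):
        scores[..., self.banned_id] = float("-inf")
        return scores
```

Instances of these classes are then passed to the generate call alongside the usual sampling options.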

Currently, there are two key branches in the project:

  • The rel branch is the stable branch for the release of TensorRT-LLM. It has been QA-ed and carefully tested.
  • The main branch is the dev branch. It is more experimental.

We are updating the main branch regularly with new features, bug fixes, and performance optimizations. The stable branch will be updated less frequently, and the exact cadence will depend on your feedback.

Thanks,

The TensorRT-LLM Engineering Team