# ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer

License: Apache 2.0

Haoran You\*, Huihong Shi\*, Yipin Guo\*, and Yingyan Lin

Accepted by NeurIPS 2023. More Info: [ Paper | Slide | Project | Poster | Github ]


## Updates

  • We have released the complete code for the PVT models, covering training, evaluation, TVM compilation of the full model, and the subsequent throughput measurements and comparisons. See the ./pvt directory for details.
  • We have also released unit tests for our MatAdd and MatShift kernels built with TVM. These tests let you replicate the comparison results shown in Figures 4 and 5 of our paper. See the ./Ops_Speedups folder for more information.
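The MatShift kernel builds on the shift primitive: each weight is approximated by a signed power of two, so multiplying by it reduces to a bit shift on integer hardware. The following is a minimal NumPy sketch of that arithmetic idea only, not the repo's TVM kernels; the function names `shift_quantize` and `shift_matmul` are illustrative, not part of this codebase.

```python
import numpy as np

def shift_quantize(w):
    # Round each weight to the nearest signed power of two: w ≈ sign * 2**p.
    sign = np.sign(w)
    p = np.rint(np.log2(np.abs(w) + 1e-12)).astype(np.int32)
    return sign, p

def shift_matmul(x, sign, p):
    # x @ w with w replaced by its power-of-two form; on integer hardware
    # the 2**p scaling becomes a bit shift instead of a multiplication.
    return x @ (sign * np.exp2(p))
```

Weights that are already exact powers of two are reproduced exactly; for general weights the quantization introduces a bounded relative error, which the mixture-of-primitives design in the paper compensates for.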

## ToDos

  • Publish the pre-trained checkpoints and provide the corresponding expected TVM output in the form of a .json file for replicating our results.
  • Upload the presentation to YouTube and share the link.