emphasis on compute savings
PicoCreator authored Sep 21, 2023
1 parent 85c2b4f commit 3964f74
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion docs/README.md
@@ -18,7 +18,7 @@ So it's combining the best of RNN and transformer - great performance, fast inference
# TLDR vs Existing transformer models

**Good**
+ Lower resource usage (VRAM, CPU, GPU, etc) when running and training
+ Lower resource usage (VRAM, CPU, GPU, etc) when running and training, with 10x to 100x lower compute requirements than transformers at large context sizes
+ Scales to any context length linearly (transformers scale quadratically)
+ Performs just as well, in terms of answer quality and capability
+ RWKV models are generally better trained in other languages (e.g. Chinese, Japanese, etc) than most existing OSS models
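The linear-vs-quadratic claim above can be sketched with a toy per-layer cost model (the function names, hidden size, and cost formulas here are illustrative assumptions, not RWKV's actual FLOP counts): transformer self-attention costs roughly O(n² · d) per layer at context length n, while an RNN-style model like RWKV costs roughly O(n · d²).

```python
# Toy cost model (assumed, simplified): compares how compute grows with
# context length n for attention-style vs RNN-style layers.

def attention_cost(n: int, d: int = 1024) -> int:
    """Approximate per-layer self-attention compute: O(n^2 * d)."""
    return n * n * d

def rnn_cost(n: int, d: int = 1024) -> int:
    """Approximate per-layer recurrent compute: O(n * d^2)."""
    return n * d * d

for n in (1024, 8192, 65536):
    ratio = attention_cost(n) / rnn_cost(n)
    print(f"context {n}: attention/RNN cost ratio ~ {ratio:.0f}x")
```

Under this rough model the ratio is simply n / d, so once the context grows well past the hidden size, attention compute is an order of magnitude or two larger, which is where the "10x to 100x" framing comes from.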
