From 0c83313add945e494b501932d356857634a26930 Mon Sep 17 00:00:00 2001
From: Kye
Date: Sat, 13 Jan 2024 00:49:41 -0500
Subject: [PATCH] [README]

---
 README.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index c177494..632757d 100644
--- a/README.md
+++ b/README.md
@@ -1,7 +1,9 @@
 [![Multi-Modality](agorabanner.png)](https://discord.gg/qUtxnK2NMf)
 
 # Mamba Transformer
-![Mamba Transformer](https://www.figma.com/file/YZxnPMjtj5XrEA4XC2ANIE/Mamba-Transformer?type=whiteboard&node-id=1%3A2&t=ybC6tnz8xiYie1hb-1)
+
+![Mamba Transformer](https://www.figma.com/file/YZxnPMjtj5XrEA4XC2ANIE/Mamba-Transformer?type=whiteboard&node-id=1%3A2&t=ybC6tnz8xiYie1hb-1?raw=true)
+
 Integrating Mamba/SSMs with Transformer for Enhanced Long Context and High-Quality Sequence Modeling.
 
 This is 100% novel architecture that I have designed to combine the strengths and weaknesses out of SSMs and Attention for an all-new advanced architecture with the purpose of surpassing our old limits. Faster processing speed, longer context lengths, lower perplexity over long sequences, enhanced and superior reasoning while remaining small and compact.
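
For readers unfamiliar with the hybrid idea the README blurb describes, here is a minimal, illustrative sketch of interleaving an SSM-style sequence-mixing block with multi-head self-attention. It is not the repository's actual API: the names (`ToySSMBlock`, `HybridBlock`) and the toy diagonal recurrence are assumptions standing in for a real Mamba implementation.

```python
import torch
import torch.nn as nn


class ToySSMBlock(nn.Module):
    """Toy SSM-style block: a per-channel learned linear recurrence over the
    sequence plus an output projection. Illustrative only, not real Mamba."""

    def __init__(self, dim: int):
        super().__init__()
        self.in_proj = nn.Linear(dim, dim)
        self.out_proj = nn.Linear(dim, dim)
        # Per-channel decay in (0, 1), parameterized through a sigmoid.
        self.decay_logit = nn.Parameter(torch.zeros(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim)
        u = self.in_proj(x)
        a = torch.sigmoid(self.decay_logit)      # (dim,)
        state = torch.zeros_like(u[:, 0])        # (batch, dim)
        outputs = []
        for t in range(u.size(1)):                # sequential scan; real SSMs use a parallel scan
            state = a * state + (1 - a) * u[:, t]
            outputs.append(state)
        return self.out_proj(torch.stack(outputs, dim=1))


class HybridBlock(nn.Module):
    """One hybrid layer: SSM-style mixing followed by multi-head self-attention,
    each wrapped with pre-norm and a residual connection."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.ssm_norm = nn.LayerNorm(dim)
        self.ssm = ToySSMBlock(dim)
        self.attn_norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x + self.ssm(self.ssm_norm(x))
        h = self.attn_norm(x)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        return x + attn_out


if __name__ == "__main__":
    block = HybridBlock(dim=64)
    tokens = torch.randn(2, 16, 64)    # (batch, seq_len, dim)
    print(block(tokens).shape)         # torch.Size([2, 16, 64])
```

The sketch only shows the layer-stacking pattern (SSM mixing, then attention); the repository itself should be consulted for the actual block design, hyperparameters, and training setup.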