
Releases: kyegomez/swarms

0.9.00

21 Jul 23:09
clean up

Former-commit-id: fbc4988d7845f8ba80df5a491e7ce86c9a1e2733

0.9.0

21 Jul 23:06
clean up

Former-commit-id: 510c20c2e5810d207a4bc5decda08e9c34d9149f

0.8.9

21 Jul 23:02
clean up

Former-commit-id: 510c20c2e5810d207a4bc5decda08e9c34d9149f

0.8.8

21 Jul 22:57
clean up no pytrace

Former-commit-id: 578aa9fc3743d5da9de366d4cb39a556020fb209

0.8.7

21 Jul 21:38
clean up

Former-commit-id: f730efaf2c0570521eafc21ec4141e07f985889a

0.8.6

21 Jul 21:27
clean up

Former-commit-id: 7f2583e7a927be8428aa83a6acd3c55fc9cb3530

0.8.5

21 Jul 21:24
clean up

Former-commit-id: 7f2583e7a927be8428aa83a6acd3c55fc9cb3530

0.8.4

18 Jul 16:06

Changelog

Bugs

  1. The flash attention module was missing from the original codebase, which caused a module-not-found error at runtime.
  2. The flash attention integration was wired into the main attention module incorrectly: the forward method of the Attend class did not handle flash attention properly.
  3. The flash_attn function in the Attend class made incorrect assumptions about the dimensions of the k and v tensors, which led to dimension-mismatch errors during tensor operations (see the sketch after this list).
  4. The original flash_attn method did not apply the scale correctly when qk_norm was set to True.
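
A minimal sketch of the shape and scale handling described in bugs 3 and 4, assuming a standalone flash_attn helper built on PyTorch 2.0's scaled_dot_product_attention (the q/k/v layout, the scale argument, and the causal flag are illustrative and may not match the release's exact signatures):

```python
import torch.nn.functional as F

def flash_attn(q, k, v, scale=None, causal=False):
    # q is (batch, heads, seq, dim_head); k and v may arrive without a heads
    # dimension (batch, seq, dim_head) when keys/values are shared across
    # heads, so expand them to match q before the fused attention call.
    if k.ndim == 3:
        k = k.unsqueeze(1).expand(-1, q.shape[1], -1, -1)
    if v.ndim == 3:
        v = v.unsqueeze(1).expand(-1, q.shape[1], -1, -1)

    # The fused kernel applies the default 1/sqrt(dim_head) scaling on its
    # own, so a custom scale (e.g. when qk_norm is enabled) is folded into q
    # beforehand instead.
    if scale is not None:
        default_scale = q.shape[-1] ** -0.5
        q = q * (scale / default_scale)

    # Requires PyTorch >= 2.0.
    return F.scaled_dot_product_attention(q, k, v, is_causal=causal)
```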

Improvements

  1. Integrated the flash attention module into the main codebase and verified that the tensor dimensions and operations are correct.
  2. Modified the forward method of the Attend class to handle flash attention correctly: it checks whether flash attention is enabled and calls the corresponding attention path (see the sketch after this list).
  3. Adjusted the flash_attn method to account for possibly missing dimensions in the q, k, and v tensors and to correct for dimension mismatches.
  4. Added a check for whether the tensors are on a CUDA device and, if so, used an appropriate CUDA kernel configuration for efficient attention.
  5. Correctly handled the scale in the flash_attn method when qk_norm is True.
  6. Added assertions with informative error messages for incompatible options such as talking heads combined with flash attention.
  7. Ensured compatibility with PyTorch 2.0 and above, which is required for flash attention.
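
Putting the dispatch, device check, and compatibility guards together, a rough sketch of an Attend-style module could look like the following (the flash, causal, scale, and talking_heads arguments follow the wording above; the actual class in the release may be organized differently):

```python
import torch
import torch.nn.functional as F
from torch import nn

class Attend(nn.Module):
    def __init__(self, flash=False, causal=False, scale=None, talking_heads=False):
        super().__init__()
        self.flash = flash
        self.causal = causal
        self.scale = scale
        # Flash attention relies on F.scaled_dot_product_attention, which was
        # introduced in PyTorch 2.0.
        assert not (flash and not hasattr(F, 'scaled_dot_product_attention')), \
            'flash attention requires PyTorch 2.0 or above'
        # Incompatible options fail loudly instead of silently.
        assert not (flash and talking_heads), \
            'talking heads is not compatible with flash attention'

    def flash_attn(self, q, k, v):
        # Pick a CUDA kernel configuration only when the tensors actually live
        # on a CUDA device.
        if q.is_cuda:
            with torch.backends.cuda.sdp_kernel(
                enable_flash=True, enable_math=True, enable_mem_efficient=True
            ):
                return F.scaled_dot_product_attention(q, k, v, is_causal=self.causal)
        return F.scaled_dot_product_attention(q, k, v, is_causal=self.causal)

    def forward(self, q, k, v):
        # Route to the fused flash-attention path when enabled, otherwise fall
        # back to the standard softmax(QK^T * scale) V computation.
        if self.flash:
            return self.flash_attn(q, k, v)
        scale = self.scale if self.scale is not None else q.shape[-1] ** -0.5
        sim = torch.einsum('b h i d, b h j d -> b h i j', q, k) * scale
        if self.causal:
            i, j = sim.shape[-2:]
            causal_mask = torch.ones(i, j, dtype=torch.bool, device=q.device).triu(j - i + 1)
            sim = sim.masked_fill(causal_mask, -torch.finfo(sim.dtype).max)
        attn = sim.softmax(dim=-1)
        return torch.einsum('b h i j, b h j d -> b h i d', attn, v)
```

For example, Attend(flash=True, causal=True) takes the fused path on CUDA tensors and raises the assertion above on PyTorch builds older than 2.0.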

0.8.3

18 Jul 16:04
clean up

Former-commit-id: b864f4367c555783ba1c9faa6afa6abc85be99dc

0.8.2

18 Jul 15:05
clean up

Former-commit-id: 1554541a6778260153bde965ffd722be37e3f5ee