
Enable llama-405b - w/a for memory allocation error #184

Merged

Conversation

afierka-intel

Workaround for an allocation error while loading llama-405b.
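The diff itself is not shown here, so purely as an illustration: a common way to dodge device-side allocation failures when loading a very large checkpoint is to deserialize the weights into host memory first and only then move the model to the accelerator. The function below is a minimal sketch of that pattern, assuming a standard PyTorch model and a device string such as "hpu" for Gaudi; it is not the actual change merged in this PR.

```python
# Hypothetical sketch (not the code merged in this PR): stage weights
# in host RAM first, so deserialization never allocates on the device.
import torch
import torch.nn as nn

def load_checkpoint_host_first(model: nn.Module, ckpt_path: str,
                               device: str = "hpu") -> nn.Module:
    # map_location="cpu" keeps the deserialized tensors in host memory
    # instead of allocating them on the accelerator.
    state_dict = torch.load(ckpt_path, map_location="cpu")
    model.load_state_dict(state_dict)
    # Drop the host-side copy before the device transfer so peak usage
    # is roughly one copy of the weights per side, not two.
    del state_dict
    return model.to(device)
```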

afierka-intel changed the title from Private/afierka/llama 405b to Enable llama-405b on Aug 14, 2024
afierka-intel changed the title from Enable llama-405b to Enable llama-405b - w/a for memory allocation error on Aug 14, 2024
kzawora-intel added the habana (Issues or PRs submitted by Habana Labs) label on Aug 29, 2024
kzawora-intel merged commit 691255b into HabanaAI:habana_main on Sep 4, 2024
13 checks passed
zhouyu5 pushed a commit to zhouyu5/vllm-fork that referenced this pull request Sep 13, 2024
Work around for allocation error while loading llama-405b.
zhouyu5 pushed a commit to zhouyu5/vllm-fork that referenced this pull request Sep 20, 2024
Work around for allocation error while loading llama-405b.
Labels
habana (Issues or PRs submitted by Habana Labs)
2 participants