model parallelism #243

Answered by zhuohan123
HUA9803 asked this question in Q&A
Thanks for the question. All our models already support tensor-parallel execution. For example, if you have 2 GPUs, you can pass the argument --tensor-parallel-size 2 or -tp 2. We will add documentation on distributed execution (#206).
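For reference, here is a minimal sketch of enabling tensor parallelism through vLLM's offline Python API; the model name and sampling settings are only illustrative, and you need at least 2 visible GPUs for this to work:

```python
from vllm import LLM, SamplingParams

# Shard the model across 2 GPUs with tensor parallelism.
# This mirrors passing --tensor-parallel-size 2 (or -tp 2) on the command line.
llm = LLM(model="facebook/opt-13b", tensor_parallel_size=2)

sampling_params = SamplingParams(temperature=0.8, max_tokens=64)
outputs = llm.generate(["Tensor parallelism splits each layer across GPUs:"], sampling_params)

for output in outputs:
    print(output.outputs[0].text)
```

When serving instead of running offline, the same flag can be passed to the server entrypoint, e.g. `python -m vllm.entrypoints.api_server --model facebook/opt-13b --tensor-parallel-size 2` (exact entrypoint may differ by version).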


This discussion was converted from issue #238 on June 25, 2023 17:07.