System Info
Reproduction
When I run the code, the parameters are first fully loaded onto each GPU and only then sharded. But when I switch from zero3+qlora to zero3+lora (just removing bnb_config = BitsAndBytesConfig(...)), it magically works: the parameters are sharded first and then loaded onto each GPU!
So I am confused about whether bitsandbytes simply doesn't support zero3_init, or whether there are errors in my code. I would really appreciate it if someone could help me! @tjruwase @loadams @adk9
Here is my code, which follows https://huggingface.co/docs/peft/accelerate/deepspeed#use-peft-qlora-and-deepspeed-with-zero3-for-finetuning-large-models-on-multiple-gpus and https://huggingface.co/docs/transformers/v4.18.0/en/main_classes/deepspeed#constructing-massive-models:~:text=If%20you%20want%20to%20use%20a,is%20how%20example%20scripts%20are%20written.:
Here is my accelerate config:
Here is my DeepSpeed ZeRO-3 config:
Here is my launcher context:
One thing I want to mention: in the QLoRA case, the parameters are loaded only into CPU memory during from_pretrained(), and GPU memory rises by less than 1 GB (I would also like to know what that is; maybe the quantization constants?). Then, when trainer.train() runs, the parameters are fully loaded onto each GPU and only then sharded.
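As a rough sanity check on the "quantization constants" guess (assuming NF4 with the bitsandbytes default blocksize of 64 and one fp32 absmax scale per block; these defaults are an assumption about the setup):

```python
# Back-of-envelope size of 4-bit quantization metadata for a 7B model.
# Assumption: one fp32 absmax constant per block of 64 weights (bnb default).
params = 7e9
blocksize = 64
absmax_bytes = params / blocksize * 4  # one fp32 scale per block
print(f"absmax constants: {absmax_bytes / 1024**3:.2f} GiB")  # → absmax constants: 0.41 GiB
```

That is consistent with the observed sub-1 GB rise, and with double quantization the constants would be compressed further.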
I also used the code above to measure memory usage, to check whether zero_init ran successfully:
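For anyone reproducing this, a per-rank memory check along these lines can be used (a sketch assuming torch; the helper names here are my own, not from any library):

```python
import torch

def gib(n_bytes: int) -> str:
    """Format a byte count as GiB."""
    return f"{n_bytes / 1024**3:.2f} GiB"

def report_gpu_memory(tag: str) -> None:
    """Print per-rank allocated/reserved CUDA memory.

    Call it before and after from_pretrained(): with a working zero3_init,
    each rank should hold only its ~1/world_size shard of the parameters.
    """
    if not torch.cuda.is_available():
        print(f"[{tag}] CUDA not available")
        return
    rank = torch.distributed.get_rank() if torch.distributed.is_initialized() else 0
    print(f"[{tag}] rank {rank}: "
          f"allocated={gib(torch.cuda.memory_allocated())}, "
          f"reserved={gib(torch.cuda.memory_reserved())}")
```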
When I use the entire code above, only the "extra memory" is loaded onto the GPUs, as in the picture below:
When I remove bnb_config from the code, I get the correct result (7B sharded across 8 GPUs):
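For scale, the expected per-GPU parameter footprint under 8-way ZeRO-3 sharding (assuming bf16 weights, which is an assumption about the setup):

```python
# Expected per-GPU parameter memory for a 7B model under 8-way ZeRO-3.
params = 7e9
bytes_per_param = 2   # bf16
num_gpus = 8
per_gpu = params * bytes_per_param / num_gpus
print(f"{per_gpu / 1024**3:.2f} GiB per GPU")  # → 1.63 GiB per GPU
```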
So the situation is: with or without LoRA, bnb leads to an incorrect result with zero_init, while without bnb it succeeds. It therefore seems that bnb prevents zero_init from working.
Expected behavior
Successfully construct llama2 with zero3_init: parameters are sharded first and then loaded onto the GPUs.