Support layerwise quantization #1018
base: main
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
model = load_empty_model(
    model_id,
    trust_remote_code=trust_remote_code,
)
The model was already loaded above, so it would make sense to remove this part as well. Also, we need to set cls when loading the model with load_empty_model: https://github.com/intel/neural-compressor/blob/v3.1.1/neural_compressor/torch/utils/utility.py#L354
- model = load_empty_model(
-     model_id,
-     trust_remote_code=trust_remote_code,
- )
+ model = load_empty_model(model_id, cls=model_class, **loading_kwargs)
I improved the code; load_empty_model is only needed by the "layer-wise" feature.
I didn't pass loading_kwargs because the load_empty_model function doesn't support loading_kwargs, and the following error is raised:
> model = cls(config, **kwargs)
E TypeError: __init__() got an unexpected keyword argument 'subfolder'
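A minimal, self-contained sketch of the failure mode described above: load_empty_model forwards **kwargs straight into cls(config, **kwargs), so any loading argument the model class's __init__ does not accept (such as "subfolder") raises a TypeError. DummyModel and load_empty_model_sketch are hypothetical stand-ins, not the library's actual code.

```python
class DummyModel:
    """Hypothetical stand-in for a transformers model class."""

    def __init__(self, config):  # accepts only the config object, no extra kwargs
        self.config = config


def load_empty_model_sketch(cls, config, **kwargs):
    # Mirrors the neural_compressor call site: model = cls(config, **kwargs)
    return cls(config, **kwargs)


loading_kwargs = {"subfolder": ""}  # a typical extra loading argument

try:
    load_empty_model_sketch(DummyModel, config={}, **loading_kwargs)
    error_message = None
except TypeError as exc:
    # e.g. "__init__() got an unexpected keyword argument 'subfolder'"
    error_message = str(exc)
```

This is why the integration calls load_empty_model without forwarding loading_kwargs.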
  )
  else:
-     quantization_config = RtnConfig(bits=bits, group_size=8)
+     quantization_config = RtnConfig(bits=bits, group_size=8, use_layer_wise=True)
Why do we need to specify it when creating the quantization config? It looks like with the current integration this information will be ignored (load_empty_model will be called in all cases).
I improved it and added a case to test layer-wise.
Signed-off-by: changwangss <chang1.wang@intel.com>
Signed-off-by: changwa1 <chang1.wang@intel.com>
Force-pushed from 572e37c to 5f14658
Hi @echarlaix, due to some layer-wise bug fixes in INC 3.2, and since the release is planned for Dec 9, I've pinned the installation to a specific commit for now. Once INC 3.2 is officially released, I will raise a PR to update this. Let me know your thoughts!
The CI IPEX issue is fixed by #1009; details have been discussed with @IlyasMoutawwakil in #1027.
if hasattr(quantization_config, "use_layer_wise") and quantization_config.use_layer_wise:
    from neural_compressor.torch import load_empty_model

    model = load_empty_model(model_id, cls=model_class, trust_remote_code=trust_remote_code)
Why not:
- model = load_empty_model(model_id, cls=model_class, trust_remote_code=trust_remote_code)
+ model = load_empty_model(model_id, cls=model_class, **loading_kwargs)
Let me explain: we initialize the config first, and then initialize the model in the load_empty_model function: https://github.com/intel/neural-compressor/blob/e2696603f45f5796f1c048aab33eef11aaeb2cdb/neural_compressor/torch/utils/utility.py#L356
When **loading_kwargs is passed, config initialization raises an error like the following:
> model = cls(config, **kwargs)
E TypeError: __init__() got an unexpected keyword argument 'subfolder'
if hasattr(quantization_config, "use_layer_wise") and quantization_config.use_layer_wise:
    from neural_compressor.torch import load_empty_model

    model = load_empty_model(model_id, cls=model_class, trust_remote_code=trust_remote_code)
This looks the same for both cpu / xpu — shouldn't the model be moved to the correct device? It should also be moved outside of the if use_xpu condition, since the code is duplicated.
agree, I improved it.
Co-authored-by: Ella Charlaix <80481427+echarlaix@users.noreply.github.com>
Signed-off-by: sys-lpot-val <sys_lpot_val@intel.com>
Signed-off-by: changwangss <chang1.wang@intel.com>
What does this PR do?
INC supports the layer-wise feature on both CPU and XPU.
Because 3.2 is planned for release on Dec 9, I pin the INC installation to a specific commit for now.
Once INC 3.2 is officially released, I will raise a PR to update this.
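The temporary pin described above can be expressed as a direct git install. This is only a sketch of the pattern: the actual pinned SHA is not shown in this PR, so <commit-sha> below is a placeholder to be replaced with the real commit.

```shell
# Hedged sketch: pin neural-compressor to a specific commit until the 3.2 release.
# <commit-sha> is a placeholder; substitute the commit actually pinned in the PR.
pip install "neural-compressor @ git+https://github.com/intel/neural-compressor.git@<commit-sha>"
```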
Fixes # (issue)
Before submitting