
Fix (graph/bias_correction): Fix when layer parameters are offloaded to accelerate #962

Merged
merged 4 commits into from
Jul 8, 2024

Conversation

nickfraser
Collaborator

Currently, if a layer doesn't have a bias, `skip_if_no_bias=False` is set, and the parameters of the current module are being offloaded with `accelerate`, applying bias correction fails with the following error:

Traceback (most recent call last):
  File "/home/nfraser/workspace/optimum-amd/examples/quantization/brevitas/quantize_llm.py", line 161, in <module>
    main(args)
  File "/home/nfraser/workspace/optimum-amd/examples/quantization/brevitas/quantize_llm.py", line 65, in main
    quantized_model = quantizer.quantize(qconfig, calibration_dataset)
  File "/home/nfraser/.local/miniforge3/envs/20240516_oamd/lib/python3.9/site-packages/optimum/amd/brevitas/quantizer.py", line 244, in quantize
    apply_bias_correction(
  File "/home/nfraser/.local/miniforge3/envs/20240516_oamd/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/nfraser/.local/miniforge3/envs/20240516_oamd/lib/python3.9/site-packages/optimum/amd/brevitas/quantizer.py", line 337, in apply_bias_correction
    model(**inps)
  File "/home/nfraser/.local/miniforge3/envs/20240516_oamd/lib/python3.9/site-packages/brevitas/graph/calibrate.py", line 122, in __exit__
    self.bias_correction.apply_correction(self.model)
  File "/home/nfraser/.local/miniforge3/envs/20240516_oamd/lib/python3.9/site-packages/brevitas/graph/calibrate.py", line 292, in apply_correction
    module.register_parameter(
  File "/home/nfraser/.local/miniforge3/envs/20240516_oamd/lib/python3.9/site-packages/brevitas/nn/mixin/parameter.py", line 111, in register_parameter
    super(QuantBiasMixin, self).register_parameter(name, value)
  File "/home/nfraser/.local/miniforge3/envs/20240516_oamd/lib/python3.9/site-packages/brevitas/nn/mixin/parameter.py", line 81, in register_parameter
    super(QuantWeightMixin, self).register_parameter(name, value)
  File "/home/nfraser/.local/miniforge3/envs/20240516_oamd/lib/python3.9/site-packages/torch/nn/modules/module.py", line 557, in register_parameter
    raise TypeError(f"cannot assign '{torch.typename(param)}' object to parameter '{name}' "
TypeError: cannot assign 'torch.meta.FloatTensor' object to parameter 'bias' (torch.nn.Parameter or None required)

This PR resolves this issue.
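For context, the traceback boils down to `register_parameter()` rejecting anything that is not an `nn.Parameter` (or `None`). When a module's weights are offloaded by `accelerate`, tensors derived from them can live on the `meta` device, so the freshly computed bias-correction tensor gets passed through as a plain tensor and the assignment fails. A minimal sketch of the failure mode (illustrative only, not Brevitas' actual code):

```python
import torch
import torch.nn as nn

# A layer without a bias, standing in for a quantized layer whose
# parameters accelerate has offloaded.
layer = nn.Linear(4, 4, bias=False)

# Hypothetical bias-correction value computed during calibration.
correction = torch.zeros(4)

try:
    # Passing a plain tensor raises TypeError, as in the traceback above.
    layer.register_parameter("bias", correction)
except TypeError as exc:
    print(f"TypeError: {exc}")

# Wrapping the value in nn.Parameter (materialized on a real device,
# not "meta") is the shape of fix applied here.
layer.register_parameter("bias", nn.Parameter(correction))
assert isinstance(layer.bias, nn.Parameter)
```
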

@nickfraser nickfraser requested review from Giuseppe5 and removed request for Giuseppe5 May 31, 2024 15:00
@nickfraser nickfraser merged commit f7d634d into Xilinx:dev Jul 8, 2024
22 checks passed
@nickfraser nickfraser deleted the fix/bias_correction_accelerate branch July 8, 2024 14:20
fabianandresgrob pushed a commit to fabianandresgrob/brevitas that referenced this pull request Jul 10, 2024
…to `accelerate` (Xilinx#962)

* Fix (graph/bias_correction): Fix when layer parameters are offloaded to `accelerate`

* Fix (bias_correction): Typo fix

* Fix (bias_correction): Apply accelerate fix to entire `if/elif` block.

* fix (bias_corr/accelerate): Added comment