
Commit

Fix
Giuseppe5 committed May 17, 2024
1 parent ede0a9d commit 66615ef
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions src/brevitas/graph/gpfq.py
@@ -314,8 +314,8 @@ def __init__(

  def single_layer_update(self):
      # raise error in case no quant-input is here
-     if self.quant_input is None:
-         raise ValueError('Expected self.quant_input to calculate L1-norm upper bound, but received None. ' + \
+     if self.quant_metadata is None:
+         raise ValueError('Expected self.quant_metadata to calculate L1-norm upper bound, but received None. ' + \
          'Make sure that either the input to the model is an IntQuantTensor or the layer has an input quant enabled. ' \
          'Also, check if `use_quant_activations=True` in `gpfq_mode` when `accumulator_bit_width` is specified. ' + \
          'Alternatively, provide a custom `a2q_layer_filter_fnc` to `gpfq_mode` to filter layers without a quant_tensor input.')
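Context for the fix: the check above guards the accumulator-aware (A2Q) path of GPFQ, which needs quantized input metadata from each layer to compute the L1-norm upper bound. A minimal calibration-loop sketch matching the conditions named in the error message is shown below. It is an assumption-laden illustration, not part of the commit: the `model` and `calib_loader` objects are hypothetical, the loop shape follows the Brevitas PTQ examples, and the keyword names (`use_quant_activations`, `accumulator_bit_width`) are the ones the error message mentions, to be verified against the installed Brevitas version.

    # Sketch only: drives gpfq_mode with quantized activations so each layer
    # receives an IntQuantTensor input; `model` and `calib_loader` are assumed
    # to exist and to be a quantized network and a calibration DataLoader.
    import torch
    from brevitas.graph.gpfq import gpfq_mode

    model.eval()
    with torch.no_grad():
        with gpfq_mode(model,
                       use_quant_activations=True,  # needed when accumulator_bit_width is set
                       accumulator_bit_width=16) as gpfq:
            gpfq_model = gpfq.model
            for _ in range(gpfq.num_layers):
                for images, _ in calib_loader:
                    gpfq_model(images)
                gpfq.update()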
