
2 potential bugs related to FloatQuant #1126

Open
nghielme opened this issue Dec 11, 2024 · 4 comments
Labels
bug Something isn't working

Comments

@nghielme

nghielme commented Dec 11, 2024

This issue is to share with you 2 potential bugs related to the FloatQuant implementation:

  • One is related to the definition of the minimum internal scale, used to quantize the mantissa, here.
    I think the minimum should be 1. - self.exponent_bit_width() - self.exponent_bias() - self.mantissa_bit_width(), or a different expression, but in any case I think that self.exponent_bit_width() should be part of it. This can also be deduced from the way the actual internal scale is computed, here: the first part, floor_ste(torch.log2(torch.abs(x) + eps)), in fact represents the exponent of the float x.
    I noticed, for example, that 0.3125 is quantized to 0.25 using a quantizer with 4 exponent bits, 4 mantissa bits, and 0 exponent bias.

  • The other is related to the following expression, here.
    I think the correct one should be -x < -max_value, in order to identify the values that exceed max_value on the negative side.

I am writing a similar quantizer, so any comments on the 2 points above would also help me write mine properly.
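For context, the kind of minifloat quantizer being discussed can be sketched roughly as follows. This is a from-scratch, IEEE-like stand-in written for this thread, not the Brevitas implementation; the function name, the handling of denormals, and the saturation behaviour are all assumptions.

```python
import math

def quantize_minifloat(x: float, exp_bits: int, man_bits: int, exp_bias: int) -> float:
    """Round x to the nearest value representable in a minifloat format.

    Hypothetical stand-in for discussion only. Assumes IEEE-like denormals
    (stored exponent 0 means the mantissa is denormalised) and reserves no
    codes for inf/nan.
    """
    if x == 0.0:
        return 0.0
    sign = math.copysign(1.0, x)
    ax = abs(x)
    # Stored exponents run from 0 to 2**exp_bits - 1; stored 0 marks a
    # denormal, so the minimum *effective* exponent is 1 - exp_bias.
    min_exp = 1 - exp_bias
    max_exp = (2 ** exp_bits - 1) - exp_bias
    # Exponent of the input, floored at the representable minimum
    # (inputs below that land on the denormal grid).
    e = max(math.floor(math.log2(ax)), min_exp)
    # Internal scale: one mantissa ULP at this exponent.
    scale = 2.0 ** (e - man_bits)
    # Round-half-to-even on the mantissa grid (Python's round() does this).
    q = round(ax / scale) * scale
    # Saturate at the largest representable magnitude.
    max_val = (2.0 - 2.0 ** (-man_bits)) * 2.0 ** max_exp
    return sign * min(q, max_val)
```

With 4 exponent bits, 4 mantissa bits, and bias 0, this sketch maps 0.3125 to 0.25, i.e. it reproduces the behaviour reported above.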

@nghielme nghielme added the bug Something isn't working label Dec 11, 2024
@nickfraser
Collaborator

Thank you for sharing this issue - I'm currently investigating (unfortunately, while juggling some other tasks). It's early days on my side, but since you said you're writing similar quantization functions, I'll give you some initial feedback anyway. Please take these comments with a grain of salt.

I noticed for example that 0.3125 is quantized to 0.25 using a quantizer with 4 exponent bits, 4 mantissa bits, 0 exponent bias.

Thanks, I am able to reproduce this and agree this is unexpected behaviour. What I don't understand is why our other tests (public and internal) haven't caught this. I plan to start addressing this by first creating a more extensive test suite.

I think the minimum should be 1. - self.exponent_bit_width() - self.exponent_bias() - self.mantissa_bit_width(), or a different expression, but in any case I think that self.exponent_bit_width() should be part of it.

I am not convinced of this yet, but maybe I'll change my mind as my investigation continues. ;)

@nickfraser
Collaborator

nickfraser commented Dec 16, 2024

I think the correct one should be -x < -max_value in order to identify the values that exceed max_value from the negative side.

I'm not following your logic here. The goal of the mask n_max_val_mask is to identify values of x that are less than the highest-magnitude representable negative value in the float representation. Your expression doesn't do this - your mask would be identical to p_max_val_mask. However, I do believe our expression should actually be x < min_value, to capture the unsigned case.
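As a toy illustration of the two masks (plain Python rather than PyTorch; the variable names mirror the thread, and the sample values are made up for this example, not taken from the Brevitas source):

```python
# Clamp a batch of values to a format's representable range.
x = [-5.0, -1.0, 0.5, 3.0, 9.0]
max_value = 4.0
min_value = -4.0  # would be 0.0 for an unsigned format

p_max_val_mask = [v > max_value for v in x]  # above the positive limit
n_max_val_mask = [v < min_value for v in x]  # below the negative (or zero) limit
clamped = [min(max(v, min_value), max_value) for v in x]
```

With x < min_value, an unsigned format (min_value = 0.0) correctly flags every negative input, whereas -x < -max_value simplifies to x > max_value and would just duplicate p_max_val_mask.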

@nghielme
Author

Hi Nick,
Thanks for your comments.

I think the minimum should be 1. - self.exponent_bit_width() - self.exponent_bias() - self.mantissa_bit_width(), or a different expression, but in any case I think that self.exponent_bit_width() should be part of it.

I am not convinced of this yet, but maybe I'll change my mind as my investigation continues. ;)

I agree. In the meantime, I realised that the minimum value the exponent can represent is - self.exponent_bias().

@nickfraser
Collaborator

I'm going to contradict myself and say that I believe our minifloat quantizers are behaving as expected.

I noticed for example that 0.3125 is quantized to 0.25 using a quantizer with 4 exponent bits, 4 mantissa bits, 0 exponent bias.

I don't believe 0.3125 can be represented in that format, so a value of 0.25 seems reasonable here.

I have put together quite an extensive test to "kick the tyres" of our minifloat formats in #1136. @nghielme, if you're still unsure, could you check the logic there and tell me if I'm mistaken? If you think there are values that can be represented by a format but not by Brevitas, can you specify the values of the mantissa and exponent in an unsigned format? I believe an extra mantissa bit, or an exponent bias of 1, would be required to represent that value.

the minimum value that exponent can represent is - self.exponent_bias().

The minimum effective value of the exponent is 1. - self.exponent_bias() as an exponent of 0 simply marks that the mantissa is denormalised. I believe this is exactly the source of the confusion.
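To make this concrete, the small representable values of the 4-exponent-bit, 4-mantissa-bit, bias-0 format can be enumerated (a sketch assuming IEEE-like denormals, not Brevitas code):

```python
exp_bias = 0
man_bits = 4
min_effective_exp = 1 - exp_bias                     # stored exponent 0 => denormal
denorm_step = 2.0 ** (min_effective_exp - man_bits)  # 2^(1-4) = 0.125

# Denormal grid: 0, 0.125, 0.25, 0.375, ... up to just below 2^min_effective_exp.
grid = [m * denorm_step for m in range(2 ** man_bits)]

# 0.3125 is not on this grid; it sits exactly midway between 0.25 and 0.375,
# so round-half-to-even resolves it to 0.25 (whose mantissa code, 2, is even).
```

An exponent bias of 1 would halve the denormal step to 0.0625, putting 0.3125 on the grid, which is consistent with the comment above.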
