In your paper you mention that you allocate 2, 3, or 4 bits to each layer of the model according to a criterion. But in Fig. 1(d): Construct LUT and Query&Add, the binary weights are shown as 8-bit. This has confused me: is the figure drawn with 8-bit weights in mind rather than <= 4-bit weights, or am I misunderstanding the flow?
Another way I tried to interpret Fig. 1 is that the FP16 Shift and Query&Add blocks have to run once for every bit of W. For instance, if we have allocated 3 bits to a weight W, the ShiftAddLLM block runs 3 times, each time for one bit of W. In this interpretation, each bit in the 8-bit binary weights in Fig. 1(d) corresponds to one of the activation (x) values.
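To make that interpretation concrete, here is a minimal sketch of the per-bit view, written as a plain loop (all names and shapes are my own assumptions, not code from this repo):

```python
import torch

def bcq_matvec_per_bit(alphas, bin_weights, x):
    """Per-bit reading of Fig. 1: y = sum_i alpha_i * (B_i @ x).

    alphas:      (q, out_features)                power-of-two scaling factors
    bin_weights: (q, out_features, in_features)   entries in {-1, +1}
    x:           (in_features,)                   FP16 activations
    """
    q = alphas.shape[0]  # allocated bit-width, e.g. 2, 3, or 4
    y = torch.zeros(alphas.shape[1], dtype=x.dtype)
    for i in range(q):  # the Shift and Query&Add blocks run once per bit
        partial = bin_weights[i].to(x.dtype) @ x  # in the kernel: LUT query & add
        y += alphas[i] * partial                  # in the kernel: FP16 shift
    return y
```

Under this reading, the 8 bits shown in Fig. 1(d) would be a lookup key over a group of 8 activation values rather than the quantization bit-width, but I am not sure that is what the figure intends.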
Could you please elaborate on how the bit allocation maps to the ShiftAddLLM architecture?
Hi,
Were you able to find the code for how they use this shift&add during inference?
I could only find that they unpack the alphas and binary weights back to FP16 format, which contradicts the methodology shown in Fig. 1.
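For reference, the unpacking I found amounts to a dequantization like the sketch below, where the packed weights are reconstructed into a dense FP16 matrix before an ordinary matmul (identifiers are illustrative, not the repo's actual names):

```python
import torch

def unpack_to_fp16(alphas, bin_weights):
    """Reconstruct W ≈ sum_i alpha_i * B_i as a dense FP16 matrix.

    alphas:      (q, out_features)
    bin_weights: (q, out_features, in_features), entries in {-1, +1}
    """
    w = (alphas.unsqueeze(-1) * bin_weights.to(alphas.dtype)).sum(dim=0)
    return w.to(torch.float16)

# Inference then falls back to a plain FP16 matmul, y = W @ x,
# so no shifts or LUT queries actually happen at runtime.
```

If that is the intended inference path, it would be good to know whether a fused shift-and-add kernel exists elsewhere in the repo.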