Question for you guys: as best I can tell, there is no support at present for keeping activations in fp8 between the "output" matmul (of either an attention block or MLP block) and the next norm (layernorm or rmsnorm). The missing features to make that work are:
- FP8 residual input/output (along with the necessary scales) to the fp8 GEMM
- FP8 input (with scale) to {rms,layer}norm
Is this something you have considered, or is there a more fundamental limitation (kernel-wise or accuracy-wise) that means you always want to keep the residual path in a 16-bit format? (Fwiw, my own interest here is for inference; a rough sketch of the dataflow I have in mind is below.)
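To make the interface I'm picturing concrete, here is a small numpy sketch. It is purely illustrative: the function names are made up (not from any existing library), fp8 storage is only simulated by clipping to the e4m3 range, and scales are per-tensor for simplicity.

```python
# Illustrative sketch only: simulate an fp8 (e4m3) residual path between the
# "output" matmul and the next rmsnorm, with per-tensor scales.
# All names here (quantize_e4m3, fp8_gemm_with_fp8_residual, rmsnorm_fp8_in)
# are hypothetical.
import numpy as np

E4M3_MAX = 448.0  # max representable magnitude in fp8 e4m3


def quantize_e4m3(x):
    """Per-tensor scale so that max(|x|) maps to E4M3_MAX; returns (q, scale)."""
    scale = np.max(np.abs(x)) / E4M3_MAX + 1e-12
    # A real kernel would store q in an fp8 container; here we just clip to the
    # e4m3 range and keep float32 as a stand-in.
    q = np.clip(x / scale, -E4M3_MAX, E4M3_MAX)
    return q.astype(np.float32), np.float32(scale)


def dequantize(q, scale):
    return q * scale


def fp8_gemm_with_fp8_residual(a_q, a_s, w_q, w_s, res_q, res_s):
    """FP8 GEMM that also takes the residual in fp8 (+scale) and returns the
    fused output re-quantized to fp8, plus its new scale."""
    out = dequantize(a_q, a_s) @ dequantize(w_q, w_s)   # accumulate in higher precision
    out = out + dequantize(res_q, res_s)                # residual add inside the epilogue
    return quantize_e4m3(out)                           # hand back fp8 + scale


def rmsnorm_fp8_in(x_q, x_s, gamma, eps=1e-6):
    """RMSNorm that accepts fp8 input (+scale) and dequantizes internally."""
    x = dequantize(x_q, x_s)
    rms = np.sqrt(np.mean(x * x, axis=-1, keepdims=True) + eps)
    return (x / rms) * gamma


# Toy usage: hidden size 8, two tokens.
rng = np.random.default_rng(0)
h = rng.standard_normal((2, 8)).astype(np.float32)      # residual stream
w = rng.standard_normal((8, 8)).astype(np.float32)      # "output" projection

h_q, h_s = quantize_e4m3(h)
w_q, w_s = quantize_e4m3(w)
y_q, y_s = fp8_gemm_with_fp8_residual(h_q, h_s, w_q, w_s, h_q, h_s)
normed = rmsnorm_fp8_in(y_q, y_s, gamma=np.ones(8, dtype=np.float32))
print(normed.shape)  # (2, 8) -- ready for the next block's fp8 GEMM inputs
```

The point of the sketch is just that the residual never has to round-trip through a 16-bit tensor between the output GEMM and the following norm; whether that is acceptable accuracy-wise is exactly my question.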
Thanks,
Carl