Do the data precision options utilize AMP, or do they perform a full dtype cast? #330
Replies: 1 comment
-
Hello! I was checking the docs and the discussions because I wasn't sure: when selecting a data type, does it cast all of the values in the model to that dtype/precision, or does it use automatic mixed precision (AMP)?
Thank you!
-
I missed the part of the quickstart guide that answers this unambiguously: "Internally, this sets the mixed precision data type when doing the forward pass through the model. This setting trades precision for speed during training. Not all data types are supported on all GPUs. In practice, float16 only slightly reduces the quality, while providing a significant speed boost."
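For anyone else landing here, below is a minimal PyTorch sketch contrasting the two behaviours discussed in this thread. It is not the library's actual implementation, just an illustration: the model, tensor names, and sizes are made up, and it assumes a CUDA device (float16 autocast is GPU-oriented). With AMP the parameters stay in float32 and only eligible ops inside the autocast region run in float16; a full cast converts every parameter to float16 instead.

```python
# Illustrative sketch only (assumes a CUDA device); not the library's internals.
import torch
import torch.nn as nn

model = nn.Linear(16, 4).cuda()           # parameters are float32
x = torch.randn(8, 16, device="cuda")

# Automatic mixed precision (what the quickstart describes):
# weights remain float32, but ops inside the autocast region run in float16 where safe.
with torch.autocast(device_type="cuda", dtype=torch.float16):
    out_amp = model(x)
print(model.weight.dtype, out_amp.dtype)   # torch.float32 torch.float16

# Full dtype casting (the alternative the question asks about):
# every parameter is converted to float16, so the whole forward pass runs in fp16.
model_fp16 = model.half()                  # note: .half() converts the module in place
out_cast = model_fp16(x.half())
print(model_fp16.weight.dtype, out_cast.dtype)  # torch.float16 torch.float16
```

The quoted guide text ("sets the mixed precision data type when doing the forward pass") corresponds to the first pattern, not the second.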