compute_metric(eval_pred) in trainer is not mini-batch #31667
Comments
Hi @SamYuen101234, thanks for raising an issue! This is a question best placed in our forums. We try to reserve the github issues for feature requests and bug reports.

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
I am trying to implement a custom compute metric for the Trainer. The logits and labels are NumPy arrays covering the full evaluation set; my evaluation input has shape (1000, 43, 50257), and the computation cannot be done on a 24 GB L4 GPU on Colab. Is there any way to process the data in mini-batches, e.g. with a DataLoader, instead of being given one full NumPy array?
```python
import numpy as np
from datasets import load_metric

# eval_pred contains the full evaluation set, not a single mini-batch
def compute_metrics(eval_pred):
    logits, labels = eval_pred
    accuracy_metric = load_metric("accuracy")
    predictions = np.argmax(logits, axis=-1)
    return accuracy_metric.compute(predictions=predictions.flatten(), references=labels.flatten())
```
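One way to avoid materializing the full (1000, 43, 50257) logits array is the Trainer's `preprocess_logits_for_metrics` argument, which is called once per evaluation batch before predictions are accumulated. Reducing each batch's logits to argmax indices there shrinks what `compute_metrics` receives from (N, seq_len, vocab) to (N, seq_len). This is a sketch, not the questioner's exact setup; the `-100` padding mask is an assumption about how the labels were prepared:

```python
import numpy as np
import torch

def preprocess_logits_for_metrics(logits, labels):
    # Called per mini-batch on device, before accumulation; some models
    # return a tuple whose first element is the logits tensor
    if isinstance(logits, tuple):
        logits = logits[0]
    # Keep only argmax indices: (batch, seq_len) instead of (batch, seq_len, vocab)
    return logits.argmax(dim=-1)

def compute_metrics(eval_pred):
    preds, labels = eval_pred  # preds are already argmax indices, not raw logits
    mask = labels != -100      # assumption: padded label positions use -100
    return {"accuracy": float((preds[mask] == labels[mask]).mean())}
```

Both callables would then be passed to the Trainer, e.g. `Trainer(..., compute_metrics=compute_metrics, preprocess_logits_for_metrics=preprocess_logits_for_metrics)`. Combined with `eval_accumulation_steps` in `TrainingArguments` (which periodically moves accumulated tensors to CPU), this keeps GPU memory use per evaluation step bounded by the batch size rather than the dataset size.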