Update MetricWrapper to work with the update/compute from torchmetrics #466

Open
4 tasks done
DomInvivo opened this issue Sep 20, 2023 · 1 comment · Fixed by #517 · May be fixed by #519

DomInvivo commented Sep 20, 2023

Currently, we cannot compute train metrics because doing so requires saving all the preds and targets over the whole epoch.
The problem also affects the val and test sets, but since they're smaller, the effect is less noticeable.

To fix this, the MetricWrapper should have an update method that takes the preds and target and calls the underlying self.metric.update, and a compute method that no longer takes the preds and target but instead calls self.metric.compute (a rough sketch follows the task list below).

  • Add the update to the MetricWrapper
  • Modify the MetricWrapper.compute to work with the update
  • Decide how to deal with missing labels
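
For illustration, here is a minimal sketch of what such an update/compute interface could look like. The `target_nan_mask` flag and the NaN-filtering logic are assumptions related to the missing-labels point above, not the actual implementation:

```python
import torch
from torchmetrics import Metric


class MetricWrapper:
    """Illustrative sketch only, not the real class."""

    def __init__(self, metric: Metric, target_nan_mask: bool = False):
        self.metric = metric  # a class-based torchmetrics Metric
        self.target_nan_mask = target_nan_mask  # assumption: simple NaN filtering

    def update(self, preds: torch.Tensor, target: torch.Tensor) -> None:
        # Optionally drop entries with missing labels before updating the
        # metric's internal state, so the wrapper itself never has to keep
        # the whole epoch's preds and targets.
        if self.target_nan_mask:
            keep = ~torch.isnan(target)
            preds, target = preds[keep], target[keep]
        self.metric.update(preds, target)

    def compute(self) -> torch.Tensor:
        # No preds/target arguments: the value is computed from the state
        # accumulated across all update() calls, then the state is reset.
        value = self.metric.compute()
        self.metric.reset()
        return value
```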

Also, all TorchMetrics in spaces.py should use their class equivalents rather than the functional versions.

  • Change spaces.py to use classes rather than functions, and make sure the classes get initialized (see the sketch below).
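
As an illustration of this change, here is a hedged sketch of what the metric mapping in spaces.py could look like before and after; the dictionary names are assumptions, only the torchmetrics functions/classes are real:

```python
import torch
import torchmetrics as tm

# Before: functional metrics, which are stateless and need all
# preds/targets at once.
METRICS_DICT_FUNCTIONAL = {
    "mae": tm.functional.mean_absolute_error,
    "pearsonr": tm.functional.pearson_corrcoef,
}

# After: class-based metrics, which must be instantiated so they can
# accumulate state across batches via update()/compute().
METRICS_DICT_CLASSES = {
    "mae": tm.MeanAbsoluteError,
    "pearsonr": tm.PearsonCorrCoef,
}

mae_metric = METRICS_DICT_CLASSES["mae"]()  # note the instantiation
mae_metric.update(preds=torch.tensor([0.1, 0.9]), target=torch.tensor([0.0, 1.0]))
print(mae_metric.compute())
```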
@DomInvivo added the enhancement (New feature or request) label on Sep 20, 2023
@DomInvivo linked a pull request on Jul 10, 2024 that will close this issue
@DomInvivo linked a pull request on Jul 16, 2024 that will close this issue
@DomInvivo (Collaborator, Author) commented:

All metrics were moved to their class versions in PR #511.

Exceptions: AvPR and AUROC, because they require saving all preds and labels, which blows up GPU memory. This is especially problematic with the mean-per-label option, where thousands of vectors are kept and memory leaks occur.

Instead, they are wrapped with MetricToConcatenatedTorchMetrics (a rough sketch of the idea is shown below).
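
For context, here is a rough, hypothetical sketch of the concatenation idea behind such a wrapper. This is not the actual MetricToConcatenatedTorchMetrics class, and it assumes a recent torchmetrics version for the binary_auroc import:

```python
import torch
from torchmetrics import Metric
from torchmetrics.functional.classification import binary_auroc


class ConcatenatedMetricSketch(Metric):
    """Rough illustration of the concatenation idea, not the real class."""

    def __init__(self, metric_fn):
        super().__init__()
        self.metric_fn = metric_fn  # e.g. a functional AUROC or average precision
        # List states that torchmetrics concatenates across processes.
        self.add_state("preds", default=[], dist_reduce_fx="cat")
        self.add_state("target", default=[], dist_reduce_fx="cat")

    def update(self, preds: torch.Tensor, target: torch.Tensor) -> None:
        # Only detached tensors are stored; in practice they could also be
        # moved off the GPU to limit memory pressure over a long epoch.
        self.preds.append(preds.detach())
        self.target.append(target.detach())

    def compute(self) -> torch.Tensor:
        # After distributed sync the list states may already be a single
        # concatenated tensor, hence the isinstance checks.
        preds = torch.cat(self.preds) if isinstance(self.preds, list) else self.preds
        target = torch.cat(self.target) if isinstance(self.target, list) else self.target
        return self.metric_fn(preds, target)


# Usage sketch: wrap a functional metric that needs all preds/labels at once.
auroc = ConcatenatedMetricSketch(binary_auroc)
```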
