Support for logging multiple runs and comparison between them (model selection) #579
AntonioCarta asked this question in Feature Request (Unanswered)
I think the logging module is at a good point right now. We need to polish some of the outputs, but overall it is clear and informative. One thing I'm missing, though, is some kind of automation for managing multiple experiments.
I'm thinking about two main use cases: running multiple runs of the same experiment, and comparing those runs for model selection.
Right now I can do this, but I need to be careful to save each experiment in a different folder. It is also cumbersome to compare different runs without external tools. Since Avalanche provides logging features, I would like to have minimal support for:

- Multiple runs: the user gives the experiment a name (e.g. core50_replay) and the logging tools automatically save each run in a separate folder (core50_replay/run0, core50_replay/run1, ...).
- Run comparison: a report in core50_replay where I can quickly compare the different runs. As a starting point the report can be really basic, like a table where each row has the name of a run (run0) and its main metric value (stream accuracy); a minimal sketch of both ideas follows below.

I think W&B has everything we need to provide minimal support for these use cases.
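To make the folder convention concrete, here is a minimal sketch using only the standard library; `next_run_dir`, `comparison_table`, and the `metrics.json` file layout are hypothetical, not part of Avalanche's API:

```python
import csv
import json
from pathlib import Path


def next_run_dir(experiment: str, root: str = ".") -> Path:
    """Create and return the next free run folder, e.g. core50_replay/run0."""
    exp_dir = Path(root) / experiment
    exp_dir.mkdir(parents=True, exist_ok=True)
    run_id = sum(1 for p in exp_dir.iterdir()
                 if p.is_dir() and p.name.startswith("run"))
    run_dir = exp_dir / f"run{run_id}"
    run_dir.mkdir()
    return run_dir


def comparison_table(experiment: str, metric: str = "stream_accuracy",
                     root: str = ".") -> Path:
    """Write a basic report: one row per run with its main metric value."""
    exp_dir = Path(root) / experiment
    report = exp_dir / "report.csv"
    with open(report, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["run", metric])
        runs = sorted(exp_dir.glob("run*"), key=lambda p: int(p.name[3:]))
        for run_dir in runs:
            # Each run is assumed to dump its final metrics as JSON.
            metrics = json.loads((run_dir / "metrics.json").read_text())
            writer.writerow([run_dir.name, metrics[metric]])
    return report
```

Each run would write its final metrics to metrics.json inside the folder returned by next_run_dir; calling comparison_table("core50_replay") then produces the per-run table described above.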
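Since W&B already supports grouped runs, the same workflow could be expressed through its existing API; the project name and logged value below are placeholders:

```python
import wandb

for i in range(3):  # e.g. three runs of the same experiment
    run = wandb.init(
        project="avalanche-experiments",  # placeholder project name
        group="core50_replay",            # groups the runs in the W&B UI
        name=f"run{i}",
    )
    # ... train with Avalanche, then log the main metric ...
    wandb.log({"stream_accuracy": 0.0})   # placeholder value
    run.finish()
```

Grouped runs show up side by side in the W&B workspace, which already gives a basic per-run comparison view for free.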
Replies: 1 comment

- Yes, I agree. This should not be difficult. I have implemented similar functionality in my projects using Avalanche, and it is quite straightforward.