Replies: 1 comment
-
I think it averages the output of the five models; this is mentioned in the documentation (https://github.com/MIC-DKFZ/nnUNet/blob/master/documentation/how_to_use_nnunet.md), under Model training - Overview: "a natural way of obtaining a good model ensemble (average the output of these 5 models for prediction) to boost performance."
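For intuition, here is a minimal sketch of what "averaging the output of the 5 models" could look like: the per-fold softmax probability maps are averaged and the final label is the argmax of the mean. The function name `ensemble_softmax` and the array shapes are illustrative assumptions, not nnU-Net's actual internals.

```python
import numpy as np

def ensemble_softmax(fold_probs):
    """Average per-fold softmax probabilities and take the argmax.

    fold_probs: list of arrays, one per fold, each shaped
    (num_classes, *spatial_dims). Shapes are hypothetical, for illustration only.
    """
    mean_probs = np.mean(np.stack(fold_probs, axis=0), axis=0)  # (num_classes, *spatial_dims)
    return np.argmax(mean_probs, axis=0)                        # final label map

# Dummy example: 5 folds, 3 classes, a 4x4 image
rng = np.random.default_rng(0)
probs = [rng.dirichlet(np.ones(3), size=(4, 4)).transpose(2, 0, 1) for _ in range(5)]
segmentation = ensemble_softmax(probs)
print(segmentation.shape)  # (4, 4)
```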
-
Hello,
I am trying to understand how the ensembling actually works in nnU-Net when only one configuration is used. What does the ensembling really do after the 5-fold cross-validation?
I haven't found the answer to my question in the documentation or the paper/supplementary information.
I have used only one configuration (2D), and I have the following inquiries:
1. How does each model (corresponding to each fold) vary? Does the model of each fold have a different architecture or different hyperparameters? If so, which architecture/hyperparameters are used when performing final inference with all 5 folds?
2. I tried running inference using 1 fold (the fold that gave the best metrics) and using all 5 folds, and the results on the test set were not identical. Therefore, I would like to know what happens during ensembling of the folds (keep in mind I'm only using 1 configuration): are the predictions of each model averaged, is a majority vote taken, or are the 5 architectures somehow combined?
3. Please correct me if I'm wrong: it is my understanding that the following happens during 5-fold CV using 1 configuration:
Thank you in advance; answering these questions will be of great help in better understanding how this tool works!