Replies: 4 comments
-
Hi @maximek3, thank you for your interest in our work. We have not released the pre-trained weights for TransChex. However, we have created an intuitive tutorial that details all of the steps, including data pre-processing. In addition, the tutorial can be trained on moderately-sized GPUs (<11 GB of VRAM) in a very reasonable amount of time. I recommend checking it out. Thanks
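For reference, here is a minimal sketch of how one might instantiate the Transchex network from MONAI and load weights produced by one's own training run. The constructor values and the checkpoint path `my_transchex_checkpoint.pt` below are illustrative assumptions and may differ from the tutorial's exact settings or your MONAI version:

```python
# Minimal sketch: instantiate MONAI's Transchex and load self-trained weights.
# Constructor values and the checkpoint path are illustrative only and may
# differ from the tutorial's exact configuration or your MONAI version.
import torch
from monai.networks.nets import Transchex

model = Transchex(
    in_channels=3,            # RGB chest X-ray images
    img_size=(256, 256),      # input resolution used at training time
    patch_size=(32, 32),      # vision-transformer patch size
    num_classes=2,            # e.g. binary classification
    num_language_layers=2,
    num_vision_layers=2,
    num_mixed_layers=2,
)

# Load a checkpoint produced by your own training run; no official
# pre-trained TransChex weights are distributed.
state_dict = torch.load("my_transchex_checkpoint.pt", map_location="cpu")
model.load_state_dict(state_dict)
model.eval()
```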
-
Thanks for the pointer. I had a look at the tutorial already and it's very useful. I just wondered whether the TransChex model was also pre-trained on a larger dataset. If so, do you plan to release those weights eventually? Thanks!
-
Thanks for the comment. Yes, we have in fact trained it on larger datasets (e.g. MIMIC-III) and were able to validate its effectiveness. However, the licenses of these datasets prohibit us from releasing those pre-trained weights. As such, we chose to build the above tutorial around a dataset whose license permits this use and which can be trained on in a reasonable amount of time. Thanks,
-
Hey there. We are unable to find the paper "Hatamizadeh et al., TransCheX: Self-Supervised Pretraining of Vision-Language Transformers for Chest X-ray Analysis" on the web. Could you provide a link to the paper so that we can better understand the model?
-
Hi, are there any pre-trained weights available for TransChex? Thanks!