All the following commands should be run under the `promptmr_examples/cmrxrecon` folder.
Please refer to https://cmrxrecon.github.io/ for more information.
We provide Google Drive links for downloading our models trained on the CMRxRecon Training Set (120 cases). You can use the respective IDs to retrieve the corresponding performance results of these models on the Validation Leaderboard. (As of October 31, 2023, PromptMR-16cascades delivers the best performance on the Leaderboard.)
| Model | # of Params | Download Link | Cine Leaderboard ID | Mapping Leaderboard ID |
|---|---|---|---|---|
| PromptMR-12cascades | 82M | Link | 9741084 | 9741082 |
| PromptMR-16cascades | 108M | Link | 9741143 | 9741142 |
Note: The leaderboard evaluates only the small central crop area within three slices for each of the 60 validation cases, offering a limited representation of the overall reconstruction results.
You can also directly use the following commands to download the models to the `pretrained_models` folder:
```bash
mkdir pretrained_models
cd pretrained_models
gdown 1YWMvi1HhC2dC2_hmGJAsfBlOGvZAvYvI
gdown 1YXB9M9pJ7JY4ld0D3l5a2hAU0UcuyJhN
```
The following commands reproduce the results of the pretrained PromptMR models on the CMRxRecon Validation Set. Modify `--input` and `--output` to your own path to the downloaded CMRxRecon dataset and your output reconstruction path. `--center_crop` crops the central 2 slices, the first 3 time frames (for cine) or all contrasts (for mapping), and the central 1/6 area of the original images, following the submission tutorial; if you want to save the full reconstruction volume, remove `--center_crop` from the command. `--num_cascades` is the number of cascades used in the unrolled model. `--task` selects which type of data to run inference on: `Cine`, `Mapping`, or `Both`. Reduce `--batch_size` if you encounter an out-of-GPU-memory error.
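As a rough sketch of what the center crop described above does (a hypothetical standalone illustration, not the script's actual implementation; the "central 1/6 area" is assumed here to mean height/3 by width/2):

```python
import numpy as np

def center_crop_volume(vol):
    """Crop a cine reconstruction as described for --center_crop:
    first 3 time frames, central 2 slices, and the central 1/6 of the
    in-plane area (assumed to be height/3 x width/2)."""
    vol = vol[:3]                                # first 3 time frames
    s = vol.shape[1]
    s0 = (s - 2) // 2
    vol = vol[:, s0:s0 + 2]                      # central 2 slices
    h, w = vol.shape[2:]
    ch, cw = h // 3, w // 2                      # 1/6 of the in-plane area
    h0, w0 = (h - ch) // 2, (w - cw) // 2
    return vol[:, :, h0:h0 + ch, w0:w0 + cw]

print(center_crop_volume(np.zeros((12, 10, 246, 512))).shape)  # (3, 2, 82, 256)
```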
```bash
## use pretrained promptmr-12cascades model
CUDA_VISIBLE_DEVICES=0 python run_pretrained_promptmr_cmrxrecon_inference_from_matlab_data.py \
    --input /research/cbim/datasets/fastMRI/CMRxRecon/MICCAIChallenge2023/ChallengeData/MultiCoil \
    --output /research/cbim/vast/bx64/PycharmProjects/cmr_challenge_results/reproduce_promptmr_12_cascades_cmrxrecon \
    --model_path pretrained_models/promptmr-12cascades-epoch=11-step=258576.ckpt \
    --evaluate_set ValidationSet \
    --task Both \
    --batch_size 4 \
    --num_works 2 \
    --center_crop \
    --num_cascades 12
```
```bash
## use pretrained promptmr-16cascades model
CUDA_VISIBLE_DEVICES=0 python run_pretrained_promptmr_cmrxrecon_inference_from_matlab_data.py \
    --input /research/cbim/datasets/fastMRI/CMRxRecon/MICCAIChallenge2023/ChallengeData/MultiCoil \
    --output /research/cbim/vast/bx64/PycharmProjects/cmr_challenge_results/reproduce_promptmr_16cascades_cmrxrecon \
    --model_path pretrained_models/promptmr-16cascades-epoch=11-step=258576.ckpt \
    --evaluate_set ValidationSet \
    --task Both \
    --batch_size 4 \
    --num_works 2 \
    --center_crop \
    --num_cascades 16
```
For efficient data access during training, it is advisable to convert the original MATLAB data files in TrainingSet to fastMRI-style h5py format. Modify `--data_path` in the following command to your own path to the downloaded CMRxRecon dataset. The converted h5py files will be saved in the `--h5py_folder` folder. After the conversion, the data will be automatically split into 100 training and 20 validation cases.
```bash
python prepare_h5py_dataset_for_training.py \
    --data_path /research/cbim/datasets/fastMRI/CMRxRecon/MICCAIChallenge2023/ChallengeData/MultiCoil \
    --h5py_folder h5_FullSample
```
The h5py files will be saved in the following structure:
```
/research/cbim/datasets/fastMRI/CMRxRecon/MICCAIChallenge2023/ChallengeData/MultiCoil
│ ├── Cine
│ │ ├── TrainingSet
│ │ │ ├── FullSample
│ │ │ ├── h5_FullSample
│ │ │ │ ├── train
│ │ │ │ │ ├── P001
│ │ │ │ │ │ ├── cine_lax.h5
│ │ │ │ │ │ ├── cine_sax.h5
│ │ │ │ │ ├── ...
│ │ │ │ │ ├── P100
│ │ │ │ ├── val
│ │ │ │ │ ├── P101
│ │ │ │ │ │ ├── cine_lax.h5
│ │ │ │ │ │ ├── cine_sax.h5
│ │ │ │ │ ├── ...
│ │ │ │ │ ├── P120
│ │ ├── ValidationSet
│ ├── Mapping
│ │ ├── TrainingSet
│ │ │ ├── FullSample
│ │ │ ├── h5_FullSample
│ │ │ │ ├── train
│ │ │ │ │ ├── P001
│ │ │ │ │ │ ├── T1map.h5
│ │ │ │ │ │ ├── T2map.h5
│ │ │ │ │ ├── ...
│ │ │ │ │ ├── P100
│ │ │ │ ├── val
│ │ │ │ │ ├── P101
│ │ │ │ │ │ ├── T1map.h5
│ │ │ │ │ │ ├── T2map.h5
│ │ │ │ │ ├── ...
│ │ │ │ │ ├── P120
│ │ ├── ValidationSet
```
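A converted file can be inspected with `h5py`; a minimal sketch (the path is illustrative, and the dataset keys depend on the conversion script, so check them on your own files):

```python
import h5py

def inspect_h5(path):
    """Return {dataset name: (shape, dtype)} for a converted h5py file."""
    with h5py.File(path, "r") as f:
        return {name: (dset.shape, str(dset.dtype)) for name, dset in f.items()}

# Example (path is illustrative):
# inspect_h5("h5_FullSample/train/P001/cine_sax.h5")
```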
The following commands train the PromptMR model on both Cine and Mapping multicoil data. Modify `--data_path` to your own path to the downloaded CMRxRecon dataset; `--h5py_folder` is the folder name of the converted h5py files. `--combine_train_val` combines the training (100) and validation (20) cases for training. `--mask_type` specifies the type of undersampling mask, which should be `equispaced_fixed` for the CMRxRecon dataset (an equispaced mask with a fixed block of 24 low-frequency lines). `--center_numbers` is the number of low-frequency lines in the mask, and `--accelerations` is the list of acceleration factors used in training. `--num_cascades` is the number of cascades used in the unrolled model. The checkpoints and log files will be saved in the `--exp_name` folder. `--num_gpus` is the number of GPUs used for training. `--use_checkpoint` enables a technique that trades compute for memory, which is useful when GPU memory is limited.
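For intuition, the `equispaced_fixed` sampling pattern can be sketched as a 1D phase-encode mask (a simplified standalone illustration under stated assumptions, not the repository's actual mask module, whose line offsets may differ):

```python
import numpy as np

def equispaced_fixed_mask(num_cols, center_lines=24, acceleration=4):
    """Keep a fixed block of `center_lines` central low-frequency
    phase-encode lines, plus every `acceleration`-th line elsewhere."""
    mask = np.zeros(num_cols, dtype=bool)
    mask[::acceleration] = True                  # equispaced lines
    pad = (num_cols - center_lines) // 2
    mask[pad:pad + center_lines] = True          # fixed low-frequency center
    return mask

m = equispaced_fixed_mask(200, center_lines=24, acceleration=8)
print(int(m.sum()))  # 46 sampled lines out of 200
```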
```bash
## train promptmr-12cascades model
CUDA_VISIBLE_DEVICES=0,1 python train_promptmr_cmrxrecon.py \
    --center_numbers 24 \
    --accelerations 4 8 10 \
    --challenge multicoil \
    --mask_type equispaced_fixed \
    --data_path /research/cbim/datasets/fastMRI/CMRxRecon/MICCAIChallenge2023/ChallengeData/MultiCoil \
    --h5py_folder h5_FullSample \
    --combine_train_val \
    --exp_name promptmr_trainval \
    --num_cascades 12 \
    --num_gpus 2 \
    --use_checkpoint
```
```bash
## train promptmr-16cascades model
CUDA_VISIBLE_DEVICES=0,1 python train_promptmr_cmrxrecon.py \
    --center_numbers 24 \
    --accelerations 4 8 10 \
    --challenge multicoil \
    --mask_type equispaced_fixed \
    --data_path /research/cbim/datasets/fastMRI/CMRxRecon/MICCAIChallenge2023/ChallengeData/MultiCoil \
    --h5py_folder h5_FullSample \
    --combine_train_val \
    --exp_name promptmr_16_cascades_trainval \
    --num_cascades 16 \
    --num_gpus 2 \
    --use_checkpoint
```
Below are the results as reported in the paper, in which PromptMR-12cascades was trained on 100 training cases and evaluated on 20 validation cases. The released code can achieve better results than those presented in the paper. We have not released the code for the stage 2 refinement described in the paper, as it offers only a marginal SSIM improvement over our PromptMR model.