This is an end-to-end framework for accurate and robust quantification of left ventricle indices, including the cavity and myocardium areas, six regional wall thicknesses, and three directional dimensions.
The proposed method first decomposes a CMR image into directional frequency bands via the Steerable Pyramid Transform. The deep representation of each direction is then extracted separately by a CNN model, and an LSTM module captures the temporal dynamics across the cardiac sequence. Finally, we exploit the multidirectional relationships among features, indices, and directional subbands to optimize the quantification system.
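To illustrate the first stage, below is a minimal sketch of a directional frequency-band decomposition using angular masks in the Fourier domain. This is a simplified stand-in for the steerable pyramid's orientation decomposition, not the repository's actual implementation; the function name and the choice of hard angular masks are assumptions for illustration only.

```python
import numpy as np

def directional_subbands(image, n_dirs=3):
    """Split a 2-D image into n_dirs directional frequency bands.

    Each band keeps only the Fourier coefficients whose orientation
    falls in one angular sector of [0, pi). With these hard masks the
    bands partition the spectrum, so they sum back to the input.
    (A real steerable pyramid uses smooth, overlapping angular filters.)
    """
    H, W = image.shape
    F = np.fft.fftshift(np.fft.fft2(image))
    # Orientation of every frequency coordinate, folded into [0, pi)
    yy, xx = np.mgrid[-(H // 2):H - H // 2, -(W // 2):W - W // 2]
    theta = np.mod(np.arctan2(yy, xx), np.pi)
    bands = []
    for d in range(n_dirs):
        lo, hi = d * np.pi / n_dirs, (d + 1) * np.pi / n_dirs
        mask = (theta >= lo) & (theta < hi)
        band = np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
        bands.append(band)
    return bands
```

In the full pipeline, each such directional subband would be fed to its own CNN branch before the per-frame features enter the LSTM.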
Create a conda environment and install dependencies:
cd RobustLV
conda create -n RobustLV python=3.7
conda activate RobustLV
# Install versions of torch and torchvision that match your CUDA setup
conda install pytorch torchvision cudatoolkit
If you want to test the Mamba module, please refer to VMamba to set up the environment.
The dataset we used is from the MICCAI 2018/2019 Left Ventricle Full Quantification Challenge, an open-source dataset available on Kaggle. Place the dataset under the './data/' path.
Train the model with the command below:
CUDA_VISIBLE_DEVICES=0 python train.py
Modify the corresponding hyperparameters in the 'config.py' file to configure experiments.
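As a rough guide, a configuration file for this kind of training setup often exposes options like the ones sketched below. All names and values here are hypothetical placeholders; consult the actual 'config.py' for the options it defines.

```python
# Hypothetical sketch of the kind of options a config.py may expose.
# None of these names or defaults are taken from the repository.
class Config:
    data_root = './data/'   # dataset location (see the dataset section)
    batch_size = 16         # training batch size
    lr = 1e-4               # learning rate
    epochs = 100            # number of training epochs
    n_directions = 3        # directional subbands fed to the CNN branches
    gpu = 0                 # CUDA device index
```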