
Source code and experiments for the paper: "Dark Corner on Skin Lesion Image Dataset: Does it matter?"

mmu-dermatology-research/dark_corner_artifact_removal


Dark Corner Artifact Removal

This repository contains all code and experiments created for removing dark corner artifacts from ISIC (and other) skin lesion images.

Please read the DISCLAIMER before using any of the methods or code from this repository.

If you use any part of the DCA masking/removal process from this research project, please consider citing the following paper:

@InProceedings{Pewton_2022_CVPR, 
 author = {Pewton, Samuel William and Yap, Moi Hoon},
 title = {Dark Corner on Skin Lesion Image Dataset: Does It Matter?},
 booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
 month = {June},
 year = {2022},
 pages = {4831-4839}
}

The main dataset used in this research is the result of the duplicate removal process detailed in the isic_duplicate_removal_strategy repository (see Project Pre-requisite Setup below).

If you use this dataset or any of the associated methods, please consider citing the following paper:

@article{cassidy2021isic,
 title   = {Analysis of the ISIC Image Datasets: Usage, Benchmarks and Recommendations},
 author  = {Bill Cassidy and Connah Kendrick and Andrzej Brodzicki and Joanna Jaworek-Korjakowska and Moi Hoon Yap},
 journal = {Medical Image Analysis},
 year    = {2021},
 issn    = {1361-8415},
 doi     = {10.1016/j.media.2021.102305},
 url     = {https://www.sciencedirect.com/science/article/pii/S1361841521003509}
} 

File Structure

** = must be created by the user

Dark_Corner_Artifact_Removal
├─ Data
│   ├─ Annotations
│   ├─ DCA_Masks
│   │   ├─ train
│   │   │   ├─ mel
│   │   │   └─ oth
│   │   └─ val
│   │       ├─ mel
│   │       └─ oth
│   ├─ Dermofit**
│   ├─ Metrics_Dermofit
│   │   ├─ generated_metrics
│   │   ├─ input**
│   │   │   ├─ gt**
│   │   │   ├─ large**
│   │   │   ├─ medium**
│   │   │   ├─ oth**
│   │   │   └─ small**
│   │   └─ output**
│   │       ├─ large**
│   │       ├─ medium**
│   │       ├─ oth**
│   │       └─ small**
│   ├─ train_balanced_224x224
│   │   ├─ train
│   │   │   ├─ mel
│   │   │   └─ oth
│   │   └─ val
│   │       ├─ mel
│   │       └─ oth
│   ├─ train_balanced_224x224_inpainted_ns
│   │   ├─ train
│   │   │   ├─ mel
│   │   │   └─ oth
│   │   └─ val
│   │       ├─ mel
│   │       └─ oth
│   └─ train_balanced_224x224_inpainted_telea
│       ├─ train
│       │   ├─ mel
│       │   └─ oth
│       └─ val
│           ├─ mel
│           └─ oth
├─ Models
│   ├─ Baseline
│   │   └─ .. all model experiments ..
│   ├─ Inpaint_NS
│   │   └─ .. all model experiments ..
│   └─ Inpaint_Telea
│       └─ .. all model experiments ..
├─ Modules
└─ Notebooks
    ├─ 0 - Preliminary Experiments
    ├─ 1 - Dataset
    ├─ 2 - Dynamic Masking
    ├─ 3 - Image Modifications
    └─ 4 - Results

Project Pre-requisite Setup

  1. Generate the ISIC balanced dataset from https://github.com/mmu-dermatology-research/isic_duplicate_removal_strategy and save it inside the Data directory.
  2. Download EDSR_x4.pb from https://github.com/Saafke/EDSR_Tensorflow and save it inside the Models directory: /Models/EDSR_x4.pb
  3. Create a Dermofit directory inside the Data directory: Data/Dermofit
  4. Obtain the Dermofit image library (https://licensing.edinburgh-innovations.ed.ac.uk/product/dermofit-image-library) and extract it inside the Dermofit directory. It is split into many sub-folders (AK, ALLBCC, ALLDF, etc.); leave this structure as it is.
  5. Create the Metrics_Dermofit file structure as shown above.
  6. Modify the filepaths in /Modules/prepare_dermofit.py and run it (only required if using Dermofit).
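The directories marked ** in the tree above can be created in one go. A minimal sketch using only the Python standard library; adjust ROOT to match your checkout:

```python
import os

# Directories marked ** in the File Structure tree above.
ROOT = "Dark_Corner_Artifact_Removal"

REQUIRED_DIRS = [
    "Data/Dermofit",
    "Data/Metrics_Dermofit/input/gt",
    "Data/Metrics_Dermofit/input/large",
    "Data/Metrics_Dermofit/input/medium",
    "Data/Metrics_Dermofit/input/oth",
    "Data/Metrics_Dermofit/input/small",
    "Data/Metrics_Dermofit/output/large",
    "Data/Metrics_Dermofit/output/medium",
    "Data/Metrics_Dermofit/output/oth",
    "Data/Metrics_Dermofit/output/small",
]

for d in REQUIRED_DIRS:
    # exist_ok avoids errors when re-running the setup
    os.makedirs(os.path.join(ROOT, d), exist_ok=True)
```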

Requirements

This project requires the following installations:

Project Steps

Generate all DCA masks

Load /Notebooks/2 - Dynamic Masking/Mask All DCA Images.ipynb in Jupyter Notebook and run it.
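The masking notebook produces one binary DCA mask per image. The core idea, flagging near-black pixels so they can be masked out, can be sketched as follows (an illustrative approximation; the function name and threshold are hypothetical, not the repository's exact algorithm):

```python
import numpy as np

def naive_dca_mask(gray_img, threshold=30):
    """Illustrative dark-corner mask: flag near-black pixels.

    gray_img  : 2-D uint8 array (grayscale image)
    threshold : intensity below which a pixel counts as 'dark'
    Returns a uint8 mask (255 = dark corner candidate, 0 = keep).
    """
    return np.where(gray_img < threshold, 255, 0).astype(np.uint8)

# Toy example: a bright image with a dark top-left corner
img = np.full((8, 8), 200, dtype=np.uint8)
img[:3, :3] = 5  # simulated dark corner artifact
mask = naive_dca_mask(img)
```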

Modify All DCA Images

Load /Notebooks/3 - Image Modifications/Inpaint Dataset.ipynb in Jupyter Notebook and run cells as required. Running cells individually is recommended, as the removal process is time-consuming.

Generate Dermofit Image Metrics

Run /Modules/generate_dermofit_metrics.py
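generate_dermofit_metrics.py compares inpainted output against the ground-truth images in input/gt. Metrics of this kind, such as MSE and PSNR, can be sketched in plain NumPy (an illustration; the script's exact metric set may differ):

```python
import numpy as np

def mse(gt, pred):
    """Mean squared error between two uint8 images."""
    return np.mean((gt.astype(np.float64) - pred.astype(np.float64)) ** 2)

def psnr(gt, pred, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to ground truth."""
    err = mse(gt, pred)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / err)

gt = np.full((4, 4), 100, dtype=np.uint8)
pred = gt.copy()
pred[0, 0] = 110  # one pixel off by 10 -> MSE = 100/16 = 6.25
```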

Train Models

Modify and run /Modules/AbolatationStudy.py as required

Generate Model Performance Metrics

Modify and run /Modules/model_performance.py as required
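All of the metrics reported below (Acc, TPR, TNR, F1, precision) derive from binary confusion-matrix counts. A small self-contained sketch (the counts are illustrative, chosen to roughly reproduce the VGG16 baseline row; the repository's script may compute them differently, e.g. via scikit-learn):

```python
def binary_metrics(tp, tn, fp, fn):
    """Standard binary classification metrics from confusion-matrix counts."""
    acc = (tp + tn) / (tp + tn + fp + fn)
    tpr = tp / (tp + fn)        # sensitivity / recall
    tnr = tn / (tn + fp)        # specificity
    precision = tp / (tp + fp)
    f1 = 2 * precision * tpr / (precision + tpr)
    return {"acc": acc, "tpr": tpr, "tnr": tnr, "precision": precision, "f1": f1}

# Illustrative counts (200 samples) approximating the VGG16 baseline row
m = binary_metrics(tp=84, tn=73, fp=27, fn=16)
```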

GradCAM Heatmaps

Load /Notebooks/4 - Results/GradCAM Method Comparison.ipynb in Jupyter Notebook and run it.

Load /Notebooks/4 - Results/GradCAM-Inscribed DCAs.ipynb in Jupyter Notebook and run it.

Supplementary Material

Full results for the deep learning experiments:

Baseline Model Results:

| Model | Best Epoch | Acc | TPR | TNR | F1 | AUC | Micro-Avg Precision |
|---|---|---|---|---|---|---|---|
| VGG16 | 33 | 0.78 | 0.84 | 0.73 | 0.80 | 0.87 | 0.76 |
| VGG19 | 32 | 0.78 | 0.80 | 0.76 | 0.79 | 0.87 | 0.77 |
| Xception | 20 | 0.81 | 0.86 | 0.76 | 0.82 | 0.88 | 0.78 |
| ResNet50 | 18 | 0.79 | 0.85 | 0.74 | 0.80 | 0.87 | 0.77 |
| ResNet101 | 6 | 0.78 | 0.85 | 0.70 | 0.79 | 0.85 | 0.74 |
| ResNet152 | 19 | 0.79 | 0.84 | 0.74 | 0.80 | 0.87 | 0.76 |
| ResNet50V2 | 14 | 0.77 | 0.82 | 0.73 | 0.78 | 0.85 | 0.75 |
| ResNet101V2 | 41 | 0.79 | 0.79 | 0.78 | 0.79 | 0.87 | 0.78 |
| ResNet152V2 | 25 | 0.78 | 0.78 | 0.77 | 0.78 | 0.85 | 0.77 |
| InceptionV3 | 36 | 0.80 | 0.80 | 0.81 | 0.80 | 0.88 | 0.80 |
| InceptionResNetV2 | 20 | 0.82 | 0.83 | 0.80 | 0.82 | 0.89 | 0.81 |
| DenseNet121 | 5 | 0.76 | 0.84 | 0.67 | 0.78 | 0.82 | 0.72 |
| DenseNet169 | 36 | 0.80 | 0.87 | 0.72 | 0.81 | 0.88 | 0.76 |
| DenseNet201 | 17 | 0.79 | 0.87 | 0.70 | 0.80 | 0.86 | 0.74 |
| EfficientNetB0 | 28 | 0.78 | 0.87 | 0.69 | 0.80 | 0.87 | 0.74 |
| EfficientNetB1 | 19 | 0.77 | 0.86 | 0.68 | 0.79 | 0.85 | 0.73 |
| EfficientNetB3 | 13 | 0.75 | 0.88 | 0.63 | 0.78 | 0.82 | 0.70 |
| EfficientNetB4 | 46 | 0.78 | 0.85 | 0.71 | 0.79 | 0.86 | 0.74 |

Inpainting Results (Navier-Stokes-based method):

| Model | Best Epoch | Acc | TPR | TNR | F1 | AUC | Micro-Avg Precision |
|---|---|---|---|---|---|---|---|
| VGG16 | 49 | 0.79 | 0.85 | 0.72 | 0.80 | 0.87 | 0.75 |
| VGG19 | 34 | 0.78 | 0.84 | 0.72 | 0.79 | 0.86 | 0.75 |
| Xception | 19 | 0.80 | 0.83 | 0.78 | 0.81 | 0.88 | 0.79 |
| ResNet50 | 39 | 0.79 | 0.84 | 0.75 | 0.80 | 0.88 | 0.77 |
| ResNet101 | 33 | 0.79 | 0.87 | 0.71 | 0.81 | 0.87 | 0.75 |
| ResNet152 | 17 | 0.79 | 0.85 | 0.73 | 0.80 | 0.88 | 0.76 |
| ResNet50V2 | 20 | 0.79 | 0.81 | 0.76 | 0.79 | 0.87 | 0.77 |
| ResNet101V2 | 40 | 0.79 | 0.88 | 0.70 | 0.80 | 0.88 | 0.79 |
| ResNet152V2 | 23 | 0.78 | 0.80 | 0.75 | 0.78 | 0.86 | 0.76 |
| InceptionV3 | 22 | 0.79 | 0.80 | 0.77 | 0.79 | 0.87 | 0.78 |
| InceptionResNetV2 | 19 | 0.80 | 0.79 | 0.81 | 0.80 | 0.88 | 0.81 |
| DenseNet121 | 37 | 0.80 | 0.83 | 0.77 | 0.80 | 0.88 | 0.78 |
| DenseNet169 | 12 | 0.77 | 0.78 | 0.75 | 0.77 | 0.85 | 0.76 |
| DenseNet201 | 25 | 0.78 | 0.80 | 0.75 | 0.78 | 0.86 | 0.76 |
| EfficientNetB0 | 20 | 0.77 | 0.88 | 0.66 | 0.79 | 0.86 | 0.72 |
| EfficientNetB1 | 13 | 0.76 | 0.78 | 0.75 | 0.77 | 0.83 | 0.75 |
| EfficientNetB3 | 28 | 0.77 | 0.82 | 0.73 | 0.78 | 0.86 | 0.75 |
| EfficientNetB4 | 37 | 0.78 | 0.88 | 0.69 | 0.80 | 0.87 | 0.74 |

Inpainting Results (Telea-based method):

| Model | Best Epoch | Acc | TPR | TNR | F1 | AUC | Micro-Avg Precision |
|---|---|---|---|---|---|---|---|
| VGG16 | 54 | 0.79 | 0.82 | 0.75 | 0.79 | 0.87 | 0.77 |
| VGG19 | 10 | 0.71 | 0.78 | 0.64 | 0.73 | 0.78 | 0.68 |
| Xception | 10 | 0.79 | 0.84 | 0.75 | 0.80 | 0.88 | 0.77 |
| ResNet50 | 10 | 0.77 | 0.81 | 0.74 | 0.78 | 0.87 | 0.76 |
| ResNet101 | 33 | 0.80 | 0.80 | 0.79 | 0.80 | 0.88 | 0.79 |
| ResNet152 | 23 | 0.79 | 0.80 | 0.78 | 0.79 | 0.87 | 0.78 |
| ResNet50V2 | 23 | 0.78 | 0.76 | 0.81 | 0.78 | 0.87 | 0.80 |
| ResNet101V2 | 25 | 0.79 | 0.78 | 0.79 | 0.78 | 0.87 | 0.79 |
| ResNet152V2 | 29 | 0.79 | 0.83 | 0.75 | 0.80 | 0.87 | 0.77 |
| InceptionV3 | 18 | 0.79 | 0.81 | 0.76 | 0.79 | 0.86 | 0.77 |
| InceptionResNetV2 | 11 | 0.79 | 0.88 | 0.69 | 0.81 | 0.88 | 0.74 |
| DenseNet121 | 61 | 0.80 | 0.80 | 0.80 | 0.80 | 0.88 | 0.80 |
| DenseNet169 | 18 | 0.78 | 0.75 | 0.80 | 0.77 | 0.87 | 0.79 |
| DenseNet201 | 38 | 0.79 | 0.84 | 0.73 | 0.80 | 0.87 | 0.76 |
| EfficientNetB0 | 18 | 0.78 | 0.85 | 0.72 | 0.80 | 0.87 | 0.75 |
| EfficientNetB1 | 51 | 0.78 | 0.86 | 0.79 | 0.78 | 0.87 | 0.79 |
| EfficientNetB3 | 49 | 0.79 | 0.79 | 0.78 | 0.79 | 0.87 | 0.78 |
| EfficientNetB4 | 10 | 0.75 | 0.86 | 0.64 | 0.77 | 0.82 | 0.71 |

References

@article{cassidy2021isic,
 title   = {Analysis of the ISIC Image Datasets: Usage, Benchmarks and Recommendations},
 author  = {Bill Cassidy and Connah Kendrick and Andrzej Brodzicki and Joanna Jaworek-Korjakowska and Moi Hoon Yap},
 journal = {Medical Image Analysis},
 year    = {2021},
 issn    = {1361-8415},
 doi     = {10.1016/j.media.2021.102305},
 url     = {https://www.sciencedirect.com/science/article/pii/S1361841521003509}
} 

@misc{rosebrock_2020, 
 title   = {Grad-cam: Visualize class activation maps with Keras, tensorflow, and Deep Learning}, 
 url     = {https://pyimagesearch.com/2020/03/09/grad-cam-visualize-class-activation-maps-with-keras-tensorflow-and-deep-learning/}, 
 journal = {PyImageSearch}, 
 author  = {Rosebrock, Adrian}, 
 year    = {2020}, 
 month   = {3},
 note    = {[Accessed: 10-03-2022]}
} 

@article{scikit-image,
 title   = {scikit-image: image processing in {P}ython},
 author  = {van der Walt, {S}t\'efan and {S}ch\"onberger, {J}ohannes {L}. and
           {Nunez-Iglesias}, {J}uan and {B}oulogne, {F}ran\c{c}ois and {W}arner,
           {J}oshua {D}. and {Y}ager, {N}eil and {G}ouillart, {E}mmanuelle and
           {Y}u, {T}ony and the scikit-image contributors},
 year    = {2014},
 month   = {6},
 keywords = {Image processing, Reproducible research, Education,
             Visualization, Open source, Python, Scientific programming},
 volume  = {2},
 pages   = {e453},
 journal = {PeerJ},
 issn    = {2167-8359},
 url     = {https://doi.org/10.7717/peerj.453},
 doi     = {10.7717/peerj.453}
}

@article{scikit-learn,
 title   = {Scikit-learn: Machine Learning in {P}ython},
 author  = {Pedregosa, F. and Varoquaux, G. and Gramfort, A. and Michel, V.
         and Thirion, B. and Grisel, O. and Blondel, M. and Prettenhofer, P.
         and Weiss, R. and Dubourg, V. and Vanderplas, J. and Passos, A. and
         Cournapeau, D. and Brucher, M. and Perrot, M. and Duchesnay, E.},
 journal = {Journal of Machine Learning Research},
 volume  = {12},
 pages   = {2825--2830},
 year    = {2011}
}

@inproceedings{lim2017enhanced,
  title  = {Enhanced deep residual networks for single image super-resolution},
  author = {Lim, Bee and Son, Sanghyun and Kim, Heewon and Nah, Seungjun and Mu Lee, Kyoung},
  booktitle= {Proceedings of the IEEE conference on computer vision and pattern recognition workshops},
  pages  = {136--144},
  year   = {2017}
}
