
ClaraVisio: Computational DeFogging via Image-to-Image Translation on a Free-Floating Fog Dataset

ClaraVisio (or Clara for short; Latin for "clear sight") builds on two previous projects, StereoFog by Anton Pollock and FogEye by David Moody, Laura Parke, and Chandler Welch, with the aim of collecting data and developing a framework for image-to-image translation (I2I) of foggy pictures. The project was conducted under the supervision of Prof. Rajesh Menon at the Laboratory for Optical Nanotechnologies at the University of Utah during the summer of 2024, and was made possible by the University of Utah Summer Program for Undergraduate Research (SPUR). This work differs from previous research in using a novel free-floating fog dataset and a transformer-based model.


Table of Contents

  • Description
  • Image Capturing
  • Model Training
  • Datasets
  • How to Use
  • API Reference
  • Getting Started
  • Results
  • Limitations
  • License
  • Citation
  • References
  • Appendix
  • Author Info


Description

Placeholder text for the project description.

Back to the top


Image Capturing

*Placeholder text for the image capturing process.*

The capture scripts are in the raspberr_pi folder, together with the SOP.

The capture script uses rclone to sync images with Google Drive; rclone needs to be configured once before the first sync.
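A minimal sketch of the rclone setup and sync, assuming a Google Drive remote named `gdrive` and placeholder paths:

```bash
# One-time interactive setup: create a Google Drive remote (named "gdrive" here)
rclone config

# Sync the local capture folder to Google Drive (both paths are placeholders)
rclone sync /home/pi/captures gdrive:ClaraVisio/captures --progress
```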

To SSH into your Raspberry Pi 5:
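For example, assuming the default Raspberry Pi OS hostname and the user `pi` (adjust both to match your setup):

```bash
ssh pi@raspberrypi.local
```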

To have the capture script run automatically at boot, edit the root crontab:

sudo crontab -e

and add this line at the bottom:

@reboot /path/to/python/script &

Save with CTRL+O and exit with CTRL+X (when using the default nano editor).
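A concrete entry might look like the sketch below; the interpreter and script paths are hypothetical and should be replaced with the real ones, and redirecting output to a log file makes boot-time failures easier to debug:

```bash
# Hypothetical example entry for the root crontab
@reboot /usr/bin/python3 /home/pi/claravisio/capture.py >> /home/pi/capture.log 2>&1 &
```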

Back to the top


Model Training

Placeholder text for the model training process.

Install conda from its website (Miniconda is sufficient). On the University of Utah CHPC cluster, the user-installed conda is then loaded through the module system:

module use $HOME/MyModules
module load miniconda3/latest

To run Jupyter notebooks, install the notebook package:

pip install notebook
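If the notebook runs on a remote machine (e.g. a CHPC node), one way to view it locally is to start the server without a browser and forward the port over SSH; the hostname below is a placeholder:

```bash
# On the remote machine
jupyter notebook --no-browser --port 8888

# On your local machine (hostname is a placeholder)
ssh -N -L 8888:localhost:8888 <user>@<cluster-login-node>
```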

Back to the top


Datasets

Placeholder text for the datasets.

  • StereoFog images: GDrive
  • FogEye images: MSOneDrive - only available to U of U students/staff, contact us for permission; needs cleaning (only download directories that contain raw files)
  • ClaraVisio images:

Place the downloaded archive inside a datasets/StereoFog directory and unzip it:

apt-get install unzip
unzip file.zip
python preprocess_stereofog_dataset.py --dataroot /scratch/general/nfs1/u6059624/StereoFog/stereofog_images

python preprocess+augment.py --dataroot /scratch/general/nfs1/u6059624/ClaraVisio/claravisio_images

python preprocess_clara.py --dataroot /scratch/general/nfs1/u6059624/ClaraVisio/clara_bmp


python png2bmp.py --dataroot /scratch/general/nfs1/u6059624/ClaraVisio/claravisio_images --output_name clara_bmp

python preprocess2.py --dataroot /scratch/general/nfs1/u6059624/StereoFog/stereofog_images_augmented --augment

The preprocessing script needs to be run again whenever a new train/validation/test split should be created.
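Assuming the preprocessing scripts produce the usual pix2pix-style layout (paired foggy/clear images split into train, val, and test folders; this is an assumption, adjust the path to your dataroot), the result can be sanity-checked with:

```bash
# Quick sanity check of a processed dataset (path is one of the examples above)
DATAROOT=/scratch/general/nfs1/u6059624/ClaraVisio/claravisio_images_augmented
ls "$DATAROOT"                  # expect train/ val/ test/ subfolders
ls "$DATAROOT"/train | wc -l    # number of paired training images
```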

Back to the top


How to Use

Installation

git clone https://github.com/amirzarandi/claravisio
cd claravisio

Installing a Python environment

Next, an appropriate Python environment needs to be created. All code was run on Python 3.9.7. For creating the environment, either conda or pyenv virtualenv can be used.


The environment can be created using conda with:

conda create --name claravisio python=3.9.7

Or using pyenv virtualenv with:

pyenv virtualenv 3.9.7 claravisio

Then activate the environment with:

conda activate claravisio

Or:

pyenv activate claravisio

Using pip, the required packages can then be installed. For conda environments, first make pip available with:

conda install pip

The packages are listed in requirements.txt and can be installed with:

pip install -r requirements.txt

In case you want to install them manually, the packages include:

  • numpy
  • torch
  • opencv-python
  • matplotlib
  • ...

Train

python train.py --dataroot /scratch/general/nfs1/u6059624/ClaraVisio/clara_augmented --name CLARA3_AUG --continue_train --model pix2pix --direction BtoA --gpu_ids 0 --n_epochs 25 --n_epochs_decay 15


python train.py --dataroot /scratch/general/nfs1/u6059624/ClaraVisio/clara_images_processed --name CLARA2 --model pix2pix --direction BtoA --gpu_ids 0 --n_epochs 25 --n_epochs_decay 15

python train.py --dataroot /scratch/general/nfs1/u6059624/ClaraVisio/claravisio_images_augmented --name CLARA_AUG --direction BtoA --model pix2pix --gpu_ids 0 --n_epochs 25 --n_epochs_decay 15 


python train.py --dataroot /scratch/general/nfs1/u6059624/StereoFog/stereofog_images_augmented --norm batch --netD n_layers --n_layers_D 2 --netG resnet_9blocks --gan_mode vanilla --ngf 128 --ndf 32 --lr_policy linear --init_type normal --name AL4 --model pix2pixMSSSIM --direction BtoA --gpu_ids 0 --n_epochs 25 --n_epochs_decay 15


python preprocess2.py --dataroot /scratch/general/nfs1/u6059624/ClaraVisio/claravisio_images_augmented --augment

python train.py --dataroot /scratch/general/nfs1/u6059624/ClaraVisio/claravisio_images_augmented --norm batch --netD n_layers --n_layers_D 2 --netG resnet_9blocks --gan_mode vanilla --ngf 128 --ndf 32 --lr_policy linear --init_type normal --name CLARA5 --model pix2pixMSSSIM --direction BtoA --gpu_ids 0 --n_epochs 35 --n_epochs_decay 25

python test.py --dataroot /scratch/general/nfs1/u6059624/ClaraVisio/claravisio_images_augmented --norm batch --netD n_layers --n_layers_D 2 --netG resnet_9blocks --ngf 128 --ndf 32 --init_type normal --name CLARA5 --model pix2pixMSSSIM --direction BtoA --gpu_ids 0
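Training takes a while, so on CHPC it is convenient to submit it as a SLURM batch job. The following is a minimal sketch assuming access to a GPU allocation; the account and partition names are placeholders, and the train.py flags simply mirror the CLARA5 command above:

```bash
#!/bin/bash
#SBATCH --job-name=clara_train
#SBATCH --account=your-account           # placeholder: your CHPC account
#SBATCH --partition=your-gpu-partition   # placeholder: a partition with GPUs
#SBATCH --gres=gpu:1
#SBATCH --time=12:00:00
#SBATCH --mem=32G

# Load the user-space conda module and activate the environment
module use $HOME/MyModules
module load miniconda3/latest
eval "$(conda shell.bash hook)"   # enable 'conda activate' in non-interactive shells
conda activate claravisio

# Same flags as the CLARA5 training run above
python train.py --dataroot /scratch/general/nfs1/u6059624/ClaraVisio/claravisio_images_augmented \
  --norm batch --netD n_layers --n_layers_D 2 --netG resnet_9blocks \
  --gan_mode vanilla --ngf 128 --ndf 32 --lr_policy linear --init_type normal \
  --name CLARA5 --model pix2pixMSSSIM --direction BtoA --gpu_ids 0 \
  --n_epochs 35 --n_epochs_decay 25
```

Submit the script with `sbatch` and monitor it with `squeue -u $USER`.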

Test

python test.py --dataroot /scratch/general/nfs1/u6059624/ClaraVisio/clara_augmented --direction BtoA --model pix2pix --name CLARA3_AUG --gpu_ids 0


python test.py --dataroot /scratch/general/nfs1/u6059624/ClaraVisio/clara_images_processed --direction BtoA --model pix2pix --name AL2 --gpu_ids 0

python plot_res.py --results_path results/CLARA5 --shuffle
python quantitative_evaluation_model_results.py --results_path results/CLARA5
python quantitative_evaluation_model_results.py --results_path results/CLARA3_AUG

python preprocess_clara.py --dataroot datasets/ClaraVisio

Metrics

python make_hdr_dirs.py --dataroot /scratch/general/nfs1/u6059624/FogEye/HDR

python plot_epoch_progress.py --model_name CLARA4 --checkpoints_path checkpoints/CLARA4

python quantitative_evaluation_model_results.py --results_path results/CLARA2

API Reference

Placeholder text for the API reference.

Back to the top


Getting Started

Placeholder text for getting started.

Back to the top


Results

Placeholder text for results.

Back to the top


Limitations

Placeholder text for limitations.

Back to the top


License

Placeholder text for license information.

Back to the top


Citation

Placeholder text for citation information.

Back to the top


References

Placeholder text for references.

Back to the top


Appendix

Placeholder text for the appendix.

Back to the top


Author Info

Placeholder text for author info.

Back to the top