The field of remote sensing uses imagery captured from satellites, aircraft, and UAVs to observe and analyze the Earth. Many remote sensing applications in use today employ deep learning models that require large amounts of data, or data of specific types, and a lack of such data can hinder model performance. A generative adversarial network (GAN) is a deep learning model that can generate synthetic data and can therefore serve as a data augmentation method to improve the performance of data-reliant deep learning models. GANs are also capable of image-to-image translation, such as transforming a satellite image with cloud coverage into a cloud-free one. These capabilities have led to many new and exciting GAN applications. This project explores one such application of GANs to satellite imagery: generating synthetic images using the data augmentation ability of GANs. The tasks carried out were accessing data from an open-access source, preparing it to our requirements, using it to generate mask layers, and finally translating the masks into synthetic output images.
This project is a re-implementation of the paper: Mohandoss, Tharun; Kulkarni, Aditya; Northrup, Daniel; Mwebaze, Ernest; Alemohammad, Hamed (2020). "Generating Synthetic Multi-spectral Satellite Imagery from Sentinel-2".
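To make the final mask-to-image translation step concrete, below is a minimal PyTorch sketch of a pix2pix-style generator that maps a land-cover mask to a multi-spectral image. The class name, channel counts, band count, and layer sizes are illustrative assumptions for this write-up, not the exact architecture of the original paper or of this re-implementation.

```python
import torch
import torch.nn as nn


class MaskToImageGenerator(nn.Module):
    """Minimal encoder-decoder generator (hypothetical): translates a
    one-hot land-cover mask into a synthetic multi-spectral image."""

    def __init__(self, mask_channels=10, image_bands=4, features=64):
        super().__init__()
        # Downsample the mask into a compact feature representation.
        self.encoder = nn.Sequential(
            nn.Conv2d(mask_channels, features, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(features, features * 2, 4, stride=2, padding=1),
            nn.BatchNorm2d(features * 2),
            nn.LeakyReLU(0.2),
        )
        # Upsample back to the input resolution and emit image bands.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(features * 2, features, 4, stride=2, padding=1),
            nn.BatchNorm2d(features),
            nn.ReLU(),
            nn.ConvTranspose2d(features, image_bands, 4, stride=2, padding=1),
            nn.Tanh(),  # band values scaled to [-1, 1]
        )

    def forward(self, mask):
        return self.decoder(self.encoder(mask))


if __name__ == "__main__":
    # Illustrative shapes only: a batch of 10-class mask layers at 256x256
    # translated into 4-band (e.g. RGB + NIR) synthetic imagery.
    generator = MaskToImageGenerator()
    example_mask = torch.randn(2, 10, 256, 256)
    synthetic_image = generator(example_mask)
    print(synthetic_image.shape)  # torch.Size([2, 4, 256, 256])
```

In the full conditional-GAN setup this generator would be trained against a discriminator that scores (mask, image) pairs; the sketch above only shows the mask-to-image mapping itself.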