This project performs semantic segmentation of city images using a Generative Adversarial Network (GAN) based on the Pix2Pix model. The model was trained for 350 epochs on a dataset of 3,200 images.
## Introduction
The goal of this project is to segment cityscape images accurately into classes such as buildings, roads, and sky. We use Pix2Pix, a conditional GAN that is well suited to image-to-image translation tasks: the generator learns a mapping from an input photo to its segmentation map, while the discriminator learns to distinguish real photo/mask pairs from generated ones.
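The Pix2Pix generator is trained on two terms: an adversarial loss that rewards fooling the discriminator, plus an L1 reconstruction loss that keeps outputs close to the ground-truth masks. Below is a minimal NumPy sketch of that combined objective; the function name and the weight `lam=100.0` (the default weighting from the original Pix2Pix paper) are illustrative, not taken from this project's code.

```python
import numpy as np

def pix2pix_generator_loss(disc_fake_logits, fake_img, target_img, lam=100.0):
    """Sketch of the Pix2Pix generator objective: adversarial BCE + lambda * L1.

    disc_fake_logits: discriminator outputs (raw logits) on generated images.
    fake_img, target_img: generated and ground-truth images, same shape.
    """
    # Adversarial term: push the discriminator's logits on fakes toward the
    # "real" label (1). BCE-with-logits for target 1 is log(1 + exp(-x)),
    # computed stably here via logaddexp.
    adv = np.mean(np.logaddexp(0.0, -disc_fake_logits))
    # L1 reconstruction term: penalize per-pixel deviation from ground truth.
    l1 = np.mean(np.abs(fake_img - target_img))
    return adv + lam * l1
```

In practice both terms are computed by the deep-learning framework's built-in losses; this sketch only shows how the two pieces combine.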
The training dataset consists of 3,200 cityscape images, each with a corresponding ground-truth segmentation mask. The dataset is available on Kaggle as "Cityscapes Image Pairs".
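In the Kaggle "Cityscapes Image Pairs" dataset, each sample is stored as a single side-by-side image with the photo in one half and the segmentation mask in the other, so loading requires splitting each file down the middle. The sketch below assumes 256x512 pairs with the photo on the left; the helper names and the tanh-range normalization are illustrative assumptions, not this project's actual preprocessing code.

```python
import numpy as np

def split_pair(pair, width=256):
    """Split a side-by-side (photo | mask) array into its two halves.

    pair: H x 2W x C array. Assumes the photo occupies the left half and
    the mask the right half, as in the Kaggle "Cityscapes Image Pairs" layout.
    Returns (photo, mask), each H x W x C.
    """
    photo, mask = pair[:, :width], pair[:, width:]
    return photo, mask

def normalize(img):
    """Scale uint8 pixels from [0, 255] to [-1, 1], the range a tanh-output
    generator typically expects."""
    return img.astype(np.float32) / 127.5 - 1.0
```

Each training step would then feed the normalized photo to the generator and use the normalized mask as the L1 target.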