The dataset that I used comes from the Iranian researcher Mohammad Rahimzadeh. From the dataset that he shared, I took 3100 images: 500 COVID-19 infected images and 500 normal images for training, 950 COVID-19 images and 950 normal images for validating the results, and, after validation, 100 COVID-19 images and 100 normal images for testing.
The subset that I used is shared in this folder: https://drive.google.com/drive/folders/1moLjWmHHTq_sGn6HhNX278ldu5jtD_at?usp=sharing
The whole dataset can be seen here: https://github.com/mr7495/COVID-CTset
The raw images are 16-bit grayscale images in TIFF format, and normal monitors cannot display them clearly. According to Mohammad Rahimzadeh's instructions, the dataset must first be normalized by converting each image to float: divide every pixel value by the maximum pixel value of that image. After this normalization the images have 32-bit float pixel values and can be viewed on normal monitors.
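The normalization step above can be sketched as a small NumPy function. This is a minimal illustration, not the author's exact code; the function name `normalize_ct` and the synthetic example array are my own, and loading a real slice would additionally require a TIFF reader such as the `tifffile` package.

```python
import numpy as np

def normalize_ct(image: np.ndarray) -> np.ndarray:
    """Normalize a 16-bit grayscale CT slice to 32-bit float values in [0, 1]
    by dividing every pixel by the image's own maximum pixel value."""
    return image.astype(np.float32) / image.max()

# Synthetic 16-bit image standing in for a raw TIFF slice
# (a real slice would be loaded with e.g. tifffile.imread(path)).
raw = np.array([[0, 1024], [20000, 40000]], dtype=np.uint16)
norm = normalize_ct(raw)
# norm is float32, scaled so its maximum value is 1.0
```

Because each image is divided by its own maximum (rather than the fixed 16-bit limit of 65535), the full [0, 1] range is used for every slice, which is what makes the result visible on ordinary monitors.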