The algorithm compresses an image by taking diagonal averages of pixels and packing every group of four pixels into a single 32-bit word, which reduces the size significantly. Most images tested lose no more than 2%. A decompression algorithm is implemented as well. Have fun trying it with the display over `ssh -X`; make sure you install XQuartz :)
- input: image file
- output: array of RGB pixel values
- no data loss
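A minimal sketch of the output shape, assuming a hypothetical `Rgb` struct and allocator (the actual struct and reader names may differ):

```c
#include <stdlib.h>

/* Hypothetical struct: one entry per pixel read from the image file. */
struct Rgb { unsigned r, g, b; };

/* Allocate the output array for a width x height image; the reader
 * would fill it from the file, losing no data at this stage. */
struct Rgb *alloc_rgb_array(size_t width, size_t height)
{
        return malloc(width * height * sizeof(struct Rgb));
}
```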
- input: RGB integers as an array of structs
- output: RGB floats, i.e. each integer's ratio to the maximum RGB value of the picture
- no data loss
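A sketch of this scaling step, assuming the image's maximum RGB value is available as `maxval` (the names here are assumptions):

```c
/* Convert an integer channel to its ratio against the image's maximum
 * RGB value, giving a float in [0.0, 1.0]. */
static float scale_channel(unsigned value, unsigned maxval)
{
        return (float)value / (float)maxval;
}
```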
- input: RGB floats
- output: color space structs Y, Pb, Pr i.e. in floating point form
- Formula for output: y = 0.299*r + 0.587*g + 0.114*b; pb = -0.168736*r - 0.331264*g + 0.5*b; pr = 0.5*r - 0.418688*g - 0.081312*b
- data loss: multiplying floats produces trailing decimals that are eventually lost beyond roughly the seventh significant digit of single-precision floats
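The formula above can be sketched directly; the struct and function names are assumptions:

```c
/* Component video color space for one pixel. */
struct Ypbpr { float y, pb, pr; };

/* Convert one pixel from scaled RGB floats to Y/Pb/Pr using the
 * coefficients given in the formula above. */
static struct Ypbpr rgb_to_ypbpr(float r, float g, float b)
{
        struct Ypbpr cs;
        cs.y  =  0.299f * r    + 0.587f * g    + 0.114f * b;
        cs.pb = -0.168736f * r - 0.331264f * g + 0.5f * b;
        cs.pr =  0.5f * r      - 0.418688f * g - 0.081312f * b;
        return cs;
}
```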
- input: color space structs
- output: a struct containing Pb, Pr, a, b, c, and d -> 1/4 the number of the color space structs
- Formula for the output: a = (Y4 + Y3 + Y2 + Y1)/4.0; b = (Y4 + Y3 - Y2 - Y1)/4.0; c = (Y4 - Y3 + Y2 - Y1)/4.0; d = (Y4 - Y3 - Y2 + Y1)/4.0
- data loss: averaging every 4 pixels into 1 yields minor data loss
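The averaging formulas can be sketched as follows, where Y1..Y4 are the luma values of one 2x2 pixel block (the struct and parameter names are assumptions; pb and pr here are the block's averaged chroma values):

```c
/* One compressed block: four luma values reduced to a, b, c, d,
 * plus the averaged chroma of the block. */
struct Block { float a, b, c, d, pb, pr; };

static struct Block transform(float y1, float y2, float y3, float y4,
                              float pb, float pr)
{
        struct Block blk;
        blk.a = (y4 + y3 + y2 + y1) / 4.0f;
        blk.b = (y4 + y3 - y2 - y1) / 4.0f;
        blk.c = (y4 - y3 + y2 - y1) / 4.0f;
        blk.d = (y4 - y3 - y2 + y1) / 4.0f;
        blk.pb = pb;
        blk.pr = pr;
        return blk;
}
```

For a uniform block, b, c, and d come out to zero and a carries the shared luma.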
- input: b, c, and d in float form
- output: 5-bit scaled signed integers
- data loss: data is lost as values are clamped to the range [-0.3, 0.3] before quantization
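A sketch of the quantization step. The scale factor of 50 is an assumption: 0.3 * 50 = 15, the largest magnitude a 5-bit two's-complement field can represent on the positive side.

```c
/* Clamp a coefficient to [-0.3, 0.3], then scale and round it into a
 * signed integer in [-15, 15] that fits a 5-bit field. */
static int quantize_coeff(float x)
{
        if (x >  0.3f) x =  0.3f;
        if (x < -0.3f) x = -0.3f;
        /* round to nearest: truncation toward zero after offsetting */
        return (int)(x * 50.0f + (x >= 0.0f ? 0.5f : -0.5f));
}
```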
- input: size-reduced uarray with a 32-bit representation for every 4 pixels (9-a, 5-b, 5-c, 5-d, 4-pb and 4-pr)
- output: one 32-bit word that holds all the data for a given 4-pixel block
- data loss: no data loss
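A sketch of the packing; the field widths are assumptions chosen to sum to 32 (9 + 3*5 + 2*4), and the exact shift positions may differ in the real code:

```c
#include <stdint.h>

/* Pack one 4-pixel block into a single 32-bit word. */
static uint32_t pack_word(unsigned a, int b, int c, int d,
                          unsigned pb, unsigned pr)
{
        uint32_t word = 0;
        word |= (uint32_t)(a  & 0x1FF) << 23;   /* bits 23..31 */
        word |= (uint32_t)(b  & 0x1F)  << 18;   /* bits 18..22 */
        word |= (uint32_t)(c  & 0x1F)  << 13;   /* bits 13..17 */
        word |= (uint32_t)(d  & 0x1F)  << 8;    /* bits  8..12 */
        word |= (uint32_t)(pb & 0xF)   << 4;    /* bits  4..7  */
        word |= (uint32_t)(pr & 0xF);           /* bits  0..3  */
        return word;
}

/* Extracting fields back out is lossless. */
static unsigned unpack_a(uint32_t word) { return (word >> 23) & 0x1FF; }

static int unpack_b(uint32_t word)
{
        int v = (int)((word >> 18) & 0x1F);
        return v >= 16 ? v - 32 : v;            /* sign-extend 5 bits */
}
```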
Decompression works by reversing every step; it is implemented in the same file as compression.
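For example, solving the four averaging equations for Y1..Y4 gives the inverse used during decompression, sketched here (names are assumptions):

```c
/* Invert the 2x2 averaging step: recover the four luma values of a
 * block from a, b, c, d. Derived by solving the forward equations. */
static void inverse_transform(float a, float b, float c, float d,
                              float y[4])
{
        y[0] = a - b - c + d;   /* Y1 */
        y[1] = a - b + c - d;   /* Y2 */
        y[2] = a + b - c - d;   /* Y3 */
        y[3] = a + b + c + d;   /* Y4 */
}
```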