Thursday, February 13, 2014

QuickTime 16-bit

This might be beating a dead horse, but QuickTime truly sucks.

For those using 16-bit (deep color) applications, always use the Force 16-bit encoding option: it is the highest quality and, surprisingly, it is often the lowest data rate.

Now for the weird reason.

QuickTime loves 8-bit, it prefers it greatly, and its support for deep color is difficult at best.  Over the years we tried to make 16-bit the preferred mode for our codec within QuickTime, yet many video tools broke when we did.  The compromise was to add Force 16-bit to the QuickTime compression options, letting the user control the codec's pixel type preference: applications that can handle 16-bit benefit, and applications that can't still work.

Using After Effects as my test environment (though the same applies to other QuickTime-enabled deep color applications), I created a smooth-gradient 16-bit image, then encoded it three ways: at 8-bit using an 8-bit composite, at 16-bit using a 16-bit composite, and at 16-bit using a 16-bit composite with Force mode enabled (pictured above.)
Without post color correction, all three encodes looked pretty much the same*, yet the data rates are very different.

* Note: QuickTime screws up the gamma for the middle option, so with the image gamma corrected to compensate, they looked the same.
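If you want to recreate the idea without After Effects, here is a minimal sketch (NumPy, with a narrow-range gradient I picked just for illustration, not the actual comp) of the kind of test image involved: the same smooth ramp held at 16-bit precision and quantized to 8-bit.

```python
import numpy as np

WIDTH, HEIGHT = 1920, 1080

# A smooth ramp over a narrow range, so 8-bit precision runs out quickly.
ramp = np.linspace(0.40, 0.45, WIDTH)

# The same frame at 16-bit and at 8-bit precision.
image16 = np.tile(np.round(ramp * 65535).astype(np.uint16), (HEIGHT, 1))
image8 = np.tile(np.round(ramp * 255).astype(np.uint8), (HEIGHT, 1))

# The 16-bit row steps smoothly; the 8-bit row collapses into long flat runs,
# the "contours" that show up later under heavy color correction.
print("distinct values in a 16-bit row:", len(np.unique(image16[0])))
print("distinct values in an 8-bit row:", len(np.unique(image8[0])))
```

The 8-bit row ends up with only a dozen or so distinct values across 1920 pixels, which is exactly the contouring the rest of this post is about.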

The resulting file sizes for 1080p 4:4:4 CineForm encodes at Filmscan quality:
8-bit – 13.4Mbytes/s
16-bit – 28.4Mbytes/s
16-bit Forced – 5.3Mbytes/s

Our instinct that a higher bit rate means higher quality leads us astray in this case.

Under color correction you can see the difference, so I went extreme using this curve:
To output this (16-bit Forced)
The results are beautiful, really a great demo for wavelets.

Zooming in, the results are still great. Nothing was lost with the smallest of the output files.
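To get a feel for what an extreme curve does to each version, here is a rough sketch; the steep sigmoid is something I made up to stand in for the curve in the screenshot, not the actual grade used above.

```python
import numpy as np

ramp = np.linspace(0.40, 0.45, 1920)
row16 = np.round(ramp * 65535) / 65535.0   # effectively still a smooth ramp
row8 = np.round(ramp * 255) / 255.0        # only about a dozen distinct levels

def extreme_curve(x, center=0.425, gain=200.0):
    """A steep, made-up sigmoid centered on the gradient's midpoint."""
    return 1.0 / (1.0 + np.exp(-gain * (x - center)))

graded16 = extreme_curve(row16)
graded8 = extreme_curve(row8)

# Pixel-to-pixel jumps after grading: near-continuous tone for 16-bit,
# hard visible steps for 8-bit.
print("largest jump, 16-bit source:", np.abs(np.diff(graded16)).max())
print("largest jump, 8-bit source: ", np.abs(np.diff(graded8)).max())
```

The 16-bit ramp grades into another smooth ramp, while the 8-bit version turns its quantization steps into hard, visible edges.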

Of course we know 8-bit will be bad.
We also see subtle wavelet compression ringing at the 8-bit contours, enhanced by this extreme color correction.  This is normal, yet it shows you something about the CineForm codec: it always uses deep color precision, so 8-bit data looks better when more than 8 bits are used to store it.  That ringing mostly disappears using an 8-bit composite; an 8-bit DCT compressor could not do as well.
Storing 8-bit values in a 12-bit encoder, steps of 1,1,1,1,2,2,2,2 (in 8-bit, gradients get clipped into these flat spots) are encoded as 16,16,16,16,32,32,32,32, and the larger step does take more bits to encode, all with the aim of delivering higher quality.  Most compression likes continuous tones and gradients; edges are harder. Here the 8-bit source breaks the smooth gradients into contours, which have edges. The clean 16-bit forced encode above is all gradients, no edges, resulting in a smaller, smooth, beautiful image.
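Here is a toy model of that argument. Pairwise differences stand in for the wavelet's high-pass band and the quantizer step of 4 is made up; none of this is the actual CineForm transform or its quantization tables, it just shows why tiny uniform steps are nearly free while isolated jumps of 16 are not.

```python
import numpy as np

ramp = np.linspace(0.40, 0.45, 1920)

smooth = ramp * 4095.0                     # deep-precision ramp, 12-bit range
contoured = np.round(ramp * 255.0) * 16.0  # 8-bit values shifted up x16 (1 -> 16, 2 -> 32)

def nonzero_detail_coeffs(x, quant=4.0):
    """Pairwise differences as a stand-in high-pass band; quant is made up."""
    detail = x[1::2] - x[0::2]
    return int(np.count_nonzero(np.trunc(detail / quant)))

# The smooth ramp's tiny, uniform differences quantize away to nothing,
# while the contoured version leaves isolated coefficients to entropy-code.
print("nonzero detail coefficients, smooth deep-color ramp:", nonzero_detail_coeffs(smooth))
print("nonzero detail coefficients, 8-bit contours:        ", nonzero_detail_coeffs(contoured))
```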

Now for the QuickTime craziness: 16-bit without forcing 16-bit.
The image is dithered.  This is the “magic” of QuickTime: I didn't ask for dithering, and I didn't want dithering. Dithering is why the file is so big when compressed.  QuickTime is given a 16-bit format for a codec that can do 16-bit, but it sees the codec can also do 8-bit, so it dithers down to 8-bit, screws up the gamma, then hands that to the encoder.  Now nearly every pixel has an edge, and therefore there is a lot more information to encode.  CineForm still successfully encodes dithered images with good results, yet this is not what you expect.  If you want noise, you can add it as needed; you don't want your video interface (QuickTime) adding noise for you.
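Here is a rough illustration of how much damage unasked-for dither does to a compressor. zlib stands in for the codec and the random dither is only a guess at the idea, not what QuickTime actually does, but the principle is the same: noise kills compressibility.

```python
import zlib
import numpy as np

HEIGHT, WIDTH = 1080, 1920
rng = np.random.default_rng(0)

ramp = np.linspace(0.40, 0.45, WIDTH)

# Clean 8-bit gradient: every row identical, long flat runs of pixels.
clean8 = np.tile(np.round(ramp * 255).astype(np.uint8), (HEIGHT, 1))

# Crude random dither: jitter the high-precision value before truncating to 8-bit.
dithered8 = np.floor(ramp * 255 + rng.random((HEIGHT, WIDTH))).astype(np.uint8)

print("clean 8-bit frame, compressed bytes:   ", len(zlib.compress(clean8.tobytes())))
print("dithered 8-bit frame, compressed bytes:", len(zlib.compress(dithered8.tobytes())))
```

The dithered frame should compress far worse than the clean one, for the same reason the middle QuickTime encode ballooned to 28.4 Mbytes/s.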

If anyone can explain why QuickTime does this, I would love for users not to have to manually select “Force 16-bit encoding”.


P.S. Real-world deep 10/12-bit sources pretty much always produce smaller files than 8-bit.  This was an extreme example to show why this is happening.