Saturday, 14 November 2015

Forensics - Bit Depth Limitations

Contrast Ratios and Bit Depth
If you think about it, the only meaningful way to describe the brightness of light, other than by direct measurement, is by making comparisons between different brightness levels.  We experience and refer to this difference as contrast.  An obvious question might be: how well do humans perceive a subtle change in brightness?  Human vision is arranged to buffer, or mask, global and local changes in lighting.  On entering a building from outside, for instance, the eyes adjust almost instantaneously and imperceptibly to the large drop in brightness by widening the iris, the aperture of the eye.  The indoor environment might still be perceived as just a little darker than the outside environment, perhaps still visible through the windows of the room.  In reality the brightness indoors is only a small fraction of the brightness outside.  This difference can be measured with a light meter or lux meter.  Typically, illuminance might be 400 lux in a bright room but as high as 100,000 lux on a bright day outside, a 250-fold difference.  We don't tend to perceive the gap as being anywhere near that large.
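As a quick sketch of that arithmetic, the two illuminance figures quoted above can be turned into a contrast ratio, or into photographic stops, in a couple of lines of Python:

```python
import math

# Illuminance figures quoted above (lux)
indoor_lux = 400        # bright room
outdoor_lux = 100_000   # bright day outside

ratio = outdoor_lux / indoor_lux   # the 250-fold difference
stops = math.log2(ratio)           # the same gap in photographic stops

print(f"Contrast ratio: {ratio:.0f}:1")            # 250:1
print(f"Roughly {stops:.1f} stops of difference")  # about 8 stops
```

That works out at roughly eight stops, a gap the eye bridges without us really noticing.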

While humans may not be good at detecting subtle changes in global lighting because of this adaptation, when faced with an image we can be very good at detecting very subtle brightness differences at the local level.  By placing tonal swatches beside one another and asking observers to detect a difference in brightness, it is relatively easy to assess the capabilities of human vision.  The CIE has been studying these capabilities for almost 100 years and, as a result, we know quite a lot about the capabilities and limitations of our eyesight.

The manufacturers of digital display devices obviously have a great interest in the capabilities of human vision and in how to provide the clearest, sharpest and most efficient imaging technology.  It is rather surprising, then, that there is no accepted standard covering all of this (see HERE), allowing manufacturers of TVs and other screens to exaggerate the capabilities of their products.  But surely there must be some benchmark?  What if the ability to perceive detail on a screen were a matter of life or death?  In the medical industry display devices must meet certain standards.  I found this useful reference while looking at medical imaging devices.  Using a term called "just noticeable difference" (JND), it turns out that most people can detect a minimum of 720 different shades of grey on medical displays, which have an output brightness range of 0.8–600 cd/m².  So when deciding what bit depth to apply to these screens, 8-bit (only 256 shades) clearly does not take enough advantage of the available contrast ratio, while 16-bit (65,536 shades) goes way beyond what is required, as humans simply cannot perceive all of those shades at once on such devices.  The optimum, or most efficient, bit depth for these medical devices is therefore 10-bit, or 1,024 distinct shades: just enough to cover the range of human sight for the device without adding any unnecessary bandwidth and cost.
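A minimal sketch of that reasoning, using the ~720 JND figure quoted above, simply compares the number of code values each bit depth provides against the number of shades the eye can actually separate on such a display:

```python
JND_COUNT = 720  # just-noticeable differences quoted for a 0.8-600 cd/m2 medical display

for bits in (8, 10, 16):
    levels = 2 ** bits
    verdict = "enough" if levels >= JND_COUNT else "too few"
    print(f"{bits:>2}-bit -> {levels:>6,} levels ({verdict} for ~{JND_COUNT} JNDs)")
```

8-bit falls well short at 256 levels, 10-bit is the first power of two to clear 720, and 16-bit provides far more levels than the display's contrast range can ever make visible.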

So, what about our average computer or smartphone screen?  Once again, for practical purposes the standard devices we use every day don't currently need a very high bit depth.  For a start, these screens do not have as high a brightness or contrast range as a medical screen, so the number of effective JND shades is much lower than the minimum of 720 shades possible with a medical screen.  In actual fact, on typical computer screens used in typical indoor settings the average person is not going to be able to discern much more than 100 distinct shades at any given time.  As illustrated below, 5- or 6-bit depth is not quite enough for a typical computer screen but 8-bit is certainly plenty.  It's no coincidence, then, that the Internet runs on the 8-bit sRGB colour space.  If the public ever developed an appetite for higher bit depth images, the whole industry, from screen manufacturers to computer manufacturers and Internet providers, would have to increase bandwidth to cope with the extra demand.  This would add significant cost at every stage, so it probably won't happen any time soon.
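To see why 5 or 6 bits fall short while 8 bits are comfortable, a small sketch can quantise a smooth grey ramp at each bit depth and count the shades that survive (the 1,000-sample ramp is just an arbitrary test pattern):

```python
def quantise(value, bits):
    """Snap a 0-1 grey value to the nearest level available at the given bit depth."""
    levels = 2 ** bits - 1
    return round(value * levels) / levels

ramp = [i / 999 for i in range(1000)]   # smooth grey ramp, 1,000 samples

for bits in (5, 6, 8):
    shades = {quantise(v, bits) for v in ramp}
    print(f"{bits}-bit ramp contains {len(shades)} distinct shades")
```

At 5 and 6 bits the ramp collapses to 32 and 64 shades, below the roughly 100 the eye can separate on a typical screen, so banding becomes visible; at 8 bits there are 256, comfortably more than needed.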

In the meantime we can still take advantage of the higher bit depth captured in our digital RAW images by playing around with brightness, contrast and other lighting settings in Camera Raw and other programs, as discussed HERE and HERE.


Digital Image Encoding, 12-bit and 10-bit
As indicated, all forms of standard digital imaging technology, from digital cameras and scanners to screens and printers, are set up to capture and/or display images in 8 bits.  Most high-end digital cameras, however, encode images in 10-bit or 12-bit rather than 8-bit.  Why?  The answer is gamma, the adaptation of human vision whereby we perceive a greater tonal range in the shadows than in the highlights.  Digital sensors record and encode image data linearly.  This data is later subject to a gamma correction for the sRGB colour space.  To ensure there is enough tonal data to accommodate gamma correction without noticeable posterization in the shadows, as much as 3.5 bits of additional tonal data must be captured and encoded.  Hence the 10- to 12-bit encoding.  For more on gamma correction see HERE and HERE.
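A rough sketch of that problem, assuming the standard sRGB transfer curve, quantises a linear capture at 8 and at 12 bits, gamma-encodes it, and counts how many distinct 8-bit output codes land in the darkest tenth of the tonal range (the 10% cut-off is just an illustrative choice of "shadows"):

```python
def srgb_encode(linear):
    """Standard sRGB transfer function: linear 0-1 in, gamma-encoded 0-1 out."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

def shadow_codes(capture_bits, output_bits=8):
    """Count distinct output codes produced in the darkest ~10% of the encoded range."""
    in_levels = 2 ** capture_bits - 1
    out_levels = 2 ** output_bits - 1
    codes = set()
    for i in range(in_levels + 1):
        encoded = srgb_encode(i / in_levels)   # quantised linear capture, then gamma
        if encoded <= 0.1:                     # keep only the deep shadows
            codes.add(round(encoded * out_levels))
    return len(codes)

for bits in (8, 12):
    print(f"{bits}-bit linear capture fills {shadow_codes(bits)} distinct shadow codes")
```

An 8-bit linear capture reaches only a handful of those shadow codes, which is exactly the posterization the extra capture bits are there to avoid, while a 12-bit linear capture fills essentially all of them.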
