Saturday 27 August 2016

Field Marks - Grey Scales and Gulls (Part 4)


This apparent Kamchatka Gull (Larus canus kamtschatschensis), rear right, was photographed by David O'Connor in Co. Kerry, Ireland on 6th March, 2014.  Recent developments in the identification of the Larus canus complex, thanks to extensive studies by Peter Adriaens and Chris Gibbins (Dutch Birding Volume 38, No. 1), have finally made this identification possible.  The discovery of yet another potential far eastern gull, and possibly another first for the Western Palearctic in Ireland, has prompted me to return again to the subject of grey scales and gulls.

In this series of postings I have been concerned with trying to replicate the famous (among gull enthusiasts) Kodak Grey Scale in sRGB colour space.  The intention has been to try and directly measure gull upper-wing and mantle tones from digital images, in a manner consistent with studies using an actual Kodak Grey Scale card alongside a gull in the hand.  Why?  Because upperparts tone can be instructive in gull identification, and if reliable measurements can be taken from digital images it will help in some gull identifications.  To date I have written three other blog postings on the subject, parts One, Two and Three.  These were more exploratory than anything else.  In this posting I aim to put this tool under much closer scrutiny.

A Simple Tool For Starters
I'll readily admit that I have approached this subject thus far with all the subtlety of the proverbial bull in a china shop.  To the uninitiated, replicating an apparently linear grey scale artificially on a computer screen seems like a simple enough task.  One only has to make a linear grey scale from stepped grey tones, right?  Starting with the simplest possible model, from white point (RGB 255) to black point (RGB 0), I created a straightforward linear scale with equally spaced grey tone increments, as illustrated below.  Both perceptually and numerically, in terms of sRGB values, it is a linear scale.

I started with 21 increments in the very first draft, with one additional increment for black point (RGB 0), as I sensed that Kodak 19 isn't particularly black.  But, after obtaining promising results, I have since reverted to just 20 increments, exactly as per the standard, with Kodak 19 now represented by RGB 0.  Despite the rather crude approach, the results have been surprisingly effective, and seemingly reliable.
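For anyone who wants to reproduce the scale, the arithmetic is trivial.  Here is a minimal Python sketch of the 20-step linear scale described above (the rounding is my own choice; only the 255-to-0, 20-step layout comes from the posting):

```python
# Build a 20-step linear grey scale from white (RGB 255) to black (RGB 0),
# with Kodak patch A at step 0 and Kodak 19 represented by RGB 0.
steps = 20
linear_scale = [round(255 * (steps - 1 - i) / (steps - 1)) for i in range(steps)]
print(linear_scale)
# [255, 242, 228, 215, 201, 188, 174, 161, 148, 134,
#  121, 107, 94, 81, 67, 54, 40, 27, 13, 0]
```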


However, it doesn't take much to find fault with this simplest of efforts.  For instance, without benchmarking, the definition of white, black or indeed any particular shade of grey is totally arbitrary.  These terms are entirely subjective, relative descriptions of different levels of brightness.  Then one has to ask if the scale we are trying to reproduce is actually linear, or if this merely appears to be the case.  Perceptually, two grey scales may both appear linear, but perception isn't everything, as we shall see.  There are a number of factors contributing to a rather confusing puzzle.

Linear Light Capture
The sensor of the camera actually captures light intensity linearly.  For an incremental increase in light hitting an individual photosite there is an equal incremental increase in charge, up until the point that the photosite becomes saturated with charge.  The dynamic range of the camera is therefore defined by the minimum light required to register a signal (typically just one photon of light) at one extreme, and the maximum charge the photosite can absorb before saturation at the other.  This represents a potential dynamic range of upwards of 23,000:1, or 14.5 stops of exposure, according to this reference.  So, despite being somewhat less versatile than the human visual system, the camera can still gather an amazing range of light intensity from its captured black point to its captured white point.  For more see HERE.
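As a quick sanity check on those figures, stops are just the base-2 logarithm of the contrast ratio (one stop being a doubling of light), so the two quoted numbers can be reconciled in a line:

```python
import math

ratio = 23000                      # dynamic range quoted in the linked reference
print(round(math.log2(ratio), 1))  # -> 14.5 stops
```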

Human Vision Versus The Camera - The Luminosity Function
Viewing a RAW image file without any tonal correction, one would be struck by its darkness and lack of contrast.  RAW images must undergo at least a couple of transformations to make them approach the scene as perceived by human eyes.  So, while the RAW image may be a factually accurate representation of the scene light intensity as captured during the given exposure, it's still not perceptually accurate in human terms.  Humans don't perceive light intensity linearly, but rather according to a non-linear Luminosity Function.

There are a couple of things about this that are particularly relevant here.  Due to the need to see well across a wide ambient brightness range in daylight, we developed the ability to distinguish between small changes in light intensity in the shadows, while at the same time accommodating intense sunlight in the open.  So, when we look across a wide range of brightness we find it much easier to distinguish contrast between darker tones than brighter ones.  This means that, for the photographer, it makes sense to try and ensure that a lot of shadow detail is captured, possibly at the expense of detail in the highlights (termed exposing to the right, ETTR).  It also means that it may be possible to selectively discard a lot of RAW image data involving the highlights without any noticeable loss in final image quality.  Hence a 16-bit RAW image can be compressed into an 8-bit JPEG, post capture, after the camera (or someone editing in RAW) has 'selected' the details needed to make a reasonably representative and perceptually satisfying image.

In addition to weighting our visual perception towards the dark end of the tonal range, our senses perceive intensity along an almost logarithmic scale.  Double the noise, double the brightness or double the heat doesn't actually equate to double the sensation.  This means that a linear capture of light intensity must be transformed into a near logarithmic output in the final visual image, in order for it to appear perceptually accurate.  This also explains, for example, why in intensity terms middle grey, the point perceived to be mid-way between black and white, is not actually found mid-way along the light intensity or reflectance curve, but at approximately 18% reflectance.  It likewise explains why the reflectance of a grey card is 18%, and why a camera's on-board light meter has supposedly worked with middle grey (i.e. 18% reflectance) as its centred reference point (though this is often disputed, as I will come to).
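The standard CIE lightness function (L*) models exactly this near-logarithmic response, and it puts 18% reflectance almost exactly at the perceptual mid-point.  A short sketch using the published CIE formula (nothing here is specific to gulls or to the Kodak card):

```python
def cie_lightness(reflectance):
    """CIE L*: perceptual lightness (0-100) from linear reflectance (0-1)."""
    t = reflectance
    if t > (6 / 29) ** 3:
        f = t ** (1 / 3)
    else:
        f = t / (3 * (6 / 29) ** 2) + 4 / 29
    return 116 * f - 16

# 18% reflectance comes out at L* ~ 49.5, i.e. perceptual middle grey.
print(round(cie_lightness(0.18), 1))
```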

In terms of colour brightness perception, the human eye's overall sensitivity peaks in the green part of the spectrum.  This in turn means that we tend to perceive greens as being somewhat brighter than they actually are.  This can impact our perception of the relative brightness of different colours that we see.  In this case the luminosity function can be broken out more specifically into different spectral sensitivities for red, green and blue cones, as well as for rods, the eye's receptors used for night (scotopic) vision.  The luminosity slider in an image processing software programme like Photoshop makes a correction for brightness taking account of this spectral disparity, to ensure that perceptual colour accuracy is maintained during image editing.  A simple brightness slider tool, on the other hand, may not factor in this spectral disparity, and that in turn may affect the perceptual brightness of different colours on screen.  This is a subtle but important consideration when we are concerned about accurate colour management from the original scene to the camera, screen and printer.
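This spectral weighting is what the standard sRGB/Rec. 709 relative luminance formula encodes, with green carrying by far the largest coefficient.  A sketch using the published weights (note they apply to linear-light values, not gamma-encoded ones):

```python
def relative_luminance(r, g, b):
    """Relative luminance of a linear-light sRGB triplet (each 0-1).

    The Rec. 709 weights give green ~71% of the total, reflecting the
    eye's greater sensitivity to the middle of the visible spectrum."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# Pure green reads far brighter than pure red or pure blue of equal intensity.
print(relative_luminance(0.0, 1.0, 0.0))   # 0.7152
print(relative_luminance(1.0, 0.0, 0.0))   # 0.2126
print(relative_luminance(0.0, 0.0, 1.0))   # 0.0722
```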

Luckily, for the purposes of grey scale analysis we don't need to consider colour at all.  In fact, I recommend transforming images from colour to greyscale for all analysis involving the Kodak Grey Scale.  Colour can be unnecessarily distracting.  It's also worth noting that humans are better able to perceive subtle changes in brightness (spatial sensitivity to luminance) than subtle changes in colour (chromatic sensitivity).  Video manufacturers, for example, capitalise on this point by storing chromatic data at lower resolution than luminance data, thereby conserving bandwidth (as discussed in this link here).
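The greyscale conversion itself is a one-liner in most imaging tools.  For example, with the Python Pillow library (the filename is just a placeholder; Pillow's 'L' mode uses the ITU-R 601-2 luma weights rather than the Rec. 709 ones above, but the principle is the same):

```python
from PIL import Image

# Convert a gull photograph to 8-bit greyscale before sampling mantle tones.
# Pillow's "L" mode applies L = 0.299*R + 0.587*G + 0.114*B (ITU-R 601-2).
grey = Image.open("gull.jpg").convert("L")
grey.save("gull_grey.png")
```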

Gamma
Gamma correction is a transformation given to an image to iron out any inconsistencies in the display device, so that brightness levels appear perceptually accurate across the entire tonal range.  The original cathode ray tube (CRT) screens by sheer coincidence displayed tonal levels in a manner which was almost perfectly the reverse of the human luminosity function.  So, in effect the gamma correction for a CRT was more or less a mirror image of the luminosity function.  Modern liquid crystal display (LCD) screens have a linear response function, which means in effect they should not require gamma correction at all.  However, in order to be able to view archived images containing an encoded gamma correction, and in order to ensure the backward compatibility of modern images for those still using CRT screens, gamma correction remains an important consideration, and all images continue to be gamma encoded.  This is therefore just another layer of complexity to add on top of an already decidedly confusing picture.  For more on the continued relevance of gamma please check out this very useful article.  For more on the terminology of light see HERE and HERE.
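For completeness, the sRGB standard pins the encoding curve down precisely; it is approximately a 2.2 gamma with a short linear segment near black.  A sketch of the published encode/decode pair:

```python
def srgb_encode(linear):
    """Linear light (0-1) -> gamma-encoded sRGB signal (0-1), per the sRGB spec."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

def srgb_decode(signal):
    """Gamma-encoded sRGB signal (0-1) -> linear light (0-1)."""
    if signal <= 0.04045:
        return signal / 12.92
    return ((signal + 0.055) / 1.055) ** 2.4

# 18% linear reflectance encodes to roughly 46% signal, about RGB 118 in 8-bit.
print(round(srgb_encode(0.18) * 255))
```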


Characteristic And Other Curves
Having undergone gamma/luminosity corrections, the image may also have to undergo other minor adjustments in different areas of the tonal range.  In film-based photography, an emulsion's specific sensitivity to light was expressed in terms of an s-shaped sensitivity or response curve; this is the science of sensitometry.  The digital era has continued the fascination with curves.  Digital sensors have their own limitations.  Curves tend to be used in the modern era to counteract a camera's deficiencies in dynamic range, and to accentuate tones in different areas of the image to make it more appealing.  Curves may also be used simply to mimic the characteristics of different film stock.  While as photographers and researchers we can deliberately avoid interfering with an image's response curve, we have no way of knowing whether the camera manufacturer has programmed an in-built curve correction as part of image processing from RAW.  This is yet another departure from a linear representation of tones in digital images.  For more on the use of tonal curves in image editing see HERE.
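To make the idea concrete, a tone curve is simply a remapping of input level to output level; an s-curve steepens the mid-tones and flattens the extremes.  A toy sketch (the sigmoid and its strength value are purely illustrative, not any manufacturer's actual correction):

```python
import math

def s_curve(tone, strength=6.0):
    """Toy s-shaped tone curve on 0-1 values: boosts mid-tone contrast
    while compressing shadows and highlights, film-style."""
    return 1 / (1 + math.exp(-strength * (tone - 0.5)))

# Mid-tones are pulled apart; shadows and highlights are squeezed together.
for t in (0.1, 0.4, 0.5, 0.6, 0.9):
    print(t, round(s_curve(t), 3))   # 0.083, 0.354, 0.5, 0.646, 0.917
```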


Kodak Grey Scale Specification
Before trying to tackle the implications of the various points raised above, it's time for a quick look at the Kodak Grey Scale itself.  The card is a quality control tool, used primarily for assessing and reproducing specific tones in print reproduction.  The scale was intended to be incorporated in the photograph itself, to help identify accurate exposure for accurate reproduction.  Each patch is stepped 0.1 density units apart.  In exposure terms, each step represents a third of a stop, according to the specification.  The specification sheet accompanying the Kodak Grey Scale can be viewed HERE.

Note that three of the patches are represented by letters, A, M and B, which together are used in instrument calibration for photographic printing.  The white of the Kodak card is referred to as A and has a reflective density of 0.05.  Kodak grey scale value 7, referred to as M, with a reflective density of 0.75, equates to a reflectance of 17.8%.  This is a little shy of the 18.4% reflectance of the standard grey card, but it's a close approximation.  Lastly, B, at patch 16, has a reflective density of 1.65.  An interesting point to note: the middle grey patch (M) is not halfway along the scale but at step 7.  This in itself suggests the scale is non-linear.
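These figures all follow from the definition of reflective density as the negative base-10 logarithm of reflectance, so each can be verified in a couple of lines:

```python
import math

# Reflectance from reflective density: R = 10 ** (-D)
for patch, density in [("A (white)", 0.05), ("M (middle grey)", 0.75), ("B", 1.65)]:
    print(f"{patch}: {10 ** -density:.1%}")
# A (white): 89.1%
# M (middle grey): 17.8%
# B: 2.2%

# ...and each 0.1 density step is indeed about a third of a stop:
print(round(math.log2(10 ** 0.1), 3))   # -> 0.332
```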

To confuse matters somewhat, Kodak Grey Scale patch density values are quoted differently in various places.  The Kodak specification sheet states that the patches have a density range of 0.0 (white) to 1.9 (practical printing black), whereas elsewhere one will find a quoted density range of 0.05 to 1.95.  So which is correct?  The whitest point on a print density scale represents D-min, the lowest practical print density.  It's limited by the reflectance of the paper itself.  Densities above this value are calibrated relative to it, so presumably by quoting 0.0 here Kodak are referring to the relative densities of each patch after having zeroed the densitometer on the D-min patch.  The white patch therefore has a relative density of 0.0 but an actual density of 0.05.  For the purpose of this posting I am interested in the actual density of each patch, not the relative densities.

Grey Cards Versus White Balance Cards
Just a quick point in relation to grey calibration cards.  There are two different 'grey' cards used in photography.  The middle grey card (18% reflectance) is intended for exposure metering.  It tends not to be perfectly neutral grey.  In other words, measured with a spectrometer one would find a slight colour bias in the green, red or blue channels, which would throw out a white balance correction.  18% grey cards are therefore not recommended for white balance correction.  True white balance cards tend to be lighter (approx. 60% reflectance) and are therefore not middle grey.  They are perfectly neutral grey, so they deliver perfect white balance correction.  Given that they are lighter than 18% grey cards, they can only be used for exposure metering with the addition of a suitable exposure compensation.  Why the lighter grey target for white balancing?  Apparently it comes down to signal-to-noise ratio.  The lighter target is easier to expose with minimal noise, and is therefore more accurate as a calibration tool for colour.  Exposure metering doesn't require such a high level of precision.  For those of you who, like me, use a ColorChecker Passport for white balance correction, check out this useful technical review.

For more on the difference between 18% grey cards and white balance cards see below.


Note that it is widely claimed that camera meters work off an 18% reflectance target.  But there are also claims that this is incorrect and that the true target is closer to 12% (darker than middle grey).  As if things weren't confusing enough!  Check out this commentary (link).

Whatever the truth about metering, if we are trying to compare references with the Kodak Grey Scale, which uses approximately 18% grey as its middle grey (or M value), it probably makes sense to meter our own image exposure to the same target reference.

Light Reflectance and Transmission Measurement and Conversion Factors
The Kodak Grey Scale is a reflectance tool containing patches stepped in 0.1 density increments, as outlined.  Density is the negative base-10 logarithm of reflectance, measured using a device called a densitometer, the traditional quality control tool of the photographic industry.  So, while density is a linear scale, the quantity behind it, reflectance, is non-linear.  A logarithmic scale is useful as it closely resembles (but does not perfectly match) the human luminosity function.  For this reason, density scales appear to the human eye to have a fairly uniform gradient.  Densitometers can be used to measure light reflectance from paper samples, or light transmission through film or slide, so they really are perfectly suited to photofinishing.
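That perceived evenness can be tested by converting each patch's reflectance to CIE L* lightness: the per-patch L* steps vary far less than the underlying reflectance steps do, which is why the scale reads as fairly even to the eye.  A sketch, assuming the actual-density readings (0.05 to 1.95) discussed above:

```python
def cie_lightness(reflectance):
    """CIE L* (0-100) from linear reflectance (0-1)."""
    t = reflectance
    f = t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    return 116 * f - 16

densities = [0.05 + 0.1 * i for i in range(20)]            # patches A .. 19
lstars = [cie_lightness(10 ** -d) for d in densities]
steps = [round(a - b, 1) for a, b in zip(lstars, lstars[1:])]
# Per-patch L* drops shrink gently from ~8 to ~2 along the scale: far more
# even than the raw reflectance steps, which vary by a factor of ~60.
print(steps)
```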

Densitometry is, however, just one method for measuring light reflectance and transmission.  Colorimetry is the science and technology used to quantify and describe human colour perception.  Colorimeters are therefore weighted to take account of the luminosity function; they measure reflectance and report tristimulus values (red, green and blue).  Meanwhile, spectrometry uses even more advanced technology to measure individual wavelengths and generate complete spectral signatures from colour samples, including but not limited to the human visual range.  So, between these different technologies we have the means to directly measure and convert the Kodak Grey Scale into whatever colour space we require.

It may also be possible to convert density values directly to sRGB using complex formulae.  I find the equations daunting, so I've decided to defer to those who have done this work before me.  I have found a couple of different resources online which are well worth a look, but the one I have found most helpful is linked HERE.  The authors at the University of California, Berkeley have identified appropriate RGB values for each reflectance patch in the Kodak Grey Scale.


This digital representation of the Kodak Grey Scale depicts RGB white and black outside the range of the Kodak Grey Scale, and this seems entirely appropriate.  After all, one commonly encounters whites which are brighter than paper and blacks which are darker than ink.  The Kodak middle grey value M, with a reflectance of 17.8%, yields an RGB value of 116, which seems about right.  Standard middle grey, as defined by the standard grey card, should have a reflectance of 18.4% and an RGB value of 119.  So it looks like we are on the right track.
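As a cross-check, pushing each patch's reflectance through the standard sRGB encoding curve reproduces those published numbers to within a digit, which suggests that is essentially how they were derived.  A sketch (my own back-calculation, not the authors' documented method):

```python
def srgb_8bit(linear):
    """Linear reflectance (0-1) -> 8-bit sRGB level, per the sRGB spec."""
    v = 12.92 * linear if linear <= 0.0031308 else 1.055 * linear ** (1 / 2.4) - 0.055
    return round(255 * v)

print(srgb_8bit(10 ** -0.75))   # Kodak M, density 0.75 -> 117 (published: 116)
print(srgb_8bit(0.184))         # standard 18.4% grey card -> 119
print(srgb_8bit(10 ** -0.05))   # Kodak A, density 0.05 -> 242, short of RGB 255 white
```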

So how does this revised data look when compared with the original linear model?

The revised model clearly isn't linear, exhibiting the sloped appearance one would associate with a logarithmic function.  The grey scale is now far more graduated in the mid-tone range, with a sharp step at the toe and shoulder where the scale begins and ends.

While trying to gather information on this rather challenging topic I found a few more useful links which are worth a look:-
Conversion Tool - CIE Colour Calculator
Principles of Surface Reflectance
Explanation of Density and Dot Gain

Comparison of Linear and Non-linear Digital Grey Scales
Having established what would appear to be the correct digital interpretation of the Kodak Grey Scale, it's time to compare it with the original linear grey scale.

In the top image above I have corrected brightness slightly to allow comparisons between the different mantle grey levels using my original linear model.  The results suggest the gulls all fall within their predicted Kodak Grey Scale ranges (references: Howell & Dunn, 2007; Adriaens & Gibbins, 2016).

The new non-linear model requires that I substantially darken the image in order to take mantle grey level measurements using that scale.  In order to understand the impact of a brightness correction on the image I have added a grid with all 256 tonal levels in it to the original image (top left corner).  I later interrogate this grid to see what the brightness correction has actually done to the image across the whole tonal range.  The results are interesting.  The brightness tool doesn't merely darken the image linearly.  It applies a non-linear correction, appearing linear initially but sharply sloping upwards in the highlights region.  Clearly the brightness tool takes account of luminosity and gamma correction.  I have noted this before in an earlier experiment looking at the functionality of each of the Adobe lighting tools.  For more see HERE.
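For anyone wanting to repeat the grid trick, here is a minimal sketch that builds a 16 x 16 patch image containing all 256 grey levels, ready to paste into a corner of the test image (the patch size and filename are my own choices):

```python
import numpy as np
from PIL import Image

# A 16x16 grid of patches covering every 8-bit grey level, 0 through 255.
patch = 8                                          # pixels per patch side
levels = np.arange(256, dtype=np.uint8).reshape(16, 16)
grid = np.kron(levels, np.ones((patch, patch), dtype=np.uint8))
Image.fromarray(grid, mode="L").save("tone_grid.png")

# Paste the grid into the image, apply the brightness correction, then re-read
# the centre of each patch: plotting input level against output level reveals
# the true shape of the adjustment curve.
```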

As for the gull mantle results: the measurements obtained were very similar regardless of which version of the digital grey scale I used.  That in itself tells an interesting story.

Tone Reproduction
This exercise has been all about Tone Reproduction - the mapping of scene luminance and colour to print reflectance or display luminance - but what we have ended up with is something slightly different.  I have been attempting to measure individual tonal levels (namely the tones representing the mantle shades of gulls), while at the same time trying to apply a measurement benchmark scale which is based on reflectance.  The question is, have I actually improved the accuracy of the tool, or is the improvement merely illusory and the new scale overly cumbersome?  I think this particular journey has been the most technically challenging to understand and explain, and I don't think I have quite reached journey's end.  My instinct is that the original linear grey scale is more than adequate for our purposes, but I think it is equally important to gain a better understanding of the underlying mechanics and to test as many assumptions as I can.

...to be continued.

With special thanks to David O'Connor for allowing me to analyse and use his image of that stunning gull.