Monday, 23 November 2015

Birding Image Quality Tool - Rev. 4.0 Field Marks

For the seasoned birder in the field, many an initial identification may be based on hearing a call or knowing a bird's distinctive gestalt.  But if you stop and take a critical look at any bird, and certainly if you need to identify a bird from a photograph, field marks play a big part in the identification process.  For clarity I like to consider field marks as a bird's distinctive markings and colours alone.  Size and shape (broadly, morphology) are sometimes also considered part of what defines the term field marks, but I prefer to keep a bird's markings and its morphology separate for the purposes of this blog.  I cover morphology under the heading of gestalt.

In the blog I have a page devoted to the subject of field marks called A Spotlight - On Field Marks.  This year I have spent a good deal of time considering field marks for the purposes of identification from bird images.  I have concluded that there are two basic classes of field marks - The Bold and The Bland.  The crucial distinction between the two is that bold markings and colours can be appreciated in even the worst of images because they exhibit characteristics that make them stand out under pretty much all observational and photographic conditions.  Obviously bold field marks perform some vital signalling function, so it is not too surprising that even in the worst viewing or photographic conditions they hit the mark almost every time.  Take for instance the bright, contrasting fresh adult scapulars and coverts of this 1st calendar year European Turtle Dove.  Compared alongside the faded, diffuse brown gradient at the centre of the older juvenile feathers, these newer feathers create a bold impact.  The older feathers are clearly bland by comparison.  They probably form part of the bird's camouflage, and therefore not surprisingly they evade the camera just as effectively as they do the eye.  My analysis of the bold versus the bland has been consistent, whether the problem is image resolution, focus, exposure, colour accuracy, artefacts, or all of the above.

So, having recently expanded the Image Capture Quality Tool to include a tool that measures the quality of image lighting and another that measures the accuracy of colours, is it perhaps possible to generate a tool whose purpose is to gauge the overall quality of field marks captured in an image?  I believe it is, and here is my first stab.

Essentially what I have done here is build upon the three other tools with a simple analysis of the effective capture of bold markings and bland markings.  Unlike the other tools, however, I have allowed the operator to deselect anything they consider not to be relevant to the analysis.

Bold Field Marks and Bland Field Marks
Your subject may be all bold (such as this stunning male Moussier's Redstart on the left), in which case the analysis of bland field marks is not applicable and can be deselected from the analysis.  Or vice versa - this male Trumpeter Finch on the right, though no less stunning, consists of subtle field marks (apart perhaps from the bill, which could be considered bold).  So one may decide to exclude bold field marks from the analysis if one so chooses.

Lighting and Colour
Lighting is critical to the accurate capture of field marks in bird images.  After all, light and shade can easily mimic the impression of a field mark to confuse the unwary (e.g. HERE).  The Lighting Quality Tool captures all the key elements, from lighting quality, direction and shadows to dynamic range issues and the effect of multiple lighting sources.  Colours can be of significance in some identifications but less so in others.  So once again I have given the option to exclude colour from the analysis if it is deemed more helpful to do so.  On the other hand, where colour analysis is critical to an identification the Colour Quality Tool provides a very good measure of accuracy.

Broadening the Image Capture Quality Tool out to include additional critical analysis tools has allowed me to draw a line under much of the work of the last two years in this blog.  The purpose of the blog has been to work on a manual to assist birders who are interested in identifying birds from photographs.  These tools aid that effort by getting the observer to focus on those factors that confound an identification, be it a problem with how an image was captured, or the lighting, or how accurately the colours and details have been expressed in the image.

Once again, all that remains now is to provide you with the tools so you can play around with them and have a go at scoring the quality of your own bird images.  Feedback is as always much welcome and appreciated.

(Note you will have to download the file and open it in MS Excel for the tool to work properly).

In the example below image capture was very good, at a score of 98%.  Particularly good for an old digiscoping camera (the classic workhorse Nikon Coolpix 4500).  Lighting wasn't bad, but bright sunlight does create problems such as blooming artefacts, clipping and shadows, so the score was lower at just 75%.  Though the colour quality looks quite good, as the image is a JPEG that has undergone a certain amount of manipulation, including manual brightness, contrast and white balance adjustment, colour reliability is really quite poor overall.  This is reflected in a score of just 40%.  However, all is not lost.  When assessing the overall quality of the field mark capture it could be argued that accurate colour isn't all that important for this particular species.  Booted, like a lot of Old World warblers, is a fairly bland species.  So I have discounted both the bold field marks and colour quality elements of the field mark test.  Field marks are therefore scored on the basis of the bland field marks, the overall image capture quality and the overall lighting quality, yielding a pretty good score of 91% overall.
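For illustration, the selective scoring could be sketched in Python as below.  To be clear, the actual weighting built into the spreadsheet may well differ: this sketch assumes a simple unweighted mean of whichever components remain selected, and assumes a bland field mark score of 100% (an invented figure), which happens to reproduce the 91% result in the worked example.

```python
def field_mark_score(components, include):
    """Average only the components the operator has kept selected."""
    selected = [components[name] for name in include]
    return round(sum(selected) / len(selected))

# Hypothetical component scores (percent) for the Booted Warbler example.
# Only the image capture (98%), lighting (75%) and colour (40%) figures
# come from the example above; the rest are illustrative.
scores = {
    "bold_field_marks": 0,     # deselected: species is bland
    "bland_field_marks": 100,  # assumed value, not from the spreadsheet
    "colour_quality": 40,      # deselected for this species
    "image_capture": 98,
    "lighting_quality": 75,
}

# Bold marks and colour excluded, as in the worked example
overall = field_mark_score(
    scores, ["bland_field_marks", "image_capture", "lighting_quality"])
print(overall)  # 91
```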

There is subjectivity in deciding whether a bird should be scored on the basis of all five field mark parameters.  At least by presenting all the data everyone can see how the score was arrived at.  I hope you find these tools of use not just for assessing your image quality, but also for drawing your attention to parameters that you might often overlook or take for granted when studying bird images.

Friday, 20 November 2015

Birding Image Quality Tool - Rev. 3.0 Colour

Having recently 'bolted on' a Lighting Quality Tool to the Image Capture Quality Tool I figured it was as good a time as any to drive on with a third tool, this time devoted to Colour Quality.

As with the Lighting Tool, the Colour Tool is in effect a summary of the findings of the Spotlight - On Colour thread and a way of drawing a line under that chapter.  And, as with the other tools, the Colour Tool attempts to provide birders with a representative, quantitative tool for analysing colour quality and accuracy in your bird images.

Before I start, of course, I have to point out once again that digital colour is only a representation of natural colour.  Of the potentially infinite array of colours produced by light, humans can only perceive a certain colour gamut.  Within this range, digital cameras can only capture a much smaller gamut of colours.  Finally, within that cluster of colours we have a smaller gamut again called the sRGB colour space.  Most digital imaging devices, including cameras, display screens, scanners and printers, operate in sRGB for the most part.  sRGB is also the colour gamut used by the internet.  So that is the colour space to which I have restricted myself in this blog.

The colour parameters which I have selected for the tool all narrow down the accuracy with which colours are captured and selected within this sRGB colour space so as to approximate, as closely as possible, the colours captured in nature.  After all, we can expect no more than this from our camera equipment.

Sensor Calibration
No two cameras are identical.  Due to slight variations in the way individual camera lenses, sensors, filter arrays and processors capture colours, every camera is unique.  This is our first stumbling block on the road to 'accurate colour'.  Professionals use a tool called the X-rite (formerly Gretag-Macbeth) Colorchecker Passport to get over this first hurdle.  The Colorchecker is a standard calibration tool.  Having photographed the Colorchecker in RAW at ISO 100, the photographer uses software to assess the performance of the camera setup.  From this a special colour profile is created (called a DNG Profile) which can then be used to correct for any slight variations in the camera setup when compared with a recognised professional colour calibration standard.  This profile only needs to be created once for a given camera, lens and lighting setup.  Afterwards, any time a RAW file is opened in a RAW workflow the DNG profile can be selected and this will automatically calibrate the colours in the image to that recognised standard.  For anyone interested in bench-marking and analysing colours from their images this tool is a 'must-have'.  For more on DNG profiles see HERE.

White Balance Calibration
In theory a DNG profile should be the only calibration needed to capture colours as accurately as any camera can.  The problem is, as humans we don't see the world quite as it actually is in nature.  Sunlight is ever-changing owing to the position of the sun in the sky.  Humans correct for this changing light using a white balance adaptation.  We also use this to correct for the unnatural colour of artificial lighting indoors.  Camera manufacturers, needless to say, aim to produce images which match the world as the human eye sees it, so cameras are equipped with white balance correction.  Unfortunately cameras are not as adept at this skill and frequently get this calibration wrong.  The only way to be absolutely certain the camera has corrected white balance appropriately is to use another calibration tool called a Grey Card.  White balance correction then becomes the second prerequisite for accurate colour capture.

Of course white balance correction can be closely approximated, particularly if there is some reasonably neutral grey in an image.  But this approach can be a bit 'hit and miss', particularly if it is being done by eye and particularly if the display screen used is not itself perfectly neutrally calibrated.  I have made an allowance for manual white balance correction with the Colour Quality Tool, the caveat being that one would hope the observer is exercising caution and that the correction is reasonably accurate.

The X-rite Colorchecker comes with a colour grid (shown above) also containing two neutral grey patches.  This panel flips over to reveal a large neutral grey card beneath, so the Passport caters for this dual purpose of DNG profiling and white balance correction.  The white balance of this image from the Collins Bird Guide second edition was created very simply by placing the white balance eye-dropper cursor in Adobe Elements over one of the neutral grey squares and clicking the mouse.  Having also corrected the colours using the camera's DNG profile, the colours of this image are now matched to a professional standard.  Note how pure and saturated the colours are and how neutrally white the pages appear.  For more on white balance see HERE.
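As a rough illustration of what that eye-dropper click is doing under the hood, here is a minimal grey-card white balance sketch in Python using NumPy.  The function name and sample values are my own invention, and real raw converters work in linear sensor space with more sophistication; this just shows the core idea of scaling each channel so the sampled grey patch comes out neutral.

```python
import numpy as np

def grey_card_white_balance(image, grey_patch):
    """Scale each channel so the sampled grey patch becomes neutral.

    image, grey_patch: float arrays of RGB values (H x W x 3).
    Per-channel gains are chosen so the patch's mean R, G and B
    all match the patch's mean green level.
    """
    means = grey_patch.reshape(-1, 3).mean(axis=0)
    gains = means[1] / means          # normalise to the green channel
    return np.clip(image * gains, 0, 255)

# A warm (yellowish) colour cast: the grey card reads R=140, G=128, B=100
patch = np.full((4, 4, 3), [140.0, 128.0, 100.0])
balanced = grey_card_white_balance(patch, patch)
print(balanced[0, 0])  # [128. 128. 128.]
```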

Image Manipulation
The next challenge is that, having corrected colours as accurately as possible, the temptation might be to start manipulating an image further to correct for slight lighting, shadow, or exposure issues.  If this is done carefully and with a great deal of attention it is certainly possible to improve an image and draw closer to accurate colour.  But it can just as easily go wrong and lead the observer away from the target objective.  Where the only manipulation of a RAW image is its DNG profile and white balance correction, that is considered a perfectly acceptable manipulation, as this is the minimum correction needed to calibrate colours.

If, on the other hand, an image requires some additional lighting, hue or saturation manipulation to try and draw out representative colours, there is a risk of deviating from accuracy, so the image scores lower in the Quality Tool.  If the image being measured is not a RAW file at all but a lossless image file format like PNG, then manipulations are going to result in some clipping of image colour data, so once again the quality score is affected even more.  Lastly, if the image being manipulated is a lossy file such as a JPEG, then the impact on colour quality is greatest: all forms of manipulation will damage colour accuracy and drive the colour quality score down to its lowest setting.  So to achieve the maximum quality score the goal should be to capture a good quality exposure that requires little or no manipulation other than colour calibration in the RAW workflow.
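The clipping problem is easy to demonstrate.  A quick sketch (the values are arbitrary) of why adjusting an already-rendered 8-bit file destroys colour data where a RAW file would not:

```python
import numpy as np

# An 8-bit channel with some values near the top of the range
channel = np.array([100, 200, 240, 250], dtype=np.uint8)

# A +20 brightness push clips everything above 235 to 255
brightened = np.clip(channel.astype(int) + 20, 0, 255).astype(np.uint8)
print(brightened)  # [120 220 255 255]

# Undoing the adjustment cannot recover the clipped values
restored = np.clip(brightened.astype(int) - 20, 0, 255).astype(np.uint8)
print(restored)  # [100 200 235 235] -- the original 240 and 250 are gone
```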

Lighting Quality
Lighting has a huge impact on colour capture.  Simply take the score obtained using the Lighting Quality tool and apply it here.  The lighting tool captures everything from the quality of scene lighting to lighting direction and shadows on the subject to dynamic range and multiple lighting issues.  The best lighting conditions overall provide the best colour capture.  For more on the Lighting Quality Tool see HERE.

Sample Point Quality
Last but not least we have to consider the quality of the sampling method.  Since coming up with an effective way to sample colour (see HERE) I have completed an analysis including coming up with an effective quality control method for choosing appropriate sample points (HERE).  It simply involves using artificial defocus to test sample homogeneity.

I have also tested the effect of varying image resolution on the effectiveness of the sampling method (HERE).  So far the analysis points to a sampling method that is very robust.

Summary and Conclusions
By gathering together various elements that define accurate colour capture and presenting them here as an Image Quality Tool I can now draw a line under this chapter.  Along the way I have learnt an awful lot about colour in birds and the processes involved in human and digital imaging.  No doubt I will continue to add further insights to this thread and may update the quality tool along the way.

It is important to stress that the calibrations referred to here manipulate the image content to reveal accurate colours.  Each individual colour pixel contains an RGB value, and these values can be identified using any standard photo editing software (e.g. MS Paint).  However, there is one more calibration required to view your images properly.  Correct visual presentation of colour on a screen or printer depends on the quality of that device and its accurate calibration.  Obviously, if you decide to bring your images to a lab to have them printed you are relying on the lab to have properly calibrated its printer.  If you decide to print them at home using a high quality inkjet printer, at least you have ultimate control over its calibration.

Having calibrated and sampled the colours of an image, the obvious next step is to put a name to each colour.  You could simply decide to name the colours subjectively.  However, having gone to so much trouble to calibrate your images at every step, why leave the final stage to chance?  Through the blog I have developed the Birder's Colour Pallet as a standard reference tool for colour nomenclature in the sRGB colour space.  Using RGB values it is possible to assign a name to any colour in your image using a scientifically repeatable methodology.  The real beauty of this approach is that your display device doesn't even have to be calibrated.  The RGB values remain the same regardless of how they are displayed on your screen.  For more on the Birder's Colour Pallet see HERE.

All that remains now is to provide you with the tools so you can play around with them and have a go at scoring the quality of your own bird images.  Feedback is as always much welcome and appreciated.

(Note you will have to download the file and open it in MS Excel for the tool to work properly).

Tuesday, 17 November 2015

Colour - Saturation Finally Explained

Not for the first time, I stand corrected!  I have found it hard to get hold of a clear explanation of colour saturation.  Sure, it's easy to visualize colour saturation when we see it illustrated graphically, as in this simple depiction below.  Saturated colour is rich and pure in appearance while desaturated colour looks washed out, fading eventually to greyscale.  But what actually is colour saturation, and how is it measured by the camera?

In earlier postings I fell into the trap of assuming that colour saturation is not actually measured at all by the camera but is one of the camera pre-sets.  This statement is partially true.  Saturation, like contrast, is one of the pre-sets that is laid down as the raw image data is converted to JPEG.  It is also one of the settings that needs to be adjusted in a RAW workflow using Camera Raw or whatever you have.

It is also evident from the design of the camera sensor that there are only two elements used to measure colour.  Each photosite directly measures luminance, one of the three parameters that defines a colour.  Then we have the Bayer filter array.  In the majority of cameras the Bayer filter array is the critical element that defines colours accurately in digital images.  Without it the image would be black and white.  Over each photosite sits either a green, a blue or a red filter.  The filter blocks all of the visible spectrum apart from the region that corresponds to that filter.  So in effect each photosite measures light intensity over a limited region of the visible spectrum.  So, how does the camera actually decide, based on this limited information, what the hue and saturation for a given pixel should be?

The answer is interpolation, or more specifically de-mosaicing.  Each pixel on the final image does not correspond to a single photosite on the digital sensor but rather to a cluster of neighbouring photosites, generally two green filtered and one each of adjacent red and blue filtered photosites.  This starts with the creation of a Raw Bayer Image as illustrated below.
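To make the idea concrete, here is a deliberately naive demosaicing sketch in Python.  Real cameras interpolate overlapping neighbourhoods (bilinear or much better), so treat this as a toy: it simply collapses each 2x2 RGGB block of the Bayer mosaic into one RGB pixel, showing how one pixel's colour is assembled from one red, two green and one blue photosite.

```python
import numpy as np

def naive_demosaic(bayer):
    """Collapse each 2x2 RGGB block of a raw Bayer mosaic into one RGB pixel."""
    r = bayer[0::2, 0::2]                               # top-left of each block
    g = (bayer[0::2, 1::2] + bayer[1::2, 0::2]) / 2.0   # average of the two greens
    b = bayer[1::2, 1::2]                               # bottom-right
    return np.dstack([r, g, b])

# A single RGGB block: R=200, the two greens 120 and 130, B=40
mosaic = np.array([[200.0, 120.0],
                   [130.0,  40.0]])
print(naive_demosaic(mosaic)[0, 0])  # [200. 125.  40.]
```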

So, what is Colour Saturation Exactly?
Having been researching colour for some time, I kept finding the trail going a little cold at this point in the journey.  Then, finally, I found a proper explanation of what colour saturation actually is, and it all started to fall into place.  When we look at a depiction of a saturated colour compared with a desaturated equivalent it is easy to assume that saturated simply means 'more of the same colour'.  That is kind of true, but more specifically saturation means 'purer colour'.  When we analyse the spectral distribution of a colour in nature, what we find is that the colour is often made up of a range of different wavelengths, not just the wavelength that corresponds most to the colour we see.  We are all familiar with the concept of mixing paints to create different colours.  Colour is itself a mixture of different quantities of other coloured wavelengths.  Saturation is a measure of the purity of the most dominant wavelength.  If a colour is almost entirely made up of one wavelength of light (e.g. a laser) it will appear richly saturated.  If however the colour consists of one dominant wavelength plus a lesser amount of a range of other wavelengths, then that colour may still have the same dominant hue but it will appear less saturated, i.e. less pure.  What an extraordinary and somewhat counter-intuitive revelation!  And yet I kept missing this vital point while researching colour and saturation.  I finally found all of this neatly explained by the experts at Cambridge in Colour (once again) under their tutorial on Colour Perception HERE.
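Once a colour has been reduced to RGB values, this notion of purity has a simple standard formula.  The sketch below uses the HSV definition of saturation: the further the lesser channels climb towards the dominant one, the more 'other wavelengths' are diluting the colour and the lower the purity.

```python
def saturation(r, g, b):
    """HSV-style saturation: 1.0 for a pure colour, 0.0 for grey."""
    mx, mn = max(r, g, b), min(r, g, b)
    return 0.0 if mx == 0 else (mx - mn) / mx

print(saturation(0, 200, 0))      # 1.0 -- pure, laser-like green
print(saturation(120, 200, 120))  # 0.4 -- same dominant hue, diluted
print(saturation(200, 200, 200))  # 0.0 -- fully desaturated grey
```

Note that all three examples share the same dominant (green) channel; only the amount of 'contamination' from the other channels changes, exactly mirroring the spectral-purity idea above.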

How do cameras measure colour Luminance, Hue and Saturation?
So it's all finally fallen into place.  Cameras measure luminance directly at each photosite based on the number of photons collected.  Then, taking the colour information gathered from the green, red and blue filters, the camera can measure both hue and saturation to a high degree of accuracy.  If, for example, the object being photographed is a pure, saturated green colour then only the green-filtered photosites will likely register an image of it.  If however the object is a dull desaturated green then chances are its spectral signature will register to a greater or lesser extent across all three colour-filtered photosites.  During interpolation this data can be combined to identify both the correct hue and saturation level for that colour.

Saturation pre-sets and other post-processing 
The story doesn't end there.  Most photographers would agree that digital images are not as well saturated as equivalent film or slide images.  Manufacturers leave the choice of saturation preferences up to the photographer.  Those who like more saturation in their images can select a stronger saturation pre-set if they prefer.  Different camera manufacturers are likely to put their own non-adjustable pre-sets in place depending on exposure and other factors to boost the overall quality of the images offered.  So there is probably a limit to how well we can really analyse the true saturation of colours based on digital images.  Hopefully though the overall accuracy is good enough for our purposes.

Colour management is a process that aims to maintain accurate colour from the scene to the camera or scanner, to the screen and ultimately to the printer.  Doing this properly takes a great deal of effort and starts with the proper calibration of the camera sensor.  No two cameras generate colours exactly the same way.  Sensors vary slightly and are not calibrated by the manufacturer to any globally recognised standard.  For some reason manufacturers don't think this is important.  The best-recognised tool for calibrating a colour sensor to a standard is the X-rite (formerly Gretag-Macbeth) Colorchecker Passport.  Meanwhile, in the field the human eye is constantly adjusting to changing light temperature (white balance).  The camera also tries to compensate accordingly but often fails in that regard.  Once again there are ways to correct camera white balance properly and to a recognised standard.  I have discussed these various processes in detail HERE.

In summary
So, once again I have finally jumped another stubborn hurdle on my journey of discovery.  I have found colour saturation to be one of those conundrums that didn't quite fit into my understanding of light and of digital photography.  Having established that the key word in the definition of colour saturation is 'purity', it all now finally makes sense.  The camera can indeed measure very effectively all three parameters that go to make up what we call a colour.

Saturday, 14 November 2015

Forensics - Bit Depth Limitations

Contrast Ratios and Bit Depth
If you think about it, the only meaningful way to describe the brightness of light, other than by direct measurement, is by making comparisons between different brightness levels.  We experience and refer to this difference as contrast.  An obvious question might be, how well do humans perceive a subtle change in brightness?  Human sight is arranged to buffer or mask global and local changes in lighting.  On entering a building from outside, for instance, the eyes adjust almost instantaneously and imperceptibly to the large drop in brightness by widening the aperture or iris of the eye.  The indoor environment might still be perceived as just a little darker than the outside environment, perhaps still visible through the windows of the room.  In reality the brightness indoors is only a small fraction of the brightness outside.  This difference can be measured with a light meter or lux meter.  Typically illuminance might be 400 lux in a bright room but as high as 100,000 lux on a bright day outside.  This represents a 250-fold difference.  We don't tend to perceive the gap as being quite so large.
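A back-of-the-envelope sketch of that gap, with a rough nod to why it feels smaller than it is (perceived brightness is often modelled as roughly logarithmic, per the Weber-Fechner law; the log base here is purely illustrative):

```python
import math

indoor_lux, outdoor_lux = 400, 100_000

# The physical ratio between the two scenes
print(outdoor_lux / indoor_lux)  # 250.0

# On a rough logarithmic perceptual scale the gap shrinks to a couple
# of 'decades' of brightness, which feels far less dramatic
print(round(math.log10(outdoor_lux) - math.log10(indoor_lux), 2))  # 2.4
```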

While humans may not be good at detecting subtle changes in global lighting because of this adaptation, at the same time when faced with an image we can be very good at detecting very subtle brightness differences at the local level.  By placing tonal swatches beside one another and asking observers to detect a difference in brightness it should be relatively easy to assess the capabilities of human vision.  The CIE have been studying these capabilities for almost 100 years and, as a result we know quite a lot about the capabilities and limitations of our eyesight.

The manufacturers of digital display devices obviously have a great interest in the capabilities of human vision and how to provide the clearest, sharpest and most efficient imaging technology.  It is rather surprising, then, that there is no accepted standard covering all of this (see HERE), allowing manufacturers of TVs and other screens to exaggerate the capabilities of their products.  But surely there must be some benchmark?  What if the ability to perceive detail on a screen was a matter of life or death?  In the medical industry display devices must meet certain standards.  I found this useful reference while looking at medical imaging devices.  Using a term called "just noticeable difference" (JND), it turns out that most people can detect a minimum of 720 different shades of grey on medical displays, which have an output brightness range of between 0.8 and 600 cd/m².  So when deciding what bit depth to apply to these screens, 8-bit (just 256 shades) is clearly not taking enough advantage of the available contrast ratio, while 16-bit (65,536 shades) goes way beyond what is required, as humans simply cannot perceive all of these shades at once on those devices.  The optimum, or most efficient, bit depth for these medical devices is therefore 10-bit, or 1,024 distinct shades.  There are just enough distinct shades to cover the range of human sight for the device without adding any unnecessary bandwidth and cost.
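The arithmetic behind that choice is simply powers of two, and can be checked in a couple of lines (the 720-shade JND figure is the one quoted above):

```python
def shades(bit_depth):
    """Number of distinct grey levels a given bit depth can encode."""
    return 2 ** bit_depth

jnd_shades = 720  # minimum distinguishable shades on a medical display

for bits in (8, 10, 16):
    print(bits, shades(bits), shades(bits) >= jnd_shades)
# 8 256 False   -- too few levels: visible banding possible
# 10 1024 True  -- the efficient choice
# 16 65536 True -- far more shades than the eye can use at once
```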

So, what about our average computer or smartphone screen?  Once again, for practical purposes, the standard devices that we use every day don't currently need a very high bit depth.  For a start, these screens do not have as high a brightness or contrast range as a medical screen.  Therefore the number of effective JND shades is much lower than the minimum of 720 shades possible with a medical screen.  In actual fact, on typical computer screens in typical indoor settings the average person is not going to be able to discern much more than 100 distinct shades at any given time.  As illustrated below, 5 or 6-bit depth is not quite enough for a typical computer screen, but 8-bit is certainly plenty.  It's no coincidence, then, that the Internet runs on 8-bit sRGB colour space.  If ever the public developed an appetite for higher bit depth images, the whole industry, from screen manufacturers to computer manufacturers and Internet providers, would have to increase bandwidth to cope with the extra demand.  That would add significant cost at every stage, so it probably won't happen any time soon.

In the meantime we can still take advantage of the higher bit depth captured in our digital RAW images by playing around with brightness, contrast and other lighting settings in Camera Raw and other programs, as discussed HERE and HERE.

Digital Image Encoding, 12-bit and 10-bit
As indicated, all forms of standard digital imaging technology, from digital cameras and scanners to screens and printers, are set up to capture and/or display images in 8 bits.  Most high-end digital cameras, however, encode images in 10-bit or 12-bit instead of 8-bit.  Why?  The answer is gamma, that adaptation of human vision whereby we perceive a greater tonal range in the shadows than in the highlights.  Digital sensors record and encode image data linearly.  This data is later subject to a gamma correction for the sRGB colour space.  In order to ensure there is enough tonal data to accommodate gamma correction without resulting in noticeable posterisation in the shadows, as much as 3.5 bits of additional tonal data must be captured and encoded.  Hence the 10- to 12-bit encoding.  For more on gamma correction see HERE and HERE.
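The sRGB transfer curve itself shows why the shadows are where the extra bits are needed.  A sketch of the standard sRGB encoding function (this is the published IEC 61966-2-1 formula, not my invention; the sample inputs are):

```python
def srgb_encode(linear):
    """The sRGB gamma curve: linear sensor value (0..1) -> display value (0..1)."""
    if linear <= 0.0031308:
        return 12.92 * linear          # linear toe near black
    return 1.055 * linear ** (1 / 2.4) - 0.055

# The curve stretches the shadows: the bottom 1% of linear sensor values
# is spread over roughly 25 of the 255 8-bit output levels...
print(round(srgb_encode(0.01) * 255))  # 25
# ...while a mid-grey linear value of 0.5 lands way up at ~188
print(round(srgb_encode(0.50) * 255))  # 188
```

Encoded in a straight linear 8-bit scale instead, that bottom 1% would get only 2 or 3 levels, which is exactly where the posterisation would appear.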

Wednesday, 4 November 2015

Birding Image Quality Tool - Rev. 2.0 Birds and Light

When I initiated this blog in early 2014 I started with the concept of an Image Quality Tool for birders to use to quantify the overall quality of bird images, based on key image capture quality parameters including resolution, focus, exposure, colour and artefacts.  Soon after, I began to explore the first of my in-depth analyses, namely on the subject of Birds and Light.  The journey has been rewarding and I think it's time to gather the main findings and put them to some use.  One way I have chosen to do this is to augment the image quality tool with another quantitative tool devoted to scene lighting quality.

Once again I have chosen a series of parameters intended to provide a good overall representation of the subject.  Here I will go into some detail on each of these parameters and direct readers to some relevant postings.

Light Quality
There are a number of things that determine the quality of light, perhaps the most important of these being the sun's position in the sky.  As we rapidly head towards winter solstice the light here is already starting to decline in quality.  As I type it is roughly 1.00pm on a very unusually balmy Irish November day.  The sun is as high in the sky as it will get and yet there is a very noticeable yellow tint to the light.  By this hour in mid-summer the sun would be much higher and the light would be harsh, crisp and strongly white.  From here until early spring there won't be too many days in the field when lighting quality will be quite as reliable as it was mid-summer.

I really enjoyed constructing this animated GIF.  It represents the changing quality of light during the year.  Imagine you are lying on your back with your head pointing north and your feet south, watching the sun trail across the sky for the day.  Well, in Ireland this animation depicts what you would witness over a 24-hour period.  The long days of summer bring a constancy to the quality of light that we quickly take for granted.  In mid-winter we must make do with an ever-changing palette, from a reddish dawn to a yellowish morning and evening light.  Because the sun is low in the sky the light must travel through denser atmosphere to reach us.  The atmosphere scatters the shorter blue wavelengths of sunlight (Rayleigh scattering), with the result that the light is richer in the longer yellow and red wavelengths.  Hence the changing colour of sunlight and the blue colour of the sky.

Light quality can affect bird images in a number of ways.  The most obvious impact is on colour.  Humans have the in-built capacity to adapt to changing light colour temperature by correcting colours to maintain colour constancy.  We call this colour balance or white balance correction.  Cameras are equipped with white balance correction, but accurate white balance can sometimes be hard to achieve.  So this is a quality parameter we need to watch for.  In the image quality tool I have captured white balance under the colour parameter, and in the lighting tool it makes another appearance under the concept of lighting quality.  For more on white balance see HERE.

The other key impacts on images are the angle of sunlight and the intensity of light.  I have tended to elaborate more upon these aspects in my various postings on lighting environments.  For instance, in Against The Sky, the low angle of the sun in the morning can totally alter the appearance of a bird in the sky when compared to later in the day.

In On Snow And Ice, by contrast, the major issue to watch for is albedo, or surface reflectance.  So light intensity is the big factor to consider in that particular lighting environment.

Overall, when we start to analyse lighting quality we begin to establish the optimum lighting quality conditions for birding and photography.  Hopefully through my analysis I have arrived at a good categorisation and weighting system for lighting quality in bird images.  For more on this subject see HERE.

Light Direction
Having analysed the overall quality of the light we are dealing with, it's now time to take a look at the direction of light.  As observers and photographers we are all acutely aware of the benefit of putting the sun at our back so that our subject is uniformly lit from the front.  Unfortunately we can't always control this.  When faced with a tricky bird identification the lighting conditions are often a very significant factor.  We might feel that judging the sun's direction should be fairly easy in a photograph, but very often it is not.  In the posting on Lighting and Shadow Direction I looked at a couple of methods to help pinpoint lighting direction with a finer degree of accuracy.

If the sun is shining and we have positioned ourselves fairly well in relation to the sun and our subject, then we may find that the sun's position can be pinpointed from the specular highlight in the bird's eye.  We may not have an exact three-dimensional direction but we can at least draw a conclusion within the two dimensions of our digital image.

Failing that, if we are lucky we might be able to establish light direction based upon surface normals.  The principle, Lambert's Cosine Law, is that reflected light is at its brightest where the light strikes a surface perpendicularly, that is, along the surface normal.  Here, by drawing surface normals at the brightest points on the perimeter of this Ring-billed Gull (Larus delawarensis), I have been able to establish the rough direction of the light source.  The analysis is consistent with the eyeball method above.  Were the bird lit from the side or rear, so that its eye were in shade, this technique would offer a workaround to establish lighting direction.
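The surface-normal method can be sketched numerically.  Under Lambert's cosine law, observed brightness is proportional to the dot product of the surface normal N and the direction to the light L, so the brightest point on an outline is the one whose normal points most nearly back at the light.  The normals and light direction below are invented values in the two dimensions of the image plane, purely for illustration.

```python
import math

def lambert_brightness(normal, light):
    """Relative Lambertian brightness: N · L, clamped at zero (assumes unit vectors)."""
    nx, ny = normal
    lx, ly = light
    return max(0.0, nx * lx + ny * ly)

# Light arriving from the upper-left of the frame:
light = (-math.sqrt(0.5), math.sqrt(0.5))

# Outward normals sampled at four points around a bird's outline:
normals = {
    "top": (0.0, 1.0),
    "left": (-1.0, 0.0),
    "upper-left": (-math.sqrt(0.5), math.sqrt(0.5)),
    "right": (1.0, 0.0),
}

brightest = max(normals, key=lambda k: lambert_brightness(normals[k], light))
print(brightest)  # → upper-left
```

Running the logic in reverse is the trick used on the gull: find the brightest perimeter point, and its normal tells you roughly where the light is coming from.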

So why is lighting direction of relevance to us?  Well, if we know the lighting direction we can establish the direction in which shadows are falling.  Then when it comes to an analysis of field marks we are in a better position to analyse for potential anomalies caused by shadow.

From the perspective of the image lighting quality tool I have kept this analysis fairly simple for now.  It should be possible to establish whether a bird is front lit, back lit or side lit without having to resort to such a fine level of analysis.  But I have presented these finer tools here should a more critical analysis be justified.

Shadows, as just stated, regularly cause confusion during bird identification from photographs.  In fact shadows fall so consistently within recognisable topographical areas of the plumage that I think an argument can be made for a Shadow Topography.  And in fact that is just what I have proposed in the posting of that name.

What if the day is overcast, I hear you ask?  Well, as it happens I have spent a great deal of time teasing out this and related questions, most recently in the posting entitled Lighting and Perspective (Part 2).  Through experimentation I have been able to demonstrate how effective cloud cover is as a diffuser of light.  The simple answer to the question of light direction on a cloudy day is that light is scattered in all directions.  Therefore, from the perspective of the camera, the subject is effectively lit from the front at all times.  Meanwhile shadows are cast in every direction, and they are soft and diffuse because they are diluted by the scattered light.  This is why, in terms of observation and photography, a bright overcast day will tend to trump a bright sunny day every time.

Dynamic Range
This is one of the technical aspects of photography that can sometimes be difficult to get one's head around.  Put simply, it is the range of light intensity that an imaging system can capture, from the brightest point to the darkest.  A good way to start thinking about this subject is to consider how well one's eyes adapt when suddenly faced with bright sunlight or total darkness.  Our eyes have an incredibly broad dynamic range, much broader in fact than any digital camera.  And yet we have our limits.  We have evolved methods to adapt to changing lighting, some of which take time to kick in.  In a way a digital stills camera is at a bit of a disadvantage, as a still image is not dynamic.  The camera has a brief moment to capture an image.  After that the camera can do no more with the lighting presented to it.

That said, there are techniques available that can boost the dynamic range of the camera, and indeed simple ways of adapting to the available light, just as our eyes can.  The simplest of these is of course exposure compensation.  By adjusting exposure time, aperture and/or ISO settings a photographer can peer into the brightness or the gloom, and even see beyond the dynamic range of the human eye.  But the camera has merely shifted its exposure window; its overall dynamic range has not improved.  For instance, a camera may be capable of capturing the detail on the surface of the sun and also capable of peering into the dimmest corners of the universe.  But it can't do both at the same time.  If it could, it would have a dynamic range far beyond that of the human visual system.
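The "window shifting" idea is worth pinning down with numbers.  Each stop of exposure compensation doubles or halves the light gathered, sliding the camera's fixed capture window up or down the brightness scale without widening it.  The 12-stop sensor range and EV figures below are assumed round numbers for illustration, not a real camera's specification.

```python
# Exposure compensation in stops: shifts the camera's fixed dynamic-range
# window up or down the brightness scale without widening it.

CAMERA_RANGE_STOPS = 12  # an assumed sensor dynamic range

def exposure_window(base_ev: float, compensation_stops: float):
    """Return the (darkest, brightest) scene EV the camera can record."""
    centre = base_ev - compensation_stops  # positive comp reaches into dimmer scenes
    half = CAMERA_RANGE_STOPS / 2
    return (centre - half, centre + half)

lo1, hi1 = exposure_window(10, 0)
lo2, hi2 = exposure_window(10, +2)  # two stops of positive compensation
print(lo1, hi1)  # → 4.0 16.0
print(lo2, hi2)  # → 2.0 14.0
```

With two stops of compensation the camera sees two stops deeper into the gloom, but it gives up exactly two stops at the bright end: the window moves, its width does not change.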

Below I have attempted to combine an understanding of camera exposure and dynamic range in one graphic.

There are also techniques that allow the dynamic range of the camera to be boosted.  These are referred to as High Dynamic Range Imaging techniques or HDRI.  I have explored some of these techniques in a posting HERE.
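Real HDRI tools vary in approach, but one common idea, exposure fusion, can be caricatured in a few lines: blend the same pixel from several bracketed shots, weighting each frame by how well exposed that pixel is (how close it sits to mid grey).  The weighting function and values below are an illustrative toy, not the algorithm used by any particular tool.

```python
import math

def well_exposedness(v, mid=128, sigma=64):
    """Gaussian weight: pixel values near mid grey count most."""
    return math.exp(-((v - mid) ** 2) / (2 * sigma ** 2))

def fuse(values):
    """Blend the same pixel across bracketed exposures by well-exposedness."""
    weights = [well_exposedness(v) for v in values]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

# A pixel nearly blown out in one frame, mid grey in another, murky in a third:
print(round(fuse([250, 128, 30]), 1))
```

The well-exposed frame dominates the blend, which is how fusion pulls usable mid-tone detail out of a bracketed set without ever exceeding any single frame's dynamic range at capture time.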
Obviously what makes dynamic range of relevance to us is that, much like the challenging light that makes it difficult for our eyes to see properly, if a camera's dynamic range is exceeded image content suffers and the challenge for bird identification is made much greater.  So how do we establish if an image has suffered due to dynamic range issues?  The answer can be found by studying the image histogram (see image above).  A histogram is simply a graphical representation of all the tonal levels in an image.  If clipping has occurred the graph will show a spike at one or both edges of the histogram.

On an exceptionally bright day, such as the early winter's day when this European Robin (Erithacus rubecula) was photographed, the harsh light exceeds the dynamic range of the camera.  In this case the brighter levels (the highlights) have come close to clipping, while there is also a considerable accumulation at the dark end of the histogram (the shadows).  In the original image on the left the middle of the histogram is quite low and flat, and this reflects the high contrast of the image.  HDRI techniques can be remarkably good at restoring balance to an image, reducing the overall contrast and in doing so bringing out detail from the mid tones.  The right hand image was created using an HDRI tool.
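Reading a histogram for clipping is easy to automate.  A pile-up of pixels at level 0 (crushed shadows) or level 255 (blown highlights) suggests the scene's range exceeded the camera's.  The 1% threshold below is an arbitrary assumption chosen for illustration.

```python
# A sketch of reading an 8-bit histogram for clipping: a spike at either
# extreme tonal level flags crushed shadows or blown highlights.

def clipping_report(hist, threshold=0.01):
    """hist: 256 pixel counts, one per tonal level 0-255."""
    total = sum(hist)
    return {
        "shadows_clipped": hist[0] / total > threshold,
        "highlights_clipped": hist[255] / total > threshold,
    }

# A harshly lit frame: big piles at both ends, very little in the mid tones.
hist = [5000] + [10] * 254 + [3000]
print(clipping_report(hist))  # → {'shadows_clipped': True, 'highlights_clipped': True}
```

A well-exposed frame would instead show its pixel counts tapering off before either edge, leaving both flags false.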

So, in the context of our overall lighting quality tool, we are looking for evidence that the camera's dynamic range has been exceeded.  Is the image high in contrast?  Is there evidence of loss of detail in the mid tones, and is there evidence of clipping?

Multiple Lighting
Without having given much thought to the consequences of Rayleigh scattering or cloud cover light diffusion, one might assume that there is only one light source in the heavens.  But when we take a much closer look it turns out that an image is often made up of lighting from different sources and colours.  Take for instance a normal sunny summer's day.  In the sun the light colour temperature will be very close to perfectly white.  But in the shade the light temperature is very different.  The blue sky canopy scatters light into the shadows on a sunny day and renders the shadows blue in colour.  This also has a bearing on the appearance of our subject.  I have studied the lighting qualities of different lighting environments in great detail in the posting Lighting Under The Microscope.

This dual lighting can be very frustrating.  There is nothing worse than being presented with a perfect portrait opportunity, like this mega rare Eastern Olivaceous Warbler (Iduna pallida), only to be frustrated by the lighting.  There are effectively two different white balance settings to choose from in this one image: the white balance for the shade and the white balance for the sunlit area.  This image also illustrates the dynamic range challenges created by bright sunlight.  Some clipping and blooming (an artefact associated with highlight clipping) are both evident around the top of the tail and the rear toe and claw of the left leg.

Summary and Conclusions
By gathering together various lighting aspects and presenting them here in the form of a concise image quality measurement tool, hopefully I have met my objective of summarising and drawing a line under the chapter Birds and Light.  That said I have found myself repeatedly coming back to this subject because I find it so interesting.  No doubt I will add more postings to this page but I'd like to think I have gathered enough information to provide this broad summary.

No doubt the measurement scales could do with some fine tuning, but I am happy, for now at least, with what I have achieved here.  For those who wish to download and play around with both the Image Capture Quality and Lighting Quality Tools, please follow the link below.  Feedback is, as always, welcome and much appreciated.

DOWNLOAD Birding Image Quality Tool Rev. 2.0
(Note you will have to download the file and open it in MS Excel for the tool to work properly.)

Here is an example of both the Image Capture Quality Tool and Lighting Quality Tool in use, on one of my favourite images ever - a displaying and aptly named Sunbittern (Eurypyga helias) from Venezuela.