Sunday, 23 November 2014

Forensics - High Dynamic Range Imaging (HDRI)

Dynamic Range and HDRI
High dynamic range typically comes into play on a bright, sunny day.  Under such lighting conditions a camera's dynamic range cannot cope with the full range of light intensity, from details captured in deep shade to details in highlights.  For those who are not very familiar with Dynamic Range and High Dynamic Range Imaging (HDRI), here is a really nice video that will quickly bring you up to speed.

Scope and Objective
If you open any field guide and look at the plates, the illustrator in almost every case has chosen to depict a bird as it might appear under ideal, neutral and relatively low intensity lighting.  Most birders starting out would find the challenge of bird identification made all the more difficult by the ever-changing nature of normal ambient light.  If one's first experience of birding were a day in the field on a sunny winter's day, it might well turn out to be one's last, such is the added challenge posed by the often harsh, forbidding light conditions in winter!

The diagram above hopefully illustrates the potential use of HDRI for forensic image analysis.  From the perspective of bird identification from digital images, we hope to use HDRI to bring the lighting in an image more in line with the ideal lighting we are familiar with from illustrated field guides.

HDRI Using Exposure Bracketing
Most HDR images are created by using exposure bracketing to make three or more exposures in quick succession with different exposure times, designed to capture three discrete exposure brackets within the dynamic range of a scene.  When I photographed the subject of the next few images it was a wonderful day for birding, except that, due to the time of year, the sun was never very high in the sky and the light was very harsh at times.  In other words, the ambient lighting exceeded the dynamic range of the camera.  I grabbed three exposures of this Robin Erithacus rubecula sitting on a black-capped white pillar late in the afternoon.  These were created using exposure bracketing.  Due to the camera's limited dynamic range, the light was too contrasty, and consequently none of these exposures worked out great.  The bird stood facing the sun, so from the point of view of the camera it was side lit.  This image most closely matches the lower of the four images displayed above and is a good candidate for HDRI.

What would it be like to combine the three bracketed exposures, extract the good bits of each exposure and disregard the bits that are either over or underexposed?  Well that is the whole basis of HDRI.  The goal is to try and flatten out the contrast in the image, bring up the detail of the parts of the bird that are currently in shade, and subdue the brightness and saturation of those bits that are in full sun.
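The "extract the good bits of each exposure" idea can be sketched in a few lines of Python.  This is not Photomerge's actual algorithm; it is a minimal, Mertens-style exposure fusion on a synthetic one-dimensional scene, where each pixel is weighted by how well exposed it is in each bracket:

```python
import numpy as np

# Three synthetic "bracketed exposures" of the same scene (values 0..1).
# 'scene' is the true radiance; it spans more range than one exposure can hold.
scene = np.linspace(0.0, 4.0, 256)
exposures = [np.clip(scene * gain, 0.0, 1.0) for gain in (0.25, 1.0, 2.0)]

def well_exposedness(img, sigma=0.2):
    """Weight each pixel by how close it sits to mid-grey (0.5)."""
    return np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))

# Normalise the weights per pixel, then blend: each pixel is taken mostly
# from whichever exposure rendered it best.
weights = np.stack([well_exposedness(e) for e in exposures])
weights /= weights.sum(axis=0)
fused = (weights * np.stack(exposures)).sum(axis=0)
```

The fused result keeps detail in both the shadows of the dark bracket and the highlights of the bright one, which is exactly the flattening-out of contrast described above.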
Using Adobe Elements Photomerge tool I have attempted to create a HDR image from these three exposures.  But, it has not worked out.  Why?  In the milliseconds it took to create the three exposures I moved the camera and the bird also moved.  This is a common problem with HDR images and the main reason why many people don't bother trying to create and use them.  Well there is an alternative solution.  

We know that RAW format images contain a lot of hidden detail.  Why not create multiple exposures using one RAW image file?  The three images to the left below were all created from the same RAW image.  All of their image settings are identical with the exception of exposure.  I adjusted exposure to roughly match the exposures I had made in the field.  I have now combined them in Photomerge to create a HDR image from them.
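Creating pseudo-brackets from a single RAW file works because RAW data is linear: an exposure shift is just a multiplication applied before the gamma curve.  A toy sketch (the EV values and simple gamma are illustrative, not Camera Raw's actual processing pipeline):

```python
import numpy as np

# A stand-in for linear RAW sensor data: deep shadow, mid-tone, highlight.
raw_linear = np.array([0.01, 0.2, 0.9])

def develop(linear, ev):
    """Develop one pseudo-exposure from the same RAW data: shift exposure
    in linear space by 'ev' stops, clip, then apply a simple display gamma."""
    shifted = np.clip(linear * (2.0 ** ev), 0.0, 1.0)
    return shifted ** (1.0 / 2.2)

# Three 'exposures' matching the under/normal/over brackets made in the field.
brackets = [develop(raw_linear, ev) for ev in (-2.0, 0.0, +2.0)]
```

The +2 EV copy lifts the shadow detail while the -2 EV copy holds the highlight short of clipping, mirroring the three images created from the one RAW file.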
Here is a comparison between the original JPEG and the HDR image made from RAW.
The main difference between these images is a reduction in contrast in the HDR image.  A HDR image is the scene's dynamic range compressed to fit within the dynamic range of the camera / display device.  In turn, this more closely matches, or rather mimics the dynamic range of the human visual system.  While this hasn't quite flattened the image to the point that it might appear ideally lit, there is certainly a marked improvement.  It is much easier to appreciate fine details and subtle colours throughout the tonal range of the subject from highlights to shadows.  
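The compression described here is what tone-mapping operators do.  As an illustration (one of many possible operators, not what Photomerge necessarily uses), Reinhard's simple global operator squeezes an arbitrarily large luminance range into the displayable 0-1 range while leaving shadows nearly untouched:

```python
import numpy as np

# Scene luminance spanning far more range than a display's 0..1.
hdr = np.linspace(0.01, 16.0, 500)

def reinhard(lum):
    """Reinhard's global operator: mid-tones stay roughly linear while
    highlights roll off smoothly instead of clipping."""
    return lum / (1.0 + lum)

ldr = reinhard(hdr)
```

The result never clips yet remains monotonic, so tonal order is preserved from highlights to shadows; this is the scene's dynamic range "compressed to fit" the display device.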

Could this HDR image have been created from RAW following the normal RAW work flow from a single image, without the need for all of this multiple-exposure merging?  The short answer is yes!  Further down I have done just that.  But there is a bit more digging to be done first, so please keep reading.

HDRI versus a Contrast Tool
As we have seen, the major change brought about in a HDR image when compared with an original JPEG is in the contrast of the image.  So what is the difference between HDRI and simple Contrast tools?  Here are a couple of images comparing HDRI and the Contrast Tools in Adobe Elements and Camera Raw.
I made the above comparison image using the 256 tonal grid I had used earlier for the Adobe Lighting Tools posting (HERE).  The HDR image (created from the three images on the right) and Contrast Tool set to maximum contrast reduction (-50) look quite similar but the histograms show there is a bit more going on in the HDRI image.  As noted in the posting linked above, the exposure tool in Adobe Elements doesn't treat brightening and darkening of an image in quite the same way.  Darkening an image produces a flatter histogram than an equivalently lightened image, for whatever reason.  This possibly accounts for the jumbled HDRI histogram.  So, perhaps exposures created using Elements do not make the best example.

The HDRI and Contrast images below have both been created from RAW so that is possibly a more valid comparison than one made in Elements.  The HDR image and the image created by merely flattening contrast certainly look much more similar and their histograms support that conclusion.  The HDR image just looks that little bit more compressed.

This image emphasises the clear link between HDRI and simple Contrast tools.  A Contrast tool can be used to create a rudimentary HDRI image.  For a nice example comparing a HDRI image with one obtained using a simple contrast tool skip to the end of the page linked HERE.  Clearly, if done right, a HDRI image produces much better tonal detail and vibrant colours.

HDRI versus a RAW work flow
As the image above illustrates, most of the work done by Photomerge could have been replicated by simply using the contrast slider in Camera Raw.  So where is the added benefit in creating multiple exposures and combining them using a bespoke HDRI software?  That is certainly a legitimate question and one I have sought to address below.

HDRI Software
Firstly, let's take a look at perhaps the best regarded of the HDRI software packages, Photomatix Pro.

There are quite a number of HDRI software packages and plug-ins on the market, most intended for artistic/aesthetic image-finishing purposes.  These are powerful editing programs.  Could they offer us any advantages as image forensic tools?

I have come across another very good video using the same multiple exposure from RAW technique that I used above.  While this video is much more about the aesthetic/artistic value of HDRI, hopefully it provides another interesting insight into HDRI and the powerful image editing tools provided by some of these programs.  Clearly, there is more to HDRI than a simple merging of exposures.

Adobe Elements Photomerge V Photomatix Essentials
Having seen what Photomatix Pro can do I first downloaded the trial version of Photomatix Essentials.  Below I have compared the results using Photomatix Essentials with the results I previously had obtained using the simple Photomerge tool in Adobe Elements.

The upper two images compare the results obtained with the three bracketed exposures.  In theory, HDRI images from bracketed exposures should offer the best results because each exposure bracket is a proper standalone image, with minimal noise and maximum tonal range preserved.  But the camera wasn't held perfectly steady and the Robin moved.  This is the big problem with HDRI from exposure bracketing.  The resulting 'ghosting' renders the Photomerge tool pretty useless for HDRI using bracketed exposures.  Photomatix has a tool to remove ghosting but it does not work perfectly and, not surprisingly, introduces some artefacts of its own.  In this case there is a big improvement but there is still slight ghosting of the bill, and the rear crown is also affected.

Comparing the lower two images, the Photomerge tool does a good job with the three copies from the same RAW image.  Photomatix Essentials produces a slightly better, more natural result, with more detail, but I am not convinced that Photomatix Essentials would do a better job than a normal RAW work flow (see below).  Photomatix Essentials costs around €/$40.

Verdict:- possibly not worth the money unless working from exposure bracketed images.

Camera RAW V Camera RAW + Photomatix Pro
Photomatix Essentials' more expensive cousin Photomatix Pro obviously has a lot more useful functionality but at around €/$100 has the price tag to match.  The question is, does this added functionality push it far enough beyond a normal RAW image work flow capability to warrant adding it to the forensics tool bag?  Full credit to the manufacturers of Photomatix Pro - one can freely download and play around with the full package on trial and this is what I have done here. 

For this comparison test I have created the lowest contrast, yet still reasonable, result I could manage in Camera Raw.  The process of compressing an image's tonal range to create a HDR-like result from a single image is called Tone Mapping, so this is what I have done in Camera Raw.  I have then taken that image and used the additional Tone Mapping functionality in Photomatix Pro to try and bring out more detail and tonal quality from the resulting image.

Verdict:- Once again it is hard to justify paying extra money for arguably no real improvement in image quality.  As with Photomatix Essentials the real benefit of having Photomatix Pro would only come about where in-camera bracketed exposures were obtained in the field.  

In Summary
The high contrast light that characterises bright sunny days challenges and often defeats the dynamic range of digital cameras.  The solution is High Dynamic Range Imaging (HDRI).  However there are some practical difficulties in obtaining good bracketed exposures.  One might be better off taking the time to shift position and get a better angle on the subject relative to the sun rather than trying to create steady bracketed exposures!  As for HDRI software?  There are certainly some high performance software packages out there but they don't appear to offer anything above the standard Camera Raw workflow, unless of course you have obtained good bracketed exposures, in which case Photomatix should easily outperform the Camera Raw work flow which is based on a single exposure.

Last video...a Tone Mapping example to show the HDRI capabilities of Adobe Lightroom from RAW.  It might be time I upgraded from Elements!

Saturday, 22 November 2014

Birds and Light - Arid and Semiarid Areas

Light and Shade in Arid and Semiarid Areas 

Characterised by low rainfall levels, sparse vegetation and high temperatures, arid and semiarid areas  are among the most extreme environments for life, and one of the most challenging environments for birding and photography.  Birds in arid and semiarid areas tend to be nomadic, which adds to the challenge and reward of desert birding.

Desert Sparrow Passer simplex (Morocco) is typical of many desert species.  Its field marks are subdued and its colours subtle, which makes it a difficult species to photograph properly.  Changing lighting and shadows have a more dramatic effect on subtle plumage tones.  Harsh light, particularly extreme ultraviolet light, damages feathers and bleaches out pigments.  Water loss is greater due to higher temperatures.  Insects and other animals are consequently also harder to find during the heat of the day.  Not surprisingly, most desert birds try to keep out of direct midday sun.  On the rare occasions that I have seen small passerines asleep during the day, it has tended to be in arid environments.  Only mad dogs and birders risk life and limb being out during the intense heat of the day (as I have painfully learnt).  The other big issue with birding in the desert is of course heat haze, which can begin to manifest very soon after the sun comes up.

The main characteristics of lighting in arid and semiarid areas are as follows:-  

High Contrast
Midday sun is high in contrast.  While birds are much harder to find at this time of day, it doesn't stop birders looking for them.  Images captured under these conditions often have high dynamic range, with burnt-out highlights and blocked-up shadows.  Heat haze can also affect image sharpness.

Male Moussier's Redstart Phoenicurus moussieri, Morocco.  

While most desert birds are subtle in appearance there are also desert and semiarid species which exhibit high contrast plumage markings.  Is this related to the high lighting contrast typical of these areas?   Note the strong wear on the wings and tail of this male, photographed in mid-April as the first clutches are about to hatch.

Colour Balance
Because birds tend to be most active in arid and semiarid areas around dawn and dusk, this is often the time when photographic opportunities present themselves.  The low position of the sun in the sky adds a yellow to reddish tone to images.  This can be further enhanced by the natural tones of soil, sand and rocks.  Birds also tend to be active during twilight hours when the lighting is strongly blue in tone.  This colour can be either enhanced or possibly even masked by the surrounding terrain (making it hard to detect in photographs).  When we are reliant on white balance for accurately gauging subtle plumage tones, these lighting conditions can present a big problem.

'Desert' Olivaceous Warbler Iduna pallida reiseri, Morocco (slightly defocused & overexposed).

Identification of Iduna species (formerly the pale brown Hippolais warblers) is very tricky.  'Desert' Olivaceous Warbler, as the name suggests, is confined to the more arid areas of North Africa.  They differ from the more widespread Isabelline (alternatively Western Olivaceous) Warbler Iduna opaca in having a shorter, narrower bill, and possibly based on some very subtle plumage colouration, such as generally warmer-looking ear-coverts, lores and rear flanks.  Identification of a putative reiseri requires good low-contrast light with minimal external colour influences.  Mid-morning might be the best time to be looking for one!  

Dull Light
Dull light is usually only an issue in the desert around twilight, or on the rare occasion that rain or a sand storm threatens.  This creates monochromatic lighting conditions.   For many arid and semiarid species which have subtle plumage tones, this simply means they blend in even more with their surroundings.  Identification of a great many desert species relies on subtle judgements of plumage markings, size and structural features.

Crested Lark Galerida cristata, Morocco.

Few species pairs are as difficult to separate as Crested (G. cristata) and Thekla (G. theklae) Larks.  With a multitude of races, identification comes down to subtle shape, plumage markings and some subtle plumage colours.  Once again, good lighting is key!

Ultraviolet Light
Ultraviolet is invisible to humans but many birds can see well in UV.  Digital camera sensors are naturally sensitive to UV so camera equipment which lacks or has poor UV filtration (eg. some digital camcorders) can produce images with unnatural colours.  UV can be really intense in desert areas so this is something to consider.  Most modern digital cameras including most DSLRs have good UV filtration to prevent this normally unwanted light from reaching the sensor.  Note however that some Nikon cameras do not have good UV filtration so perhaps might be more prone to this problem.  For more on UV see HERE.

Looking for Larks, Pipits and Wheatears beside the Tagdilt Track, Morocco, mid-April, 2006.  Not a great year for some of the more nomadic species but still plenty of jewels to be had among the rough - just adds to the spice of desert birding!  Note that a high level of UV can manifest in the form of a hazy-looking backdrop.  An additional UV filter may boost contrast slightly and remove some of this haze.  However, in the desert most of the haze is probably being caused by dust, not by UV.  More on filters HERE.  Find your UV exposure risk HERE.

Temminck's Lark Eremophila bilopha, Tagdilt Track, Morocco

Seebohm's Wheatear Oenanthe seebohmi,  Tagdilt Track, Morocco.  Recently split from Northern Wheatear O. oenanthe.

All images were taken with a Kyocera Contax U4R.  Bird images were all digiscoped with the aid of a Leica Televid 77 scope and zoom eyepiece set to 20X.

Thursday, 20 November 2014

Birds and Light - On Snow & Ice

Light and Shade on Snow & Ice

At sea, birds are simply moving around in an environment consisting of water and air.  In many ways, a snow and ice environment is much the same.  The key difference between observation and photography in a watery environment and on snow & ice is albedo, or surface reflectance.  Freshly fallen snow, with a surface reflectance of 80-90%, has the highest albedo of any naturally-occurring surface on the planet.  But this can rapidly decrease as snow melts and absorbs soil and other material.  Water has among the lowest reflectance when viewed from above but, as we know, is highly reflective when viewed at angles approaching the horizon (i.e. at a high angle of incidence).

A group of Antarctic Terns Sterna vittata resting on drifting glacial ice in a Chilean fjord make for an interesting juxtaposition between ice, which has among the highest albedo values, and water, which has among the lowest.  But water has an unusual property in that surface water reflectance varies depending on incident light angle.  Glacial ice also has a distinctly blue colour which sets it apart from the white ice and snow on land.  Glacial ice is also not as bright as other ice.  It actually differs from sea ice (HERE) and its blue colour is explained HERE.

The intense reflection from snow and ice makes observation and photography a real challenge, particularly as the sun gains height.  Snow and ice particles scatter white light, which can have the effect of illuminating a subject from all directions.  This is rather like the fill lights used in a portrait studio.  It can be an advantage under the right conditions.  The problem occurs when subjects are not uniformly lit and the contrast between very bright patches of light and poorly lit areas is high (high dynamic range).  If dynamic range exceeds the camera's abilities, tonal detail will be lost through clipping - especially likely for a pied subject like this Snow Bunting Plectrophenax nivalis.  That said, if shooting in RAW, much of this detail may be recoverable, and this type of scenario actually suits a form of image optimisation referred to as ETTR (expose to the right), as referred to HERE.

However a much more common problem involving snow and ice scenes is underexposure.  This occurs due to incorrect light metering, whereby the camera's selected exposure overcompensates for the brightness of the terrain and the subject is underexposed as a result.  For more on light metering and related exposure issues see HERE.
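This metering failure is easy to simulate.  The sketch below assumes a simple averaging meter that tries to render the frame's mean as 18% grey; real matrix meters are cleverer, but the principle of dialling in positive exposure compensation is the same:

```python
import numpy as np

# Scene reflectances: mostly bright snow (0.9) with a darker bird (0.2).
scene = np.full(1000, 0.9)
scene[:50] = 0.2

# An averaging meter picks an exposure that renders the frame's mean as 18%
# grey, so a snow-dominated frame drags everything down (underexposure).
metered_gain = 0.18 / scene.mean()
underexposed = scene * metered_gain

# Roughly +2 EV of exposure compensation restores the snow to near-white.
corrected = np.clip(underexposed * 2.0 ** 2, 0.0, 1.0)
```

In the uncorrected render the bird is crushed into the shadows; after compensation the snow sits back up near white and the bird's tones become readable again.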

The upshot of all of this for identification of birds from photos is to be mindful of the lighting conditions, time of day, location and latitude and the image exposure.

Howell and Dunn (Gulls of the Americas, 2007), for those who have access to it, discuss under Environmental Factors (pages 13-18) the difficulties of trying to photograph gulls in this type of environment.  Plate I.19 on page 17 for me is a good example of an image which has been underexposed.  It is a particularly interesting image also because there is a fill light element to the image which, to me appears like it could be due to low sunlight reflecting off an ice wall or bank behind the photographer (as opposed to a camera flash for example).  The overall lighting effect is eerie and almost other-worldly.  The bird's mantle shade appears too dark for Vega Gull Larus [argentatus] vegae but it's bill and legs and the ice around the bird are also oddly dark, and this reveals the actual exposure issues with this image.  This is a good example of the kind of bizarre lighting anomalies that can be associated with snow and ice.  I'd like to thank Amar Ayyash for drawing my attention to this image.

Wednesday, 19 November 2014

Forensics: Gaussian Analysis - White Balance

The very Gaussian nature of white balance should be recognisable to most.  A Grey Card is like a compass where white is magnetic north and there are three axes instead of two.  Manual white balance correction is much like navigation using ones senses, without the aid of a compass.  Those experienced in navigation without a compass will be able to make do under most conditions but bias can easily throw us off course, so a compass is required for accurate, reliable navigation.  For instance, it has been shown that if a person is blindfolded (or navigating in a snow storm or in fog) and asked to walk a long straight line, they will end up walking in a wide circle.  The other senses, trying to compensate, will lead us astray.

The white balance which we are most familiar with is Kelvin colour temperature - the blue-yellow axis (B-Y).  This is the colour tint introduced by the sun's position in the sky (Rayleigh scattering).  HERE I discussed another familiar scenario where green light transmitted by a foliage canopy bathes a scene in a soft, diffuse, green light.  This can only be corrected for by having access to the green-magenta axis (G-M).  Normal manual white balance tools do not cater for this axis.  In Adobe Photoshop this axis is referred to as Tint and is provided for in Camera Raw.  

Red colour tints are common in nature, particularly in desert areas where the soil and sand have a high iron content.  Coming from a temperate climate myself, it was immediately noticeable to me how the red soil of Australia transforms the colour of everything, even of the blue sky on that continent, such is the volume of red light reflecting from the land.  So, what of the third axis, red-cyan (R-C)?  If we have access to the colour temperature axis (B-Y) and the tint axis (G-M) we don't actually need access to R-C.  In my experience we are not very good at navigating via the R-C axis anyway.  When I worked in photofinishing I observed that everyone in the lab naturally drifted towards colour correction using G-M and B-Y.  Studies of the eye reveal that we have twice as many green sensors (cones) as we do blue and red.  This might explain why most people seem to be drawn towards correcting the green-magenta axis before any other.  Despite having fewer blue cones, I guess we are also quite well tuned in to the B-Y axis due to our daily experience of observing the behaviour of sunlight.  For more on the anatomy of the eye (and camera sensor) see HERE.
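Grey-card correction itself, the "compass" described above, is straightforward arithmetic: compute the per-channel gains that return the card to neutral, then apply them to the whole image.  A minimal sketch (the RGB values are made up for illustration):

```python
import numpy as np

# A grey card photographed under warm, low-sun light: the card should be
# neutral but comes out with red boosted and blue suppressed.
card_rgb = np.array([0.60, 0.50, 0.35])

# Per-channel gains that return the card to neutral grey.
gains = card_rgb.mean() / card_rgb

# Apply the same gains to every pixel.  The second pixel below carries the
# same colour cast as the card, so after correction it comes out neutral.
image = np.array([[0.30, 0.25, 0.18],
                  [0.66, 0.55, 0.385]])
balanced = image * gains
```

Because the gains are derived from a known-neutral reference rather than judged by eye, the correction is repeatable: no drifting in circles without a compass.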

I recently came across this animated gif which I created some years ago.  It neatly shows the compass-like nature of white-balance correction and the trial and error of navigation without a compass.  CLICK on the image to enlarge and run the animation from the start.

In summary, if we are interested in the forensic analysis of white balance we need grey card calibration.  Manual white balance correction can be done with experience but cannot be 100% relied upon.  For more on white balance see HERE.

Monday, 17 November 2014

Colour Sampling - Sample Homogeneity and a Defocus Analysis Technique

Colour Sampling Technique
Colour reproduction and analysis is complex.  I have gone into it in reasonable detail in a number of postings.  If you are coming at this for the first time you might like to start at the page I have devoted to colour, HERE.

Let's assume that we have properly calibrated everything and got our colour management right.  Now we want to sample and analyse the colours in an image.  In the posting HERE I outlined a simple and effective technique for sampling colours consistently from digital photographs.

Challenges of Lighting and Defocus
What happens if an area which we wish to sample is defocused or hidden in shadow?  Should we even sample it?

We know that light and shade affect every object in an environment.  Lighting is complex and hard to analyse at times.  Colour sampling must take account of the lighting conditions and suitability of patches being sampled.  For more on some of the complexities see HERE.

In this posting HERE I explained the mechanism by which defocus works.  I also elaborated on the analysis of defocus HERE.  We know that defocus (out of focus areas of the image) affects every pixel in defocused areas, reducing contrast as well as spreading out and influencing other pixels around it.  Defocus can even potentially influence neighbouring pixels within apparently sharply focused areas (if there is for instance a defocused object between the infocus subject and the camera).  If defocus blends and merges colours how can we be sure that the areas we are interested in sampling have not been tainted by the colours around them?  Or worse, could there be defocused objects between the subject and camera that we can't even see as they are defocused to the point they are for all intents and purposes, 'fully dissolved'?  And if so, could these, hidden objects taint the colours we are sampling?

A Simple Solution
I am going to use the Gaussian Blur tool to create a blurred copy of an image and then, in turn, compare this copy with the original, looking for differences in homogeneity between the two.  The Gaussian Blur tool is a simple transformation which replaces each pixel with a weighted average of its neighbours, the weights falling off in a Gaussian (bell-curve) pattern.  If the patches being sampled are already homogeneous then a slight Gaussian Blur won't affect the patch too much (perhaps a slight reduction in local contrast, with hue hopefully unchanged).  If however there is a big change it might indicate that there is much more going on within the sample patch than initially meets the eye - perhaps it is not an ideal sample patch after all.

The theory is that defocus blending radiates out from the defocused parts of the image and the extent to which it radiates out is dependent on the level of defocus applied.  Further defocusing the entire image might enhance some of this blending.  It might also reveal if a sample patch is dangerously close to the edge of other colour patches or markings that could taint the purity of the colour being sampled.

Step 1  Create a blurred layer
I am using Adobe Elements for this analysis.  Other packages may have similar tools.  First step is to open the image and duplicate the image as a separate new layer (renamed 'Blurred Layer' above).  To the new layer I have added a Gaussian Blur of radius 3.0 pixels.

Step 2  Sample and posterize the same points on both layers
I am using the MS posterizing tool 'Cutout' for sample posterizing.  Basically this homogenises the colour patch to make it easier to sample correctly.  It is quick and effective and accessible in any of the MS Office suite, though I tend to use MS PowerPoint 2010.  As demonstrated HERE I select an area to be analysed then copy and paste that area into, say, MS PowerPoint.  I process the patch using the Cutout tool, then I cut and paste the patch into MS Paint where I read its Hue, Saturation and Luminosity values.

The additional step here is that I do the same for both the original and blurred image layers.  By keeping the patch selection open in Adobe Elements and merely toggling between the original and blurred layers I can retain exactly the sample selection area.  This in turn allows for a direct comparison of the colour of the original image patch versus the blurred copy that I have made.
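The whole procedure can be mimicked numerically.  The sketch below (pure numpy, with a hand-rolled separable Gaussian blur standing in for Elements' Gaussian Blur tool) builds a two-tone test "image", blurs a copy, and compares a patch well inside a uniform area with a patch sitting close to the colour edge:

```python
import numpy as np

def gaussian_blur(img, sigma=3.0):
    """Separable Gaussian blur, mimicking Elements' Gaussian Blur tool."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1, dtype=float)
    kernel = np.exp(-x ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()
    padded = np.pad(img, radius, mode='edge')
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, 'valid'), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, 'valid'), 0, rows)

# A two-tone test "bird": pale crown on the left, dark mantle on the right.
img = np.full((60, 60), 0.8)
img[:, 30:] = 0.2
blurred = gaussian_blur(img, sigma=3.0)

def patch_shift(original, blurred_copy, patch):
    """How much the mean of a sample patch moves after blurring."""
    return abs(original[patch].mean() - blurred_copy[patch].mean())

safe_patch  = (slice(20, 30), slice(4, 14))    # well inside the pale area
risky_patch = (slice(20, 30), slice(24, 30))   # right up against the edge

shift_safe  = patch_shift(img, blurred, safe_patch)
shift_risky = patch_shift(img, blurred, risky_patch)
```

shift_safe stays essentially zero while shift_risky moves substantially, flagging the second patch as tainted by colour blending from the adjacent dark area - the same feedback the toggling-between-layers comparison provides in Elements.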

Step 3 Comparing the results
Taking this Great Shearwater Puffinus gravis image as an example.  I have selected a patch of colour on the bird's crown.  The patch is on a brightly-lit, uniform part of the bird, well clear of the edges with nothing in the foreground to influence the image.  The hue, saturation and luminosity values remain very similar after defocusing the image, so this should be a good, reliable location to sample colour from, even though the bird is slightly out of focus in the original image.

The next point which I have sampled is on the breast.  Obviously the breast should be white but this patch is in shadow, lit by blue sky light.  What is interesting here is that there is a notable difference between the values in the original versus the blurred image.  This I believe is due to the amount of variation in the selected sample area - i.e. the sample area is not homogeneous.  This variation in turn increases the difference between the original and blurred images.  Though the original feather was white, there is a complex tonal gradient laid down by the shadow pattern.  So I am getting some useful feedback here, telling me that this may not be an ideal sample location.

The third patch (located on the rear secondaries) was the smallest of the three sampled patches in size yet it has produced the biggest variance between the original and blurred images.  There are a couple of separate things going on here I think.
- Firstly, the patch is very close to an edge between two very different colours.  Defocus has slightly blended these colours.  The fact that the original image was also defocused should raise alarm bells.  The original image probably suffers from some colour blending in this area.
- The patch is also in shade, so as with the earlier breast patch, the tonal gradient possibly adds further variation across the sample patch.  So, clearly this is not a good patch to sample for colour.

In summary, the technique has a few facets to it.  It helps us locate a good, homogeneous sample point.  It also flags up potential sample impurities such as the potential for defocus and variable luminance to blend colours.  It might even help to detect hidden objects between the subject and the camera which may have gone unnoticed when the photograph was taken and are now 'fully dissolved' by defocus.  Even though hidden, their ghost impressions may register as slight tonal gradients or similar anomalies, thus registering as an impurity or drop in homogeneity across the sample patch.

Quality Control when sampling colours from digital images
The original sampling method presented HERE didn't consider the quality of the colour patches chosen for sampling.  As it turns out, one of the novel advantages of this method is that it introduces a measure of quality control to the sampling process.  When sampling colour, we are trying to minimise variation throughout the patch being sampled.  Posterising the patch later removes any remaining variation but, in doing so, may inadvertently mask impurities in the colour being sampled.
This simple Defocus Analysis introduces a QC check to the colour sampling process, which can only be a good thing.

Saturday, 15 November 2014

Forensics: Gaussian Analysis - Defocus

Scope and Objective
In an Introduction to Gaussian Analysis (HERE) I discussed the anatomy of different image quality parameters and observed how very many of them follow a Normal or Gaussian distribution around a position of optimum quality.  In the posting Focus Anatomy I explored some of the general concepts and terminology around focus and defocus ('out of focus').  I explained that a perfectly focused point (Airy disk) does not exhibit a perfectly Gaussian signature but displays qualities of both particles and waves.  Around a very Gaussian central focused point of an Airy disk lie concentric rings which are formed due to diffraction (a wave property of light).  There are also anomalies introduced by the camera lens and processor which further distort focus.  These include the aperture, the type of lens and the demosaicing step during processing.  Despite all of this, focus can still be very closely approximated using Gaussian analysis as outlined below.

The Gaussian Signature for Defocus (the Gaussian Blur tool)
The Gaussian Blur tool essentially presents the Gaussian signature for defocus.  This tool has long been available in Adobe Photoshop for artificially creating defocus in images.  When we use the Gaussian Blur tool in Adobe Elements and compare the results with images created naturally, there is a startling similarity between the two.

However, as explained above, defocus does not exactly follow a Gaussian distribution.  Defocused points tend to have a less than perfect shape and don't tend to have a perfectly neat Gaussian gradient from centre to edge.  Some characteristics of natural defocus and the Gaussian blur tool are very similar however, such as the overall rate at which objects lose overall contrast and, more especially, edge contrast (acutance) as they are progressively defocused and dissolved.  There is also a relationship between the contrast of a defocused point and its relative size - objects expand in size as they are increasingly defocused.  The effect is quite like the expansion of a balloon.  When a mark is placed on a deflated balloon and the balloon is then inflated, the mark gets bigger but at the same time becomes lighter and less clearly defined.  The analogy isn't a perfect match for defocus but there is a similarity.

The value of all of this is that we can, in theory, re-engineer a defocused image in an effort to understand some of what is going on.
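
As a rough illustration of the signature itself, here is a small numpy sketch of a separable Gaussian blur.  It is a stand-in for the Gaussian Blur tool rather than Adobe's actual algorithm, but it shows how acutance falls away as the blur radius grows:

```python
import numpy as np

def gaussian_blur(grey, sigma):
    """Separable Gaussian blur on a 2-D greyscale array -- a minimal
    stand-in for the Gaussian Blur tool, with sigma in pixels."""
    x = np.arange(-int(3 * sigma), int(3 * sigma) + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, 'same'), 1, grey.astype(float))
    return np.apply_along_axis(lambda c: np.convolve(c, k, 'same'), 0, out)

# A hard black-to-white edge loses acutance (edge contrast) as the
# blur radius grows, mimicking progressive defocus.
edge = np.tile(np.repeat([0.0, 255.0], 20), (10, 1))
for sigma in (1, 3, 5):
    step = np.abs(np.diff(gaussian_blur(edge, sigma)[5])).max()
    print(sigma, round(step, 1))
```

The steepest gradient across the edge shrinks steadily with each increase in sigma - the 'dissolving' effect described above, in numbers.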

Lens Tool in Adobe Photoshop/Elements
There is a new tool in the Adobe tool bag which may offer an even closer approximation to real camera defocus for an individual camera lens.  With this tool it is possible to distort the blur effect taking into account aperture and some lens distortions, even noise and specular highlights.  I could envisage a situation in the future where, with careful study of the characteristics of lenses, someone might develop a ready-reckoner for use with the Lens Tool.  One could select the lens used and various lens settings and the ready-reckoner would spit out a list of corresponding settings for the Lens Blur tool to help recreate the focus conditions of the lens.  A good exif programme (like Opanda IExif 2) can provide much of the lens information needed for such a ready-reckoner, including lens type, f-number, aperture value and focal length.  Unfortunately I haven't been able to find any ready-reckoners for the Lens Tool online.  Playing around with focus using the Lens Tool is liable to introduce more uncertainty than clarity, so for now I advise sticking with the much simpler and more reliable Gaussian Blur tool.

The Plan
Here is my proposed recipe for analysing the Gaussian signature for Defocus.

Step 1 Careful Review of the Images
Once again it pays to spend time carefully poring over all of the available images, even if they are out of focus.

Step 2 Re-engineer Defocus
Because we are not used to studying things defocused it can be difficult to appreciate the extent to which defocus reduces contrast, increases the relative size of things, and ultimately dissolves detail.  Perhaps the best way to appreciate all of this is to take a sharp image, artificially defocus it, and compare the results with the defocused image we are analysing.

Take for example these South American (Magellanic) Snipe Gallinago paraguaiae magellanica from Chile.  Let's for argument's sake say that someone has queried the rear, out-of-focus bird as being a Puna Snipe Gallinago andina.  We probably don't have enough information from the image to say one way or another, but as an interesting exercise it might help to put the bird in front into the same level of defocus to see how they compare.

Using Adobe Elements I have isolated the snipe in the foreground using the lasso tool and defocused it using the Gaussian Blur tool.

Interestingly, while the level of defocus is very similar, if we look closely we can tell that the bird in the background has a different defocus pattern.  Its bokeh is less appealing owing to halos in the defocused image.  There is for example a pale halo around the eyering.  This example shows some of the drawbacks in this analysis.  The Gaussian Blur tool will only go part of the way to replicating the focus parameters of the lens or other conditions.  On the other hand, this example shows how an anomaly such as this 'double eye-ring' can be explained as a lens design issue (or possibly motion blur - the rear bird may have moved during the exposure) and not a genuine field mark.  Luckily I captured other images of the rear bird which helped clarify its true identity, and confirm that this mark around the eye is indeed just a focus anomaly.

This might not be the most inspiring example but hopefully it illustrates the main point.  We frequently come across cases where field marks are obscured or missing in out of focus images.  The question arises - what were these marks like 'in focus'?  By re-engineering defocus from a sharp image we can at least understand the circumstances a bit better and hopefully make a reasonable assessment of the evidence.
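
Taking the idea a step further, the matching could even be automated: blur the sharp bird at a range of radii and keep the radius whose result best matches the defocused bird.  A toy numpy sketch, tested on synthetic data (the function names and the candidate sigma range are my own illustrative choices):

```python
import numpy as np

def blur(img, sigma):
    # Minimal separable Gaussian blur (a stand-in for the Gaussian Blur tool).
    x = np.arange(-int(3 * sigma), int(3 * sigma) + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    img = np.apply_along_axis(lambda r: np.convolve(r, k, 'same'), 1, img.astype(float))
    return np.apply_along_axis(lambda c: np.convolve(c, k, 'same'), 0, img)

def match_sigma(sharp, defocused, sigmas=np.arange(0.5, 6.5, 0.5)):
    """Find the blur radius that best maps the sharp region onto the
    defocused one -- i.e. 're-engineer' the level of defocus."""
    errors = [np.mean((blur(sharp, s) - defocused) ** 2) for s in sigmas]
    return sigmas[int(np.argmin(errors))]

# Synthetic check: defocus a random texture with sigma = 2 and recover it.
rng = np.random.default_rng(1)
sharp = rng.uniform(0, 255, (40, 40))
print(match_sigma(sharp, blur(sharp, 2.0)))   # -> 2.0
```

On real photographs the recovered radius would only ever be approximate, for all the lens and aperture reasons discussed above, but it gives the comparison an objective starting point.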

Step 3 Consider other factors
Because we know that Gaussian analysis of focus only goes part of the way to explaining defocus, we must be mindful of other factors, including the effects of lens design and aperture.  Also, defocus isn't the only form of blurring in images.  Motion blur due to movement of the camera and/or subject during image exposure can look quite similar to defocus but has quite a different blur pattern and mechanism behind it.  Motion blur often produces two ghost images offset from one another with a hazy smeared image between them.  The rearmost snipe and background above appear to show a double image.  It is possible that both the snipe and the grass were in motion during this exposure, though bear in mind that lens design and in particular aperture shape can produce a similar effect.

Step 4 Document 
We document our methods and results so others can check and verify the evidence.

Thursday, 13 November 2014

Forensics - Focus Anatomy

From Bokeh to Airy Disks

Focus is a complex area within the science of Optics.  However at the heart of it are some fairly simple concepts which we will draw on here.

Two strands of Optics 

Light has the characteristics of both particles and waves and their relationship is still not totally understood.  Geometrical Optics focuses on the more straightforward, particle nature of light and allows for a simpler, though less exacting analysis of optical phenomena.  Physical Optics includes the more complex, wave properties of light including interference, diffraction and polarisation.  

If we consider the focus of a discrete point of light by a lens, an Airy disk represents the best focus possible by a lens for that finite point of light.  Presented graphically, an Airy disk's distribution is close to, but not quite, Gaussian in shape.  This is an example that demonstrates both the particle and wave nature of light.  As the link shows, an Airy disk has a prominent, circular centre, fading towards the edge, which fits a Gaussian pattern.  But outside that are radiating concentric rings of dark and light bands.  Much like the penumbrae circling a shadow (HERE), these radiating bands (caused by diffraction - a wave property of light) are generally too faint to see, so a point of light focused by a lens normally appears to have a simple, circular shape, clearest at the centre and fuzzier towards the rim.  If we leave aside the radiating rings, which are rarely visible anyway, the centre disk of an ideally focused point can be represented graphically very closely as a Gaussian distribution around the centre point.
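
For the curious, this closeness can be checked numerically.  The sketch below (plain numpy, with the Bessel function J1 evaluated from its integral form) fits a Gaussian to the central disk of the Airy pattern and reports how small the worst disagreement is:

```python
import numpy as np

def airy(x):
    """Airy intensity pattern (2*J1(x)/x)^2.  J1 is evaluated from its
    integral representation by the midpoint rule, so only numpy is needed."""
    tau = (np.arange(4000) + 0.5) * np.pi / 4000
    j1 = np.cos(tau[None, :] - np.outer(x, np.sin(tau))).mean(axis=1)
    return (2.0 * j1 / x) ** 2

# Fit a Gaussian to the central disk (out to the first dark ring at
# x ~ 3.83) and measure the worst disagreement.
x = np.linspace(0.05, 3.83, 400)
target = airy(x)
sigmas = np.linspace(1.0, 2.0, 101)
best = min(sigmas, key=lambda s: np.sum((np.exp(-x**2 / (2 * s**2)) - target) ** 2))
gap = np.abs(np.exp(-x**2 / (2 * best**2)) - target).max()
print(round(float(best), 2), round(float(gap), 3))
```

The largest gap between the fitted Gaussian and the true central disk comes out at only a few percent of peak intensity, which is why the Gaussian approximation works so well in practice.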

Focus Anatomy

Defocus simply means out of focus.  When correctly focused, light waves from an in focus point on the subject will sharply converge at a point on the sensor (referred to as the film plane).  If the object is Inside Focus it means the light has converged at a point between the lens and sensor.  The waves meet, then diverge again before they arrive at the sensor defocused.  Outside Focus means that the light waves have a trajectory that would see them converge beyond the sensor (film plane) so they hit the sensor before they ever meet and are therefore again defocused at the film plane.  In an ideal lens, inside and outside defocused images will look very similar - if they seem different it could indicate a lens aberration.

Note in the image above there are various points on the image referred to by letters V, P and F.  These are referred to as Cardinal Points and are an essential part of Geometrical Optics, or more specifically Gaussian Optics.  These can be used to work out lens focal length, approximate distance to the subject and so on.

Depth of Field is the range of distances from the lens that appear to be in simultaneous focus in an image.  Depth of field can be adjusted by changing the size of the aperture in the lens through which light passes on its way to the sensor.  A wide-open aperture produces a narrow depth of field while a narrower aperture produces a wider depth of field.  The trade-off with a narrower aperture is that less light gets through the lens, so the exposure time must be lengthened to compensate.
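
This trade-off can be put on a rough numerical footing with the standard thin-lens depth-of-field approximations.  The figures below are purely illustrative - a 400 mm lens, a subject at 10 m and a 0.03 mm circle of confusion (a common full-frame assumption) - and are not tied to any particular camera:

```python
# Depth-of-field sketch using the standard thin-lens approximations.
def dof(focal_mm, f_number, subject_mm, coc_mm=0.03):
    h = focal_mm**2 / (f_number * coc_mm) + focal_mm   # hyperfocal distance
    near = subject_mm * (h - focal_mm) / (h + subject_mm - 2 * focal_mm)
    far = subject_mm * (h - focal_mm) / (h - subject_mm)
    return near, far, far - near

for f_number in (4, 8, 16):
    near, far, depth = dof(400, f_number, 10_000)
    print(f"f/{f_number}: {depth / 1000:.2f} m of sharp focus")
```

Each two-stop narrowing of the aperture roughly doubles the band of sharp focus around the subject, exactly the behaviour described above.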

Aperture itself actually plays a fundamental role in the search for the ideal focus point (Airy disk) because the shape of the disk created by a point of light is related to the shape of the aperture.  A hexagonal aperture will produce hexagonal points of light on the image.  While this may not be very noticeable in bird images, it is a particularly important point for those interested in sharp focus at fine points of light (eg. in astronomy).  Defocus enhances aberrations such as this.  Those who have photographed using mirror lenses will be familiar with the donut shapes evident in out-of-focus regions of photographs.  This is due to the ring-shaped (annular) aperture of such lenses.

Bokeh is the aesthetic quality of blur.  Most people would consider "good bokeh" to be an image which shows smooth transition between sharp and defocused (out of focus) parts of an image and where the defocused parts of the image do not distract from the main subject.  "Bad bokeh" would include off-putting defocus such as halo-shaped patterns from mirror lenses or other oddly-shaped or distracting anomalies. While the aperture is a common cause of bad bokeh, different lens types and aberrations within lenses all contribute to a less than perfect image. For more on all of these elements see this nice posting HERE.

Defocus Layering

When we consider defocus we have to consider the relative position and distance of objects in an image.  Objects which are closer to the camera obscure objects which are behind them.  If we are focused on a closer object, those in the background will defocus without impacting on the foreground object.  If however we focus on a background object, the foreground object will defocus, and in doing so the foreground has the potential to spoil the background image as illustrated below.

Note however that if the foreground object is small enough, and defocused fully, it will effectively dissolve out of shot.  No doubt there will always be some residual impact and this shouldn't be forgotten, but the impact left by a fully dissolved, defocused object is often nearly impossible to detect.  This is why so many film-makers use this cool technique in movie-making.  It has a magical property to it.

I find it useful to consider this effect in terms of layering.  For those familiar with layers in Adobe Photoshop and other imaging packages, layers work like pages in a book.  The top page gets priority and masks what is below it.  This is how defocus works also.  Defocused objects can only affect those objects behind them in an image.

Image Formation and Subsequent Analysis

Having obtained a pin-sharp focus on the film plane, captured by an expensive lens, it may be frustrating to discover that the first thing the camera processor does is blur and distort the image during a process called Demosaicing.  I have explored this process in more detail HERE.  After demosaicing, a digital image is created which consists of discrete pixels that are each considerably larger than the smallest point originally resolved by the lens onto the film plane.  Regardless of the quality of the lens and the exactness of the focus disk obtained by the lens, the final image is now resolved in the form of tiny square pixels which have been created following a fairly high degree of processing and interpolation.  Faced with the realisation that digital image formation and focus is already far from perfect, it shouldn't be too much of a stretch to conclude that a Gaussian analysis of focus in digital images should be a reasonably acceptable compromise.

Eliminating Defocus Forever

A relatively new technology called Light Field Photography is set to revolutionise this whole medium.  If it becomes the norm, in future there may be no need to consider defocus at all as part of digital image analysis and quality.  An image obtained by a light field camera has a life of its own.  The most recent camera from the manufacturer Lytro is reviewed objectively HERE.  If you are not familiar with this exciting new technology, I recommend you check it out.

For my sci-fi take on where this is all headed check out my 'just-for-fun' camera to end all cameras posting HERE.

Sunday, 9 November 2014

A Camera to end all cameras

Behold the CEC 2100

Last night I had a dream...

It is a minute to midnight on new year's eve, 2099.  While most revellers are choosing to have a traditional centenary celebration, geeks the world over have organised CEC parties and are in eager anticipation of the midnight launch of the world's best ever camera - the CEC.  They have heard all the rumours but CEC Inc. have been smart in their campaign and only released a single image to promote their groundbreaking new camera.  And here it is.
In the minutes leading up to midnight the manufacturer has been flashing specifications for the CEC across the internet and the mouthwatering list is epic.

- The outer coating is an unbreakable, bullet-proof 100% transparent polymer.
- Beneath this sit over 10 trillion sensors covering the entire surface of the CEC sphere.
- The sensors sit on a carbon-silicone matrix that transfers data from the sensors to the 100 terabyte processor beneath.
- There is no memory in the camera.  The camera streams constantly to the ether.
- The power source is electromagnetic energy.  The energy which the camera records is the energy which powers the camera!
- Each sensor is a powerful spectrophotometer with some added features.  It can record all imaging wavelengths of the electromagnetic spectrum from microwave to soft X-ray.  It can also accurately record the distance from wave emission source to the camera.
- There is also a pin sharp 360 degree surround sound recording chip which can be married with the visual images to produce a highly powerful video log.
- The neatest kit of all is an ion pulse system which prevents dust, grease and moisture resting on or clinging to the surface of the sphere.  Holding the sphere in one's hand, the feel is of soft velvet when the ion pulse unit is powered up.
- The CEC comes with mounting accessories so it can be mounted in different ways for different purposes.  Some of the mounts contain lenses which bend light so none of the 360 degree recording range of the camera is lost.
- The CEC can effectively focus from 0.1 mm to infinity but its high resolution range is approx. 200 metres. 
The CEC has no lens, no shutter, no viewfinder and no button of any sort.  It records a full 360 degree image every 10,000th of a second and comes with a 100 year warranty.

Some CEC geeks with more money than sense have already purchased the 'CEC Lifetime Package' for their children - 100 years of ether storage backup guaranteed so their children can record and access their entire lives with this device if they wish.  Universities and businesses are already contacting their lawyers seeking to ban the use of CEC devices within their walls!  Human Rights groups are mobilizing across the planet.

Birders on the other hand are in eager anticipation of this product because of an accessory that comes with it, the CEC spectrum.  Finally it is time to hang up the bins and scope.  Swarovski, Leica, Canon, Nikon and other formerly major players have taken out plenty of shares in CEC Inc and are now focused on other areas of their business.  The CEC is the only show in town!

The real power of the CEC is in its user interface.  Birders in the early 21st century would be familiar with street-view in Google Maps.  The interface of the CEC is similar but much faster and sleeker, and the image has smooth movement and surround sound!  In live view, using the CEC spectrum head set, complex recognition software is available which is capable of identifying birds based on plumage, voice and gestalt out to a range of up to 1 km.  Voice command recognition will immediately zoom the CEC spectrum to a chosen target and anti-motion software keeps the image motion-free, even on a boat!

The genesis of this project starts with the complete re-think of the digital camera in the mid-21st century.  Manufacturers decided to stop trying to make cameras that emulate human vision and to start thinking well outside the box.  The sky is the limit!

A closing word from the CEC CEO at the end of his centenary launch.  "By 2121 CEC will have reached critical mass.  Every square inch of every major city on the planet and every corner of every national park and wildlife reserve will be on permanent record.  No crime will go unpunished.  There will be nothing left to be discovered".  

By 2121 "the CEC sphere" has uncovered 100,000 new species for science, including a new species of bird living on Manhattan Island!

Saturday, 8 November 2014

Forensics : Gaussian Analysis - Overexposure

Scope and Objective
In An Introduction to Gaussian Analysis (HERE) I outlined how most image quality parameters exhibit a Gaussian distribution around an optimum quality standard.  This presents an opportunity to look for 'Gaussian Signatures' left behind in digital images.  Here I am analysing the Gaussian signature for overexposure.  I am looking for evidence that would indicate if data (eg. subtle field marks) may have been lost due to overexposure in an image.  I am also looking to prove the opposite - in the absence of a Gaussian signature for overexposure is it reasonable to assume that there is no loss of detail due to overexposure clipping?

The Gaussian Signature for Overexposure
Overexposure works rather like an image brightening tool and has the effect of pushing the histogram to the right (for more on histograms see HERE).  However, unlike a brightening tool which stacks detail up on the right hand side of the histogram, overexposure, like a conveyor belt, simply pushes tonal data off the edge of the histogram (clipping).  Progressive overexposure causes fine image details and colours to simply vanish.  Before detail vanishes it gets progressively paler and approaches pure white in colour (sRGB R=255, G=255, B=255).  This becomes the Gaussian signature for overexposure.

In the experiment above I have highlighted the Gaussian signature of overexposure in blue.  Before our target vanishes it is consumed by sRGB white.  I equate the signature to an amoeba devouring a food item, or a flood submerging objects throughout an environment.  There are two useful conclusions.

(1) It is always worth checking images for blown highlights (generally anything above level 250 might be considered a blown highlight).  Blown highlights indicate overexposure and may indicate masked detail.  Bear in mind though that detail will not clip evenly in all channels.  The Blue and Red channels may be clipped (level 255) but Green may still harbour a ghost impression of the detail (level <255).

(2) Having identified blown highlights it is worth analysing the distribution of sRGB pure white.  This is the proper clipping line (all channels at level 255).  The presence of sRGB white indicates clipping - image detail has been lost beyond this point.  The absence of sRGB white should indicate that clipping has NOT taken place in the image, so the remnants of fine detail and colour should be present, if somewhat masked and hard to discern.  Refer to postings HERE and HERE for tips on how to recover hidden image data from RAW and JPEG images respectively.

The Plan
Here is my proposed recipe for analysing the Gaussian Signature of Overexposure.

Step 1 Careful Review of the Images
Once again it helps to carefully review all the images first.  Look for lighting and exposure patterns and document.

Step 2 Analyse clipping
HERE I outlined how image histograms can be used to analyse clipping.  I recommend the Adobe Elements Levels tool.  By selecting the white dropper tool in Levels and by pressing the Alt key while the dropper is over the image the tool will automatically highlight the clipped areas of the image.  Individual colour channels can even be selected for individual scrutiny.  Very handy!
Requirements: Adobe Elements
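
For those working outside Elements, the same clipping check is easy to script.  A small numpy sketch (the function name is my own; the 250-level threshold follows the rule of thumb above):

```python
import numpy as np

def clipping_report(img, threshold=250):
    """Flag blown highlights channel by channel, much as the Alt-key
    trick with the Levels white dropper does in Elements.
    img: H x W x 3 uint8 array.  Returns the fraction of pixels at or
    above the threshold in each channel, and in all channels at once."""
    report = {}
    for i, name in enumerate(("R", "G", "B")):
        report[name] = float((img[..., i] >= threshold).mean())
    report["all channels"] = float((img >= threshold).all(axis=2).mean())
    return report

# A half-white, half-black test image: 50% clipped in every channel.
img = np.zeros((10, 10, 3), dtype=np.uint8)
img[:, 5:] = 255
print(clipping_report(img))
```

Reporting each channel separately matters because, as noted above, detail rarely clips evenly across all three.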

Step 3 Replacing sRGB white with a tracer
Here I recommend using the Color Quantizer tool (freeware available online) to replace sRGB white pixels with a colour tracer.  This is done by first posterising the image to 4096 colours.  Next, arrange the colours by luminosity in ascending order.  Then right-click on the top left hand colour (which should be sRGB white) and replace this with a vivid colour to act as a tracer.  Finally, re-save the image as a new file.  The tracer colour in the new image highlights fully blown highlights.
Requirements: Color Quantizer tool (freeware).
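
The same tracer substitution can also be sketched directly in numpy for those without the Color Quantizer tool.  Magenta is an arbitrary tracer choice, and the function name is my own:

```python
import numpy as np

def trace_clipped_white(img, tracer=(255, 0, 255)):
    """Replace every sRGB pure-white pixel (255,255,255) with a vivid
    tracer colour -- the same end result as the Color Quantizer recipe.
    Returns the traced image and the number of pixels replaced."""
    out = img.copy()
    blown = (img == 255).all(axis=2)
    out[blown] = tracer
    return out, int(blown.sum())

img = np.full((4, 4, 3), 255, dtype=np.uint8)
img[0, 0] = (254, 255, 255)          # bright but not fully clipped: left alone
traced, n = trace_clipped_white(img)
print(n)   # -> 15
```

Note that only pixels clipped in all three channels are replaced - a very bright but not fully blown pixel survives untouched, which is exactly the distinction the analysis depends on.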

If it is suspected that detail in an area of the image has been completely lost due to clipping, that area of the image should appear completely 'submerged' by the tracer colour.  On the other hand, if the area is not submerged it should indicate that clipping has not occurred in that area of the image.

Step 4 Document
Once again, in order for others to be able to verify the results it is important to document all steps.

Friday, 7 November 2014

Forensics : Introduction to Gaussian Analysis

Resolving and 'Dissolving' data 
While tuning a radio, static (signal noise) will give way to a barely coherent signal, followed by progressively less static until a clear signal is resolved.  Continuing to tune beyond this point will produce an opposing pattern of increasingly poor signal to noise ratio, until once again all that remains is static.  This is a typical Gaussian or Normal signal distribution.

A digital image is also created from electromagnetic radiation, and the various parameters that go into making up a digital image also, for the most part, follow Gaussian distributions.  Of the five image quality parameters used in the Image Quality Tool, only Resolution works rather differently to the rest.  In theory there is no limit to the maximum resolution of a digital image, though beyond a certain point lens quality starts to max out image sharpness so image quality reaches a ceiling.

The other parameters, focus, exposure, white balance, and the occurrence of digital artefacts all largely follow a Gaussian distribution, as illustrated below.

Not only do these quality parameters share a similar distribution around a 'normal' or optimum image, many of them transform images in a very similar way.  Reduced resolution, defocus and both under- and over-exposure all have the effect of basically 'dissolving' images in one way or another.  The result is a gradual loss of image definition, acutance (edge contrast) and tonal range, until ultimately detail and colours become obscured.

I believe that it may be possible to study the Gaussian patterns of each of these image quality parameters and find characteristic signatures that we can then use to estimate the level of image data loss occurring.  For example, in a photograph where bright sunlight pushes dynamic range beyond a certain point, it might be possible to establish, based on the degree of brightness of pixels in an area of an image, the likely level of image detail loss that might be occurring in that area.  Or, based on the level of defocus in an image, we might be able to establish if a feather fringe is likely to have been obscured or not.

Obviously, if data is lost or never existed in the first place we can never retrieve it, or be 100% certain it existed.  But perhaps we can still make useful inferences based on the conditions under which the image was produced.  For now I am just introducing the concept.

Birds and Light - Translucency

When light hits an object it is either absorbed by it, reflected off it (wholly or in part), or transmitted through it (wholly or in part).  In this way objects can be described as opaque (they do not transmit light), translucent (they partly transmit light) or transparent (they fully transmit light).

Feathers and even the bare parts of birds are translucent.  Their level of translucency is dependent on their thickness and the density and colour of pigmentation.  The amount of light transmitted also depends on the angle and brightness of the light source behind the feather.  For most birds, it is only the flight feathers, remiges (primaries and secondaries) and the rectrices (tail feathers) in flight that are noticeably translucent, because the other feathers tend to hug the body most of the time.

The image above neatly shows how a subtle change in the angle of a bird's wing relative to the position of the observer and sun can totally alter the type of lighting illuminating all or part of it. Direct sunlight, reflecting off of both the upper and under-wing surfaces simultaneously produces a very similar pattern on both the upper and under surfaces of the primary feathers of this Common (Black-billed) Magpie.  Contrast this with the less bright, predominantly blue light from the sky which is transmitted through the translucent primaries to the camera.  The light makes it through the white portion of the feather with minimal disruption, so much so that light can be seen to pass through the white portion of two overlapping feathers.  The light doesn't get through the outer vane or the tips of the primaries quite as well.  The black pigment in these areas is far more effective at blocking the light, though there is the faintest impression of blue light transmission on the inner vane, nearest the feather tips.  Clearly, light passing through feathers can complicate things greatly!  For more on the properties of translucent materials see (HERE).

I have conducted an experiment using pigment targets on white paper in which I illuminated the targets first from in front of the page (creating a reflected image), then from behind the page (creating a transmitted image).  In both cases I created a series of camera exposures from completely underexposed - a fully black image, to completely over-exposed - a fully white image.  The results are presented below and show there is really no major difference in the characteristics of image exposures regardless of whether the light is reflecting off of a surface or transmitting through it, other than that the internal structure of the material is illuminated by light passing through it.

See also HERE

Sunday, 2 November 2014

Birds and Light - Image Lighting Tools

Grey is the new Black...and the new White

Black and white paints are rarely applied neat to canvas because one of the first things we are taught in art class is that highlights and shadows usually contain some colour.  In art, black is mixed with a colour paint at various concentrations to create various shades of that colour.  Similarly, white paint is added to create tints.  The same basic principle applies to our digital colour palette and to our photographs.  To adapt an old adage, nothing in life is ever black and white, it exists in shades of grey...256 shades to be exact in the context of a standard digital image.

Tonal Range and Contrast Ratio
In sRGB colour space we have 256 discrete tones from black to white.  This means the contrast ratio of sRGB colour space is 256:1.

Note: For some reason there are only 241 luminosity increments in MS Paint.  In Adobe Photoshop there are only 100 luminosity increments, as luminosity is measured as a percentage in that software.  However there are in fact 256 discrete tones, and these increments can be individually reproduced using RGB values rather than using luminosity sliders.  In the image below I have reproduced all 256 discrete tones as a 16 x 16 grid.  For more on contrast ratio see HERE and for more on tonal range see HERE.
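
For anyone who wants to rebuild that 256-tone grid programmatically rather than by hand, a couple of lines of numpy will do it (a sketch of the idea, not how the original image was made):

```python
import numpy as np

# Build the 16 x 16 grid of all 256 discrete sRGB grey tones,
# with R = G = B in each cell.
levels = np.arange(256, dtype=np.uint8).reshape(16, 16)
grid = np.repeat(levels[..., None], 3, axis=2)

print(grid.shape)                  # (16, 16, 3)
print(grid[0, 0], grid[15, 15])    # black corner, white corner
```

Scaled up so each cell covers a block of pixels, this is exactly the flat-histogram test image used in the tool comparisons below.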

In printing terms white and black are called DMIN and DMAX respectively.  They represent the minimum and maximum densities of ink that can be applied to a given paper.  DMIN and DMAX can be measured using a densitometer, used for example in the photofinishing industry to calibrate printers for different paper stock and to account for variations in print quality due to process chemistry.

Challenges in reproducing tones accurately
I created the test image below using MS Paint.  The background is sRGB white (level 255, or R=255;G=255;B=255).  The lower right hand target is sRGB black (level 0, or R=0;G=0;B=0).  The other targets are luminosity = 60 (level 63), 120 (level 127) and 180 (level 191) respectively.

I printed this image on standard photocopy paper and then both scanned and photographed the resulting print.  The comparison below highlights the variation and loss of quality during reproduction.  
  • The printer wasn't able to reproduce the tonal range perfectly (though visually I can confirm the printer did a pretty good job).  
  • The scanner reproduced white correctly but the other tones were all below par, the black target being reduced to well below the 180 luminosity target of the original image.  Overall therefore the scan tended to overexpose the image.
  • The digital photo did better than the scanner but only with a fair degree of help in Camera Raw.
  • It was not possible to reproduce a perfect white in the photograph without drastically losing tonal range in the other targets.  The lack of natural contrast in the photograph was down to the ambient light when the photograph was taken.
Basically, the conclusion is that replicating and maintaining tonal range and contrast during reproduction is a major challenge, requiring ideal lighting, ideal exposure control and careful calibration of imaging equipment.

Tonal relevance for bird identification
We know there is a lot of intra-specific variation in relation to plumage tone, due to factors ranging from physiology and metabolism to genetics.  Then, plumage tone is never static.  Wear and fading due to light and the other elements is an ongoing process.  If that wasn't enough, bright light and dark shadows obviously change the apparent tones of plumage and bare parts.  So, experienced observers are naturally wary of placing too much emphasis on tone in the field and indeed from photos.

On the other hand, intra-specific clines frequently follow a tonal gradient so it can help to try and maximise accurate tonal reproduction to try and find an individual bird's position on that gradient.  Gull enthusiasts and those trying to get to grips with very similar species (eg. Chiffchaffs in Europe) will also probably find that accurate or standard tonal references are really useful.

So what tools do we have at our disposal to try and bring this complex area under some degree of control?

Step 1  Use of Greyscale
Firstly, in the initial stages of an investigation at least, it helps to switch to greyscale.  Colour can overwhelm our visual system and make it difficult to distinguish levels.  It is far easier to appreciate level differences when working in monochrome.  Working in monochrome eliminates Hue and Saturation from the image equation, leaving just Luminosity.  Luminosity levels are all we have to form an image - i.e. this is the 'birds and light equation' expressed at its purest.
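
The conversion itself is simple to sketch.  The snippet below uses the Rec. 601 luma weights, which are what much imaging software applies when discarding hue and saturation, though individual packages may differ:

```python
import numpy as np

def to_greyscale(img):
    """Drop hue and saturation, keeping only luminosity, using the
    Rec. 601 luma weights (0.299 R + 0.587 G + 0.114 B)."""
    img = np.asarray(img, dtype=float)
    grey = 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]
    return grey.round().astype(np.uint8)

# Pure red, green and blue of equal sRGB value map to quite different
# luminosities -- one reason colour can mislead the eye about levels.
rgb = np.array([[[255, 0, 0], [0, 255, 0], [0, 0, 255]]], dtype=np.uint8)
print(to_greyscale(rgb)[0])   # red, green, blue -> 76, 150, 29
```

The wildly unequal weights are the point: equally 'bright-looking' colours can sit at very different levels, which is why monochrome makes level comparisons so much easier.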

Step 2  Use of Image Lighting Tools
Adobe Photoshop Elements provides three different sets of lighting tools and is a good, flexible software package for image editing and analysis.  Elements is cheaper to buy than some of the other Adobe packages and is ideal for our needs.  Each of the lighting tools manipulates image tonal data, or levels, in various different ways.

The function of each of these tools is outlined below.  Obviously they are intended mainly for photographers who want to make the most of their images.  For our purposes, we are particularly interested in how they can be used to enhance identification features in images.  The test image is the grid shown above, consisting of each of the 256 levels in sRGB.  Beside that is a histogram.  An image histogram is just a graphical representation of levels, showing the number of pixels at each level.  The original image shows a more or less flat histogram as there are a more or less equal number of pixels at each level.  This is useful as it allows us to understand the actual impact of each tool on tonal range.
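A histogram of this kind is easy to compute for yourself.  The sketch below uses a simple 0-255 ramp as a stand-in for the grid image described above: one pixel at every level, giving a perfectly flat histogram.

```python
# A quick sketch of computing an image histogram with numpy: the count
# of pixels at each of the 256 levels.  The ramp below is a stand-in
# for the test grid described in the text.
import numpy as np

# One pixel at every level 0..255 gives a perfectly flat histogram.
ramp = np.arange(256, dtype=np.uint8)
hist, _ = np.histogram(ramp, bins=256, range=(0, 256))
print(hist.min(), hist.max())  # both 1: equal pixel count at every level
```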

The Brightness Tool
This tool most closely replicates the effect of increasing or decreasing camera exposure time.  It affects the whole tonal range: as it pushes all pixels towards the left or right edge of the histogram, they begin to pile up at that edge.  Interestingly, it appears from this experiment that the algorithmic logic applied to brightening an image differs from that used for darkening it.  Fully lightening the image stacks the pixels more towards extreme white and the graph is steeper, whereas fully darkening the image isn't quite as extreme.
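The piling-up effect is easy to demonstrate.  A crude brightness shift just adds a constant to every pixel and clips at the 0/255 limits; real editors apply more elaborate curves (as the asymmetry noted above suggests), so treat this as an approximation only.

```python
# A minimal sketch of a brightness-style shift: add a constant to
# every pixel and clip into the 0-255 range.  The clipping is where
# the "piling up" at the histogram edges comes from.  Real editors
# use more elaborate curves, so this is an approximation.
import numpy as np

def brighten(img, amount):
    """Shift all levels by `amount`, clipping into the 0-255 range."""
    return np.clip(img.astype(int) + amount, 0, 255).astype(np.uint8)

levels = np.array([0, 64, 128, 192, 255], dtype=np.uint8)
print(brighten(levels, 80))   # upper levels pile up at 255
print(brighten(levels, -80))  # lower levels pile up at 0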

The Contrast Tool
This tool most closely matches the effects of low or high dynamic range in ambient lighting terms (e.g. a foggy day manifests as low dynamic range, while a bright sunny day results in high dynamic range - often, in fact, well beyond the range of the camera sensor).  Reducing image contrast results in a slight piling up of pixels in the middle of the histogram, whereas an increase in contrast forces the pixels towards both extremes of the histogram.
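A simple way to model this is to scale each level's distance from mid-grey.  The sketch below is illustrative only - the formula an editing package actually uses is not documented - but it reproduces the histogram behaviour described above: a factor above 1 drives pixels to both extremes, a factor below 1 gathers them in the middle.

```python
# A hedged sketch of a simple contrast adjustment: scale levels about
# the mid-grey point (128).  factor > 1 pushes pixels toward both
# extremes; factor < 1 piles them up in the middle.  An editor's
# actual formula may differ, so treat this as illustrative only.
import numpy as np

def adjust_contrast(img, factor):
    """Scale each level's distance from mid-grey by `factor`, clipping to 0-255."""
    out = (img.astype(float) - 128.0) * factor + 128.0
    return np.clip(out, 0, 255).astype(np.uint8)

levels = np.array([0, 64, 128, 192, 255], dtype=np.uint8)
print(adjust_contrast(levels, 2.0))  # extremes clip at 0 and 255
print(adjust_contrast(levels, 0.5))  # everything drifts toward 128
```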

Highlights Tool
As the name suggests, this tool focuses mainly on the right hand side of the histogram.  Darkening of highlights has the effect of pushing all the highlight pixels towards the centre of the graph, but it doesn't tend to influence the left hand side to any great extent.  This tool is useful for peering into highlight detail without disturbing the shadows too much.

Shadows Tool
This is the exact opposite to the highlights tool, allowing peering into shadows without significantly disturbing highlights.  

Midtone Contrast Tool
This tool works much like the contrast tool but it excludes pixels at the extreme ends of the tonal range.  This too could be a very useful tool for exposing detail hiding in the shadows or highlights of images.
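The common thread in the Highlights, Shadows and Midtone Contrast tools is that the correction is weighted across the tonal range, so it acts mainly on one region and fades to nothing elsewhere.  Here is a rough sketch of a shadows-style lift; the linear weighting is my own simplification of whatever curve Elements actually applies.

```python
# A rough sketch of a shadows-style adjustment: the lift is weighted
# so it acts mainly on dark pixels and fades to nothing toward white,
# leaving the highlights largely undisturbed.  The linear weighting
# is a simplification; the editor's real curve is not documented.
import numpy as np

def lift_shadows(img, amount):
    """Brighten dark pixels; the lift fades out toward the highlights."""
    x = img.astype(float)
    weight = 1.0 - x / 255.0          # 1 at pure black, 0 at pure white
    return np.clip(x + amount * weight, 0, 255).astype(np.uint8)

levels = np.array([0, 64, 128, 192, 255], dtype=np.uint8)
print(lift_shadows(levels, 100))  # shadows rise, pure white stays at 255
```

A highlights tool would simply use the mirror-image weight (`x / 255.0`) with a negative amount.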

Black Point Tool
Levels can be a useful broad-stroke tool for analysing images.  The Black Point slider or dropper is used to set the black tonal level.  Once set, everything to the left of that point is clipped to black and discarded, so it needs to be used carefully.

White Point Tool 
As with the black point tool this must be used carefully.  This tool in particular can actually be useful for another purpose, other than to simply define pure white in an image.  By adjusting the white point slider back and forth it can sometimes be used to study the direction and intensity of illumination within a photograph.

Midtone Level Tool
This is similar to the brightness tool but it mainly influences the midtones range and therefore tends to produce more useful and less drastic changes.  It has an interesting impact on the histogram and seems to be programmed to have more or less the opposite effect to the brightness tool.  See a comparison of these histograms below.
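The three Levels controls can be modelled together: the black and white points remap the input range onto 0-255 (clipping anything outside it, which is why the droppers must be used carefully), and the midtone slider applies a gamma curve to what remains.  This is a standard levels formula, not necessarily Elements' exact implementation.

```python
# A minimal sketch of a Levels-style adjustment: remap the input range
# [black, white] onto [0, 255] and apply a midtone gamma.  gamma > 1
# lightens midtones; values outside [black, white] are clipped, which
# is why the black/white point droppers need care.  This is a generic
# levels formula, not necessarily the editor's exact implementation.
import numpy as np

def apply_levels(img, black=0, white=255, gamma=1.0):
    """Remap [black, white] to [0, 255] with a midtone gamma curve."""
    x = np.clip((img.astype(float) - black) / (white - black), 0.0, 1.0)
    return (255.0 * x ** (1.0 / gamma)).astype(np.uint8)

levels = np.array([0, 64, 128, 192, 255], dtype=np.uint8)
print(apply_levels(levels, black=64, white=192))  # clip the ends, stretch the rest
print(apply_levels(levels, gamma=2.0))            # lighten the midtones
```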

It is also useful to compare contrast and midtone contrast tools here.

Here is a summary of Adobe Elements Lighting tools highlighting, within the histogram, the primary areas of influence of each tool.

In summary, the Lighting Tools make a great little toolkit for forensic analysis of digital images.  With a bit of practice it should be possible to use the finer tools (Highlights, Shadows and Levels) to extract and present specific details without too drastic a change to the overall image.