Tuesday 14 April 2015

Forensics - Analysis of Lighting and Shadow Direction

The quality of light determines the accuracy with which field marks, including colours, are portrayed, as outlined HERE.  Bright light is high in contrast, which challenges the dynamic range of the camera and results in a loss of tonal range.  Put another way, bright light implies brighter highlights and darker shadows, both of which can obscure detail and colours.  Bright light is therefore far more challenging than dull or diffuse light.  In our efforts to overcome harsh, bright light, anything that helps us to understand the lighting in an image can aid our cause.  This includes judging the direction of the light source and, with it, the direction of the opposing shadows.

We live in a three-dimensional environment.  Everything in 3D can be plotted along three axes: X, Y and Z.  If we consider the world from the perspective of our digital images we can only really appreciate two of these, the Y and Z axes.  The depth in an image, the X axis, is particularly tricky to work with.  Obviously, when it comes to outdoor images the sun is more often than not out of shot, and of course, from the perspective of the observer and camera, it appears infinitely far away.

Analysing lighting starts with an understanding of the direction of the light.  While we may often have a rough idea where the lighting in an image is coming from, we can often be mistaken.  In the posting Lighting and Perspective I explored some of the reasons behind this.  In that posting I also discussed the issue of optical illusions (discussed further HERE).  Part of what often confuses us about light and shade in an image is that we often fail to recognise that shadows have a three-dimensional geometry, as explained HERE.  Here I outline two tools to help us establish light direction more objectively from an image.

The Eye Technique
Though the shape of the eyeball differs among species, the cornea of the eye has a near-spherical surface.  When the sun is shining it often appears as a specular highlight on the surface of the cornea.  In theory we could use a spherical coordinate system to work out the exact position of the sun in 3D space from a single image containing a spherical reflector such as this.  But we have no way to work out the size or proportions of the sphere involved, which presumably we would need for a reasonably accurate calculation.  In fact, however, we may not need such a high level of accuracy for our purposes, for reasons I will explain below.  We can gauge the sun's direction in the Y and Z axes at least by taking a line from the centre of the eye through the specular highlight.
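As a rough sketch of this idea: assuming we have pixel coordinates for the centre of the eye and for the specular highlight (the function name and sample values below are my own, purely for illustration), the in-image bearing to the light falls out of a little trigonometry.

```python
import math

def light_direction_2d(eye_center, highlight):
    """Estimate the in-image (Y/Z plane) direction to the light source
    from the centre of an eye and its specular highlight.

    Coordinates are (x, y) pixel positions.  Returns the bearing in
    degrees, measured anticlockwise from the image's positive x axis
    (0 = light from the right, 90 = light from above).
    """
    dx = highlight[0] - eye_center[0]
    # Image y grows downward, so negate to get conventional orientation.
    dy = eye_center[1] - highlight[1]
    return math.degrees(math.atan2(dy, dx)) % 360

# Highlight up and to the left of the eye centre -> light from upper left.
angle = light_direction_2d((120, 80), (114, 74))
print(round(angle))  # 135
```

In practice we would read both coordinates off the zoomed image by hand; the calculation merely makes the "line through the highlight" step repeatable.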

The Edge Technique
Though the eye method is very useful it cannot always be relied upon.  The sun may be to the side of or behind the subject.  It may be obscured by partial cloud or foliage.  The bird may have its head turned.  The image resolution may be too low to properly assess specular highlights.  Or we may simply want a second method to help confirm our initial impression of the direction of light.

The edge technique works by ignoring the X axis and looking for the direction of the light source in the Y and Z axes alone.  Why would we discount the X axis in our analysis?  Well, firstly, there is no easy way to factor it in, as we cannot accurately measure depth in an image.  But it turns out we don't really need the X axis lighting component for our purposes.

Firstly, in a two-dimensional camera image our narrow line of sight determines what we can actually see.  The surfaces of our subject which face the lens, i.e. whose normals lie along the X axis, are plainly visible, whereas the surfaces which face along the Y and Z axes are not visible at all in our image.  The illumination along our line of sight mostly determines the quality of the overall lighting and exposure in the image - not surprisingly, as this was the light metered by the camera.   Generally speaking, the shadows which arise from our subject along the X axis fall behind the subject, out of our line of sight.  The exceptions to this are generally straightforward and easy to interpret.  For example, a bird faces the camera and its bill casts a shadow onto its throat.  Or we have some obstruction or reflective surface in front of the subject which alters the lighting and shadow in some way.  More often than not we will have some idea what that is.

It could be argued that the components of the lighting and shadow which most often concern us are the Y and Z axis components, running alongside and close to the plane of the image, close to the point where features become obscured by our angle of sight.  Because these features are harder to see clearly, we may be more invested in understanding the light and shade in these areas of the image.

The image above is an illustration of the edge technique.  The principle behind the technique is that light intensity increases as the angle of the incident light approaches the surface normal (i.e. perpendicular to the surface).  So, if we can locate and isolate the brightest points along the surface edge of our subject we should be able to judge the angle to the light source.  We can sample a number of points of high luminosity to cross-check and confirm our angles.
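To make the idea concrete, here is a minimal sketch, assuming a roughly convex subject outline so that the outward surface normal at any edge point points away from the subject's centroid.  All names and sample values are hypothetical, not from the posting.

```python
import math

def estimate_light_bearing(edge_samples, centroid, k=5):
    """Sketch of the edge technique: on a roughly convex outline the
    outward normal at an edge point points away from the centroid, and
    the brightest edge points are the ones facing the light.

    edge_samples: list of (x, y, luminance) tuples sampled round the edge.
    Returns the mean bearing (degrees anticlockwise from +x, image y down)
    of the k brightest samples - i.e. the estimated light direction.
    """
    brightest = sorted(edge_samples, key=lambda s: s[2], reverse=True)[:k]
    xs = sum(s[0] - centroid[0] for s in brightest) / len(brightest)
    ys = sum(centroid[1] - s[1] for s in brightest) / len(brightest)
    return math.degrees(math.atan2(ys, xs)) % 360

# One bright sample on the right-hand edge -> light from the right (0 degrees).
samples = [(60, 50, 200), (50, 40, 120), (40, 50, 60), (50, 60, 80)]
print(estimate_light_bearing(samples, centroid=(50, 50), k=1))  # 0.0
```

Averaging over several bright samples, as the posting suggests, smooths out the odd rogue pixel caused by plumage reflectance.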

This may not be a 100% accurate method, as it can be hard to judge angles exactly, but we may be happy with a 10 to 20 degree deviation for our needs.  After all, this tool is merely helping us with other forms of qualitative analysis.

Of course, based on our understanding of how photographs are made, we are not actually measuring incident light intensity here.  A digital image consists of a combination of (reflected) light intensity, which we need, and surface reflectance, which we don't need.  If a bird's plumage consists of patches of very bright, reflective feathers and poorly reflective, dark feathers, judging the angle of the light source will be far more challenging because reflectance will confound our results.  This is not necessarily a show-stopper - just something to watch out for.  In some cases however this method simply won't work.

So how did we create the edge profile, and how did we identify the brightest pixels around the edge?  We start by opening the image in Adobe Photoshop, Elements or some other programme (Paint.net is free).  We make a new layer and fill it black, then make the black layer slightly transparent so we can just about make out the image layer beneath.  While still working in the black layer we select the eraser tool and trace around the edge of the subject.  Finally, we make the black layer opaque again.  We now have our narrow edge profile.  The last step is to turn the image layer to greyscale and save the whole image as a new PNG file.
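For those who prefer to script the tracing step, the same narrow edge profile can be produced programmatically.  The sketch below assumes we already have a binary subject mask (a hypothetical 0/1 grid; in practice it would come from a selection made in the editor) and keeps only the subject pixels that touch the background.

```python
def edge_profile(mask):
    """Keep only the boundary pixels of a binary subject mask.

    mask: list of rows of 0/1 values (1 = subject, 0 = background).
    A subject pixel belongs to the edge if at least one of its four
    neighbours is background (or lies outside the image), mimicking
    the narrow edge traced with the eraser tool.
    """
    h, w = len(mask), len(mask[0])
    edge = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            nbrs = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
            if any(not (0 <= ny < h and 0 <= nx < w) or not mask[ny][nx]
                   for ny, nx in nbrs):
                edge[y][x] = 1
    return edge
```

The result is a one-pixel outline that can then be intersected with the greyscale image to give the edge luminance values.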

Now that we have our file we need a way to identify its brightest pixels.  The tool of choice is Color Quantizer (another freely available online tool).  By posterising the image to as few as 16 tonal levels and recolouring the brightest level with a colour tracer we can quickly pinpoint the brightest pixels.  Note we may find that 16 levels is too few, in which case we can retry at 32 levels, 64 levels or whatever we need to help discriminate the very brightest pixels.  I have used this tool and technique before to differentiate tonal levels HERE and HERE.
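The posterise-and-flag step can also be sketched in code, as a toy stand-in for Color Quantizer.  This assumes 8-bit greyscale values; the function name and example grid are my own.

```python
def brightest_pixels(grey, levels=16):
    """Posterise 8-bit grey values into a number of tonal levels and
    return the coordinates of pixels in the brightest occupied level.

    grey: list of rows of 0-255 values; returns a list of (x, y) tuples.
    """
    step = 256 // levels
    quant = [[min(v // step, levels - 1) for v in row] for row in grey]
    top = max(q for row in quant for q in row)
    return [(x, y) for y, row in enumerate(quant)
            for x, q in enumerate(row) if q == top]
```

Raising `levels`, exactly as described above, narrows each band and so discriminates the very brightest pixels more finely.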

The light source should be at a 90 degree angle to the surface edge where these pixels are located, but bear in mind the pitfall that is surface reflectance, as noted above.  Ultimately these methods may never provide the same dramatic or definitive results obtainable through 3D modelling in a virtual lighting space (HERE), but they require far less effort and are surprisingly accurate, and useful enough for our needs.


Much of this posting and these techniques were inspired by the work of Prof. Hany Farid and his team at Dartmouth College.  Software tools have been developed which use these and similar techniques to authenticate digital images.  This is not free software.  Some of it may not even be available to the general public.  And some may not apply too well to bird images.  For instance, a technique to pinpoint the light source in 3D space is based on the dimensions of the average human eye.  Avian eyes, on the other hand, clearly vary from one species to the next, not only in measurements but also in morphology.  It is worth watching one of Prof. Farid's entertaining lectures online.  The video below provides additional insight into the science and maths involved, plus the worrying proliferation of fake images in politics and the media.

On a final note, while experimenting with light and perspective I found that in diffuse light, such as overcast conditions, the lighting direction cannot be assigned.  This is because the predominant lighting of the scene comes from the sky dome, not from the sun hidden behind the cloud blanket.  For more see HERE.
