Tuesday, 30 September 2014

Image Quality Tool - Modified Images - Before and After

Photofinishing

Quality is all about consistent standards.  Normally, in terms of photographic quality, we mean an aesthetically pleasing image with good exposure and white balance.  But, when it comes to bird identification we are referring to something slightly different.  An image may be of an acceptable quality overall, but the subject of interest may be skulking in the shade, or bathed in the green glow of a foliage canopy.  More often than not, if an image of a bird presents an identification challenge it is because the overall image quality is poor - i.e. the image is not well exposed, the focus is off, and/or the white balance is wrong.  So, bird identification images can frequently benefit from some additional photofinishing to bring out salient identification features or rule out a similar species.

Here I have taken a selection of very mediocre to poor record shots and used a recognised photofinishing tool (Adobe Elements 12) to try and improve the images for identification purposes.  I have particularly focused on digiscoped and video grab images in this posting.  

I am interested in the effect of these modifications on the scoring of the images in the Image Quality Tool, so I compare the original, unaltered JPEG image (left) with the best I could manage in Elements 12 (right).  I have included the scoring of both images and some detail around the images, how they were created and how they were modified.  

Note, in order to avoid too much lossy compression I started by saving the original JPEG as a PNG file, which is the lossless, compressed format I use for all files on this website.  The major advantage of PNG over JPEG is that changes can be made and the PNG image can be re-saved repeatedly with no further loss of quality.  Constant re-saving of JPEG files results in progressively more compression, more artefacts, and more permanent loss of data with each and every save.  If your images are JPEG it is really important to protect the original files by making working copies for modification and leaving the original file stored in a safe place.  Avoid at all costs modifying and re-saving the original JPEG files unless you are comfortable with the loss of fidelity and an increased level of JPEG lossy compression artefacts.
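The difference between the two formats is easy to demonstrate.  The sketch below (Python, using the Pillow and NumPy libraries - the function names and quality setting are mine, purely for illustration, not from any particular workflow) round-trips a detail-rich test image through PNG once and through JPEG repeatedly, then measures how far each copy has drifted from the original.

```python
import io

import numpy as np
from PIL import Image

# A noisy test image - detail-rich content shows compression loss clearly.
rng = np.random.default_rng(0)
original = Image.fromarray(rng.integers(0, 256, (64, 64, 3), dtype=np.uint8))

def round_trip(img, fmt, **save_args):
    """Save an image to memory in the given format and reload it."""
    buf = io.BytesIO()
    img.save(buf, fmt, **save_args)
    buf.seek(0)
    return Image.open(buf).convert("RGB")

def mean_abs_diff(a, b):
    """Average per-pixel difference between two images."""
    return float(np.mean(np.abs(np.asarray(a, np.int16) - np.asarray(b, np.int16))))

png_copy = round_trip(original, "PNG")
jpeg_copy = round_trip(original, "JPEG", quality=75)
for _ in range(10):  # ten further re-saves of the JPEG working copy
    jpeg_copy = round_trip(jpeg_copy, "JPEG", quality=75)

print(mean_abs_diff(original, png_copy))   # 0.0 - PNG round-trips losslessly
print(mean_abs_diff(original, jpeg_copy))  # > 0 - the JPEG loss is permanent
```

The PNG copy comes back pixel-identical no matter how often it is re-saved; the JPEG copy does not, and the lost data cannot be recovered.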




Proving that a bird may be identifiable even from the poorest quality image, this moulting adult White-winged Black Tern was captured as a video grab.  Aside from the low resolution and poor focus there are very obvious and disruptive artefacts, including lossy compression and "combing" artefacts caused by poor deinterlacing of the video.  When making video grabs it is really important to use a good software package that deals with interlaced video properly.  There really is very little that can be done to improve the quality of this image, other than a slight white balance correction.























Older format video grabs tend to be of poor quality.  Low resolution, sharpening halos, purple fringing and compression artefacts are the order of the day.  This grab was taken with a Sony DCR-PC330E, which uses the Mini DV tape format and Sony's bespoke media card, the Memory Stick.  It has the facility to create video grabs on the go, which are saved to the memory stick as compressed JPEGs.  It does a pretty good job of deinterlacing the video when creating grabs and this is about as good as we might expect from this now well-outdated 1990s technology.

Swifts are notoriously difficult to photograph correctly.  Almost invariably, images of swifts photographed against the sky will be underexposed, unless great care is taken with metering and exposure.  Here I have increased brightness, contrast and clarity (mid-tone contrast) to try and bring out the contrasting throat/breast area.  Interestingly, both the modified and original image score the same using the tool.
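For anyone curious what a basic brightness/contrast adjustment actually does to the pixel values, it is essentially a linear remap around mid-grey.  A minimal Python/NumPy sketch (the function name and default values are purely illustrative):

```python
import numpy as np

def brighten_and_contrast(img, brightness=20.0, contrast=1.2):
    """Linear adjustment: stretch values around mid-grey (128), then lift.

    contrast > 1 pulls tones apart; brightness shifts everything up.
    Results are clipped back into the valid 0-255 range.
    """
    out = contrast * (img.astype(float) - 128.0) + 128.0 + brightness
    return np.clip(out, 0, 255).astype(np.uint8)
```

Clarity (mid-tone contrast) works on the same principle but restricts the stretch to the middle of the tonal range, leaving deep shadows and bright highlights largely alone.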























High Dynamic Range lighting is very challenging for observation and photography.  While the original image was slightly underexposed overall, parts of the scene are moderately overexposed (in sun) and other parts drastically underexposed (in shade).  Digital cameras have a narrower dynamic range when compared with human vision, so photography on very bright days can be very hit and miss.  When scoring image quality we are looking only at the subject.  In this case we see major exposure issues (both over and underexposure) so this image scores a negative 10 points on exposure.  Although this is perhaps the only major fault with the image, it is a really big deal from an identification point of view, given the species involved, so the image requires a major overhaul in photofinishing terms to draw out salient features.

Working with an original RAW image offers some hope of retrieving detail and colour from shadows and highlights.  In fact, the exposure of this particular image suits this process well (see ETTR HERE).  However the camera involved (Kyocera U4R) does not generate RAW files for editing and so we only have JPEGs to work with.  JPEG images have a much lower tolerance for major exposure adjustment and, consequently, it took a lot of effort to draw out details and colour from this image in Adobe Elements 12.  The modified image looks pretty terrible and unnatural, though at least it allows for an analysis of some of the finer ID points.

This image demonstrates the benefits of using photofinishing tools for ID from photographs.  But in this case, the image actually scores more poorly following modification, in particular because image noise, blooming and sharpening halos have become more pronounced in the final edit.  The Image Quality Tool only serves to measure image quality parameters - it takes no account of how identifiable the bird is in one image versus another.

A word of caution!  With such a high level of image manipulation there is a high level of risk and with that comes responsibility.  If I were to receive the right hand image in an email for ID consideration, without any awareness that the original image was high in contrast and had been heavily manipulated, it could be very misleading.  Perhaps there should be a health warning on such images.  During analysis, it is always best to start with the original image and work with that, so that you can understand the context in which the changes were made and their effects upon the image.  Occasionally, I come across unnatural-looking images online which must have undergone a lot of work, but it can be hard to judge what exactly was done or why.























Another nasty, underexposed image, owing to a combination of a dull day, a shy bird and digiscoping (this time the Nikon Coolpix 4500 with a Leica scope).  Standard modification for an underexposed image is to brighten it, increase contrast and perhaps adjust white balance.  In this case there is no net change in image quality, though identification clues are a little easier to see.























The "auto levels" function in Elements 12 did a very good job in correcting brightness, contrast and white balance in this image.  As a final adjustment I toned down saturation a bit and sharpened the image slightly using unsharp mask, to mask slight motion blur and improve image acutance (HERE).  The net effect is a significant improvement in image quality and perhaps a slight improvement in the display of key identification features.
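Unsharp masking, despite the name, sharpens: it subtracts a blurred copy of the image from the original and adds the difference back, exaggerating contrast at edges.  A rough NumPy sketch of the idea (a real tool like Elements uses a Gaussian blur plus amount and threshold controls; I use a simple box blur here for brevity, and the names are mine):

```python
import numpy as np

def box_blur(img, radius=1):
    """Simple box blur with edge padding - stands in for the Gaussian."""
    k = 2 * radius + 1
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def unsharp_mask(img, radius=1, amount=0.5):
    """Add back the difference between the image and its blurred copy."""
    img = img.astype(float)
    sharpened = img + amount * (img - box_blur(img, radius))
    return np.clip(sharpened, 0, 255).astype(np.uint8)
```

Used gently this improves acutance; overdone, the overshoot on either side of an edge becomes visible as exactly the kind of sharpening halo discussed elsewhere on this blog.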























Photographing birds under a canopy of foliage might seem like a complete waste of time, but HERE I was able to demonstrate that the diffuse yellow-green light transmitted by foliage can be surprisingly uniform in colour and can therefore be corrected for using a proper white balance tool.  Here I was able to use the white balance dropper with a point on the bird's white and grey flank feathering to reach a fairly neutral white balance position. 
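The dropper works by scaling the three colour channels so that the sampled region comes out neutral.  A bare-bones version of the calculation in Python/NumPy (the function and parameter names are mine, for illustration only):

```python
import numpy as np

def white_balance_from_patch(img, patch):
    """Scale R, G and B so a known-neutral patch becomes grey.

    img: H x W x 3 array; patch: (y0, y1, x0, x1) bounds of a region
    believed to be neutral (e.g. white or grey flank feathering).
    """
    img = img.astype(float)
    y0, y1, x0, x1 = patch
    means = img[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)
    target = means.mean()       # overall brightness of the patch
    gains = target / means      # per-channel correction factors
    return np.clip(img * gains, 0, 255).astype(np.uint8)
```

Every pixel in the frame receives the same per-channel gains, which is why the technique works even when the whole image is bathed in a uniform green cast.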

The canopy is not completely closed.  Consequently there is some extremely bright dappling of the crown, mantle, tertials and tail.  Occasionally, I see dappled light and shade like this being referred to as a digital artefact and this is incorrect.  This is merely a lighting effect and is intrinsic to the original image - i.e. it is not an image distortion.  Such bright light can however result in blooming artefacts and there is probably some of that going on alright.  For more on this see HERE and HERE.

Though the modified image may not be perfect, it is much better than the original and deserving of the much improved image quality rating it receives.























2005 was a good year for Grey-cheeked Thrush in Ireland!  The original image is a pretty typical digiscoping JPEG.  I am not quite sure why many camcorders and some older compact digital cameras produced dull, greenish digiscoped image files.  The cast has all the hallmarks of a white-balancing error and can easily be resolved, but I suspect it may actually be a colour cast picked up from the scope or perhaps an indication of inferior demosaicing or deinterlacing interpolation by the camera processor (more HERE and HERE).  I hope to establish its cause at some point as it certainly annoyed the hell out of me for years!

Once again "auto levels" in Elements 12 has done a great job with this image and I topped it off by reducing colour saturation slightly. There is a slight net improvement in image quality and the bird is certainly more clearly identifiable as a result.

Saturday, 27 September 2014

Image Quality Tool - Initial Analysis

Now that I have completed my initial research around the key image quality parameters and put up some of my findings on the blog, it is time to put the Image Quality Tool through some tests.  Before that I would like to summarise some of the initial analysis I have been doing.

Fool the Tool

In an early posting (HERE) I started to question ways in which the controls put in place by the tool might be intentionally bypassed in order to fool the scoring.  One might, for example, quite easily re-size an image larger in order to bring up the resolution scoring in the tool, thus possibly moving a low resolution, poor quality image up a quality notch or two, without having actually improved the image quality.  This got me a little concerned so I gave some further thought to the various other ways in which images might be manipulated to fool the tool.  I summarised these in the video below.



My overall conclusions were that, provided image modification improved the quality of the image, the effects were generally positive.  It is of course possible to make a dog's dinner out of any image photofinishing attempt, especially if someone saves a bad attempt and loses some vital data in the process.  All these tools need to be used correctly and sparingly and care must be taken not to lose the original file data.  

In the next posting I am going to be testing the image scoring using a number of bird images of varying quality.  Out of interest, I will be taking an image produced by the camera without modification and comparing it with one in which I have attempted to improve the image quality for identification purposes (modified from JPEG, not RAW).  It will be interesting to see how the scoring is affected and whether or not there is a link between an improved quality score and any genuine improvement in the ability to discern useful additional ID features from a modified image.

Scoring Logic

As a result of some of my initial analysis I have brought the scoring logic on a bit from my original draft and it currently looks like this.  


Clearly, some parameters have been weighted quite differently to others.  Some score heavily above the line and others below the line.  This may need a bit more tweaking to get it just right.

Milestone Reached

I have just reached an important milestone with this blog.  I have completed my initial research into each of the five key image quality parameters, resolution, focus, exposure, colour and artefacts.  I have published an individual page covering each of these in turn so the page links to the right of the main blog on screen have started to take shape.  This research is by no means complete and I will continue to develop these pages as more useful information comes to light.

Quite a number of new lines of enquiry have opened up and I will start broadening the blog into these areas in the coming months.  Before doing this however I will put the image quality tool through its paces a bit using some sample images.  The next few blog postings will contain this analysis.

By now I am happy to report that interest in the blog has started to spread a bit, thanks mainly to the postings on UV and birds.  I don't get too many direct comments to the blog but I do get the occasional interested email and I would like to take the opportunity to thank those who have offered constructive advice and sent on some valuable research leads.  I hope those who regularly pop in for a look are getting something useful from the blog and as always encourage anyone with questions or an interest in getting involved to get in touch.



Here is a breakdown of the traffic including the most visited posts/pages thus far.  It is a rather specialist blog so I am extremely gratified by the interest.

Regards

Mike O'Keeffe
Ireland

An Artefact or Not an Artefact

HERE I listed the most common image artefacts.  An artefact is anything that distorts an image, so artefacts are introduced from the moment the light leaves the subject until the final edited image is created and saved.


In the table above I have broken down the formation of a digital image into five stages and identified the artefacts that are introduced at each of those stages.  I have also listed imaging aspects that are not normally considered digital artefacts.  People may disagree with the distinctions I have made here and if so I would like to hear some alternative arguments.  The guiding principle I am using is that if the original image is not distorted (or can be retrieved) there is no artefact.

There are various limitations with all imaging systems, including the human visual system.  Most of the time these limitations amount to a deletion or loss of image detail, clarity and/or colour.  The distinction between this loss of information and an artefact, as I see it, is that an artefact tends to introduce false information, data or colour.  Therein lies the problem.  Artefacts can confuse an identification or worse - they may even lead us to make incorrect assumptions and identifications.  That is not to say of course that the omission of details or colours due to data loss couldn't also result in an incorrect identification.  But, we can only work with what we can see.  We just need to be aware of the potential for image data loss and be mindful of it.

Lighting and Dynamic Range

HERE I explored the importance of lighting and composition in digital images.  Where there is light there is shade.  Occasionally shading is referred to as an artefact and this is clearly not an appropriate use of the term.  Dynamic range is another interesting one.  Harsh light with high dynamic range challenges the camera's ability to properly expose images and glare or reflection can also obscure detail.  Again I don't see these issues as digital artefacts.  There is a loss of image detail but nothing new or false is introduced.

Image reconstruction

HERE I explored the similarities between how images are captured by the camera and by the human visual system.  Clearly both imaging systems drastically transform an image - the original light from the subject is absorbed by photo-receptors and turned into an electrical signal that is later reconstructed as the image we see.  Ultimately, when it comes to studying a digital photograph, the original image has gone through both of these transformations - first by the camera and computer, then our own visual system.  

We would not consider the process of transforming and reconstructing the image as an artefact in itself.  We look for telltale glitches in the process and these glitches are the artefacts.  Mostly artefacts are found around the edges, or at the borderline between contrasting colours and tones in the image.






Thursday, 25 September 2014

A Guide to Digital Image Artefacts

Introduction

The simplest definition of photographic artefacts is that they are distortions of the original image.  Every distortion has the potential to influence the identification process which is why I have included artefacts as one of the five quality parameters in the Image Quality Tool.  Most artefacts are microscopic and only influence fine detail.  Many are familiar to most people but a few need a bit of explanation.  Here is a guide to the most commonly encountered digital image artefacts and their likely effects on bird images.


An image starts out as a series of parallel light waves that travel a path between the subject or scene and the camera.  The first distortions of the image happen along this pathway and include the influences of atoms and molecules in the air, Dust and other Air Pollution, and the interaction of light with this matter.  The greater the distance, and thus the greater the density of air between the subject and the camera, the greater the influence of these parameters.  Intensity can also be simply a matter of the scale of the artefact - e.g. the higher the temperature, the greater the Heat Haze.  The higher the humidity, the greater the Moisture density in the air.  Glare depends on the angle of the sun relative to subject and camera.  These artefacts all tend to reduce contrast and clarity in the image.  Heat haze, as we know, tends to defocus and distort fine detail due to the random movement of air molecules.


From the moment the image light waves hit the lens there is the potential for another series of artefacts to distort the image.  Defocus (an out of focus image) is of course the most obvious and recognisable image artefact associated with the lens and requires adjustment of the focus wheel and/or the lens aperture as appropriate.

Dust and Foreign Bodies on the lens can show up as fuzzy blobs on the final image, though most of the time they are so small and defocused as to be transparent and not noticeable. 

Chromatic Aberration is caused by refraction of the light as it passes through the lens.  A prism, as we know, splits white light into a rainbow of spectral colours.  All lenses have this tendency because each wavelength of light refracts at a slightly different angle as it passes from one material into another.  The solution to chromatic aberration is a corrective glass element with opposing dispersion.  The combination of crown and flint glass elements is called an achromatic doublet.  As lenses get lighter the glass elements are being replaced with plastic elements, which must also be manufactured with this problem in mind.  Lens coatings can also help with the problem.




It is worth noting that, although most lenses are well constructed to more or less eliminate this problem, when it comes to IR and UV light some chromatic aberration may still occur.  This is part of the reason why cameras are fitted with an IR/UV blocking filter which absorbs this light before it reaches the sensor.  For more on this hidden light and its influence see HERE.

Image Distortion occurs as a consequence of lens design.  It is most obvious in wide-angle (fish-eye) lenses but can also occur in zoom lenses and poorly constructed or cheap lenses where it can go largely unnoticed.  The edges of an image have a different magnification to the centre (focus is unaffected).  Often, it is only detectable if one intentionally photographs a grid of equal sized boxes or straight lines.  Lens distortion can trip up the unwary, particularly when it comes to size comparisons and critical measurements within photographs.  For more see HERE.


Vignetting is a darkening of the image around its periphery and is associated with long lenses and some inferior lenses.  Birders who have taken to digiscoping would be very familiar with this image artefact.

Atmospheric conditions continue to influence the lens in the form of Moisture such as rain and condensation on the objective or on internal lens elements.  In extreme cases ice crystals may form on lenses.  The effect can be a softening or defocus of the image throughout or in spots within the image.

Lens Flare occurs due to light scattering inside the lens, often when the sun is just offset from the camera's line of sight, or if the sun reflects brightly off something in that general field.  The effect is usually pretty recognisable but, I guess, depending on the image content, could be mistaken for something more tangible in the subject matter.  


At this point the light has made it through the lens and filters and is reaching the camera sensor.  Here is where the greatest variety of image artefacts come into play. 

Shutter speed and light intensity determine the exposure of the image on the sensor.  If the camera isn't stable, or if the subject moves during the exposure, Motion Blur gives rise to blurry images, the most familiar artefact at this stage in the process.  

The electrical charge on the camera sensor can attract Dust particles which land on the sensor's surface and appear as tiny, generally dark specks on digital images.  Modern DSLRs have inbuilt dust cleaning capabilities but dust can be a real pain for those who use compact cameras.

Too little light reaching the sensor leads to Underexposure while too much leads to Overexposure.  Both of these could be considered artefacts but one must bear in mind that, depending on the luminosity of different elements in the scene, small parts of the image may naturally tend towards under or overexposure.  Dynamic Range is a related factor.  On very bright days the dynamic range of the ambient light may be completely at odds with that of the camera and under those circumstances High Dynamic Range might be referred to as an artefact.  However I would consider it more of a technical deficiency than an image distortion.  The important point is whether or not the subject is correctly exposed.  There are two additional artefacts commonly associated with exposure.

Noise is the fine dark or coloured grain associated with underexposed and high ISO images.  ISO adjustment is merely an amplification of image data, or, put another way, an increase in the sensitivity of the camera sensor.  The greater the amplification, the lower the signal-to-noise ratio (SNR) and therefore the greater the amount of visible noise in the image.
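A toy model makes the point.  If we treat ISO as a gain applied after a fixed dose of sensor read noise, then boosting the gain amplifies the noise along with the (weaker) signal, and the SNR falls.  The sketch below is a deliberate simplification with invented numbers, not a model of any real sensor:

```python
import numpy as np

rng = np.random.default_rng(1)

def snr_at_gain(gain, scene_level=100.0, read_noise=2.0, n=100_000):
    """Crude sensor model: at higher gain (ISO) less light is captured,
    and the fixed read noise is amplified along with the weaker signal."""
    captured = scene_level / gain + rng.normal(0.0, read_noise, n)
    displayed = gain * captured  # the ISO amplification step
    return displayed.mean() / displayed.std()

print(snr_at_gain(1.0))  # high SNR at base ISO
print(snr_at_gain(8.0))  # SNR falls sharply as gain rises
```

The displayed image is equally bright in both cases; what changes is how much of that brightness is noise.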

At the other end of the scale, overexposure gives rise to another artefact called Blooming.  Blooming occurs when photosites overflow with charge and stray electrons spill into adjacent photosites.  The effect is much like atmospheric glare from the sun or lens flare.  For more on exposure and related artefacts see HERE.

When the image is initially formed by the sensor it starts out as a "Bayer Raw" mosaic of green, blue and red pixels.  The process of turning this raw mosaic image into a raw colour image is called Demosaicing and gives rise to a number of different artefacts.  Depending on the type of demosaicing interpolation used, the image may suffer Defocus (blurring), Aliasing (jagged, stair-stepped edges where contrasting tones meet) and/or may display image Sharpening Halos, particularly around contrasting edges.  These artefacts could be described collectively as a single demosaicing artefact or individually.  The use of image sharpening tools results in the creation of the same types of artefact.  Interpolation is also used when an image is re-sized.  For more see HERE.
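To make the Bayer process concrete, here is a deliberately crude mosaic/demosaic round trip in Python/NumPy.  Real cameras use far more sophisticated interpolation; this block-copy approach is exactly the sort of simple interpolation that produces the artefacts described above (all names are mine):

```python
import numpy as np

def mosaic_rggb(rgb):
    """Simulate a Bayer (RGGB) sensor: keep one colour sample per photosite."""
    h, w, _ = rgb.shape
    raw = np.zeros((h, w))
    raw[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red photosites
    raw[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green photosites
    raw[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green photosites
    raw[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue photosites
    return raw

def demosaic_nearest(raw):
    """Crude demosaic: each 2x2 block shares its R, averaged G and B samples."""
    h, w = raw.shape
    out = np.zeros((h, w, 3))
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            r = raw[y, x]
            g = (raw[y, x + 1] + raw[y + 1, x]) / 2
            b = raw[y + 1, x + 1]
            out[y:y + 2, x:x + 2] = (r, g, b)
    return out
```

Even this toy version reconstructs flat areas of colour perfectly - it is at contrasting edges, where neighbouring samples disagree, that the interpolation has to guess and the artefacts creep in.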



Demosaicing involves reconstruction of an image from a mosaic.  Deinterlacing is much the same.  It is used to reconstruct an image when Interlaced video images are captured as grabs.  Interlacing is a technique used to increase video frame rate without increasing bandwidth.  Essentially it takes two interlaced fields to create a single reconstructed video grab.  As there is a short time delay between the capture of two consecutive fields, a problem occurs when the subject moves between them.  Combing is a particularly recognisable artefact produced by poor deinterlacing of video.
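The combing effect is easy to reproduce.  In the sketch below (Python/NumPy, names mine) a bright subject moves between the capture of the two fields.  Naively weaving the fields together produces the alternating-row comb, while the simpler "bob" method (line-doubling one field) avoids it at the cost of vertical resolution:

```python
import numpy as np

def weave(field_even, field_odd):
    """Interleave two fields: rows 0, 2, 4... from one, 1, 3, 5... from the other."""
    h, w = field_even.shape
    frame = np.empty((2 * h, w), dtype=field_even.dtype)
    frame[0::2] = field_even
    frame[1::2] = field_odd
    return frame

def bob(field):
    """Line-double a single field - no combing, but half the vertical detail."""
    return np.repeat(field, 2, axis=0)

# A bright subject at column 2 moves to column 3 between the two fields.
even = np.zeros((4, 6))
even[:, 2] = 255
odd = np.zeros((4, 6))
odd[:, 3] = 255

combed = weave(even, odd)  # adjacent rows disagree - the comb artefact
clean = bob(even)          # rows come in identical pairs, no comb
```

Good grab software detects the motion and blends or discards one field in the affected areas, which is why the choice of package matters so much.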

Image format and resolution have some more artefacts associated with them.  Images created from the Raw image data are compressed in different ways.  Lossy compression compresses files the most but, as the name implies, it results in a permanent loss of data.  Another consequence of over-compression is the phenomenon of Lossy Compression Artefacts.  These are the tiny blurred and pixellated blobs that appear most notably in highly compressed JPEG images.

Moiré is an artefact associated with image resolution.  It can be produced wherever two regularly occurring patterns overlap.  For more see HERE.




Purple Fringing is a broad term to describe a number of different but similar artefacts. It is commonly caused by aliasing where a dark edge comes in contact with a bright background, such as the sky.  A simple demosaicing interpolation will give rise to this and can be avoided by processing the image from raw using a more complex interpolation (for more see HERE).  Purple fringing may also be symptomatic of chromatic aberration or may even simply be noise associated with dark areas and shadows of an image.



A White Balance Error could be considered an artefact if it hasn't been corrected.  White balancing is required in order to try and replicate a facility in the human brain that allows us to filter out minor changes in light colour temperature.  Unfortunately the camera is not particularly good at correcting white balance on its own, hence white balance errors are common.  Note, in the image above there is a yellowish cast to the bird and tree.  This is in fact the correct white balance for the scene as the sun was setting at the time the image was taken.  Removal of the yellow cast for a more neutral white balance could be considered an acceptable alteration of the image.  An obvious white balance error on the other hand would be a strong bluish cast, which wouldn't make sense given the scene lighting.  It takes considerable practice to develop an eye for white balance errors and their correction.  For more see HERE.

Conclusions

Hopefully this brief summary covers all the commonly encountered image artefacts.  If I have missed any please drop me an email and I will update this posting and page.  At first this collection of strange anomalies might seem a bit daunting but one should bear in mind that most artefacts have little or no impact on identification and we really need only concern ourselves with the ones that directly challenge a difficult identification.



Monday, 22 September 2014

Birds and Light - Lighting and Composition

The 6th Image Quality Parameter

Birders might use the term "Viewing Conditions" to describe the quality of an observation of a bird in the field.  The nearest equivalent in photographic terms is "Lighting and Composition".  I have been wrestling with the thought of including lighting and composition as an additional image quality parameter from the very start.  Eventually, I decided that the Image Quality Tool was complicated enough without trying to factor in such a challenging aspect.

Through the eye of a photographer, lighting and composition is what puts an individual's stamp on an image and sets the tone or emotion of the scene.  When it comes to bird identification images however we are far less concerned with aesthetics and more interested in the accurate capture of colour and fine detail.  We are also looking for that perfect angle or composition that leaves us in no doubt about the veracity of the evidence.

Lighting 

The quality of light has a huge impact on focus, acutance, exposure, colour and artefacts.  In other words, it strongly influences virtually all aspects of image quality.  


The diagram above pretty much summarises the main lighting conditions encountered by birders in the field.  I have put a number of these situations under the microscope HERE, particularly with a view to understanding the influence of different lighting conditions on colour and white balance (HERE).  

But light also affects the image in various other ways.  Direct sunlight tends to produce higher contrast images and therefore improves acutance (the impression of edge sharpness).  On the other hand, cameras do not handle lighting contrast as well as the human eye and, as a result, bright sunshine challenges the dynamic range of the camera and makes it more difficult to obtain a good exposure (for more see HERE).

Diffuse light is far more comfortable to work in, both for the birder and for the photographer.  A bright, overcast day offers perhaps the ideal lighting.  Clouds scatter white light evenly and create low contrast images with soft, grey shadows and even, saturated colours.  The position of the sun in the sky determines how shadows fall and as the sun nears the horizon the colour temperature and brightness of light shifts dramatically.

The advice from every photographer and birder is the same - for the best results try and keep the sun to your back when viewing or photographing a bird.  With a little practice it is possible to read the lighting in virtually any photograph.  From the length, position and contrast of the shadows to the sun's glint in the bird's eye, it pays to look closely at the lighting in an image during the identification process.

Composition

Here, what we are interested in is the perspective, or angle of view.  The image below depicts some typical photographic perspectives.  If we exclude the influence of lighting for the moment, there are clearly some viewing perspectives that are more useful to us than others for identification purposes.  Side profile (or slightly offset) seems to be consistently the best viewing angle.


But remember, a bird has joints and can move its head, its wings, its feet and legs, and even individual feather tracts in different directions.  So, the ideal composition tends to be an image where the body, head, legs and wings are all aligned in profile.

An unstable partnership

Now let us consider the combination of lighting and composition together.  It doesn't take much thought to conclude that ideal lighting and composition only exists fleetingly and requires a lot of luck and persistence on the part of the photographer.  Conversely, more often than not, when faced with an identification puzzle involving a difficult set of images, poor lighting and/or composition often plays a big part in the challenge.

Shadows are NOT image artefacts

I have occasionally noticed shadows being referred to as image artefacts in bird identification discussions.  I feel I must point out that this is an incorrect use of the term.  An artefact is anything which distorts an image, i.e. it impacts the light after it has left the subject.  Light and shade are both intrinsic parts of the image itself and are not distortions.  Artificial shading (e.g. vignetting) is an artefact as it alters the image as it passes through the lens.

I think a much more appropriate description of these image or identification anomalies would be simply what they are - i.e. "Lighting Effects" or "Shading Effects".


The left hand image might suggest that this bird has a dark head but it is purely a shading effect, as clarified by the right hand image.  Incidentally, at the time this bird was photographed it was perfectly identifiable as a Subalpine Warbler (Sylvia cantillans).  However, thanks to Lars Svensson's ground-breaking paper and proposed 3-way split of the species (British Birds 106:651 - 668), even with images like these it may no longer be assignable to form, shadows or no shadows!

Despite the subtle shadow effects that can happen during any photographic setting or session, the above images were taken in lovely soft, diffuse lighting, approaching sunset.  Contrast these images with that of an even rarer warbler which turned up around the same time on the same island (the famous Cape Clear Island in Co. Cork).


This Eastern Olivaceous Warbler (Iduna pallida) is a mega rarity in Ireland.  Its subtle plumage colour tones are a key part of its complex identification.  This bird was very confiding, defending its preferred Sycamore for well over a week.  It afforded superb opportunities for frame-filling, portrait compositions but unfortunately the lighting in almost every shot was pretty terrible.  Again, the dappled light and shade is not an image artefact but just a normal part of lighting.  As stark as the light was, it might be argued that its dynamic range was beyond the range of the camera at times - the tail tip is a blown highlight for instance and there is a hint of blooming (an artefact) on the left hind claw.  So perhaps one might argue there is a distortion of the true image here.  I would still probably refer to it as extreme lighting rather than use the term artefact to describe the high contrast discrepancy.

This image is also useful because of its challenging white balance.  There are in fact two competing ambient light sources here.  Firstly, we have the sunlit part of the bird, which is extremely bright and overexposed in this image and of a very pure white colour.  Then we have the shade, which is correctly exposed but which is effectively lit by the sky and therefore has a blue colour cast to it.  I have actually white balanced this image by eye, but the technically correct thing to do would have been to place a grey card in the shade of the tree and capture an image of that for white balance purposes.  Given that the part of the bird in the shade is the correctly exposed part, the correct white balance reference in this instance is the shaded part of the bird.  Then again, had I been following the ETTR model of image exposure I would have been under-exposing the image, trying to avoid blown highlights.  Well, I never claimed to be a great photographer!  For more on dual white balance see HERE and for more on ETTR and other thoughts on image exposure see HERE and HERE.
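For the numerically inclined, the grey card correction described above can be sketched in a few lines of Python.  The idea is simply to scale each colour channel so that the photographed grey card comes out neutral; the RGB values below are made up for illustration, not taken from any real image.

```python
def grey_card_gains(card_rgb):
    """Per-channel gains that make a grey-card sample neutral.

    card_rgb is the (R, G, B) of the card as photographed; gains are
    normalised to the green channel, the usual reference in camera
    white balance.
    """
    r, g, b = card_rgb
    return (g / r, 1.0, g / b)

def apply_wb(pixel, gains):
    """Apply white-balance gains to one 8-bit RGB pixel, clipping at 255."""
    return tuple(min(255, round(c * k)) for c, k in zip(pixel, gains))

# A grey card photographed in open shade picks up a blue cast
# (hypothetical sample values):
shade_card = (110, 120, 140)
gains = grey_card_gains(shade_card)

# After correction the card itself reads neutral:
print(apply_wb(shade_card, gains))  # (120, 120, 120)
```

The same gains would then be applied to every pixel in the image, which is effectively what a photofinishing tool does when you click its grey-card eyedropper on a neutral patch.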

Friday, 19 September 2014

Birds and Light - Metering and Exposure

In an earlier post (HERE) I explored some of the challenges facing a birder trying to obtain a decently-exposed image of a bird, including the difficulties of working with long lenses, wide apertures, fast shutter speeds and high ISO.  I also looked at related image artefacts, noise and blooming.  Lastly, I looked at some methods for optimising image exposure including ETTR and exposure bracketing.

Here I am going to take a closer look at light metering, the brains behind automated camera exposure.  Firstly, why do we need light metering at all?  In an ideal world the camera would be sensitive enough to record exactly what it sees and with minimal fuss.  Unfortunately digital sensors, and indeed film stock, are less sensitive to light than, for example, the human eye, and digital cameras are also incapable of capturing the entire dynamic range of every scene we encounter.  Dynamic range is the contrast between the brightest parts of the scene and the darkest parts.  Outdoors, on a bright day, the light contrast can far exceed the dynamic range of any camera.  On a dull, overcast day, chances are, everything in a scene falls well within the dynamic range of the camera and image exposure tends to be far easier to get right.  Metering is used to adjust camera exposure in order to capture a certain contrast range so that at least the part of the scene we are interested in (i.e. the bird) is well exposed.  The key is to be able to meter just the subject which we wish to capture.

A photographic image is possible because incident light coming from the sun, the sky and surrounding objects all reflect off a subject and that reflected light then enters the camera to be recorded by the sensor.

There are two important and distinct components at play here.  Firstly we have the reflectance of the subject we are photographing, and secondly we have the intensity of the incident light hitting the subject and reflecting off it.  Reflected light from the subject is therefore a result of both of these components working together.  It stands to reason that a highly reflective surface will reflect more incident light and will therefore appear brighter to the camera.  The light meter, however, merely measures the amount of reflected light reaching the lens.  This presents a big problem.  Because a camera cannot distinguish a bright day from a brightly reflective object there has to be a trade-off, and the trade-off is this: a camera's default is to consider the world and everything in it as being of a uniform pale grey reflectance (approx. 18% grey).  When a camera meters light it treats the reading as though it had reflected off an 18% grey subject, and works back from that to the incident light, i.e. the light hitting the subject, not the light reflecting off it.  Hopefully the illustrations below explain this more clearly than I can.
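A quick numerical sketch (with made-up light levels) shows why reflected-light metering is ambiguous: the camera receives the product of incident light and reflectance, and two very different scenes can produce exactly the same product.

```python
# Reflected luminance = incident illuminance x reflectance, so two very
# different scenes can send the camera exactly the same signal.
# All figures below are invented, purely for illustration:
white_gull_dull_day = 5_000 * 0.90    # white bird (90% reflectance), dull light
grey_rock_bright_day = 25_000 * 0.18  # 18% grey subject, bright light

print(white_gull_dull_day, grey_rock_bright_day)  # 4500.0 4500.0
```

The meter sees 4500 in both cases and cannot tell them apart, which is precisely why it falls back on the 18% grey assumption.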


As the illustration above demonstrates, there are alternatives to using a camera's on board light meter.  Handheld light meters and grey cards both work by giving a proper measure of incident light intensity.  Armed with this information a useful image exposure can be worked out.  This approach is somewhat more reliable than on board exposure metering, which measures reflected light, but obviously there is a limit to its practicality for most bird photography.  So, let's assume that for the most part, light metering in bird photography is done by the camera and involves reflected light only.


At its most basic, camera metering using one spot can lead to results that are way off the mark, and this is entirely down to the camera's inability to measure reflectance.  However, if the scene is highly variable and highly contrasting it may help to spot meter and then use exposure compensation to make up for an expected discrepancy.  For example, when photographing gulls, spot metering off the grey mantle of a European Herring Gull (Larus argentatus) might produce a reasonable exposure, but trying to use the same method to photograph a Lesser Black-backed Gull (Larus fuscus) will tend to produce over-exposed images, because the back of a LBBG is much darker than 18% grey.  The solution might be to meter off the white instead and apply a standard exposure compensation, or else use an evaluative light metering method that takes into account far more points in the scene, including the background.  Again, however, none of these metering methods may produce perfect results.
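The "standard exposure compensation" mentioned above can be estimated numerically.  This little Python sketch works out the compensation in stops for a spot-metered patch of known reflectance; the reflectance figures are illustrative guesses on my part, not measured values for these species.

```python
import math

def spot_meter_compensation(subject_reflectance, assumed_grey=0.18):
    """Exposure compensation (in stops) to dial in after spot metering
    a patch of known reflectance.

    The meter renders whatever it reads as 18% grey, so a patch brighter
    than that needs positive compensation and a darker patch negative.
    """
    return math.log2(subject_reflectance / assumed_grey)

# Hypothetical reflectances, for illustration only:
herring_gull_mantle = 0.18  # pale grey, close to the meter's assumption
lbbg_mantle = 0.06          # much darker slate-grey
white_breast = 0.85

print(round(spot_meter_compensation(herring_gull_mantle), 1))  # 0.0
print(round(spot_meter_compensation(lbbg_mantle), 1))          # -1.6
print(round(spot_meter_compensation(white_breast), 1))         # 2.2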

Metering options include spot metering, as illustrated above; evaluative metering, based on a large number of metered points across the scene; partial metering, which uses fewer points; and centre-weighted metering, which meters more points but concentrates them around the centre of the image, where the subject is most likely to be.  None of these options are foolproof.  One of the big advantages of digital cameras is that the photographer has the ability to check and adjust metering and exposure and hopefully go back for more images until a satisfactory exposure is nailed.



There is obviously a lot more to metering and exposure but at the moment I am only touching on the highlights, primarily to illustrate just how difficult it is to get exposure just right.  For those interested in reading on check out THIS nice tutorial on the Cambridge in Colour website and also check out links HERE and HERE.

Having split up camera exposure into a number of its complex components, how do exposure time, light intensity and dynamic range, subject reflectance and camera dynamic range all relate?  Here is an attempt at presenting all these elements together.


Our ambient lighting is ever changing.  In dull conditions it is low in contrast and dynamic range, but on bright days its dynamic range far exceeds that of the camera.  Depending on where the subject is positioned in its environment (directly in sun, or in partial or total shade) a certain light intensity will be shining on the bird.  The reflectance of the bird's plumage and bare parts will determine how much of this incident light reflects towards the camera.  While the subject's reflectance is fixed, the actual intensity of light reflecting from the bird is in direct proportion to the intensity of incident light hitting it.  The camera records this reflected light using the on board light meter and uses this information to try and gauge an appropriate exposure time.  If the exposure is correct then the subject's range of reflectance will fall within the camera's dynamic range.  Otherwise there will be a mismatch, with the result that the image will be either under- or overexposed.  If this is severe enough there will be an irretrievable loss of detail and colour, and image artefacts like noise and blooming will be introduced.  So, accurate metering of the subject is the critical component in all of this.
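The idea of the scene's contrast fitting (or not fitting) inside the camera's dynamic range can also be sketched numerically.  The Python below compares the scene contrast, in stops, with an assumed 12-stop sensor; all the luminance figures are invented for illustration.

```python
import math

def fits_dynamic_range(brightest, darkest, camera_stops=12.0):
    """Does the scene's contrast fit within the camera's dynamic range?

    brightest/darkest are relative luminances of the lightest and darkest
    parts of the scene we care about; camera_stops is the sensor's dynamic
    range in stops (12 is a plausible, assumed DSLR figure).
    """
    scene_stops = math.log2(brightest / darkest)
    return scene_stops <= camera_stops

# Dull, overcast day: low contrast, easily captured in one exposure:
print(fits_dynamic_range(brightest=200, darkest=2))      # True (~6.6 stops)
# Harsh sun and deep shade: contrast beyond the sensor's range:
print(fits_dynamic_range(brightest=100_000, darkest=5))  # False (~14.3 stops)
```

When the second case arises, no single exposure can hold both ends, and something (highlights or shadows) has to be sacrificed.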

Saturday, 13 September 2014

A closer look at Image Exposure

For most photographers, image exposure was traditionally the most important image quality parameter.  However, modern digital image processing software has made it possible to correct exposure errors to a considerable extent at the finishing stage.  So, for me, exposure is now only about as important as the other image quality parameters in the Image Quality Tool.

Exposure is simply the amount of light hitting the sensor or film, giving rise to the image, and is measured as illuminance times exposure time (lux seconds).  Strictly speaking, therefore, the parameters which impact on exposure are those which increase or decrease the available light, namely the lens and filters, the aperture size and the shutter speed.
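As a worked example of the lux seconds definition, here is a trivial Python sketch of the reciprocity between light intensity and exposure time; the illuminance figures are arbitrary.

```python
def exposure_lux_seconds(illuminance_lux, shutter_seconds):
    """Photometric exposure H = E x t (lux seconds) at the sensor plane."""
    return illuminance_lux * shutter_seconds

# Halving the shutter time halves the exposure; to keep H constant the
# illuminance at the sensor (set by the aperture) must double, i.e. the
# aperture must open one stop:
h1 = exposure_lux_seconds(100.0, 1 / 500)   # hypothetical sensor-plane lux
h2 = exposure_lux_seconds(200.0, 1 / 1000)
print(round(h1, 6), round(h2, 6))  # 0.2 0.2
```

This is the classic reciprocity trade-off: many different aperture/shutter pairs deliver the same total exposure.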

ISO is also generally considered part of the exposure equation, and increasing ISO brightens the final image much as widening the aperture or lengthening the exposure time would.  However, ISO adjustment involves processing (amplification) of the image data and therefore takes place after the initial exposure has been made.
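Conceptually, then, ISO can be sketched as a simple multiplier applied to the raw data after capture.  The Python below is only a toy model of that idea, not how any real camera pipeline is implemented, and the photosite counts are invented.

```python
def apply_iso_gain(raw_signal, iso, base_iso=100):
    """ISO as post-capture amplification: each doubling of ISO doubles
    the recorded signal (and, unavoidably, the noise riding on it).
    """
    gain = iso / base_iso
    return [s * gain for s in raw_signal]

# Hypothetical raw photosite counts, amplified from ISO 100 to ISO 400:
photosite_counts = [10, 40, 120]
print(apply_iso_gain(photosite_counts, iso=400))  # [40.0, 160.0, 480.0]
```

Because the multiplication happens after the photons have been collected, raising ISO cannot add detail that the exposure never captured; it only scales what is there, noise included.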

Exposure and Bird Photography

In terms of bird photography, as we are dealing with small, fast moving and often not particularly approachable subjects, image capture and exposure tends to be a particular challenge.  Bird images are often produced under the following, rather difficult conditions:-

A LONG LENS 
To capture a reasonably sized image of a small, distant bird we tend to require a long lens, 300mm or longer.  The longer the lens, the lower the transmission of light through to the sensor.  Digiscoping is the ultimate in long lens photography and can be frustratingly difficult.


Digiscoping was probably at the peak of its popularity in Ireland in 2004 when this stunning spring male Hawfinch (Coccothraustes coccothraustes) turned up in a birder's garden in County Cork.  Rarely does such a perfect photographic opportunity present itself.  Yet, I can still remember the frustration of trying to obtain satisfactory images with my Nikon Coolpix 4500 through a Leica scope.  With a DSLR an image of this quality would be relatively straightforward.  While the composition may not be great, the exposure of this image is quite good, all things considered.  There are still some telltale digiscoping clues, including a shallow depth of field and vignetting (dark edges).  This was one of only two acceptable images obtained in over an hour while having this stunning bird at near point blank range!  Still, one can hardly complain about views like this, not to mention the kind hospitality of the finder - this was photographed through a kitchen window with a cup of tea in one hand!

HIGH SHUTTER SPEEDS
To capture a fast-moving subject with a long lens while avoiding motion blur, the shutter speed must be very fast - perhaps 1/1000th of a second at times.  This further reduces the amount of light available to create an optimally exposed image.

APERTURE PRIORITY
The aperture of a lens determines the depth of field of the image, i.e. the depth of the scene that appears in focus.  A greater depth of field ensures more of the subject will be in focus, which becomes more of a challenge the closer the subject is to the camera.  However, the greater the depth of field, the smaller the aperture and therefore the lower the light transmission.  Many photographers trade a shallow depth of field for a lower ISO and higher shutter speed in order to obtain a sharp, reasonably exposed image, only opting for a greater depth of field on brighter days or where lower shutter speeds allow.
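The depth of field trade-off described above can be put into rough numbers using the standard thin-lens/hyperfocal approximation.  The Python sketch below assumes a hypothetical 500mm lens, a bird 10m away and a 0.03mm circle of confusion (a common full-frame figure); real lenses and sensors will differ.

```python
def depth_of_field_mm(focal_mm, f_number, subject_mm, coc_mm=0.03):
    """Approximate total depth of field (mm) from the thin-lens model.

    coc_mm is the circle of confusion; all distances in millimetres.
    """
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = hyperfocal * subject_mm / (hyperfocal + subject_mm - focal_mm)
    if subject_mm >= hyperfocal:
        return float("inf")  # everything out to infinity is acceptably sharp
    far = hyperfocal * subject_mm / (hyperfocal - (subject_mm - focal_mm))
    return far - near

# A 500mm lens on a bird 10m away: stopping down from f/4 to f/8
# roughly doubles the in-focus zone, at the cost of two stops of light.
print(round(depth_of_field_mm(500, 4.0, 10_000)))  # 91  (about 9 cm)
print(round(depth_of_field_mm(500, 8.0, 10_000)))  # 182 (about 18 cm)
```

With only about 9cm in focus at f/4, it is easy to see why a long lens wide open so often leaves the bill sharp and the tail soft.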

HIGH ISO
ISO is a setting that allows the light sensitivity of the sensor to be increased.  This is done by amplifying the image data, so it is effectively part of post-production.  The downside is increased noise, though modern electronics and processing algorithms have greatly improved the quality of high ISO images.  Because some modern compact cameras produce better quality images at high ISO, and also offer the ability to shoot in RAW, some birders have returned to digiscoping as their preferred option for capturing bird images.  Not only is a compact digital camera and digiscoping setup far cheaper than a DSLR setup, it is also far more compact and convenient in the field.  On the flip side, if you have ever tried to digiscope a small, fast-moving passerine or a bird in flight you will soon discover the limits of your patience!

Optimum Exposure, Light Metering and Dynamic Range

Human vision can adjust to bright sunlight and very low light, but we struggle to deal with both at the same time.  Our eyes adapt by pupil constriction (in bright conditions) or dilation (in dull conditions), and there is also a delay while some biochemical changes take place.  A camera can adapt to changing light more quickly, but digital cameras can only deal with light intensity within a narrow band, referred to as the camera's dynamic range.  An optimal exposure is one in which the entire range of luminance of the subject being photographed falls within the camera's dynamic range.

Adapting to different light intensities relies on the camera's on board light meter, and camera exposure is adjusted accordingly.  But in high dynamic range settings the camera will simply not be able to capture all highlights and shadows within the same scene.  High dynamic range scenes, consisting of patches of bright light and deep shadow, tend to produce some of the least acceptable photographic results and quickly reveal the limited dynamic range of a digital camera.  In this scenario, careful use of the camera's light metering is essential in order to capture a useful image of a bird in the middle of such a high contrast scene.

Put another way, if the camera is set up to judge exposure based on the average lighting throughout a whole scene consisting of bright sunlight and deep shadows, it is unlikely that the chosen exposure setting will be suitable to capture the subject properly.  The bird will either be too light or too dark in the final image.  More than likely the photographer will have to select spot metering and aim the metering spot directly at the bird to have some chance of obtaining a correct exposure.

Lastly, it is important to be aware that a light meter is calibrated to a grey patch of approximately 18% grey.  If the meter is aimed at a white gull or some other bright white object the result will be a dark, underexposed image.  Similarly, if the meter is pointed at a black crow or other very dark object, the result is a bright, overexposed image.  So, if spot metering is being used, one must also use exposure compensation to adjust for the luminance of the actual subject being photographed.  All in all, it should be pretty clear that obtaining a perfect image exposure of a bird is no mean feat!  Very often, when we are dealing with a difficult ID involving a poor quality image, the subject has not been optimally exposed.  This is very commonly the case with digiscoped / phonescoped images, images of contrasting scenes or birds with contrasting plumage markings.

Optimal Exposure Techniques (eg. ETTR) and Exposure Bracketing

"Exposure to the right" or ETTR is photography intended for optimised exposure of the subject in which the photographer studies the histogram obtained from the most recent image or in live histogram if the camera has that function and then adjusts exposure in order to push the image histogram to the far right of the graph, usually just avoiding clipping image data.  Why to the right?  Digital images tend to allow a greater "Latitude" for recovery of underexposed detail than overexposed detail.  Hence, exposing to the right ensures highlight details are preserved, possibly at the expense of some shadow detail.

ETTR should be done with ISO set to the minimum (ISO 100) and therefore implies good lighting and a cooperative subject.  With the camera set up this way the exposure is optimal for the subject being captured and noise and other artefacts are hopefully kept to a minimum.

Despite the potential for dark, underexposed images with this method, amazing results can be obtained when processing in RAW, even with high dynamic range scenes and in low light situations, provided one is prepared to spend time working on images in post-production.  For more see HERE.

Exposure bracketing is another useful technique for improving one's chances of obtaining a preferred exposure.  It can be used in conjunction with rapid continuous shooting and basically produces a sequence of three or more images at different exposure settings in succession.  It is space and time intensive, as one must be prepared to pore over and dump most of the images.  Preferably, and assuming the subject is being cooperative, it is better to take the time to compose and meter for an accurate exposure.  For more on bracketing see HERE.
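As a quick illustration of how a bracket spreads exposures around the metered value, here is a Python sketch that converts stop offsets into shutter speeds (each stop doubles or halves the exposure time).

```python
def bracket_shutter_speeds(base_shutter, steps=3, stop_size=1.0):
    """Shutter speeds (in seconds) for a bracket around a metered base.

    A 3-shot bracket at 1-stop spacing gives under, metered and over
    exposures; each stop doubles or halves the exposure time.
    """
    offsets = [stop_size * (i - steps // 2) for i in range(steps)]
    return [base_shutter * (2 ** ev) for ev in offsets]

# Bracketing around 1/500 s at one-stop spacing gives
# 1/1000 s, 1/500 s and 1/250 s:
print(bracket_shutter_speeds(1 / 500))  # [0.001, 0.002, 0.004]
```

In practice the camera's auto-bracketing function does this arithmetic for you, often offering 1/3- or 1/2-stop spacing as well.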


In the diagram above I have tried to present the overall effects of under- and overexposure on highlights, mid-tones and shadows, plus image artefacts.  The top row of discs represents a range of luminance patches captured at optimal exposure.  The middle row shows the same discs captured in an underexposed image.  There is a net loss of contrast, there may be clipping of image data in the deep shadows, and noise increases, all as a result of underexposure.

The bottom row depicts overexposure of the same range of discs.  Clipping may occur in the highlights and a bloom artefact may be introduced as electrons spill over across photosites in extreme cases.  Deep shadows are reduced and the overall image is reduced in contrast.  On the plus side, noise may be reduced during overexposure but this doesn't tend to make up for the irretrievable loss of data in the highlights.  As stated above, there tends to be much greater latitude for retrieving image data in underexposed images than in overexposed images.





This image displays and summarises the effects of exposure on image detail and colour.  It echoes the comments above regarding latitude in over- and underexposed images.  Note especially the loss, due to blooming, of subtle colours and fine detail, an artefact caused by overexposure.  By contrast, fine detail and acutance remain reasonably good in underexposed images, though colour does suffer quite a bit.  ETTR encourages slight underexposure in order to preserve highlights.  Using the lowest ISO setting (ISO 100) during ETTR minimises noise, an artefact associated with underexposure.

See also
Lighting and Composition
Metering and Exposure