Saturday, 25 April 2015

Colour - Blue Tit UV Reflectance

UV reflectance as a form of photography has been around for some time.  However, owing to the expense and technical challenges involved, it has been primarily the domain of specialist photographers and researchers, until now.  I came up with a simple and cost-effective solution which I researched, experimented with and promoted through the blog over a series of postings last summer.  My postings on UV reflectance (e.g. HERE) have been among the most popular on this blog to date.  Last summer I failed to get any images of Blue Tit (Cyanistes caeruleus), so I decided to wait until this spring, when breeding plumage would be at its best, to revisit this species.

There has been quite a lot of UV reflectance work carried out with Blue Tits over the years.  Spectrophotometry is invariably the method used for scientific research, requiring capture of birds and careful measurement of UV reflectance from the blue crown feathers using a spectrophotometer (for an example of this in action see HERE).  In one study (HERE) Andersson et al. found that male Blue Tits have on average a higher UV reflectance than females, indicating sexual dichromatism in the UV spectrum.  So, while humans have great difficulty visually separating the sexes, it is thought that Blue Tits can easily tell the difference as they can see in UV.  The males would appear to have a brighter cap than the females.  Of course, just how UV light appears to birds we can only speculate.

Whatever about being able to tell the sexes apart based on UV reflectance, I was curious to see just how bright and reflective the crown would appear using my UV camera setup.  The peak UV reflectance per Andersson et al. lies at around 370nm, which should be well within the range of my camera.  During earlier testing of my rig (HERE) I was able to obtain a bright UV image from a Peedar UV lamp, which the manufacturer states has a peak output at a wavelength of 365nm, roughly in line with the peak UV reflectance of the crown feathers of the Blue Tit.

This morning I obtained the images shown in this posting.  The colour images were taken with my Canon 70D and 300mm lens, while the UV reflectance images were taken with my Sony DCR-TRV270E, modified only by a Baader-U filter in front of the lens as depicted HERE.

I am quite certain that I filmed both males and females as I encountered a number of pairs this morning.  The results kind of speak for themselves.  Once again I have found UV reflectance in birds to be extremely subtle.  I had hoped that UV reflectance in this species would be a bit more dramatic, but sadly not so.  I wouldn't rule out the possibility that if I had male and female Blue Tits together in the hand it might be possible to detect a subtle difference in UV reflectance, but it is certainly not obvious from the images I obtained in the field.  

The evidence from studying flower nectar guides and butterflies (most notably the Common Blue Polyommatus icarus) has shown that this method works and produces some really intriguing results.  But so far I have to say the results on the avian front have not been very exciting.  I have no doubt that there is a lot to discover about UV in birds, as research using the far more exacting spectrophotometric method has already revealed.  But UV reflectance photography may not be about to offer the kind of insights I had been hoping for.

Marsh Marigold (Caltha palustris) demonstrates a striking nectar guide.  Each petal has a UV reflective tip and a UV absorbing centre.  The centre of the flower is also UV absorbing.

Friday, 24 April 2015

Field Marks - False Contrast

Contrast is not recorded by the camera sensor.  It is one of the corrections applied as part of the process of creating an image from a RAW data file.  Together with saturation, focus and white balance, the degree of contrast correction applied is typically set by the camera, with limited input from the photographer.  Most modern digital cameras make a reasonable stab at contrast adjustment when automatically generating JPEG images, but for various reasons photographers may like to adjust contrast further, often together with the overall brightness of the image.  Or, contrast may be inadvertently changed by other forms of post-processing.  The inter-relationship between contrast, saturation, brightness and focus correction during post-processing is surprising, as explored HERE.  Alternatively, and preferably of course, contrast can be adjusted while working directly from RAW.

Contrast may be naturally low, for example on a foggy or overcast day.  Or it may be naturally high, for example on a bright sunny day, exacerbated by a bright reflective environment such as snow, ice or a white sandy beach.  Cameras cannot replicate the dynamic range of human vision, so bright days give rise to unnaturally high contrast images, with a significant loss of tonal range.

Faced with all these challenges, image contrast is not an easy thing to get right.  More often than not image contrast does not give a totally accurate representation of the subject.

So let's face it, "false contrast" is commonplace.  Why does contrast matter?  Well, in terms of field marks, contrast can be very important.  For the most part, the preference for appreciating field marks correctly would be a naturally low contrast, high dynamic range image, where all colours and field marks are represented by as wide a tonal range as possible.  For more on high dynamic range imaging (HDRI) see HERE.  In categorizing field marks broadly into the Bold and the Bland, contrast is identified as a key parameter distinguishing the two.  Bold field marks tend to exhibit high contrast relative to features around them, while bland field marks are often low in contrast, blending in with their surrounding features.  Bland features also tend to require a broad tonal range, so high contrast is not compatible with the full range of tones needed to properly display many bland field marks.
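For readers who like to experiment, the bold/bland distinction can even be given a rough number.  The little Python sketch below (my own toy measure, with made-up sample values, not anything from the literature) uses RMS contrast - the spread of tones in a patch about their mean - as one simple, objective way of comparing a bold marking to a bland one.

```python
import statistics

def rms_contrast(tones):
    """RMS contrast of a patch of tones (each 0..1): the standard
    deviation about the mean.  Bold markings score high, bland low."""
    m = statistics.fmean(tones)
    return statistics.fmean((t - m) ** 2 for t in tones) ** 0.5

bold_patch = [0.1, 0.9, 0.1, 0.9]      # hypothetical high-contrast marking
bland_patch = [0.45, 0.55, 0.5, 0.5]   # hypothetical low-contrast marking
```

On these invented values the bold patch scores 0.4 and the bland patch only about 0.035 - an order of magnitude apart, which matches the intuition above.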

It is also possible to demonstrate through an analysis of field marks that various image quality parameters impact on contrast, which in turn impact on the appearance of field marks.

Of course, the primary focus of this blog is bird identification from digital images.  Poor contrast correction in digital images can have a real bearing on bird identification.  Let's take for example the identification of Catharus thrushes here in Europe.  The main confusion species on this side of the water is the Song Thrush (Turdus philomelos).  Song Thrush is noticeably bigger than any of the Catharus species but size can often be difficult to assess in the field, and of course is generally not possible to judge at all from photographs.  One of the useful features separating Song Thrush from all of the Catharus thrushes is the contrasting pattern of the ear-coverts (auriculars).  In Song Thrush there is a noticeable contrast, whereas in the Catharus species, while the pattern may be similar, it is more subdued.  If image contrast is poorly corrected it can have a bearing on a difference like this.

In the first image below the contrast of the Song Thrush image has been lowered.  More specifically, the mid-tone shadows have been brightened with the result that the contrast of the ear-coverts has been reduced together with the shadows.  Subtle corrections like this can easily go unnoticed.  The contrast in the ear-coverts is less obvious than it might have been in life.  This is one of the problems with HDRI if it is done without consideration for features that are obscured within the shadows.  For more on lighting tools see HERE.
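To see why a shadow-lifting correction eats into a subtle contrast difference, here is a minimal Python sketch (my own toy model with invented tonal values, not the actual adjustment applied to the image above): raising the black point brightens dark tones, but it also shrinks the separation between any two tones by the same factor.

```python
def lift_shadows(tone, lift):
    """Raise the black point of a tone (0..1): lift + tone * (1 - lift).
    Dark features brighten, but the separation between any two tones
    shrinks by the factor (1 - lift)."""
    return lift + tone * (1 - lift)

# Hypothetical tones: a dark ear-covert marking against its surround.
ear_covert, surround = 0.10, 0.30
before = surround - ear_covert                                    # 0.20 apart
after = lift_shadows(surround, 0.25) - lift_shadows(ear_covert, 0.25)
```

Lifting the black point by a quarter brightens the marking but shrinks its separation from the surround from 0.20 to 0.15 - the same kind of flattening seen in the adjusted image.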

In the image below the contrast in both images has been dramatically increased.  Effects like this can occur naturally due to harsh lighting.  In fact the image of the Song Thrush looks quite natural, and indeed it is.  The image was taken on a cold winter's day, when the lighting was naturally quite harsh.  The image of the Swainson's on the other hand looks a bit off, and that is because the added contrast does not tally with the visual cues in the image.

It is important to state of course that, while we may think we have a good sense for the lighting in images, humans are actually not well equipped for this type of analysis and we frequently get it wrong.  

In addition to the natural contrast caused by lighting, contrast can be artificially added or removed in an attempt to compensate for poor exposure, or even due to the over-sharpening of images using unsharp mask.  

While a very experienced observer would still manage to identify this tricky pair, even from these gaudy images, someone less experienced, perhaps focused too closely on one or two field marks could easily fall foul of images such as these.  

Whether excessively low or high contrast is due to natural conditions, camera settings or the over-use of post-processing tools, the message is the same.  It pays to take stock of the overall quality of an image including lighting and composition before delving into the finer details.  Field marks can't be looked at in isolation.  It is also important to allow for the possibility that what we perceive from an image may be, at times, a poor representation of what we see in the field, and more importantly, a poor reflection of how the bird would look, up close and in good light.  Excessively low contrast can make bold field marks appear bland.  Excessively high contrast on the other hand can obscure bland field marks completely but at the same time might elevate subtle patterns into false bold field marks.

Saturday, 18 April 2015

Gestalt - An Overview

By now this blog has finally started to take some proper shape.  Organically, the search for answers has opened up a number of fronts, with spotlights variously on birds and light, colour, field marks, forensics, human bias and the image quality tool that started it all off.  This has all arisen from a simple germ of an idea - that it might be possible to create a digital guide to identification from photos, specifically for birders.  These broad areas of research can all be explored as individual pages from the top right-hand corner of the blog.  To these topics I have now added Gestalt.  

'Gestalt' (or 'G.I.S.S.' - 'general impression of size and shape' - also spelt 'JIZZ') is the name we give to the recognisable 'feel' of an individual species in the wild.  It is a combination of its structure, how it moves and its behaviour.  As birders gain experience in the field we quickly become aware of the gestalt of common species we encounter regularly.  When a new species appears in a familiar setting, very often its presence is first signalled by its gestalt - something unusual about its size and shape, or the way it feeds or moves about.  Of course a common species with an uncharacteristic behaviour or shape can fool an observer into believing they are watching a different species.  It is also very difficult to describe a bird's gestalt in any objective or measurable way.  Thus this subtle field craft has its pros and cons, its strong advocates and those who are more into field marks.  Most experienced birders tend to use gestalt in forming an initial impression but combine that with topographic field marks to form a solid identification.

It would be wrong to say that gestalt can be properly captured in an individual photograph, but video can go a long way towards capturing it.  Then again, if a video is merely a collection of photos, surely a flavour of a bird's gestalt is captured in every single photograph or frame of video.  The question is, how do we reliably and consistently draw this 'essence', for want of a better word, out of our photographs?

General Impressions of Size and Shape
Sometimes size and structure are wrapped up together in the definition of field marks and I don't think this is helpful, so I am intentionally splitting out structure (or morphology) and size (or biometrics) from the field marks question and tackling them separately here under gestalt.  In a Spotlight On Field Marks you will note I have deliberately separated structure and size from the patterns, colours and markings which I think forms a clearer, more concise definition of field marks.  
It might be tempting to think of a photograph as a good representation of size and proportion but this can be a very unwise assumption.  I have looked at this in some detail already HERE and I will be building on this analysis in future posts.

Tools and Guides
In a Gestalt Field Guide I looked at how the subtle question of gestalt has been handled in standard field guides.  Possibly the first overt attempt to deal with this came with the publication of Birds by Character by Rob Hume and illustrated by Ian Wallace, Darren Rees, John Busby and Peter Partington in 1990. 

Though the book is clearly a concise field guide, the sketchbook-styled plates mark its uniqueness among field guides.  Experienced birders will certainly appreciate how well gestalt has been captured in this book.  But this type of guide doesn't necessarily lend itself to an analysis of birds or gestalt from digital images.  The key issue here, I think, is that field guides, by their nature, cannot capture enough of the essence of a species to allow them to be used for direct comparison with photos.  One book has gone much further than any other I have seen, and that is Hawks at a Distance by Jerry Liguori.  But it is interesting to note, for instance, that despite showing fewer stills of Northern Goshawk, there was more of the behaviour of that species captured in Birds by Character than in Hawks at a Distance.  So there is clearly a lot to consider here and no easy approach to tackling and illustrating gestalt in a field guide.

I actually gave a bit of thought to the subject of how best to characterise gestalt from images very early on in the development of this blog, and I came up with a simple matrix to help describe the overall quality of an observation based on a single image.  Perhaps this is a better starting point.  So far I have concluded that most bird images fit into one of nine categories, where a bird can be described as either in near side profile (generally the best viewing angle), near front/back profile, or clearly offset from one of these.  I then categorised birds in images as either at rest - though perhaps "on the deck" is a better way of putting it as it doesn't preclude birds from actively feeding, preening or whatever - or in flight.  Or, lastly, birds may be captured about to take flight or land - I referred to this as "open wing" in the matrix (below).  While I may find this to be an oversimplification in time, I am going to use it as a starting point for further analysis around this whole question.

I am in little doubt that the best tool for studying gestalt is a moving image.  Video offers it all - size, shape, behaviour and, when put together, that essence that defines a living species which we call gestalt.  I have only just touched on this so far but I have a good collection of video gathered over the years in various formats and in various countries and environments.  I look forward to starting to train the blog in the direction of that stash of material.  Not only is there plenty of room to look at ways to extract and study gestalt, such as the useful animated gif technique which I showcased a while back on the blog.  There is also plenty of scope to look at image analysis techniques in the video sphere.

So, in summary, this is a very big field in itself with plenty of room for exploration and further twists and turns.  I will be digging into various areas and looking forward to seeing how the blog develops and where the research takes us.

Friday, 17 April 2015

Field Marks - False Malar Stripe

In this series of postings I have been exploring avian field marks from various perspectives.  One aspect which is of particular concern to us from a bird ID perspective is the potential for false field marks.  I touched on various aspects of this HERE.  Shadows give cause for confusion, particularly as the lighting in the field often means the bird bears no resemblance to the image in the book.  Experienced birders get over this and quickly learn how standard avian anatomy tends to result in some standard lighting tricks, which we can account for.  Most of the time light and shade fail to mimic field marks exactly.  But certain field marks are more prone to being 'falsified' by the light and other factors.  One in particular is the malar stripe.

The malar stripe is a fairly common marking.  It falls along the line between the throat and the submoustachial feather tracts and usually consists of markings on feathers belonging to the throat.  There seems to be a trend towards getting rid of the term malar stripe and replacing it with lateral throat-stripe, as summarized by David Allen Sibley HERE.

There happens to be an underlying apterium (featherless patch of skin) between these pterylae (feather tracts), called the submalar apterium.  So, a cleft will form here when these feathers are parted.  Difficulties arise due to the relative movement of these feather groups, with the result that the malar region may be difficult to properly assess at times, particularly in images.  Not only do we see shadows forming here but we also see the downy bases of feathers and perhaps even the submalar apterium itself exposed on occasion.  The impression of a false malar stripe and other markings is easily created under these circumstances.

Take this singing (Common) Nightingale Luscinia megarhynchos photographed in Morocco.  The deep cleft between the throat and the submoustachial in the right-hand images marks the position of the submalar apterium.  This is one of the iconic avian songsters but it generally sings from deep cover.  A photograph of one in song is a neat challenge.  The nominate western subspecies megarhynchos differs subtly from the eastern race hafizi (golzii) and from the closely related Thrush Nightingale L. luscinia by, among other features, its lack of a malar stripe.  But this feature is somewhat obscured when the bird's throat feathers are ruffled, as is the case for instance when it sings.  So, ironically, though its song is distinctive, a photograph of it singing frequently isn't!

Tuesday, 14 April 2015

Forensics - Analysis of Lighting and Shadow Direction

The quality of light determines the accuracy with which field marks including colours are portrayed as outlined HERE.  Bright light is high in contrast and this challenges the dynamic range of the camera, resulting in a loss of tonal range.  Put another way, bright light implies brighter highlights and darker shadows, both of which can obscure detail and colours.  Bright light is therefore far more challenging than dull or diffuse light.  In our efforts to overcome the challenge of harsh, bright light anything that helps us to understand the lighting in an image can aid our cause.  This may include judging the direction of the light source, and with it, the direction of opposing shadows.

We live in a three-dimensional environment.  Everything in 3D can be plotted along three axes: X, Y and Z.  If we consider the world from the perspective of our digital images we can only really appreciate two of these, the Y and the Z axes.  The depth in an image, the X axis, can be particularly tricky to work with.  Obviously, when it comes to outdoor images the sun is, more often than not, out of shot, and of course from the perspective of the observer and camera, it appears infinitely far away.

Analysing lighting starts with an understanding of the direction of the light.  While we may often have a rough idea where the lighting is coming from in an image, we can often be mistaken.  In the posting Lighting and Perspective I explored some of the reasons behind this.  In that posting I also discussed the issue of optical illusions (discussed further HERE).  Part of what often confuses us about light and shade in an image is that we fail to recognise that shadows have a three-dimensional geometry, as explained HERE.  Here I outline two tools to help us establish light direction more objectively from an image.

The Eye Technique
Though the shape of the eyeball differs among species, the cornea of the eye has a near-spherical surface.  When the sun is shining it often appears as a specular highlight on the surface of the cornea.  In theory we could use a spherical coordinate system to work out the exact position of the sun in 3D space from a single image containing a spherical reflector such as this.  But we don't have a way to work out size or proportion for the sphere involved, which presumably we would need for a reasonably accurate calculation.  In fact, however, we may not need such a high level of accuracy for our purposes, for reasons I will explain below.  We can gauge the sun's direction in the Y and Z axes at least by taking a line from the centre of the eye through the specular highlight.
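As a rough sketch of that last step (Python, with hypothetical pixel coordinates of my own), the in-plane bearing of the light is simply the angle of the line from the eye centre through the highlight:

```python
import math

def light_direction_2d(eye_centre, highlight):
    """Bearing of the light in the image plane (the Y/Z axes), in
    degrees anticlockwise from horizontal-right.  Both arguments are
    (x, y) pixel coordinates; image y runs downwards, so it is
    flipped here to make 'up' positive."""
    dx = highlight[0] - eye_centre[0]
    dy = eye_centre[1] - highlight[1]  # flip: pixel rows increase downwards
    return math.degrees(math.atan2(dy, dx))
```

A highlight up and to the right of the eye centre - say at pixel (110, 90) against a centre of (100, 100) - gives a bearing of 45 degrees, i.e. sunlight arriving from the upper right.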

The Edge Technique
Though the eye method is very useful it cannot always be relied upon.  The sun may be to the side or behind the subject.  It may be obscured by partial cloud or foliage.  The bird may have its head turned.  Or, the image resolution may be too low to properly assess specular highlights.  Or, simply, we may want a second method to help confirm our initial impressions of the direction of light.

The edge technique works by ignoring the X axis and looking for the direction of the light source in the Y and Z axes alone.  Why would we discount the X axis for our analysis?  Well, firstly there is no easy way to factor in the X axis as we cannot accurately measure depth in an image.  But, it turns out we don't really need the X axis lighting component for our purposes.  

Firstly, in a two dimensional camera image our narrow line of sight determines what we can actually see.  The surfaces of our subject which lie perpendicular to the lens, i.e. facing along the X axis, are plainly visible, whereas the surfaces which face the Y and Z axes are not visible at all in our image.  The illumination along our line of sight mostly determines the quality of the overall lighting and exposure in the image - not surprisingly, as this was the light metered by the camera.  Generally speaking, the shadows which arise from our subject along the X axis fall behind the subject, out of our line of sight.  The exceptions to this are generally straightforward and easy to interpret.  For example, a bird faces the camera and its bill casts a shadow onto its throat.  Or we have some obstruction or reflective surface in front of the subject which alters the lighting and shadow in some way.  More often than not we will have some idea what that is.

It could be argued that the components of the lighting and shadow which most often concern us are the Y and Z axis lighting components, running alongside and close to the plane of the image, close to the point where features are being obscured due to our angle of sight.  Because these features are harder to see clearly we may be more invested in understanding the lighting and shade in these areas of the image.

The image above is an illustration of the edge technique.  The principle behind the technique is that light intensity increases as the angle of incident light approaches the surface normal (i.e. perpendicular to the surface).  So, if we can locate and isolate the brightest points along the surface edge of our subject we should be able to judge the angle to the light source.  We can sample a number of points of high luminosity to cross-check and confirm our angles.
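The principle can be sketched in a few lines of Python (a simplified model of my own: it assumes we already have luminance samples taken anticlockwise around the outline, in ordinary x-right, y-up coordinates):

```python
import math

def estimate_light_angle(edge_points):
    """edge_points: samples (x, y, luminance) taken anticlockwise
    around the subject's outline (x right, y up).  Returns the
    estimated bearing to the light source in degrees: the outward
    normal at the brightest sample, since intensity peaks where the
    incident light is closest to the surface normal."""
    n = len(edge_points)
    i = max(range(n), key=lambda k: edge_points[k][2])
    x0, y0, _ = edge_points[i - 1]          # previous sample (wraps around)
    x1, y1, _ = edge_points[(i + 1) % n]    # next sample
    tangent = math.degrees(math.atan2(y1 - y0, x1 - x0))
    return (tangent - 90.0) % 360.0         # rotate tangent to outward normal
```

On a synthetic circular outline lit from a known direction this recovers the light bearing to within a degree or two; on a real edge profile, sampling several bright points and averaging, as suggested above, would be the safer approach.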

This may not be a 100% accurate method, as it can be hard to judge angles exactly, but we may be happy with a 10 to 20 degree deviation for our needs.  After all, this tool is merely helping us with other forms of qualitative analysis.

Of course, based on our understanding of how photographs are made, we are not actually measuring incident light intensity here.  A digital image consists of a combination of (reflected) light intensity, which we need, and surface reflectance, which we don't need.  If a bird's plumage consists of patches of very bright, reflective feathers and poorly reflective, dark feathers, judging the angle of the light source will be far more challenging because reflectance will confound our results.  This is not necessarily a show-stopper - just something to watch out for.  In some cases however this method simply won't work.

So how do we create the edge profile and identify the brightest pixels around the edge?  We start by opening up the image in Adobe Photoshop, Elements or some other image editor.  We make a new layer and fill it black, then make the black layer slightly transparent so we can just about make out the image layer beneath.  While still working in the black layer we select the eraser tool and trace around the edge of the subject.  We then make the black layer opaque again.  We now have our narrow edge profile.  The last step is to turn the image layer to greyscale and save the whole image as a new PNG file.

Now that we have our file we need a way to identify its brightest pixels.  The tool of choice is Color Quantizer (another freely available tool online).  By posterising the image to as few as 16 tonal levels and recolouring the brightest level with a colour tracer we can quickly pinpoint the brightest pixels.  Note we may find that 16 levels is too few, so we can retry at 32 levels, 64 levels or whatever we need to help discriminate the very brightest pixels.  I have used this tool and technique before to differentiate tonal levels HERE and HERE.
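The quantizing step can also be scripted.  The Python sketch below is a stand-in of my own for the Color Quantizer step, not the tool itself: posterise the greyscale values to a handful of levels and report which edge-profile pixels fall in the brightest occupied level.

```python
def brightest_edge_pixels(grey, edge, levels=16):
    """Posterise 8-bit greyscale values to `levels` tonal levels and
    return the edge-profile pixels that fall in the brightest
    occupied level.

    grey: dict mapping (x, y) -> luminance 0..255 for the edge pixels
    edge: iterable of (x, y) coordinates along the traced profile
    """
    step = 256 // levels
    quantised = {p: grey[p] // step for p in edge}  # 0 .. levels-1
    top = max(quantised.values())                   # brightest occupied level
    return [p for p in edge if quantised[p] == top]
```

As with Color Quantizer, raising `levels` from 16 to 32 or 64 narrows the brightest band if too many pixels qualify.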

The light source should be at a 90 degree angle to the surface edge where these pixels are located, but bear in mind the pitfall that is surface reflectance, as noted above.  Ultimately these methods may never provide the same dramatic or definitive results obtainable through 3D modelling in a virtual lighting space (HERE), but this method requires far less effort and yet is surprisingly accurate and useful enough for our needs.


Much of this posting and these techniques were inspired by the work of Prof. Hany Farid and his team at Dartmouth College.  Software tools have been developed to use these and similar techniques to authenticate digital images.  This is not free software.  Some may not even be available to the general public.  And some may not apply too well to bird images.  For instance, a technique to pinpoint the light source in 3D space is based on the dimensions of the average human eye.  Avian eyes on the other hand clearly vary from one species to the next, not only in measurements but also in morphology.  It is worth watching one of Prof. Farid's entertaining lectures online.  The video below provides additional insight into the science and maths involved, plus the worrying proliferation of fake images in politics and the media.

On a final note, while experimenting with light and perspective I found that in diffuse light, such as overcast conditions, the lighting direction cannot be assigned.  This is because the predominant lighting of the scene comes from the sky dome, not from the sun hidden behind the cloud blanket.  For more see HERE.

Wednesday, 8 April 2015

Forensics - 3D Analysis

In THIS posting I came up with a formula for analysing shadows based on a study of the contours of the bird and the distribution of subtle colour tones.  

Here is a far more sophisticated approach using 3D modelling by a renowned digital forensics expert Prof. Hany Farid from Dartmouth. This analysis looks at one of the most infamous photos in American history, the 1964 Oswald Life Magazine cover.

While this may not be a practical method for our purposes right now it demonstrates what can be achieved using 3D modelling.  3D modelling may still seem like an unlikely tool for the average birder but check this next video out.  Whether this software and video is 100% real or not, software like this may not be too far away and we may be employing 3D analysis before too long.

Sunday, 5 April 2015

Colour - Saturation (The Bold And The Bland)

With the spotlight on field marks I reached the conclusion that bold field marks and colours were more 'resilient' to image quality deterioration than bland field marks, as discussed HERE.

I have already tested exposure HERE but I didn't look closely enough at colour saturation.  When we apply the recent analysis of colour saturation to the field mark question does the analysis match up?

Samples 1, 3 and 5 fit the description of bold colours while samples 2 and 4 are bland.  Though this is only a small sample size the analysis and, more importantly, the research reveals an unexpected result.

Colour Saturation and Luminance
While it might appear that colour saturation should fall with a reduction in exposure quality (as we are used to seeing when we adjust brightness levels in images), in fact there is more to saturation than meets the eye.  While recently researching this area (HERE) I found that saturation control is governed at least in part by the processor.  The camera sensor does measure both hue and saturation (as explained HERE) but there may be more going on in post-production than we realise.  In actual fact, what we find is that under- and overexposure may both produce more saturated colours, depending on how the processor is configured.  However, some of the earlier analysis does still stand.  We see for instance that bland colours are clearly clipped in very overexposed images, whereas bold colours can withstand a higher level of both over- and underexposure.
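The clipping effect is easy to demonstrate in the HSV colour model.  In this Python sketch (my own toy example with an invented 'bold' orange), a pure exposure gain leaves saturation untouched until a channel clips at 255, at which point the saturation shifts - here downwards, though depending on the colour and the processor's curves it can just as well rise.

```python
import colorsys

def hsv_saturation(r, g, b):
    """HSV saturation (0..1) of an 8-bit RGB colour."""
    return colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)[1]

def expose(rgb, gain):
    """Apply a linear exposure gain, clipping each channel at 255."""
    return tuple(min(255, round(c * gain)) for c in rgb)

bold = (200, 120, 40)                     # a strongly saturated orange
s0 = hsv_saturation(*bold)                # saturation as shot
s1 = hsv_saturation(*expose(bold, 1.1))   # brighter, no channel clips yet
s2 = hsv_saturation(*expose(bold, 1.6))   # red channel clips at 255
```

Here `s1` equals `s0` exactly, while `s2` is lower: once the red channel hits the ceiling, the relationship between the channels - and with it the recorded saturation - is altered.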

Saturday, 4 April 2015

Colour - The Effects of Post-Processing of Images

Brightness, Contrast, Saturation and Sharpening Tools
I came across an excellent posting by Mike Chaney looking at the interaction between these four simple image modification/analysis tools HERE.  It proves that there is a subtle relationship between all forms of post-processing modification, and it is interesting to consider what happens to image colours when an image undergoes some basic post-processing using any one of a range of simple tools.

Difference between RAW files and 8-bit images
It is important to state from the outset that these parameters interact differently during the original creation of an image for viewing from the RAW image data file.  When we are working with an actual image file we are post-processing the image. Because RAW images contain far more data than a final JPEG, PNG or other typical 8-bit file there is a lot more latitude to adjust these parameters individually within a RAW-to-image workflow.  If you use Camera RAW or a similar RAW workflow package you will find that these and various other subtle modifications appear to work quite independently of one another.   However when working with a far more compressed, 8-bit JPEG or similar file we see that modifications have greater reach. 
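A quick way to see this latitude difference is to count tonal levels.  The Python sketch below (my own simplified model: a linear 12-bit 'RAW' versus an 8-bit 'JPEG', ignoring gamma and compression) brightens a two-stop-underexposed ramp by four times and counts the distinct output levels that survive:

```python
# A dim tonal ramp (roughly two stops underexposed), captured at
# 12-bit RAW-like precision and at 8-bit JPEG-like precision, then
# brightened 4x in post.  Counting the distinct output levels shows
# how much tonal latitude each starting point leaves us.
ramp = [i / 999 * 0.25 for i in range(1000)]          # scene tones, 0..0.25

raw12 = [round(t * 4095) for t in ramp]               # 12-bit capture
jpeg8 = [round(t * 255) for t in ramp]                # 8-bit capture

from_raw = {min(255, round(v * 4 / 16)) for v in raw12}   # rescale to 8-bit output
from_jpeg = {min(255, v * 4) for v in jpeg8}

levels_raw, levels_jpeg = len(from_raw), len(from_jpeg)
```

The 8-bit start leaves only about 65 distinct levels after brightening - visible banding - while the 12-bit start still fills essentially the whole 0-255 output range.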

This King Penguin (Aptenodytes patagonicus), photographed in the Falkland (Malvinas) Islands, makes a nice subject for a comparative analysis.  It is hard to resist increasing contrast and/or saturation to enhance the beautifully rich colours and markings of these birds.  But sometimes we need to resist such temptation to aid careful colour analysis.  The image above was created from RAW.  I boosted colour saturation a little but was careful to keep the fine shadow detail in the white areas of the breast.  As stated above, there is latitude for this while working in RAW but not when working with a JPEG or other 8-bit file.

Post-processing of JPEG or other 8-bit images
The images below were made by post-processing a JPEG image file.  We can compare the impact of each type of adjustment on the JPEG and see how each adjustment has an effect on all four parameters simultaneously.  An adjustment of brightness, for example, changes colour saturation and, where it clips blacks or highlights, it also therefore clips colours and reduces overall contrast.  Contrast is closely related to saturation and both impact on the saturation of colours.  The key difference between contrast and saturation is that an increase in contrast compresses all tones, whereas an increase in saturation only compresses colours, and does not alter luminance levels at all.  Extreme high contrast and high saturation adjustment results in a loss of mid-tones, while contrast reduction can clip blacks and whites.  Image sharpness is largely about acutance or edge contrast, and this experiment neatly shows how the effects of brightness, saturation and contrast adjustment all impact on acutance.  For its own part, the overuse of sharpening tools like unsharp mask can introduce sharpening halos which in turn alter the colour and tones around the edges of colour patches.  This can drastically alter the colour of small or narrow markings.
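That key difference between contrast and saturation can be demonstrated directly.  In the Python sketch below (HLS colour model, with an invented rufous tone of my own), a saturation adjustment leaves the lightness of the colour untouched, while a contrast adjustment moves every tone:

```python
import colorsys

def adjust_saturation(rgb, factor):
    """Scale HLS saturation only; lightness (luminance) is untouched."""
    h, l, s = colorsys.rgb_to_hls(*[c / 255 for c in rgb])
    r, g, b = colorsys.hls_to_rgb(h, l, min(1.0, s * factor))
    return (round(r * 255), round(g * 255), round(b * 255))

def adjust_contrast(rgb, factor):
    """Stretch every channel about mid-grey; all tones move."""
    return tuple(min(255, max(0, round((c - 128) * factor + 128))) for c in rgb)

def lightness(rgb):
    """HLS lightness (0..1) of an 8-bit RGB colour."""
    return colorsys.rgb_to_hls(*[c / 255 for c in rgb])[1]

plumage = (180, 100, 60)   # a hypothetical rufous plumage tone
```

Boosting saturation by 40% leaves the HLS lightness of this tone unchanged; boosting contrast by the same factor shifts it.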

From left to right above we have darkened, normal and brightened images.  Next we have a contrast-reduced, normal and contrast-increased image.  This is followed by a saturation-reduced, normal and saturation-increased image.  Lastly we have an artificially blurred image, followed by a normal and an artificially sharpened image.  All were created from the same JPEG image.
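The interdependence described above can be put in numbers with Python's standard colorsys module (a minimal sketch; the pixel value is illustrative, not sampled from the test images).  A naive brightness boost on an 8-bit pixel clips the strongest channel, and the measured HLS saturation changes as a side effect:

```python
import colorsys

def hls_saturation(r, g, b):
    """HLS saturation (0-1) of an 8-bit RGB pixel."""
    _, _, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    return s

def brighten(rgb, amount):
    """Naive 8-bit brightness boost with clipping, as applied to a JPEG."""
    return tuple(min(255, c + amount) for c in rgb)

pixel = (200, 150, 50)   # a warm yellowish-brown patch (illustrative)

print(round(hls_saturation(*pixel), 2))                 # 0.6
print(round(hls_saturation(*brighten(pixel, 80)), 2))   # 1.0
```

Here a brightness-only adjustment has pushed the saturation from 0.6 to the maximum of 1.0, simply because the red channel clipped at 255.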

All of these adjustment tools are part of the normal processing of a RAW digital file into a viewable image.  But, as there is greater latitude when working in RAW, these corrections can be made far more independently of one another at that stage.  The extent to which these adjustments are applied automatically from RAW depends on in-camera settings and manufacturers' preferences.  Alternatively, we can work with the original RAW file in Camera RAW or a similar package, where these can all be adjusted manually to suit our needs.  If we decide to leave our image processing to the end and work with a JPEG file instead of the RAW data file, we find that our modifications have greater consequences.

So the key message, once again, as regards the accurate capture of colour for sampling and analysis: work in RAW, work towards maximizing tonal range, and calibrate colours properly using a DNG profile and grey card.  Be careful with the post-processing use of brightness, contrast, saturation and sharpening tools.  Keep these adjustments within the RAW editing workflow to minimize loss of tonal range and, above all, to prevent clipping.  Lastly, be judicious with the use of these key adjustments.

See also HERE and HERE.

Thursday, 2 April 2015

Colour - Colour Saturation Analysis

HSL - Hue, Saturation and Luminance

When we talk about colour, we probably instinctively think of the bright, vibrant hues which make up a rainbow, or the bright colour palette of a child's paint set.  Colour vision allows us to appreciate these vibrant colours.  But there is far more to this subject than vibrant colours alone, and our vision is complex enough to appreciate the finer subtleties.

We are all familiar with the effect light has on the brightness of the objects we see.  Parts of an object which are lit appear brighter and have more obvious colours, whereas parts in shadow are darker, with duller colours.  This attribute is referred to as the luminance of a colour.

There is a third element to the HSL colour model which we may be less conscious of and which is less clearly understood by most people.  This is referred to as colour saturation, or the related term chroma.  Saturation is a measure of the purity or colourfulness of a colour.  It is directly linked to hue: a partially saturated colour can be thought of as a pure hue mixed with a certain quantity of neutral grey of equal luminance.  I must admit that I found this concept a little hard to get my head around, but I finally cracked it and I recommend reading THIS POST before continuing with the detail below.

It is only when we start to really analyse colour theory that we begin to understand the relationship between the vibrant hues we all easily recognise and the bland, often ill-defined tones we commonly encounter in nature.  In the Birders Colour Palette this linkage should be clear enough.  Olive, for instance, could be described as a desaturated yellow.  Similarly, maroon and crimson are both merely ruby with its colour saturation toned down to different degrees.  If we accept this logic it raises a simple question: what factors influence colour saturation, and hence our accurate analysis of colours in digital images?
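The olive example can be checked with Python's standard colorsys module (the olive RGB value below is an illustrative swatch, not an official colour definition): it shares pure yellow's hue of 60 degrees, but at roughly a third of the saturation.

```python
import colorsys

def hue_and_saturation(rgb):
    """Return (hue in degrees, HLS saturation 0-1) for an 8-bit RGB colour."""
    h, _, s = colorsys.rgb_to_hls(*(c / 255 for c in rgb))
    return round(h * 360), round(s, 2)

yellow = (255, 255, 0)    # pure yellow
olive  = (150, 150, 70)   # an illustrative olive swatch

print(hue_and_saturation(yellow))   # (60, 1.0)
print(hue_and_saturation(olive))    # (60, 0.36)
```

Same hue, different purity: that is the whole relationship between the vibrant colours and the drab ones in a nutshell.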

Image Formation
In THIS posting I compared the structures of the human eye and the camera sensor.  There is a remarkable similarity between the two.  Clearly the design of the camera sensor was based in part on the human visual system.  Both models require the filtering of the image through three types of colour receptor - green, red and blue - in a ratio of 2:1:1.  While the mechanism by which humans form an image beyond that point is not clear, we know how digital images are made.  RAW image data consists of two measurements: the light intensity reaching a cluster of photosites (a measure of luminance) and the colours of the filters associated with those sites (analogous to hue and saturation).

Saturation and Lighting
Lighting rears its head in every discussion on this blog.  On a bright day colours appear bright and vibrant, whereas on a dull day they are more subdued and less saturated.  Digital images are typically less saturated in appearance than conventional print or slide film.  Many cameras offer three or more saturation defaults which the operator can choose from, depending on individual taste.  Saturation can also be adjusted using post-processing software.

Taking the bright, well-exposed image below, it may appear that the part of the breast which is in shade is somewhat desaturated relative to the parts which are well lit.

In actual fact, both the well-lit and shaded parts of the yellow breast display near-maximum colour saturation of 240.  That doesn't imply that the hue is the same right across the whole breast - and indeed it is not.  I suspect in this case reflection from nearby exposed wood or flowers is adding a pinkish flush to the shaded part of the breast, resulting in an orange hue overall.  The ventral area has a slightly colder-looking hue, possibly due to the naturally blue colour of skylight as illustrated HERE, or to reflection from nearby foliage, or both.  Once again, the colour saturation of the ventral area is still very close to fully saturated.  Perhaps there is another way of investigating this question?
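As an aside, the 0-240 saturation figures quoted above can be reproduced with Python's colorsys module by rescaling its 0-1 saturation to the 0-240 range used by some image editors (the RGB values below are illustrative, not actual samples from the photograph):

```python
import colorsys

def saturation_240(r, g, b):
    """HLS saturation of an 8-bit RGB sample on the 0-240 scale."""
    _, _, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    return round(s * 240)

lit_breast    = (255, 210, 40)   # sunlit yellow (illustrative)
shaded_breast = (140, 100, 2)    # darker, but the yellow is still pure

print(saturation_240(*lit_breast))     # 240
print(saturation_240(*shaded_breast))  # 233 - still close to fully saturated
```

A shaded patch loses luminance, but so long as its weakest channel stays near zero its HLS saturation remains close to the maximum - which is exactly what the sampled breast shows.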

Saturation & Exposure
Photographers working with print or slide film would have detected changes in saturation with exposure, owing to the inherent properties of those media.  Do digital saturation algorithms attempt to emulate the variable properties of film?

I carried out a simple experiment: progressively increasing exposure, then sampling a number of discrete colour patches and comparing the saturation values of each.

The results suggest that, for the JPEG output of my Canon 70D at least, overexposed images have significantly greater colour saturation than normally exposed or underexposed images.  This has implications for the use of the Birders Colour Palette, as one might be inclined to name a colour incorrectly based on the saturation value of the colour patch.  On the other hand, one is probably not likely to try to sample and name colours using such an obviously poorly exposed swatch.  And one is just as likely to have significantly blown or clipped colours as to have captured anything useful in such an overexposed image.
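The experiment can be loosely simulated in Python (a rough sketch only: exposure is modelled as a simple multiplicative gain with 8-bit clipping, which ignores the camera's real tone curve and processing):

```python
import colorsys

def expose(rgb, gain):
    """Simulate an exposure increase: scale each channel, clip at 255."""
    return tuple(min(255, round(c * gain)) for c in rgb)

def saturation(rgb):
    """HLS saturation (0-1) of an 8-bit RGB patch."""
    _, _, s = colorsys.rgb_to_hls(*(c / 255 for c in rgb))
    return round(s, 2)

patch = (180, 120, 60)   # an illustrative tan patch
for gain in (1.0, 1.2, 1.5):
    print(gain, saturation(expose(patch, gain)))   # 0.5, then 0.65, then 1.0
```

Once any channel clips at 255 the HLS formula pins saturation at its maximum, consistent with the overexposed swatches measuring as more saturated.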

We know that colour saturation derives from what the camera sensor measures, but there is also an adjustable saturation setting, so photographers can choose to boost or reduce saturation to suit their tastes.  Are there other, hidden manipulations going on at the software level?

On the plus side, there are only three saturation increments provided for in the Birders Colour Palette.  Provided the user has a reasonably good camera, takes reasonably good exposures, uses colour calibration methods and is careful with saturation controls, there is sufficient latitude built in to allow for reasonable saturation errors.

See also HERE and HERE.