While I have researched and written about many complex aspects of the realm of light, up until now I have avoided one of its fundamental characteristics, namely Polarization. Light exhibits characteristics of both particles and waves. Waves oscillate, and we are already very familiar with wave oscillation as we experience it all the time, be it a wave propagating across the surface of water, or along a rope as it is moved up and down (or from side to side). A light wave which simply oscillates like this along one axis is said to be Linearly Polarized. Natural sunlight is Unpolarized. The easiest way to understand what this means is to imagine many ropes oscillating, not in unison along one axis, but along many different random axes, i.e. vertically, horizontally and all angles in between. Light waves can also rotate as they move forward through space and time, as illustrated below. Such light is referred to as Circularly Polarized. We need to understand these three simple concepts before we can proceed.
For a nice overview of polarization I recommend this video by Eric Mickelsen.
For a more complex explanation, or if you are having difficulty sleeping, can I recommend this video (HERE).

The next piece to understand is that light can move between these states of polarization when it comes into contact with different types of matter. The common mechanisms by which this happens are scattering, reflection and refraction, as discussed in the video above.
Polarization by Scattering
When sunlight hits our atmosphere the shorter, bluer wavelengths of light are scattered (Rayleigh scattering), giving rise to the appearance of a blue sky and related phenomena. In doing so, unpolarized sunlight is transformed into linearly polarized sky-light. This is all very neatly explained in this online article (HERE). The landscape photographers out there will already be aware that a polarized filter may be placed over the camera lens to boost the saturation of the blue sky in a landscape image, simply by blocking some of the glare. This is a direct, practical example of the use of polarization in photography. Could we find other uses for it?
Polarization by Reflection and Refraction
When unpolarized light hits a non-metallic surface, the light which reflects back off that surface is polarized, as neatly explained by Eric Mickelsen in his video. The extent to which polarization occurs depends on the material and the angle of incidence. For example, the surface of water viewed at a shallow angle appears very reflective, with a high degree of glare, because the extent of linear polarization is large. Fishermen use glasses with polarized filters to block out this glare and peer through the water. HERE are some more examples of this filter in use in photography. It is interesting to note that metallic surfaces, though very reflective, do not tend to polarize the light they reflect. Rather, the reflected light from metallic surfaces is unpolarized. You can read more about it, along with the other applications referred to above, at this LINK.
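The dependence on angle of incidence can be made concrete with Brewster's angle, the angle (measured from the normal to the surface) at which the reflected light is completely linearly polarized. A minimal sketch, with illustrative round values for the refractive indices:

```python
import math

def brewster_angle_deg(n1: float, n2: float) -> float:
    """Angle of incidence (from the normal) at which reflected light
    is fully linearly polarized: tan(theta_B) = n2 / n1."""
    return math.degrees(math.atan2(n2, n1))

# Air (n ~ 1.00) to water (n ~ 1.33): glare off water is most
# strongly polarized near this angle from the vertical.
print(round(brewster_angle_deg(1.00, 1.33), 1))  # 53.1 degrees
```

At other angles the reflected light is only partially polarized, which is why a polarized filter removes glare most completely when the geometry is right.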
Birds and Polarized Light
It's now well accepted that birds use a combination of magnetic fields and polarized light, together perhaps with landmarks, to navigate during migration. In this intriguing recent study (HERE) it was found that birds become disoriented if light polarization is disrupted. Other animals, including bees, can see polarized light and also use it for navigation. Perhaps most intriguing of all, humans too possess a very subtle ability to see polarized light using only our eyes, a phenomenon referred to as Haidinger's brush. Nonetheless, a much easier way to experience light polarization is with the aid of a polarized filter or polarizer. Here is another nice online video showing different types of light polarization experiments using polarizers. It includes a demonstration of the all-important blue sky-light polarization, so valuable to the birds and the bees in navigation.
Polarization and Bird Photography
As indicated above, polarized filters play a useful role in landscape photography, reducing the glare from the sky and thus increasing the intensity, and in turn the saturation, of the blue sky in the image. Similarly, a polarized filter can be used to remove unwanted glare and reflection from other objects, such as surface reflection on water or the sheen from a waxy surface on leaves. It could also be used to help reduce glare on the surface of a bird, such as its bareparts and, to a lesser extent, its feathers.
Like a lot of natural lighting phenomena, we can see spatial and temporal variation during the day and throughout the seasons. Taking advantage of what we observe about the polarization of sky-light, is there a way we can improve our capture of birds against the sky, either with the use of a polarized filter or a more judicious choice of the angle at which to point the camera relative to the sun? Of course birds don't readily cooperate with the photographer when it comes to choosing a flight path. But if a bird is routinely circling an area, knowing the best place to position oneself relative to the sun and polarized sky-light may be an advantage.
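On the question of angle: for single Rayleigh scattering, the degree of polarization of sky-light peaks at 90 degrees from the sun, which is why the band of sky at right angles to the sun responds best to a polarized filter. A sketch of the standard Rayleigh formula (the real sky never quite reaches full polarization, owing to multiple scattering and haze):

```python
import math

def rayleigh_polarization(theta_deg: float) -> float:
    """Degree of polarization of singly Rayleigh-scattered light,
    P = sin^2(theta) / (1 + cos^2(theta)), where theta is the
    scattering angle from the sun. 0 = unpolarized, 1 = fully polarized."""
    c = math.cos(math.radians(theta_deg))
    s = math.sin(math.radians(theta_deg))
    return s * s / (1 + c * c)

print(round(rayleigh_polarization(90), 2))  # 1.0 - strongest at right angles
print(round(rayleigh_polarization(0), 2))   # 0.0 - looking towards the sun
```

So a bird circling in the strip of sky at right angles to the sun sits against the most strongly polarized, and therefore most filterable, background.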
At sea, as I have already mentioned, the glare of the sky off the water can create an added difficulty for the capture of detail on seabirds passing by. I don't know if I have yet figured out the ideal location to position myself on a boat during a pelagic on a sunny day! But I might try using a polarized filter next time to see if it brings some useful results, or at least helps me find the best place to sit and wait for that lucky fly-past!
Photographing birds on snow and ice might seem like a similar case in point, but in fact light reflection off snow is different and its polarization may be more variable. Like a rough sea, snow and ice crystals are multifaceted, with more complex reflection and refraction/transmission of light taking place. That said, anything that reduces glare, even slightly, may be worth a try.
The downside of using a polarizer is its overall impact on exposure. Polarizers dramatically cut the available light reaching the sensor, which can be a big disadvantage in bird photography.
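To put figures on that light loss: an ideal polarizer passes just half of unpolarized light, i.e. exactly one stop, while Malus's law (I = I0 cos² θ) governs what happens to light that is already polarized. A sketch assuming ideal filters; real polarizing filters typically absorb a little more:

```python
import math

def stops_lost_unpolarized() -> float:
    # An ideal polarizer transmits 50% of unpolarized light: one stop.
    return -math.log2(0.5)

def stops_lost_polarized(theta_deg: float) -> float:
    """Malus's law for already-polarized light: I = I0 * cos^2(theta),
    where theta is the angle between the light's polarization axis
    and the filter's transmission axis."""
    transmitted = math.cos(math.radians(theta_deg)) ** 2
    return -math.log2(transmitted)

print(stops_lost_unpolarized())             # 1.0 stop
print(round(stops_lost_polarized(60), 2))   # cos^2(60) = 0.25, i.e. 2.0 stops
```

Against strongly polarized sky-light at the "wrong" filter rotation the loss climbs steeply, which is worth remembering when shutter speed is already at a premium.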
This has been a brief overview of polarization. I can see myself returning to various aspects of this in due course.
In this Birding Frontiers posting the late and sadly missed Martin Garner describes a phenomenon often attributed to Siberian Chiffchaff (Phylloscopus collybita tristis), referred to as colour or plumage morphing. Clearly plumage itself doesn't have the ability to morph, or change pigment colour. So it must have something to do with light. What is it about this taxon in particular that seems to attract our attention to what is surely a widespread and common problem? The photographs which Martin uses to illustrate his point give some clues as to what is going on. It is easy to get side-tracked on various related matters so I am going to break it down here as clearly as I can. I have already delved into this subject under various postings in the past. But as this subject is a direct complement to my recent posting on ghostly birds, I thought it might be useful to pull these threads together once more.
A typical 'grey and white' appearance characterises the 'classic' look of a Siberian Chiffchaff. But many are less obvious than this, and that is part of the confusion.
White Balance
At the heart of this problem lies the concept of white balance. The colour of light changes all the time due to scattering caused by the atmosphere. It just so happens that a dramatic shift in the quality of our light coincides with the arrival of Siberian Chiffchaffs here in Western Europe. No surprise then that we are perhaps more acutely aware of colour morphing now than at any other time of the year. I never tire of the illustration below, which I lovingly and painstakingly compiled in order to satisfy my fascination with this cyclical phenomenon. This, in a nutshell, is I suspect the main underlying cause of plumage morphing.
Away from the equator the sun's position in the sky is dictated by time of year. On the winter solstice here in the northern hemisphere the sun is at its lowest ebb, bumping along just over the horizon before plunging us into a long night. This means that sunlight is not very pure, even at noon. And, by early afternoon the sun's rays are already beginning to dim and yellow significantly. Returning to Martin Garner's posting, it's quite easy to distinguish between the brighter, sunlit, and invariably warmer yellow-toned images and the colder images. The yellow cast in the brighter images is almost certainly a natural effect of low winter sunlight.
Colder, bluer light is associated with pre-dawn and dusk, as well as shadow. In a posting examining white balance (HERE) I demonstrated simply enough that two different white balances can exist for the same image. On a sunny day there is one white balance for direct sunlight and another one for shade. The reason for this is simple - the blue sky dome projects blue light into the shadows, therefore a bird photographed in the shade on a sunny day is bathed in this cold, blue light.
This point may account for some of the colder looking birds in Martin's posting, but not all, I suspect. It's difficult to say if conditions were sunny or overcast in some of the images, which leads to the next important point. On an overcast day shadows are not blue. This is because the blue sky is obscured by cloud on an overcast day. Instead of the sky dome projecting blue light it projects diffuse sunlight which is white (or perhaps yellowish or reddish depending on time of day). Much like a frosted light bulb, or lamp shade, the sunlight is scattered throughout the cloud cover and thus scattered to earth from that massive diffuser in the sky.
Last but not least, I must of course point out that camera white balance correction is prone to significant error. Without proper white balance calibration we are always left discussing these birds in a bit of a vacuum. For instance, in a number of the images posted in Martin's blog I can detect a fairly obvious green cast. Green is not one of the colours of sunlight as the sun traverses the sky, though a momentary green flash may occur just as the sun dips below the horizon. As can be seen from the animation above, sunlight passes from blue light to slightly magenta then reddish at dawn through yellow to white and back again. Thus the green cast to these images has nothing to do with the sun at all. It is a camera white balance error. This is not uncommon when a bird is shot against a green background. As discussed in a posting (HERE) simple white balance correction tools only cater for the normal sunlight colour shift in the yellow-blue axis. To correct for errors in the magenta-green axis one needs a more advanced white balance correction tool.
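To make the two-axis idea concrete, here is a minimal sketch of a neutral-patch correction: per-channel gains computed from a patch known to be grey will cancel a cast on either axis, including a green one. The pixel values are hypothetical, and real raw converters do this in a more sophisticated colour space:

```python
def wb_correct(pixel, grey_patch):
    """Scale R, G and B so that a patch known to be neutral grey comes
    out equal in all three channels (its green value is the anchor).
    A von Kries-style per-channel gain - a sketch, not a raw converter."""
    r, g, b = grey_patch
    gains = (g / r, 1.0, g / b)
    return tuple(min(255, round(p * k)) for p, k in zip(pixel, gains))

# Hypothetical grey patch showing a green cast (G sits above R and B).
patch = (120, 140, 118)
print(wb_correct(patch, patch))  # (140, 140, 140) - the cast is neutralised
```

Because the green channel's gain is computed independently of the blue-yellow balance, this kind of correction handles the magenta-green axis that simple temperature sliders ignore.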
Returning to our subject, the green cast created by an in-camera white balance error in this case in turn adds a false olive tone to the bird, which for another species might go unnoticed. But as the subject matter is the subtle shade of a dingy warbler, white balance correction is really important. This leads me to the next point.
The Bold Versus The Bland
So, having established that white balance error is the main cause of plumage morphing, why are we not aware of this problem more often? The answer lies in the nature of the colours we are trying to analyse. Effectively we are looking at subtle, very low-saturation colours, or the ultra-bland as I like to call them.
I have penned a few postings on the differences between bold and bland field marks. The conclusion I reached while analysing various parameters was that bold field marks including plumage colours were more 'resilient' to image quality deterioration than bland features. What I mean by this is it is still possible to accurately gauge bold features in an image of almost any quality, whereas an image must be of higher quality in order for bland features to be so readily assessed.
The Camera Versus The Eye
The points which apply to the camera's ability to capture accurate colour apply similarly to our observations of birds in life. To make an accurate assessment of the subtle colours of a Chiffchaff, for instance, we need good, neutral light. It must be said however that we do appear to possess a surprisingly effective ability to discern subtle colour differences, even in challenging circumstances, particularly with a bit of training. As a former quality manager for a photofinishing company I was always amazed by the keen eyes of the more experienced printing operators, who could accurately gauge individual colour corrections to obtain neutral, white balanced prints, based often on very limited visual cues.
The concept of white balance arose because of the need to emulate this adaptation of human vision to work around the challenges of ever-changing light quality. Sometimes however our vision fails us. Optical illusions like 'The Dress' viral phenomenon and Beau Lotto's colour cube experiments are a reminder of how our eyes can deceive us. This must surely account for at least part of the explanation for colour plumage morphing.
Colour Quality
Having already devoted an entire section of this blog to colour I won't rehash it all again here. Suffice to say that accurate colour capture is dependent on many variables. Key among these is the actual calibration of the camera sensor itself. Surprisingly, in this day and age camera sensors are not calibrated for colour to any recognised standard. No two cameras, even of the same model, will display colours exactly the same. We resolve this by using a professional tool like the X-Rite ColorChecker Passport. Because few if any birders calibrate their camera sensors, we start out with a certain level of colour bias which we cannot measure or rectify.
Secondly, as outlined above, we must then calibrate white balance to account for variations in both natural lighting colour and in-camera white-balance error. Provided we have obtained a reasonable exposure, we have now done the very best we can to 'approach' accurate, representative colours. I stress the word 'approach' because of course a camera's colour palette only accounts for a fraction of all the colours we can actually appreciate with our eyes. So even with the best calibration in place a photograph will never match exactly the colours we see in nature.
I developed the concept of colour profiling as a pinnacle of the study of colour capture in digital photography. If we can obtain accurate colour captures we can begin to accurately name and profile the subtle colours of the birds we capture.
Conclusions
I think it's clear that colour plumage morphing is a real phenomenon. When it comes to observation there is a natural lighting explanation, coupled perhaps with a slight optical illusion at work as the brain's white-balance and brightness settings try to make sense of what it is seeing. When we add the additional quality parameters needed to accurately capture a bird with a camera we must consider other layers of complexity. I have summarised these with the aid of a quality control tool I have developed (HERE). These can be grouped as parameters related to image capture, those related to the quality of the light and those related to accurate colour calibration.
This turns out to be a pretty good image of a tristis-type Chiffchaff, made possible by a set of ideal circumstances - a confiding bird, an ideal, overcast day, good neutral mid-day light, a very lucky camera exposure, and finally and most importantly, both sensor and white-balance calibration. Where there is a will, there is a way and plumage morphing need not be feared!
In the spirit of the season I recently had a close encounter with a ghost. A very washed out Western Bonelli's Warbler Phylloscopus bonelli on Cape Clear Island, Co. Cork last weekend got many pulses racing when the finder reported it called just like its colder, eastern counterpart P. orientalis - a potential Irish first! The following day it was heard to call like a western, and when it was eventually trapped, biometrics left no doubt as to its true identity.
The recent focus on grey scales on this blog has re-awakened my fascination with birds and light. While there is little doubt that the camera is no match for human vision, when it comes to these ghostly birds I often wonder if it is the human eye that falls a bit short. Typically the camera fails to truly convey just how pale and striking these ghosts appear in life. And yet it seems that, in the hand, these birds often don't quite match their shockingly pallid appearance in the field. I have spent a bit of time exploring various elements which I think contribute to this phenomenon.
Brightness Illusions
In the posting on brightness illusions HERE I explored a well known optical illusion called the checker shadow. Unlike a camera's exposure which delivers a uniform correction across an image, human vision is much more sophisticated, allowing for varying degrees of correction at different locations throughout the scene. For instance, objects which appear to be in the shade may receive a local tonal boost, making them brighter and easier to observe.
This is demonstrated in the case of the checker shadow when we draw lines of equal tone between checker squares A and B. The brain is forced to reconcile the fact that these squares are actually of the same tone and our perception finally catches up with reality. Incredibly, it's actually possible to witness this alteration of perception in real time, as can be shown by moving between these two illustrations.
In the case of the Cape Clear Western Bonelli's, on the evening this bird was found it was seen to move in and out of a dense, shady area of scrub. Most observers were both agog and aghast at the appearance of the bird. It is fair to say that the bird's mantle shade was significantly faded when compared with the typically warmer, honey-colour of autumn Western Bonelli's, such as the bird illustrated below.
Western Bonelli's Warbler, Mizen Head, 30th October, 2004
Also, the typically crisp white underparts of Bonelli's are always much brighter than even the palest of Chiffchaffs (e.g. P. collybita tristis types). But I think there is a bit more going on here.
Foliage Canopy Edge
In another posting HERE I delved a little more deeply into the domain of these ghostly figures. While our eyes are mesmerised by the sight of pale birds moving through the deep shadows, our cameras find it extremely difficult to obtain representative images. So much so in fact that it generally takes shooting in RAW and subsequent tone mapping to approach how the bird looked in its dark environment.
Here, in late autumn birders live for the chance to see an exotic ghost from the east. Last week's birding on Cape Clear was awesome. The bird may not have proved to be an elusive Irish first, but it certainly hit the spot for me.
When I finally got home and managed to look at my images more closely, with a copy of Lars Svensson's Identification Guide to European Passerines to hand, I was able to carefully and properly interpret this open wing shot. Primary P2 in bonelli is typically shorter than P6 (longer in orientalis). Primaries P3, P4 and P5 are all roughly equal in length in both. Generally P2 is hidden and therefore not easily interpreted in the field. Even allowing for a certain amount of error in this image, the short appearance of P2 points strongly to bonelli. Wing formula and biometrics confirmed this when the bird was captured a day or two later. Remarkably there have been two more Western Bonelli's in Ireland this week. For more on the benefits of Ringer's (Bander's) reference guides see HERE.
In this series of postings I have been concerned with trying to replicate the famous (among gull enthusiasts) Kodak Grey Scale in sRGB colour space. The intention has been to try and directly measure gull upper-wing and mantle tones from digital images, in a manner consistent with studies using an actual Kodak Grey Scale card alongside a gull in the hand. Why? Because upperparts tone can be instructive in gull identification, and if reliable measurements can be taken from digital images it will help in some gull identifications. To date I have written four other blog postings on the subject, parts One, Two, Three and Four. In the most recent posting I took a conceptual look under the hood as it were, focusing on the various parameters that together explain the non-linearity of tonality in digital images.
For starters, human perception of brightness is non-linear (covered by the luminosity function). Next we have gamma - a non-linear function applied to images to cater for the non-linear properties of older display monitors. Lastly we have the characteristic curve, used in photography to make subtle tonal corrections and get the best out of our photographs. In this posting it's time to get 'down and dirty'. Would the real Kodak Grey Scale card please stand up?
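For concreteness, gamma in the sRGB standard is a well-defined pair of transfer functions: a short linear toe plus a power curve. A minimal sketch of the encode/decode pair, using the standard constants:

```python
def srgb_encode(linear: float) -> float:
    """Linear light (0-1) to gamma-encoded sRGB value (0-1)."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

def srgb_decode(encoded: float) -> float:
    """Gamma-encoded sRGB value (0-1) back to linear light (0-1)."""
    if encoded <= 0.04045:
        return encoded / 12.92
    return ((encoded + 0.055) / 1.055) ** 2.4

# A mid-grey reflecting 18% of the light sits far above 0.18 once encoded:
print(round(srgb_encode(0.18), 2))  # ~0.46
```

This is one of the non-linearities that must be unpicked before sRGB swatch values can be compared with the (density-based) steps of a physical grey scale card.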
From left to right, the X-Rite (formerly GretagMacbeth) ColorChecker Passport is the modern professional photographer's quality control tool for exposure and colour calibration. I have used it in discussions about colour in this blog. Centre, the Kodak Grey Scale together with the Colour Separation Guide (not shown) represent the original quality control tool for many photographers. Though the Kodak tool has tended to be surpassed in more recent times by more robust, all-in-one tools like the X-Rite ColorChecker Passport, the Kodak grey scale has long been favoured by gull researchers as a tool to aid in the separation of taxa based on mantle shade. And so it remains. Lastly, to the right I have included a cheap and cheerful Mudder white balance card, consisting of a white card, an 18% grey card and a black card. Time for a more practical, direct look at the Kodak Grey Scale.
Online Resources - sRGB Guideline Values
Having spent a long time trying to obtain appropriate sRGB values for the Kodak Grey Scale, I finally stumbled upon an excellent resource from the University of California, Berkeley, as outlined in my last instalment on this subject (HERE).
The sRGB values certainly appear to replicate Berkeley's high quality copy of the Kodak Grey Scale. However, in attempting to apply those values in my analysis of gulls, something didn't quite fit. It proved necessary to darken my gull images before applying the tool. Considering that I had been able to obtain surprisingly consistent results using just a linear grey scale model, and without having to darken the images drastically to read off the mantle tones, something seemed to be amiss. Hence the research has continued, and hence I find myself writing yet another chapter on Grey Scales and Gulls.
A Comparison of Multiple Grey Card Captures
For my first experiment with the Kodak Grey Scale card I simply took a series of bracketed exposures with my Canon 70D and 300mm lens, then selected the most representative one. I then took another image of the card with an iPhone 6. I found it was necessary to adjust the brightness of the iPhone 6 image slightly to obtain a matching exposure (note I used the 18% grey card in both images as a standard exposure reference). Next I converted both images to greyscale in Adobe Elements before sampling each swatch from each image (using the sampling procedure HERE). Lastly I compared the images both visually and graphically.
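The greyscale conversion step deserves a footnote: converting an sRGB image to greyscale typically weights the three channels by their contribution to perceived brightness. A sketch using the common Rec. 709 luma weights (Adobe Elements' exact internal formula may differ):

```python
def to_grey(r: int, g: int, b: int) -> int:
    """Collapse an sRGB pixel to a single grey level using Rec. 709
    luma weights - green dominates because the eye is most sensitive
    to it. Applied to gamma-encoded values, as editors commonly do."""
    return round(0.2126 * r + 0.7152 * g + 0.0722 * b)

print(to_grey(255, 255, 255))  # 255 - white stays white
print(to_grey(180, 180, 180))  # 180 - neutral greys are unchanged
```

The useful property for swatch sampling is that genuinely neutral patches keep their value, so any shift in a sampled grey betrays a colour cast in the capture.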
The results showed a clear difference between the cameras in terms of tonality. What's more, both differed markedly from the Berkeley image. Of the various parameters I had explored in my earlier post, only one could account for this vast difference - the camera's Characteristic Curve.
Though it's difficult to make a meaningful comparison between my eyes' perception of the tonality of the actual Kodak Grey Scale card and various on-screen depictions of it, I nonetheless gave it a go. To my eyes the Canon 70D gave the closest match to the actual card in terms of mid-tones, say from level 2 or 3 to level B, whereas the iPhone did a much better job of depicting the highlights and shadows, i.e. levels A - 2 and B - 19. So I decided to average the Canon 70D and iPhone results and graph the averages alongside each of the different captured versions. The resulting compromise certainly has the classic sigmoid or S-shape of a characteristic curve, and it looks elegant. But are we any closer to that elusive ideal sRGB Grey Scale after all of this?
What Next?
It may be tempting at this point to throw in the towel and say that, as all cameras have differing characteristic curves, surely it's impossible to accurately reproduce and measure tones along any comparable scale? And yet, all the results to date have been surprisingly effective using just a purely linear model (the blue scale in the graph above). So it's not all doom and gloom.
As a young boy I can remember being told that white light can be split by a prism into all the colours of the rainbow. Like most kids, I found that an incomprehensible concept. For a child used to the subtractive mixing of coloured paints, the additive mixing of coloured light to produce white light is totally alien. For more on additive and subtractive colour mixing see HERE.
In the typical model of colour that most of us work with in image processing we have three axes which together describe all the colours that we see. The classic rainbow is defined by the property of colour referred to as hue. This represents colours at their purest and most vibrant (fully saturated). Luminance is merely a measure of the brightness of a colour. If we take away hue what we are left with essentially is a B&W image made up of levels of brightness of each pixel along a grey scale.
The third axis, saturation, is a little harder to grasp, but actually I have just described it in the previous paragraph. Desaturation of colour is the gradual removal of colour to reveal a grey scale. Scientifically, saturation is a measure of the purity of the most dominant wavelength of light. The presence of other wavelengths of light desaturates the dominant wavelength, making it less vibrant. It's totally counter-intuitive. By adding more colours we end up with a grey scale. If this sounds a bit like the process involved in creating white light, that's because it is the very same process. A prism splits apart different wavelengths of light so they become individual, vibrant, saturated colours. Take away the prism and all these wavelengths intermingle again, reducing their individual vibrancy or saturation levels until what remains is pure luminance, without colour. I have written a bit more about saturation HERE.
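Saturation in this sense can be computed directly from RGB values. The HSV definition below (one of several in use) measures the purity of the dominant channel, and shows how mixing in the other channels, i.e. other wavelengths, dilutes a pure red towards grey:

```python
import colorsys

def saturation(r: int, g: int, b: int) -> float:
    """HSV saturation: 0.0 for a neutral grey, 1.0 for a single pure hue."""
    _, s, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return round(s, 3)

print(saturation(255, 0, 0))      # 1.0   - pure red, fully saturated
print(saturation(255, 128, 128))  # 0.498 - red diluted by other wavelengths
print(saturation(128, 128, 128))  # 0.0   - an equal mix: pure grey
```

The middle case is the key one: the red channel hasn't changed at all, yet the colour is half as saturated purely because green and blue have been added.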
Boosting Colour Saturation
As birders we put a lot of demands on our digital cameras. We bolt on a long lens and ramp up aperture and shutter speed in the hope of capturing an elusive, often small and fast-moving subject, using minimal levels of light. Thankfully, modern digital cameras use advanced processing to boost the sensitivity of the camera sensor to increase its versatility in low light situations. Part of that process may include a boosting of colour saturation.
In the illustration above I have taken a typical exposure and boosted saturation beyond normally acceptable levels. It reveals a number of pros and cons about the tool. On the plus side, colourful objects like the bareparts of the gulls are boosted in a positive way. We also see a boosting of other natural colours including the mantle shades of the gulls (these are not neutral greys as it turns out), plus the colour of the sand and sky reflection on the water. These are 'over-cooked' here for illustrative purposes. Taking saturation back a few notches would render them more acceptable.
On the negative side we can see how boosting saturation makes colour noise more apparent and makes shadows appear unnatural in colour. In reality even shadows have underlying colour in them which only becomes apparent when saturation is boosted. Provided we have an understanding of each of these inherent pros and cons saturation can be used as a forensic tool.
To illustrate that true neutral greys are not altered by the saturation tool note I have added six grey boxes, three of which are neutral grey. The other three have a minimal, almost imperceptible colour cast applied, which is revealed when the saturation is boosted.
So, what can boosting saturation tell us about the image above?
It tells us that the mantle shades of these gulls are not neutral grey.
We can better visualise leg colour, not always clear from low saturation images.
We can see there are a number of things impacting the shadows, including the blue sky and reflected sand. We often think of shadows as grey but in fact they generally have underlying colour in them.
We may be better able to detect a white balance error.
If there are any true neutral greys in an image these will be revealed.
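The forensic trick behind all of these points can be sketched with a round trip through HSV: a true neutral has zero saturation, so no multiplier can move it, while a barely-perceptible cast becomes obvious. The pixel values here are hypothetical:

```python
import colorsys

def boost_saturation(rgb, factor):
    """Multiply HSV saturation by `factor` (clamped to 1.0), leaving
    hue and brightness untouched - a crude stand-in for an editor's
    saturation slider."""
    r, g, b = (c / 255 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    r2, g2, b2 = colorsys.hsv_to_rgb(h, min(1.0, s * factor), v)
    return tuple(round(c * 255) for c in (r2, g2, b2))

print(boost_saturation((128, 128, 128), 8))  # true neutral: unchanged
print(boost_saturation((130, 128, 128), 8))  # faint warm cast: revealed
```

Even a two-level excess in the red channel, invisible to the eye, separates sharply from the neutral patch once multiplied up.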
In Camera Saturation Processing
When processing from RAW, saturation is one of the parameters the operator must set. RAW data files are naturally low in contrast and saturation. When the camera outputs a JPEG from RAW the processor uses proprietary settings for saturation. These may not always be easy to anticipate. For instance, in an earlier posting HERE I carried out an analysis of the relationship between exposure and saturation, with some unexpected results.
In another posting HERE I explored the intrinsic interrelationship between brightness, contrast, saturation and sharpness. Adjusting any one results in a knock-on effect for all the others.
In summary, saturation is an intrinsic part of colour. It is also yet another tool which we can use in the forensic analysis of images. There are of course limitations which we need to understand in order to use this tool effectively.
This apparent Kamchatka Gull (Larus canus kamtschatschensis), rear right, was photographed by David O'Connor in Co. Kerry, Ireland on 6th March, 2014. Recent developments in the identification of the Larus canus complex, thanks to extensive studies by Peter Adriaens and Chris Gibbins (Dutch Birding Volume 38, No. 1), have made this identification finally possible. The discovery of this, yet another potential far eastern gull, and possibly another first for the Western Palearctic in Ireland, has prompted me to return again to the subject of grey scales and gulls.
In this series of postings I have been concerned with trying to replicate the famous (among gull enthusiasts) Kodak Grey Scale in sRGB colour space. The intention has been to try and directly measure gull upper-wing and mantle tones from digital images, in a manner consistent with studies using an actual Kodak Grey Scale card alongside a gull in the hand. Why? Because upperparts tone can be instructive in gull identification, and if reliable measurements can be taken from digital images it will help in some gull identifications. To date I have written three other blog postings on the subject, parts One, Two and Three. These were more exploratory than anything else. In this posting I aim to put this tool under much closer scrutiny.

A Simple Tool For Starters
I'll readily admit that I have approached this subject thus far with all the subtlety of the proverbial bull in a china shop. To the uninitiated, replicating an apparently linear grey scale artificially on a computer screen seems like a simple enough task. One only has to make a linear grey scale from stepped grey tones, right? Starting with the simplest possible model, from white point (RGB 255) to black point (RGB 0) I created a straightforward linear scale with equally spaced grey tone increments as illustrated below. Both perceptually and numerically in terms of sRGB values it is a linear scale.
I started with 21 increments in the very first draft, one additional increment for black point (RGB 0), as I sensed that Kodak 19 isn't particularly black. But, after obtaining promising results, I have since reverted to just 20 increments, exactly as per the standard, with Kodak 19 now represented by RGB 0. Despite the rather crude approach, the results have been surprisingly effective, and seemingly reliable.
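That simple model is easy to generate programmatically. A sketch of the 20-increment version described above, with equal sRGB steps from white point (255) to black point (0):

```python
def linear_grey_scale(steps: int = 20):
    """Equally spaced sRGB grey values from white point (255) down to
    black point (0) - a numerically linear scale, one value per step."""
    return [round(255 - i * 255 / (steps - 1)) for i in range(steps)]

scale = linear_grey_scale()
print(scale[0], scale[-1])  # 255 0
print(len(scale))           # 20
```

Each increment is 255/19, roughly 13.4 sRGB levels, which is the "equally spaced grey tone increments" of the illustration.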
However, it doesn't take much to find fault with this most simple of efforts. For instance, without benchmarking, the definition of white, black or indeed any particular shade of grey is totally arbitrary. These terms are entirely subjective, relative descriptions of different levels of brightness. Then one has to ask whether the scale we are trying to reproduce is actually linear, or whether this merely appears to be the case. Perceptually, two grey scales may both appear linear, but perception isn't everything, as we shall see. There are a number of factors contributing to a rather confusing puzzle.
Linear Light Capture
The sensor of the camera captures light intensity linearly. For an incremental increase in light hitting an individual photosite there is an equal incremental increase in charge, up until the point that the photosite becomes saturated with charge. The dynamic range of the camera is therefore defined by two extremes: the minimum light required to register a signal (in principle as little as a single photon), and the maximum the photosite can absorb before saturation. This represents a potential dynamic range of upwards of 23,000:1, or 14.5 stops of exposure, according to this reference. So, despite being somewhat less versatile than the human visual system, the camera can still gather an amazing range of light intensity from its captured black point to its captured white point. For more see HERE.
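The two figures above are two ways of stating the same ratio. A short sketch (the function name is mine, purely illustrative): since one stop is a doubling of light, the range in stops is just the base-2 logarithm of the contrast ratio.

```python
import math

def ratio_to_stops(ratio: float) -> float:
    """Convert a dynamic-range contrast ratio to stops of exposure."""
    return math.log2(ratio)

# The 23,000:1 ratio cited above works out at about 14.5 stops:
print(round(ratio_to_stops(23000), 1))  # 14.5
```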
Human Vision Versus The Camera - The Luminosity Function
Viewing a RAW image file without any tonal correction, one would be struck by its darkness and lack of contrast. RAW images must undergo at least a couple of transformations before they approach the scene as perceived by human eyes. So, while the RAW image may be a factually accurate representation of the scene's light intensity as captured during the given exposure, it is still not perceptually accurate in human terms. Humans don't perceive light intensity linearly, but rather according to a non-linear Luminosity Function.
There are a couple of things about this that are particularly relevant here. Due to the need to see well across a wide ambient brightness range in daylight, we developed the ability to distinguish between small changes in light intensity in the shadows, while at the same time accommodating intense sunlight in the open. So, for instance, when we look across a wide range of brightness we find it much easier to distinguish contrast between darker tones than brighter ones. This means that, for the photographer, it makes sense to try and ensure that a lot of shadow detail is captured, possibly at the expense of detail in the highlights (termed exposing to the right, or ETTR). It also means that it may be possible to selectively discard a lot of RAW image data in the highlights without any noticeable loss in final image quality. Hence a 16-bit RAW image can be compressed into an 8-bit JPEG post capture, after the camera (or someone editing in RAW) has 'selected' the details needed to make a reasonably representative and perceptually satisfying image.
In addition to weighting our visual perception towards the dark end of the tonal range, our senses perceive intensity along an almost logarithmic scale. Double the noise, double the brightness or double the heat doesn't actually equate to double the sensation. This means that a linear input capture of light intensity must be transformed into a near-logarithmic output of light intensity in the final visual image, in order for it to appear perceptually accurate. It also explains why, in intensity terms, middle grey, that point perceived to be mid-way between black and white, is not actually found mid-way along the light intensity or reflectance curve, but at approximately 18% reflectance. And it explains why the reflectance of a grey card is 18%, and why a camera's on-board light meter has supposedly worked with middle grey (i.e. 18% reflectance) as its centred reference point (though this is often disputed, as I will come to).
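The 18% figure can be sanity-checked with CIE L* (lightness), a standard formula that approximates this non-linear human response. A sketch (the function name is mine): feeding 18% reflectance into L* lands almost exactly at the midpoint of the 0-100 lightness scale, which is why it is perceived as "middle" grey.

```python
def cie_lightness(Y: float) -> float:
    """CIE L* (0-100) from relative luminance/reflectance Y in [0, 1]."""
    if Y > (6 / 29) ** 3:
        return 116.0 * Y ** (1.0 / 3.0) - 16.0
    return Y * (29 / 3) ** 3  # linear segment for very dark values

# 18% reflectance comes out at roughly L* = 49.5, i.e. perceptual mid-grey:
print(round(cie_lightness(0.18), 1))  # 49.5
```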
In terms of colour brightness perception, human vision is most sensitive to green light: the luminosity function peaks in the green part of the spectrum, so a green of a given intensity appears brighter to us than an equally intense red or blue. This affects our perception of the relative brightness of the different colours we see. The luminosity function can be broken out more specifically into different spectral sensitivities for the red, green and blue cones, as well as for the rods, the eye's receptors used for night (scotopic) vision. The luminosity slider in an image processing program like Photoshop makes a correction for brightness taking account of this spectral disparity, to ensure that perceptual colour accuracy is maintained during image editing. A simple brightness slider, on the other hand, may not factor in this spectral disparity, and that in turn may affect the perceptual brightness of different colours on screen. This is a subtle but important consideration when we are concerned about accurate colour management from the original scene to the camera, screen and printer.
Luckily for the purposes of grey scale analysis we don't need to consider colour at all. In fact, I recommend transforming images from colour to greyscale for all analysis involving the Kodak Grey Scale. Colour can be unnecessarily distracting. It's also worth noting that humans are better able to perceive subtle changes in brightness (spatial sensitivity to luminance) than subtle changes in colour (chromatic sensitivity). Video formats, for example, capitalize on this point, storing chromatic data at lower resolution than luminance data and thereby conserving bandwidth (as discussed in this link here).
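A colour-to-greyscale conversion of the kind recommended above typically weights the three channels by the luminosity function rather than averaging them. A sketch, using the Rec. 709 weights that sRGB inherits (strictly these weights apply to linearized values; applying them directly to 8-bit values, as many quick conversions do, is an approximation):

```python
def to_grey(r: int, g: int, b: int) -> int:
    """Luminance-weighted greyscale value from 8-bit RGB channels."""
    return round(0.2126 * r + 0.7152 * g + 0.0722 * b)

# The heavy green weight means a pure green reads far brighter than a pure blue:
print(to_grey(0, 255, 0))  # 182
print(to_grey(0, 0, 255))  # 18
```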
Gamma
Gamma correction is a transformation applied to an image to compensate for the response of the display device, so that brightness levels appear perceptually accurate across the entire tonal range. The original cathode ray tube (CRT) screens, by sheer coincidence, displayed tonal levels in a manner which was almost perfectly the reverse of the human luminosity function. So, in effect, the gamma correction for a CRT was more or less a mirror image of the luminosity function. Modern liquid crystal display (LCD) screens have a much more linear response, which means in effect they should not require the same correction at all. However, in order to be able to view archived images containing an encoded gamma correction, and in order to ensure the backward compatibility of modern images for those still using CRT screens, gamma correction remains an important consideration, and images continue to be gamma encoded. This is therefore just another layer of complexity to add on top of an already decidedly confusing picture. For more on the continued relevance of gamma please check out this very useful article. For more on the terminology of light see HERE and HERE.
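For the curious, the gamma encoding baked into sRGB images can be sketched as a pair of functions (names mine). The curve is approximately a power of 1/2.4 with a short linear segment near black, and it is the reason 18% linear reflectance ends up stored at roughly 46% of the signal range.

```python
def srgb_encode(linear: float) -> float:
    """Linear light (0-1) -> gamma-encoded sRGB value (0-1)."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

def srgb_decode(encoded: float) -> float:
    """Gamma-encoded sRGB value (0-1) -> linear light (0-1)."""
    if encoded <= 0.04045:
        return encoded / 12.92
    return ((encoded + 0.055) / 1.055) ** 2.4

# Middle grey (18% linear reflectance) encodes to roughly 46% signal:
print(round(srgb_encode(0.18), 3))  # 0.461
```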
Characteristic And Other Curves
Having undergone gamma/luminosity corrections, the image may also undergo other minor adjustments in different areas of the tonal range. In the film era, an emulsion's specific sensitivity to light was expressed in terms of an s-shaped sensitivity or response curve; the science behind this is sensitometry. The digital era has continued the fascination with curves. Digital sensors have their own limitations, and curves tend to be used in the modern era to counteract a camera's dynamic range deficiencies, to accentuate tones in different areas of the image to make it more appealing, or simply to mimic the characteristics of different film stock. While as photographers and researchers we can deliberately avoid interfering with an image's response curve, we have no way of knowing whether the camera manufacturer has programmed an in-built curve correction as part of image processing from RAW. This is yet another departure from a linear representation of tones in digital images. For more on the use of tonal curves in image editing see HERE.
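To make the idea concrete, here is a sketch of one simple s-shaped tone curve. The smoothstep polynomial is a stand-in of my own choosing, not any manufacturer's actual curve: it boosts mid-tone contrast while compressing the toe (shadows) and shoulder (highlights), exactly the shape described above.

```python
def s_curve(x: float) -> float:
    """Smoothstep s-curve applied to a normalised 0-1 tonal value."""
    return x * x * (3 - 2 * x)

# Shadows are pushed down, highlights pushed up, mid grey left alone:
for v in (0.1, 0.5, 0.9):
    print(round(s_curve(v), 3))  # 0.028, 0.5, 0.972
```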
Kodak Grey Scale Specification
Before trying to tackle the implications of the various points raised above, it's time for a quick look at the Kodak Grey Scale itself. The card is a quality control tool, used primarily for assessing and reproducing specific tones in print reproduction. The scale was intended to be incorporated in the photograph itself, to help identify accurate exposure for accurate reproduction. Each patch is stepped 0.1 density units apart. In exposure terms, each step represents a third of a stop, according to the specification. The specification sheet accompanying the Kodak Grey Scale can be viewed HERE.
Note that three of the patches are designated by letters, A, M and B, which together are used in instrument calibration for photographic printing. The white of the Kodak card is referred to as A and has a reflective density of 0.05. Kodak grey scale value 7, referred to as M, has a reflective density of 0.75, which equates to a reflectance of 17.8%. This is a little shy of the 18.4% reflectance of the standard grey card, but it's a close approximation. Lastly B, at patch 16, has a reflective density of 1.65. An interesting point to note: the middle grey patch (M) is not half way along the scale but at step number 7. This in itself suggests the scale is non-linear.
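The reflectance figures for the lettered patches follow directly from their densities, since reflectance is simply ten raised to the negative density. A quick sketch (function name mine) confirming the numbers quoted above:

```python
def density_to_reflectance(d: float) -> float:
    """Fraction of light reflected for a given optical density."""
    return 10 ** -d

for name, density in (("A", 0.05), ("M", 0.75), ("B", 1.65)):
    print(name, round(density_to_reflectance(density) * 100, 1), "%")
# A 89.1 %, M 17.8 % (the figure quoted above), B 2.2 %
```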
To confuse matters somewhat Kodak Grey Scale patch density values are quoted somewhat differently in various places. The Kodak specification sheet states that the patches have a density range of 0.0 (white) to 1.9 (practical printing black). Whereas elsewhere one will find quoted a density range from 0.05 to 1.95. So which is correct? The whitest point on a print density scale represents D-min, or the lowest practical print density. It's limited by the reflectance of the paper itself. Densities above this value are calibrated relative to it, so presumably by quoting 0.0 here Kodak are referring to the relative densities of each patch after having zeroed the densitometer using the D-min patch. The white patch therefore has a relative density of 0.0 but an actual density of 0.05. For the purpose of this posting I am interested in the actual density of each patch, not their relative densities.
Grey Cards Versus White Balance Cards
Just a quick point in relation to grey calibration cards. There are two different 'grey' cards used in photography. The middle grey card (18% reflectance) is intended for exposure metering. It tends not to be perfectly neutral grey. In other words, measured with a spectrometer one would find a slight colour bias in the green, red or blue channels, which would throw out a white balance correction. 18% grey cards are therefore not recommended for white balance correction. True white balance cards tend to be lighter (approx. 60% reflectance) and are therefore not middle grey. They are perfectly neutral grey, so deliver accurate white balance correction. Given that they are lighter than 18% grey cards, they can only be used for exposure metering with the addition of a suitable exposure compensation. Why the lighter grey target for white balancing? Apparently it comes down to signal to noise ratio: the lighter target is easier to expose with minimal noise, and therefore more accurate as a calibration tool for colour. Exposure metering doesn't require such a high level of precision. For those of you who, like me, use a ColorChecker Passport for white balance correction, check out this useful technical review.
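Out of interest, the size of that exposure compensation can be estimated from the two reflectance figures mentioned above (60% and 18% are approximations, and the function name is mine): the offset in stops is the base-2 log of their ratio.

```python
import math

def compensation_stops(card_reflectance: float, reference: float = 0.18) -> float:
    """Stops of positive compensation needed when metering off a card
    brighter than the 18% reference."""
    return math.log2(card_reflectance / reference)

# A roughly 60% reflectance white balance card meters about 1.7 stops bright:
print(round(compensation_stops(0.60), 1))  # 1.7
```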
For more on the difference between 18% grey cards and white balance cards see below.
Note it is widely claimed that camera meters work off an 18% reflectance target. But there are also claims that this is incorrect and the true target is closer to 12% (darker than middle grey). As if things weren't confusing enough! Check out this commentary (link).
Whatever the truth about metering, if we are trying to compare references with the Kodak Grey Scale, which uses approximately 18% grey as its middle grey (or M value), it probably makes sense to meter our own image exposure to the same target reference.
Light Reflectance and Transmission Measurement and Conversion Factors
The Kodak Grey Scale is a reflectance tool containing patches stepped in 0.1 density increments as outlined. Density is the base-10 logarithm of the reciprocal of reflectance, measured using a device called a densitometer, the traditional quality control tool of the photographic industry. So, while density is a linear scale, the quantity behind it, reflectance, is non-linear. A logarithmic scale is useful as it closely resembles (but does not perfectly match) the human luminosity function. For this reason, density scales appear to the human eye to have a fairly uniform gradient. Densitometers can be used to measure light reflectance from paper samples, or light transmission through film or slide, so they really are perfectly suited to photofinishing.
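This linear-in-density, non-linear-in-reflectance relationship can be sketched in a couple of lines: each 0.1 density step multiplies reflectance by the same constant ratio, which also works out at very close to the third of a stop the Kodak specification quotes.

```python
import math

step_ratio = 10 ** 0.1          # reflectance ratio per 0.1 density step
print(round(step_ratio, 3))     # 1.259 -- each patch reflects ~1.26x its neighbour
print(round(math.log2(step_ratio), 2))  # 0.33 -- i.e. about a third of a stop
```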
Densitometry is just one method, however, for measuring light reflectance and transmission. Colorimetry is the science and technology used to quantify and describe human colour perception. So, colorimeters are weighted, taking account of the luminosity function. Colorimeters measure reflectance and report tristimulus values (weighted red, green and blue responses). Meanwhile spectrometry uses even more advanced technology to measure individual wavelengths and generate complete spectral signatures from colour samples, including but not limited to the human visual range. So, between these different technologies we have the means to directly measure and convert the Kodak Grey Scale into whatever colour space we require.
It may also be possible to convert density values directly to sRGB using complex formulae. I find the equations daunting, so I've decided to defer to those who have gone and done this work before me. I have found a couple of different resources online which are well worth a look, but the one I have found most helpful is linked HERE. The authors at the University of California, Berkeley have identified appropriate RGB values for each reflectance patch in the Kodak Grey Scale.
This digital representation of the Kodak Grey Scale depicts RGB white and black outside the range of the Kodak Grey Scale and this seems entirely appropriate. After all, one commonly encounters whites which are brighter than paper and blacks which are darker than ink. The Kodak middle grey value M, with a reflectance of 17.8% yields an RGB value of 116 which seems about right. Standard middle grey as defined by the standard grey card should have a reflectance of 18.4% and a RGB value of 119. So it looks like we are on the right track.
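As a cross-check on those figures, the chain from density to sRGB can be sketched directly (function name mine; I am assuming the standard sRGB encoding, which may not be exactly what the Berkeley authors used). It reproduces the 119 quoted for the 18.4% grey card, and lands within one 8-bit level of the 116 quoted for the Kodak M patch, a difference that presumably comes down to rounding.

```python
def reflectance_to_srgb(Y: float) -> int:
    """8-bit sRGB value for a linear reflectance Y in [0, 1]."""
    if Y <= 0.0031308:
        v = 12.92 * Y
    else:
        v = 1.055 * Y ** (1 / 2.4) - 0.055
    return round(v * 255)

print(reflectance_to_srgb(10 ** -0.75))  # 117 -- one level off the quoted 116
print(reflectance_to_srgb(0.184))        # 119 -- matches the grey card figure
```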
So how does this revised data look when compared with the original linear model?
The revised model clearly isn't linear, exhibiting the sloped appearance one would associate with a logarithmic function. The grey scale is now far more graduated in the mid-tone range, with a sharp step at the toe and shoulder where the scale begins and ends.
Comparison of Linear and Non-linear Digital Grey Scales
Having established what would appear to be the correct digital interpretation of the Kodak Grey Scale, it's time to compare it with the original linear grey scale.
In the top image above I have corrected brightness slightly to allow me to make comparisons between the different mantle grey levels using my original linear model. The results suggest the gulls all fall within their predicted Kodak Grey Scale ranges (references Howell & Dunn, 2007; Adriaens & Gibbins, 2016).
The new non-linear model requires that I substantially darken the image in order to take mantle grey level measurements using that scale. In order to understand the impact of a brightness correction on the image I have added a grid with all 256 tonal levels in it to the original image (top left corner). I later interrogate this grid to see what the brightness correction has actually done to the image across the whole tonal range. The results are interesting. The brightness tool doesn't merely darken the image linearly. It applies a non-linear correction, appearing linear initially but sharply sloping upwards in the highlights region. Clearly the brightness tool takes account of luminosity and gamma correction. I have noted this before in an earlier experiment looking at the functionality of each of the Adobe lighting tools. For more see HERE.
As for the gull mantle results: the values obtained were very similar regardless of which version of the digital grey scale I used. That in itself tells an interesting story.
Tone Reproduction
This exercise has been all about Tone Reproduction, the mapping of scene luminance and colour to print reflectance or display luminance, but what we have ended up with is something slightly different. I have been attempting to measure individual tonal levels (namely the tones representing the mantle shades of gulls). At the same time, I have been trying to apply a measurement benchmark scale which is based on reflectance. The question is, have I actually improved the accuracy of the tool, or is the improvement merely illusory and overly cumbersome? I think this particular journey has been the most technically challenging to understand and explain, and I don't think I have quite reached journey's end. My instinct is that the original linear grey scale is more than adequate for our purposes, but I think it is equally important to gain a better understanding of the underlying mechanics and to test as many assumptions as I can.
...to be continued.
With special thanks to David O'Connor for allowing me to analyse and use his image of that stunning gull.
On a recent family holiday to Portugal I had the opportunity for close study of Azure-winged Magpie, the beautifully named Cyanopica cyanus.
With its unique combination of chalky blues and subtle earthy, vinaceous pinks and russet reds, this is a truly spectacular bird. It got me pondering the colour azure and other related blues.
Azure
The particular hue of blue which I have up until now named azure in the Birder's Colour Pallet just didn't quite match what I was seeing. It's very hard to pin down colours exactly on the internet; there is no agreed standard nomenclature. Of course, in many cases colours may not have a very fixed hue at all and may refer to a range of different hues. The original 'azure' colour in the early computer palette was what is now referred to as cyan, the hue opposite red in the standard colour wheel. With a bit of further research it emerged that azure should be considered a hue exactly mid-way between cyan and true blue. It's more akin to a bright blue sky. Having adjusted the palette accordingly, azure now sits at hue 140 and certainly makes for a far better fit than it did at hue 130.
Cerulean
With an enigmatic North American wood warbler Setophaga cerulea, a kingfisher Alcedo coerulescens, a paradise flycatcher Eutrichomyias rowleyi and others bearing the name, surely this evocative term deserves a place on the Birder's pallet. But what is cerulean exactly? It appears the term doesn't apply to any rigid colour hue, but with a home anywhere between cyan and blue it might, for instance, be used to evoke the colour of a tropical sea. I always associated the colour more with cyan than with blue. I have seen Cerulean Warbler in Costa Rica but am not very familiar with their range of hues. Online images certainly suggest Cerulean Warbler has a colour tone essentially identical to Azure-winged Magpie, a bright sky blue, not a turquoise blue at all. And, if one compares images of the various birds with the colour cerulean in their name, there is no consistency at all: Cerulean Cuckooshrike Coracina temminckii, Cerulean Kingfisher Alcedo coerulescens, Cerulean Paradise Flycatcher Eutrichomyias rowleyi, Cerulean Warbler Setophaga cerulea and Cerulean-capped Manakin Lepidothrix coeruleocapilla.
In this instance I have decided not to go with the classic azure hue demonstrated by photos of Cerulean Warbler, but instead opted for the classic, slightly turquoise hue one is more likely to associate with the colour. I think it would be a shame to lose cerulean from the Birder's pallet, even if it is actually quite a vague term in reality.
Lazuli
With a closely related etymology, azure and lazuli might also be considered synonymous. The colour lazuli is derived from a semi-precious stone, lapis lazuli, a bright blue metamorphic rock consisting mainly of the mineral lazurite. In choosing an appropriate nomenclature for the Birder's colour pallet, however, I am guided also by bird names, and of course in this instance Lazuli Bunting Passerina amoena. It turns out, based on images of male Lazuli Buntings online, that the particular blue of hue 140 fits that species very nicely indeed.
Strictly speaking, lapis lazuli is a darker shade of blue than is found in Lazuli Bunting. To confuse matters even further lapis lazuli is associated with the colour ultramarine which is also said to derive from lazurite. And yet, ultramarine appears as hue 170 throughout the internet, between true blue (hue 160) and the violet end of the spectrum. Once again there appears to be a distinct lack of consistency when it comes to colour nomenclature, and no doubt a certain amount of variation can be found in the hues expressed by raw lazurite as well.
Hyacinth
With lazuli moved off its hue 150 spot, what vivid blue bird should take its place? Of course, what else but Hyacinth, after the enigmatic and endangered Hyacinth Macaw Anodorhynchus hyacinthinus and the perhaps rather less splendid genus of plant the bird is named after.
Following this shuffle of spaces I have given the name Sky to full saturation hue 130. Admittedly not very inspired but nothing else quite fits for now. Blue-grey on the other hand did sit very nicely as illustrated below. If anyone has any suggestions for better terms to describe hue 130 I'd love to hear from you.