In Beak Structure and Shape (Part 1) I described many of the morphological features that go to make up the myriad of bills we see in nature. In the last week an exciting project was launched at the University of Sheffield in the UK to scan in 3D and analyse the bills of all 10,000 bird species. What makes this project unique is the #citizenscience element. Through an excellent website resource https://www.markmybird.org/ birders from all around the world can participate in the project. In my spare time this week I catalogued 200 bills. It was a fascinating exercise which gave me a totally fresh insight into the amazing range of designs. If 50 people can spare a similar amount of time this project will quickly get over the line and very soon we will all benefit from the knowledge gained. More science should work like this!
Saturday, 26 September 2015
Saturday, 19 September 2015
Birds and Light - The Terminology of Light
At a basic human level we all have some appreciation for light and probably have more than a few words to describe its qualities. Here linked for example are 36 different adjectives describing light! We probably never give this terminology much critical thought. All these words have a subjective meaning and we probably use these terms interchangeably in normal discourse. But light can be described far more scientifically. When one first begins to research the scientific terminology of light the seemingly overwhelming array of terms can be quite off-putting. Thankfully we don't need to understand all these terms for the purposes of observation and photography.
In order to measure anything we first need to understand what it is we are trying to measure. Light is clearly not a simple concept and so when we talk about measuring light we have to consider the overall context. There are a lot of different terms and units of measure for light and these are often used incorrectly or out of context. This blog is no exception and I must learn to try and use the terminology with more care.
Light is measured by the camera's built-in light meter or perhaps a handheld light meter. Light is also measured by the camera sensor. In the case of the light meter what is being measured is irradiance, in units of watts per square metre (W/m²). Each individual photosite on a camera sensor actually captures individual photons of light and uses them to accumulate a charge which is proportional to the amount of light received. But photosites have a limited charge capacity and therefore a limited dynamic range. A light meter typically measures light across a much broader range.
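To make the photosite idea a little more concrete, a sensor's dynamic range is often expressed in stops: the base-2 logarithm of the ratio between a photosite's charge capacity (its "full well") and its noise floor. The sketch below is illustrative only, and the sensor figures in it are hypothetical, not taken from any particular camera:

```python
import math

def dynamic_range_stops(full_well_electrons, read_noise_electrons):
    """Approximate dynamic range in stops (powers of two): the ratio
    of the largest recordable signal (full-well capacity) to the
    noise floor (read noise), as a base-2 logarithm."""
    return math.log2(full_well_electrons / read_noise_electrons)

# Hypothetical photosite: ~50,000 electrons capacity, ~5 electrons read noise.
print(round(dynamic_range_stops(50000, 5), 1))  # ~13.3 stops
```

Each extra stop doubles the brightest light the photosite can record relative to its noise floor, which is why full-well capacity matters so much for highlight detail.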
Radiometry
Light is part of a much larger electromagnetic spectrum and when we are talking about light in terms of that totality we are referring to the science of radiometry. At this, the broadest definition of light we have terms like radiant energy (the total energy of electromagnetic radiation), radiant energy density (that energy per unit volume) and radiant flux (which is the rate of radiant energy per unit time). Because light intensity falls off as we move away from the source we need to measure intensity using another form of measurement called radiant intensity or radiance. Lastly we have irradiance, which is the radiant flux received by a surface per unit area. This last term is actually all we really need for our purposes but it may be useful to understand that we have all these other complex terms and it is quite easy to use a term out of context.
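The fall-off of intensity with distance mentioned above follows the inverse-square law for an idealised point source: the radiant flux spreads over a sphere whose area grows with the square of the distance, so irradiance drops accordingly. A minimal sketch, assuming a perfectly isotropic source:

```python
import math

def irradiance_from_point_source(radiant_flux_w, distance_m):
    """Irradiance (W/m^2) at a given distance from an idealised point
    source radiating equally in all directions: the radiant flux is
    spread over a sphere of surface area 4*pi*r^2."""
    return radiant_flux_w / (4 * math.pi * distance_m ** 2)

# A hypothetical 100 W isotropic source:
e1 = irradiance_from_point_source(100, 1.0)  # at 1 m
e2 = irradiance_from_point_source(100, 2.0)  # at 2 m
# Doubling the distance quarters the irradiance:
print(round(e1 / e2, 1))  # 4.0
```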
Spectroradiometry
All electromagnetic energy can be characterised by wavelengths along an electromagnetic spectrum. If we need to measure light of a fixed wavelength we must use a different terminology - spectroradiometry. Then it is just a matter of applying the word spectral in each of the contexts above. So we have spectral flux, spectral intensity, spectral radiance and spectral irradiance etc., all describing the same things as listed above, but in a narrower spectral context.
Photometry
This is all well and good but in reality we don't typically work within the full electromagnetic spectrum. Our eyes are sensitive to only a small fraction of the entire spectrum. So we don't normally talk about light measurement based on radiometry. When it comes to human vision it is based on a more specialist field of study called photometry.
Of course, as humans we all perceive the world in slightly different ways. There is no uniformity when it comes to biology and there is a large range of variables at play during any given observation. So in the world of photometry we use standardised averages and functions rather than fixed values. The luminosity function for example describes the average spectral sensitivity of human perception of brightness. This is all made possible thanks to the work of the International Commission on Illumination (CIE).
In the experiment below I used a lux meter to observe light under a foliage canopy. A lux meter works in exactly the same way as any other light meter but the readings are subject to a correction for human vision using the standard daylight or photopic luminosity function. Normally these meters are used for workplace or occupational monitoring and are adjusted for indoor use (a range 0 - 15,000 lux approx.). So they are not ideal for outdoor light metering, where bright sunlight can achieve up to 100,000 lux. I used the meter on a fairly dull day where the ambient illuminance was at most a mere 6,000 lux.
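The link between the lux meter's photometric reading and the radiometric irradiance discussed earlier runs through luminous efficacy (lumens per watt). A rough sketch follows; note the efficacy figure is an assumption, since it depends on the light's spectrum. Daylight is very roughly 90-120 lm/W, while 683 lm/W is the theoretical maximum, achieved only by monochromatic green light at 555 nm:

```python
def lux_to_wm2(illuminance_lux, luminous_efficacy_lm_per_w=110.0):
    """Rough conversion from photometric illuminance (lux = lm/m^2)
    to radiometric irradiance (W/m^2). The default efficacy of
    110 lm/W is an assumed ballpark for daylight, not a standard."""
    return illuminance_lux / luminous_efficacy_lm_per_w

# The ~6,000 lux overcast reading mentioned above, as irradiance:
print(round(lux_to_wm2(6000), 1))  # roughly 54.5 W/m^2
```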
The Camera Versus The Human Eye
In a recent posting HERE I explored some of the key differences between the camera's light sensitivity and that of the human eye. Camera sensors have a limited spectral sensitivity which differs significantly from that of the human eye. Camera manufacturers create their own functions and algorithms to bring digital images more in line with what the human eye is capable of seeing.
Brightness and Luminosity Tools
Firstly, time for a quick breather! Okay so far I have established that light is typically described in very subjective terms but a very specialist scientific lexicon also exists to describe and accurately measure light in all its complexity. Radiometry is the study of the overall electromagnetic spectrum while photometry concerns only that narrow portion which we call the visible spectrum. Thanks to the work of the CIE we have a fairly clear picture of how a typical human perceives light and colour and digital camera manufacturers have used this understanding in the design of their imaging systems and processing algorithms.
I now turn to a couple of oft-misunderstood lighting terms used in image processing. Simple image brightness tools increase and decrease the brightness level or value of each pixel linearly across all three colour channels. So the adjustment is not spectrally dependent. They are fairly blunt instruments that don't take into consideration the particular characteristics of human vision. Luminosity on the other hand does take into account human perception of brightness. Once again, the luminosity function describes the average spectral sensitivity of human perception of brightness. For every one blue cone and one red cone in the fovea of the human retina there are two green cones. This in part means that we perceive green as being brighter than the other primary colours. The luminosity slider used in Adobe Photoshop, MS Paint etc. is not a simple linear slider but takes account of our spectral light sensitivities and makes a weighted brightness correction across the three colour channels. This explains why, when we observe each of the colour channels of an image in monochrome, they all appear distinctly different from the final compiled image. This is all explained in better detail by this Cambridge in Colour posting on Luminosity & Colour.
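The weighted correction described above can be sketched with one widely used set of weights, the Rec. 601 luma coefficients, where green dominates. These exact numbers are an assumption for illustration; Photoshop's own internal weighting may differ:

```python
def perceived_luminosity(r, g, b):
    """Weighted brightness of an RGB pixel (each channel 0-255)
    using the Rec. 601 luma coefficients, which reflect the eye's
    stronger sensitivity to green light."""
    return 0.299 * r + 0.587 * g + 0.114 * b

# Pure green reads far brighter than pure blue of equal intensity:
print(round(perceived_luminosity(0, 255, 0)))  # ~150
print(round(perceived_luminosity(0, 0, 255)))  # ~29
```

A simple brightness tool would treat those two pixels identically; a luminosity-aware tool would not, which is exactly the distinction being drawn here.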
The distribution of tones across each colour channel must take account of the particular spectral characteristics of human vision. This means that the greyscale appearance of each colour channel will look slightly different from the combined greyscale image, as illustrated above. Colour channel layers can be created in Adobe Elements as I have done using this procedure linked HERE.
Incidentally, just to confuse things the term luminosity is used in another context in astronomy to denote the total amount of light emitted by a star per unit time.
Saturday, 12 September 2015
Human Bias - Camera Versus Human Eye (Part 2)
Imaging Regimes
We are all probably acutely aware of the difference between night vision and normal or day vision. Night vision utilises scotopic vision, involving the rod cells of the retina. These cells surround the focal point of the eye - the fovea. The image produced is faint, bluish and monochromatic. It is also not as sharp in appearance as the image produced during the day (photopic vision) due to lower resolution and a low signal to noise ratio (SNR).
We are accustomed to obtaining a sharp image by focusing on a very narrow line of sight. This sharp photopic image is formed by the densely packed cone cells of the fovea. In the night sky, when we attempt to resolve fine detail in a dim object like a galaxy or comet, the object often appears to vanish before our eyes. This happens because the cone cells of the fovea are not sensitive enough to such a low level of light. Also, photopic vision may be temporarily suspended during scotopic vision. Simply by focusing to the side of the dim object, it suddenly reappears, though somewhat blurry. This is not an illusion. It happens because the rod cells are once again able to register the dim light of the object shining on them, albeit at a lower sharpness than we might like. For a novice astronomer, having to observe faint objects in this way may be disconcerting. Unfortunately there is always a trade-off in very low light between brightness and sharpness in the visual system.
Rod cells are not completely switched off by day. They form an integral part of our peripheral vision. Scotopic and photopic vision can operate together on a moonlit night and towards dawn or after dusk. It also occurs when we drive a car at night. In our modern world, polluted by light, some of us possibly never get to experience true scotopic vision. The mixing of the two vision systems is referred to as mesopic vision and explains why for example we can see sharply in colour while driving at night. The different vision regimes make for an important distinction between human vision and cameras. For a start these human adaptations dramatically expand the dynamic range of the human visual system when compared with digital media.
However, exposure control allows cameras to make up some lost ground. Long exposure times allow cameras to peer far more deeply into the gloom of a dark night than the human eye ever could. I recently watched a documentary which reported that, by progressively exposing a single image of a small portion of the night sky hundreds of times, over countless hours and successive nights, the orbiting Hubble Space Telescope has made it possible to photograph some of the dimmest objects in the universe. Therein lies the miracle of modern photography! At a slightly more mundane level, adjusting ISO allows us to amplify our own digital images to extract more detail and colour, though we also increase noise in the process.
Despite the eye's broad dynamic range we also struggle to perceive detail at high light intensity. Once again, by reducing exposure time, aperture and ISO we can record digital images at extremely high luminance levels that would otherwise seriously damage our eyes. Using such techniques we can even photograph and capture detail on the surface of the sun.
Spectral Sensitivities
Human vision operates roughly between wavelengths 400nm and 700nm. Our night vision rods create a monotone image in the blue end of the spectrum at approximately 500nm. During the day, while using photopic vision our cone cells are dominant. Our cones are distributed within the fovea at a ratio of green:blue:red of 2:1:1 and our eyes are most sensitive to the green portion of the light spectrum. But we also have a strong sensitivity to blue and red. We cannot see beyond the violet (ultraviolet) or red (infrared) wavelengths and hence these colours define the boundary of the visual spectrum.
Typical digital camera sensors on the other hand have a slightly different spectral sensitivity, and this varies from one camera make and model to another. Digital sensors typically have a higher sensitivity to blue but with image processing this is not noticeable. Camera manufacturers apply a ratio of 2:1:1 green, blue and red filters respectively to the surface of digital sensors to match the sensory configuration of the human fovea. Unlike the human visual system, digital camera sensors are sensitive to both near-ultraviolet and infrared light. Special UV and IR filters are used to filter out this normally unwanted light. Birds see in UV and we can explore this hidden world in monochrome using a modified digital camera. For more see HERE.
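That 2:1:1 filter ratio is realised as a repeating 2x2 mosaic on the sensor, commonly called a Bayer pattern. A small sketch of one common arrangement (RGGB; the exact layout varies by manufacturer, so treat this as illustrative):

```python
def bayer_channel(row, col):
    """Colour filter covering the photosite at (row, col) in an RGGB
    Bayer mosaic: each 2x2 block holds two green filters, one red
    and one blue, mirroring the fovea's 2:1:1 cone ratio."""
    if row % 2 == 0:
        return 'R' if col % 2 == 0 else 'G'
    return 'G' if col % 2 == 0 else 'B'

# One 2x2 tile of the mosaic: two greens, one red, one blue.
tile = [bayer_channel(r, c) for r in range(2) for c in range(2)]
print(tile)             # ['R', 'G', 'G', 'B']
print(tile.count('G'))  # 2
```

The camera's processor then interpolates ("demosaics") the missing two colours at every photosite to produce a full-colour image.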
Gamma Correction
Our eyes do not perceive light intensity the same way cameras do. Cameras record light intensity linearly: twice the light intensity equals twice the recorded signal. Our eyes on the other hand respond to light non-linearly. We can for example distinguish between dark tones with a fine degree of discernment. In other words we can perceive a contrast between very slight changes in light intensity within shadows. However we are poor at distinguishing such relatively small contrasts at higher levels of luminance. There is an evolutionary advantage to this trait. It has allowed us to function well in low-level light while at the same time providing our vision with enough dynamic range to function adequately even under the brightest of daytime conditions. What this means in terms of imaging systems is that it makes sense to preserve plenty of mid-tone and shadow detail, but we can afford to preserve less within the highlights. All imaging equipment corrects for this non-linearity in human vision and this correction is referred to as gamma correction or gamma compression.
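Gamma correction itself is just a power curve applied to the linear sensor values. A minimal sketch using a plain power law with the common gamma of 2.2 (real sRGB encoding adds a short linear segment near black, omitted here for simplicity):

```python
def gamma_encode(linear, gamma=2.2):
    """Gamma-encode a linear light intensity in the range 0.0-1.0.
    The power curve devotes more of the encoded range to shadows and
    mid-tones, matching the eye's non-linear response to light."""
    return linear ** (1.0 / gamma)

# A tone at 20% linear intensity is stored well above 20%,
# preserving shadow detail where the eye is most discerning:
print(round(gamma_encode(0.2), 2))  # ~0.48
```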
Technological Advances
As illustrated above we can, in any given image, exceed either the low light or the high light range of human vision, but we can't do both at the same time. Current digital cameras don't even come close to matching the vast dynamic range of the human visual system within a single image. Also, despite the fact that many cameras are capable of recording 16-bit images, the internet and virtually all screens and printers can only output images in 8-bit. So there is always a trade-off and loss of some colour and tonal detail. But technology is advancing all the time and it is quite reasonable to expect that within 10 or 20 years digital camera technology will be able to match and surpass the imaging capabilities of human sight. In time output devices will also advance and it should eventually be possible to display an image with the same brightness, colour and clarity as human sight. When that day comes we will probably once again begin to take for granted some of the wonderful complexities of light and imaging systems. But for now, if we are to make the most of our camera equipment we need to understand its limitations and the workarounds required to record what we see, and beyond.
Monday, 7 September 2015
Birds and Light - On Grassland
When we think of grasslands we probably consider the prairies of North America, the llanos and pampas of South America, the steppes of Asia, the veld or savanna of Africa or the downlands of Australia. But add to that all the cultivated areas of the planet and even the tundra and semiarid areas and it's pretty clear this wide, expansive type of habitat dominates the landscape of most countries.
Of course the most recognisable feature of grassland is its habit of frustrating the observer. This time the quarry is a Least Seedsnipe (Thinocorus rumicivorus), photographed in Chile.
Greens
Where natural grasslands occur on the planet they are not always so green and lush. Seasonal rain brings new growth which supports the breeding cycle of many herbivores and associated ecosystems. But in many parts of the planet this is followed by much drier conditions and very often fire and regeneration of the grassland habitat. But let's start with the lush green growth that we certainly associate with grasslands here in Ireland. There are two photographic factors to consider.
Firstly fresh green grass transmits and reflects a considerable amount of green light. This may not dominate a scene quite as well as it does under foliage canopy. Nevertheless it can have some impact, particularly on underparts colouration. Secondly a dominance of a single colour in any image can confuse a camera's auto white balance function. So, it is not unusual for photographs containing a lot of green grass to have an unnatural white balance tilt along the green-magenta axis. As discussed under an earlier posting HERE this problem can only be corrected using a proper white balance tool. Some white balance or colour temperature tools only correct along the yellow-blue colour axis and don't allow for the correct white balancing of these types of images.
This Sedge Warbler (Acrocephalus schoenobaenus) against a dominant green background will tend towards a green white balance error, requiring some careful white balance correction. Unlike its much skulkier cousin the Aquatic Warbler (Acrocephalus paludicola) Sedge will often show quite well with a bit of patience, particularly during spring migration, when this image was taken.
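One simple way to think about correcting a green cast like this is the classic grey-world assumption: on average, the colours in a scene should balance out to neutral grey, so each channel is scaled towards the overall mean. This is a simplified sketch of the idea, not the method used by any particular white balance tool, and the channel means are hypothetical:

```python
def gray_world_gains(mean_r, mean_g, mean_b):
    """Per-channel gains under the grey-world assumption: scale each
    channel so the image's average colour becomes neutral grey. For
    a green-heavy grassland shot, the green channel is pulled down
    and red and blue are pushed up, correcting a cast along the
    green-magenta axis."""
    grey = (mean_r + mean_g + mean_b) / 3.0
    return grey / mean_r, grey / mean_g, grey / mean_b

# Hypothetical channel means from a green-tinted image:
gains = gray_world_gains(100.0, 140.0, 90.0)
print([round(g, 2) for g in gains])  # green gain < 1, red and blue > 1
```

Grey-world fails, of course, when the scene genuinely is dominated by one colour, which is exactly why photographs full of green grass fool a camera's auto white balance in the first place.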
Browns, Yellows and Greys
Many grassland birds tend to be brown and streaky. Very often quite similar species occur together in the same area thanks to the rich availability of ecological niches in natural grasslands.
Bland, desaturated colours like pale browns and greys are more susceptible to lighting issues than bold colours as discussed HERE. Take for example the Golden Plover image below. Within this flock of European Golden Plover (Pluvialis apricaria) there are two Americans (Pluvialis dominica). The predominantly blue light before sunrise and after sunset, the yellow light of dawn and dusk, the dull grey light of an overcast day and the harsh light at high noon are all challenging lighting conditions for picking out subtle differences like the generally colder plumage of a juvenile AGP in a flock of EGPs.
Grass grows very well in temperate climates due to the high rainfall, reasonable temperatures and adequate sunlight, but if left to nature, deciduous and mixed forest will regain dominance. Where natural grasslands dominate, lush green growth is not the norm and for much of the year the grass is yellow and stunted.
Despite its large size, a Double-striped Thick-knee (Burhinus bistriatus) can be feet away and yet perfectly blended with this dry llanos in Venezuela. In the warm late evening light the camouflage is no less striking.
Heat Haze
Natural grasslands tend to be hot places. They also often tend to be quite flat and featureless. Not only are grassland birds hidden in the long, dense grass much of the time, but it is difficult to approach birds in this habitat. So inevitably we resort to watching and photographing grassland birds at longer range than we might like. This can be a truly frustrating form of birding! Heat haze as we know gets worse with distance so this very challenging natural image artefact is at work during many grassland observations. Really the best time to be out grassland birding is in the early morning or late afternoon, when temperatures are lower and birds are more active.
Heat haze frustrates photography in the Ethiopian Highlands, where this Abyssinian Longclaw (Macronyx flavicollis) demonstrates the interesting convergent evolution between the longclaws of Africa and the meadowlarks of the Americas.
Thursday, 3 September 2015
Forensics - The Normalizing of Image Doctoring
Back in 2003 I wrote a short piece for Birdwatch Magazine outlining how I had doctored an image in Photoshop. I had numerous digiscoped images of a Pallas's Leaf Warbler (Phylloscopus proregulus) and I wanted to create one image depicting why it is given the charming name 'seven-striped sprite'. The purpose of the exercise was two-fold. Firstly I wanted to demonstrate Photoshop as an artistic tool for those who might not already be aware of its power, and secondly I wanted to highlight the genuine risk posed by forgery.
The term Photoshop is already synonymous with image manipulation and there can't be too many people unaware of the term and of the risks. The media and academia have become increasingly sensitive to the issue of forgery and a whole industry has grown up to identify and stop falsified images. But at the same time the media has always admitted that images of models undergo considerable touching up. Some high profile and respected journals now openly admit that images on their front covers are doctored for artistic purposes. At what point do we begin to doubt the veracity of all images in print and, by extension, the internet?
Thus far there haven't been too many high profile cases involving image manipulation in birding and ornithology. But the decision of Birdlife South Africa this week to doctor an image of a Lappet-faced Vulture (Torgos tracheliotos) and hoax the discovery of a new species, the 'Tuluver' for a campaign may have crossed the proverbial Rubicon. I don't normally comment on current affairs in this blog but I thought this deserved a mention.
In future posts I hope to gather together information on some of the freely available online resources for sniffing out potentially forged images but this isn't a core objective of the blog.
Wednesday, 2 September 2015
Birds and Light - Against The Sky (Part One)
In this series of posts I have been exploring the various different lighting environments in which we observe and photograph birds. In many ways watching and photographing birds against the sky is the ultimate, pure synthesis of birds and light.
It's hard to resist a dramatic sunset scene. Take this party of Black-bellied Whistling Ducks (Dendrocygna autumnalis) coming to a Venezuelan evening roost. These silhouetted birds may not be the easiest to identify from this image but this scene is actually a very good representation of how my eyes witnessed this spectacle. Very often however when we photograph birds against the sky the results don't really match expectation or indeed what our eyes are capable of seeing.
Dynamic Range, Metering and Exposure
The posting on High Dynamic Range Imaging (HDRI) discusses the camera's particular limitations when it comes to dynamic range. For those not already familiar with this concept I recommend starting there. I spent a bit of time filming a Chimney Swift as it endlessly circled a patch of scrub at the edge of Baltimore village, Co. Cork one morning in late October. For light metering and exposure I was completely at the mercy of the Sony mini dv camcorder I was using, and it was a very frustrating exercise! Typically all digital cameras respond to the brightness of the sky by reducing exposure to preserve highlights. This has the inevitable effect of underexposing everything else in a scene. But when this bird flew low against a darker background the image brightened up and the bird's colour tones and detail were revealed.
I am very fortunate where I live that in almost any weather throughout the year my neighbour's racing pigeons circle my house in small parties, and it makes for a nice spectacle. It also affords me the opportunity to practise my flight photography and to try and make sense of this lighting environment.
As this sequence of images illustrates, even in continuous shooting mode the camera is constantly influenced by the overall scene lighting. Exposure is adjusted accordingly with each frame. This sudden shift in exposure can be even more dramatic along the interface between the sky and the land or built environment, as illustrated below.
There is a limit to what can be done to resolve this particular problem. We could attempt to narrow the focus by spot-metering, but where our subject is fast moving it is simply impossible to track the bird with such a high degree of accuracy. Alternatively we can dial in exposure compensation or use exposure bracketing. There are pros and cons to all these methods. If we fix an exposure compensation but a bird then suddenly offers us the ideal photo opportunity, that fixed compensation may result in a missed shot. Exposure bracketing, on the other hand, increases our scope a bit but also increases the number of rubbish shots we will end up having to sort and bin. In the end it's probably a matter of personal taste, trial and error.
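The trade-off with dialled-in compensation can be sketched numerically. Each EV stop doubles or halves the light recorded, so a positive compensation that lifts a silhouetted bird can simultaneously push bright sky past the maximum the file can hold. This is a simplified model (real cameras apply a tone curve, not a raw multiply), with invented values for illustration.

```python
def apply_ev(value, ev, max_value=255):
    """Scale an 8-bit pixel value by `ev` stops of exposure
    compensation; anything beyond the maximum clips to pure white."""
    return min(round(value * 2 ** ev), max_value)

# +1 EV lifts a silhouetted bird out of the shadows (40 -> 80)...
print(apply_ev(40, 1))    # 80

# ...but an already-bright sky tone blows out (200 -> clipped at 255),
# and the detail lost to clipping cannot be recovered later.
print(apply_ev(200, 1))   # 255
```

This is why a compensation value fixed for sky shots can ruin the one frame where the bird drops against a dark hedge: the same multiplier is applied regardless of what the subject is doing.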
Lighting Variation
In most of this series of postings I have tried to simulate how a subject looks under various different lighting conditions from pre-dawn to sunrise and throughout the day to sunny versus cloudy conditions.
Taking this presumed female Venezuelan White-tipped Swift (Aeronautes montivagus) for example, we can see that lighting angle, white balance and light intensity all create very different effects. The sky is always brighter than the ground and our subject. This is particularly striking before dawn and after dusk, when subjects appear strongly silhouetted against the sky. Little or no detail is apparent in these conditions. At sunrise and sunset the low angle of the sun illuminates the underside of a high-flying bird. Plumage detail can be revealed that is not normally visible, and this can easily confuse an observer. Take for instance the very similar Common Swift Apus apus and Pallid Swift Apus pallidus. The yellow light of early morning and late evening can dramatically alter the colour and appearance of a juvenile Common Swift on migration, leading to potential misidentification as a vagrant Pallid. For more on this challenge see part two of this thread HERE.
As the sun gains height in the sky the underside of the bird typically falls into shadow and the outer wing and tail feathers start to look more translucent. The bird takes on a very different appearance.
I have discussed translucency in detail HERE. Of course, depending on the angle of observation relative to the sun, it is possible with patience to obtain good views and photographs of both the upperparts and underparts of a flying bird but, for the most part, our views tend to be somewhat limited by lighting.
In bright overcast conditions we may benefit from slightly better overall lighting and viewing conditions, but birds still tend to remain effectively in shade or silhouette much of the time, and this only gets worse as cloud thickens. Overall, there are no ideal or optimum viewing conditions for watching birds against the sky. Each set of circumstances carries its advantages and its challenges.
Processing from RAW and other formats
As various earlier postings have illustrated, there is a huge advantage to shooting RAW. Images which have been over- or underexposed can be rescued. When working from JPEG or a similar format there is far less scope for resolving exposure problems. Taking, for instance, a frame from the Chimney Swift video, trying to correct for underexposure doesn't greatly improve the results and only brings noise and other artefacts to the fore.
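The reason RAW rescues what JPEG cannot comes down to bit depth. A toy sketch, assuming an idealised 12-bit linear sensor and ignoring JPEG's tone curve and compression, shows how shadow tones that are distinct in the raw data collapse together once quantised to 8 bits; all the function names and values here are invented for illustration.

```python
def to_jpeg8(raw12):
    """Quantise a 12-bit linear sample (0-4095) down to 8 bits (0-255)
    by discarding the four least-significant bits."""
    return raw12 >> 4

def boost(value, stops, max_value):
    """Brighten by `stops` EV, clipping at the format's maximum."""
    return min(value * 2 ** stops, max_value)

# Two nearly-black tones an underexposed swift might contain,
# still distinct in the 12-bit raw file:
dark_a, dark_b = 20, 24

# Boost +3 EV in the raw data first, then convert for display:
# the two tones survive as separate values.
print(to_jpeg8(boost(dark_a, 3, 4095)), to_jpeg8(boost(dark_b, 3, 4095)))  # 10 12

# Convert to 8-bit first (as a JPEG does), then boost: both tones
# were crushed to the same value before brightening, so no amount
# of shadow-lifting can separate them again.
print(boost(to_jpeg8(dark_a), 3, 255), boost(to_jpeg8(dark_b), 3, 255))    # 8 8
```

Lifting the shadows of a JPEG therefore amplifies whatever noise and posterisation survived the quantisation, rather than recovering real plumage detail.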