
Friday, 29 August 2014

Forensics - Acutance and Unsharp Mask

While there are numerous different mechanisms that can affect image focus, the actual cause of a focus error is perhaps of less importance for our purposes than the extent to which an image is out of focus.  For simplicity I only use three focus increments in the Image Quality Tool:-

(1) The image looks perfectly sharp in which case edges are crisp and well defined.
(2) The image looks soft.  The image is reasonably usable but the edges are not totally crisp.
(3) The image is out of focus.  The whole image is very soft and details are hard to make out.

It is the combination of image resolution and acutance that makes up what we term image sharpness, as neatly explained HERE.  So, for the best focus we need a reasonably high pixel resolution plus a reasonable level of edge contrast, or acutance.  I have already dealt with image resolution HERE so I am going to focus on acutance here, while also taking a look at image sharpening tools.


Raw Image Softening


As outlined HERE there is a little more to digital image focus than meets the eye.  While an image may appear perfectly pin-sharp through the lens, the RAW image formed by the camera will always start out softer due to an image processing step called demosaicing.  This image softness is then corrected automatically using an unsharp mask algorithm, though this step can alternatively be completed manually using Camera Raw or some other raw viewing software.  

Unsharp Mask


Unsharp Mask basically makes an image appear sharper by increasing edge contrast, or acutance.  This may or may not adversely impact an image from a bird identification perspective.  Usually, a little image sharpening has little or no adverse effect at the macro level, but it does undoubtedly alter the appearance of fine details and the edges between objects.  So, for example, use of the unsharp mask could subtly change the colour or contrast of a very narrow feather edge or other micro structures.  Like a lot of image editing tools, when the unsharp mask is overused there are a number of additional problems, including the introduction or worsening of some image artefacts.  From the point of view of correctly scoring focus and focus-related artefacts using the Image Quality Tool, I think it is important to go into some of these artefacts in rather more detail.  For more on the use of unsharp mask see HERE.
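
For readers who like to see the nuts and bolts, here is a minimal Python sketch of the classic unsharp mask recipe (sharpened = original + amount x (original - blurred)), using numpy and scipy.  The radius and amount values are illustrative choices of mine and do not correspond to any particular camera or editing package.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, radius=2.0, amount=1.0):
    """Classic unsharp mask: add back a scaled high-pass (original - blurred).

    image  : 2-D greyscale array, values 0-255
    radius : sigma of the Gaussian blur used to build the mask
    amount : how strongly the high-pass detail is added back
    """
    blurred = gaussian_filter(image.astype(float), sigma=radius)
    mask = image - blurred                # the high-pass "unsharp" mask
    sharpened = image + amount * mask     # boosts edge contrast (acutance)
    return np.clip(sharpened, 0, 255).astype(np.uint8)

# A hard edge: 50 on the left, 200 on the right.
edge = np.full((5, 20), 50.0)
edge[:, 10:] = 200
print(unsharp_mask(edge)[0])
# Values dip below 50 just before the edge and overshoot 200 just after it -
# those under- and overshoots are the dark and pale halos discussed below.
```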


Edge Halo Artefacts

A telltale sign that Unsharp Mask may have been overused, halo artefacts are a by-product of the mask itself.  The mask is effectively a high-pass filter and the halos are like echoes produced by the filtering of the data.  For more on the technical aspects see HERE.

From a practical perspective, image halos may be white or coloured and can obviously give the impression of false feather fringes and other false plumage markings.  Because they tend to be of the order of a couple of pixels in width they are only likely to confuse an identification at low pixel resolutions, but the potential is clearly there.  With experience, however, it is possible to recognise artificial image sharpening.  At the level where halos are becoming obvious, the image contrast and acutance tend to look unnaturally high.


Aliasing and Blurring Artefacts

There is an excellent tutorial on the Cambridge in Colour website HERE that explains the linkage between three related digital image artefacts and how they interact during image sharpening.  To correct for the softening introduced by demosaicing, Unsharp Mask or a similar algorithm may be used to try to improve image quality and sharpness.

In lower resolution images, jagged, pixelated edges can be smoothed by simply blurring the image, but the whole image then appears soft.  Interpolation algorithms such as bilinear resampling average the pixels either side of a border, creating a smoother edge - an effect termed anti-aliasing (nearest-neighbour resampling, by contrast, keeps the hard pixel blocks), as the short sketch below illustrates.  Finding the right balance between blurring, anti-aliasing and sharpening without creating halos often requires the aid of a human eye experienced in using the unsharp mask.  Simply applying automatic sharpening is liable to lead to poor results and unwanted artefacts, especially with lower resolution images.
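
As a rough illustration of that trade-off, here is a small Pillow sketch (a toy example of my own, not anyone's recommended workflow) that enlarges a tiny hard-edged image in two ways: nearest-neighbour resampling keeps the hard, jagged pixel blocks, while bilinear resampling averages neighbouring pixels into a smoother but softer edge.

```python
from PIL import Image

# A tiny 8x8 test image with a hard diagonal edge (white on one side, black on the other).
small = Image.new("L", (8, 8), 0)
for x in range(8):
    for y in range(8):
        if y < x:
            small.putpixel((x, y), 255)

# Enlarge 32x with two different resampling strategies.
blocky = small.resize((256, 256), Image.NEAREST)   # keeps hard, jagged pixel blocks
soft = small.resize((256, 256), Image.BILINEAR)    # averages neighbours: smoother but softer

blocky.save("edge_nearest.png")
soft.save("edge_bilinear.png")
```

Opening the two saved files side by side shows the choice in miniature: the first stays crisp but obviously pixelated, the second hides the pixels at the cost of a soft edge that then tempts us to reach for the unsharp mask.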


Many photographic websites present this type of diagram to illustrate the trade-offs during image sharpening.  The images below compare the different effects of the unsharp mask on high-resolution and low-resolution images.  In this case, the original image was slightly low in acutance so the sharpening tool has certainly increased sharpness, or at least the appearance of sharpness.  Overall contrast is increased.  Pale sharpening halos are evident around the artificially added sharp black edges and around the bird's bill.  There is a dark halo created at the border between the white breast and the darker patches of water behind.  While the overall effect is to increase contrast and acutance, in this case the bird also appears to jump out of the background - pseudo 3D-style.  This can be another encouragement to overuse the unsharp mask.


With a much lower resolution image (below) there should be obvious pixelation visible, but the particular software used has intentionally blurred the image to mask the pixelation.  As a result, the effect of a much lower resolution is not particularly noticeable at full screen resolution.  Interestingly, the unsharp mask actually has the effect of undoing some of the blurring and revealing the pixelation in this image, so there isn't a great advantage to increasing acutance in this case.  Additionally, the sharpened image again contains sharpening halos, similar to the high resolution image.  Moiré becomes more obvious at this resolution but is only apparent in the artificially-added focus wheel.  Moiré occurs wherever there are fine, regular patterns.  In bird images it most commonly affects the regular pattern of flight feather edges on a closed wing.  Moiré is an artefact associated with image resolution.  For more, scroll down to the end of THIS post.


The devil is in the detail

I could keep going, talking about the individual pros and cons of different sharpening tools, but what counts here in terms of a bird identification is whether or not the critical details and colours can be judged accurately.  In the Image Quality Tool there is an opportunity to score down poor resolution and artefacts separately from focus.  The key, in my view, to deciding if an image is sharp, soft or out of focus for identification purposes is the effect of focus on fine detail.  If important fine details are blurred to the point that they can't be reliably seen, the image is out of focus.  If the details are not perfectly clear but can still be judged confidently, the image is soft.  Otherwise, I would tend to score the image as being sharp.

Monday, 25 August 2014

Human Bias - Visual Acuity versus Digital Image Resolution

When it comes to bird identification from digital images I believe there are five key quality parameters to consider, namely:-
RESOLUTION
FOCUS
EXPOSURE
COLOUR
DIGITAL ARTEFACTS

 These properties are all intertwined in many different ways.  I am now approaching the subject from the point of view of fine image detail.

Humans have a very sophisticated visual system.  Vision, it could be said, is our most prominent and acute sensory ability.  Firstly, we have reasonably acute eyesight, concentrated mainly in a very small part of the retina called the Fovea centralis (or fovea).  Most of the colour optical receptors of the eye (cones) are located in this small space.  Much like a digital camera, the visual acuity of the fovea is mainly a product of its large number and density of photoreceptors.  Birds of prey, which have a much greater visual acuity than us, have many times more photoreceptors making up their visual system, somewhat akin to having a camera with more megapixels.

Unlike most animals, humans observe the world in full colour, thanks to the fact that most of us have three types of colour cone in our eyes.  Most animals only possess green and blue cones but, thanks to a genetic mutation, the ancestors of humans and related primates developed the ability to see red in addition to green and blue.  The main evolutionary benefit, it seems, has been our ability to distinguish ripened fruits from unripe green fruit and foliage, giving our ancestors a competitive advantage over other fruit-foraging species.

Our green-sensitive cones outnumber the blue and the red by roughly two to one.  Both the digital image sensor and, before it, colour film have attempted to mimic the human visual system by recreating this balance.  The result, from a digital imaging perspective, is the Bayer Filter.


The image above depicts the workings of a typical digital camera.  The Bayer Filter sits on top of the digital image receptors (photosites).  It works in much the same way as the cone cells of the human eye.  Just as a red cone cell in the eye will only pass red light, the red Bayer filter will only allow red light through to the digital receptor.  Each photosite therefore equates to a single pixel of the equivalent Bayer filter colour, with a record of the light intensity hitting it.
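
To make that idea concrete, here is a minimal numpy sketch, assuming a common RGGB layout (real sensors vary in the exact arrangement): each photosite keeps only the one channel value that its colour filter lets through.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Simulate a Bayer (RGGB) sensor: each photosite records only one channel.

    rgb : H x W x 3 array (H and W even).  Returns an H x W single-channel
    mosaic where each value is the intensity seen through that photosite's
    red, green or blue filter.
    """
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red photosites
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green photosites
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green photosites (green sampled twice as often)
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue photosites
    return mosaic

# A 4x4 mid-grey test patch: every photosite records 128, but each one only
# "knows" about its own colour channel.
scene = np.full((4, 4, 3), 128, dtype=np.uint8)
print(bayer_mosaic(scene))
```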


Demosaicing



As the illustration above depicts, colour digital image formation using a Bayer Filter comes at a cost.  Because the initial "Bayer Raw" image consists of a mosaic of green, blue and red coloured pixels, the image must be processed to form a correctly-coloured digital image.  Called demosaicing, this process consists of an algorithm which interpolates the data from adjacent photosites (two green, a red and a blue) to create the full colour picture.

Interpolation involves averaging values, so the process introduces a significant amount of uncertainty.  Some camera manufacturers and raw image editing packages use more complicated algorithms to produce better results.

HERE is a nice blog posting by Adam Hooper, explaining and illustrating the difference between two common types of demosaicing interpolation, Bilinear and Adaptive Homogeneity-Directed (AHD).  Basically, the bilinear method takes no account of the actual image content and simply, blindly averages every pixel.  A more sophisticated algorithm like AHD, on the other hand, follows lines and edges between patches of colour and tries to create better definition with less bleeding of colour across patches.  Consequently AHD involves more processing and is therefore slower at creating an image from RAW.
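
For illustration, here is a very rough numpy sketch of the bilinear idea, applied to the kind of RGGB mosaic built in the sketch above.  It is a simplified toy, not a faithful copy of any camera's or raw converter's pipeline: each missing channel at a pixel is filled in with the average of the nearest photosites that did record that channel.

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(mosaic):
    """Very rough bilinear demosaic of an RGGB mosaic (toy illustration only)."""
    h, w = mosaic.shape
    kernel = np.ones((3, 3))                 # average over the 3x3 neighbourhood
    masks = np.zeros((h, w, 3), dtype=bool)  # which photosites recorded which channel
    masks[0::2, 0::2, 0] = True              # red photosites
    masks[0::2, 1::2, 1] = True              # green photosites
    masks[1::2, 0::2, 1] = True
    masks[1::2, 1::2, 2] = True              # blue photosites

    out = np.zeros((h, w, 3))
    for c in range(3):
        sampled = np.where(masks[..., c], mosaic, 0).astype(float)
        weight = convolve(masks[..., c].astype(float), kernel, mode="mirror")
        estimate = convolve(sampled, kernel, mode="mirror") / weight
        # Keep the recorded value where it exists; interpolate everywhere else.
        out[..., c] = np.where(masks[..., c], mosaic, estimate)
    return out

# Demosaicing a flat mid-grey mosaic simply recovers flat grey; on a real scene
# the same averaging is what softens fine detail and edges.
mosaic = np.full((4, 4), 128, dtype=np.uint8)
print(bilinear_demosaic(mosaic)[..., 1])
```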

In an ideal world each photosite would work like a mini-spectrophotometer, capable of recording a complete spectral analysis of the light hitting it.  Imagine how big that image file would be, not to mention the sophisticated photosite technology required!


Human Raw Vision


When we start to look at the fine workings of a digital camera and processor it is all too easy to become critical about the loss of data and the seemingly heavy processing that is going on.  But, before we get too carried away, let's compare what we have just seen with the workings of the human eye and brain.  If we could somehow zoom into the image that our eyes capture we would probably be no less critical.


The fovea is a tiny spot at the back of the retina, directly opposite the pupil of the eye.  It is packed with cone cells for acute colour vision but contains no rod cells (used for low light or night vision).  If you have ever gazed at a galaxy or comet in the night sky you will have noted that it is easier to observe if you focus on a spot slightly to the side of it.  This is because the cone cells in the fovea have relatively poor low-light sensitivity.  By shifting the focus to the side of an object of interest, the image of the object is projected onto the periphery, which is rich in low-light-sensitive rods.  Suddenly the object materialises, albeit frustratingly blurry and poorly defined.  When we try to centre our vision on the object it appears to vanish again, as the cones cannot register its low light.  As kids we all learnt how to find the blind spots in our eyes, where the optic nerve enters the eye.

In an earlier posting HERE I came up with a way to check one's foveal field of view using a neat scintillating pattern I had found online.  It is really amazing just how narrow and tunnelled our focus actually is, and it is not too surprising that we often miss something that is literally right under our nose.

If we think that the heavy processing going on in the camera is unpalatable, consider what the brain has to do to construct a full colour image from the light hitting such a complex arrangement of structures.  Almost every detail we consciously register comes from the cone cells in the fovea.  Our peripheral cones and rods are active by day as part of our peripheral vision.  Peripheral vision serves to widen our field of view, alerting us to movement and aiding our spatial awareness, but it has little or no conscious role until after dark, when the rods come into their own as our sole means of vision.


Above I have compared what an image of a small, distant triangle might look like if captured exactly as it appears in life (left) with what a normal modern digital camera records (centre) and what an equivalent human retina might see (right).  The digital camera sensor consists of a regular grid of green, blue and red colour photosites.  The ratio of green to blue to red is 2:1:1, which is intended to match the distribution of cone cells in the retina.  Unlike the digital sensor, the cone cells in the retina are arranged at random and vary both in size and shape (surface area exposed to the light).  So the digital image starts out not that dissimilar from a "raw" human visual image.

To the brain the triangle edge must have a very odd and ever-changing shape, as the image projected onto the back of the eyeball does not remain perfectly stationary (like a photograph) but instead moves about constantly in real time (like a video recording) as our head moves relative to the subject.  The brain must process this real-time image and somehow make sense of it.

How much of what we see is real and how much is a construct of the human brain as it tries to fill in gaps?  Using human vision and struggling to make sense of a distant object is not much different from someone trying to make sense of a tiny fuzzy object in a digital image.  Both involve a high degree of uncertainty and there is probably a strong urge to let the brain fill in the missing bits!  On a visit to an optician the Snellen chart quickly reminds us of the limitations of our visual acuity.  What we need, I think, is an equivalent cue for digital image acuity.  With the Image Quality Tool I am advocating Pixel Resolution as one such cue, coupled with Image Focus or Sharpness and an awareness of Image Artefacts.  Together, hopefully, these parameters encourage the observer to stop before rushing towards a rash identification.


Acutance 


Acutance is an intriguing concept which again draws parallels between digital imaging and human vision.  If an image appears sharp our brain will happily accept it as being sharp.  Due to demosaicing, digital images start out slightly soft in appearance.  Unsharp Masking is very effective at increasing the acutance or apparent sharpness of photographs but, as these links highlight, the net effect is actually a loss of image data at the pixel level.  When attempting to make sense of small details in images it is best to start with the original raw image if available, not the final, possibly heavily sharpened image.

It is the combination of image resolution and acutance that gives us image sharpness as neatly explained HERE.

The actual mechanism by which acutance works in photo-finishing appears very similar to the natural visual phenomena of Mach bands and the Cornsweet Illusion.



Moiré 

Moiré is an artefact associated with image resolution.  It can be produced wherever two regularly occurring patterns overlap.  One of these patterns may be the regular grid of photosites making up the image sensor.  Another may be the repeating pattern of lines making up the computer screen image.  Another may be any regular pattern occurring in the digital image itself.  Lastly, moiré may be produced by the repeating pattern within an image processing algorithm.  In bird images it occurs most commonly in the repeating pattern of flight feather fringes on the closed wing.  For more see HERE.


The top left image is of high resolution.  The images to the right of it are reduced in resolution to 25% and 12.5% of the original image size respectively.  At full crop there is no obvious difference between these three images on screen.  However, when zoomed up at roughly 20% crop the differences are obvious.  I have sharpened the images to enhance the moiré pattern.  The pattern in the 100% and 25% resolution images is much the same, consisting of a slight parallel moiré pattern in the primary and secondary fringes.  However, the added pixelation of the 12.5% resolution image adds an additional regular pattern and therefore an extra moiré pattern emerges.  The overall effect is a cross-hatch.
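
For anyone who wants to see the mechanism laid bare, here is a small numpy sketch (a contrived example of my own) in which fine stripes, standing in for tightly packed feather fringes, are reduced in resolution by naive subsampling.  The stripe period beats against the sampling step and a broad false banding appears that was never in the scene, which is essentially what is happening in the wing panels above.

```python
import numpy as np

# Fine horizontal stripes with a period of 6.5 pixels - a stand-in for the
# tightly packed fringes of closed flight feathers.
h, w = 520, 520
y = np.arange(h)
stripes = (128 + 127 * np.sin(2 * np.pi * y / 6.5))[:, None].repeat(w, axis=1)

# Shrink the image by simply keeping every 6th pixel, with no blurring first.
small = stripes[::6, ::6]

# The 6.5-pixel stripe period beats against the 6-pixel sampling step, so the
# downsampled image shows broad false bands with a period of about 13 samples
# (~78 original pixels) - a moire pattern that never existed in the scene.
# Blurring (low-pass filtering) before subsampling suppresses it, which is why
# good resizing algorithms do exactly that.
print(np.round(small[:26, 0]))
```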

Thursday, 7 August 2014

Colour - Summary of UV Imaging

The facts and some serious speculation

In Birds and UV Light I summarised how birds see in UV and also how they display ultraviolet reflection and absorption patterning in their plumage.  Most research into these phenomena to date has been carried out using spectrophotometry rather than digital photography.  At the time of posting there are very few avian UV digital images readily available online, so up until now it has been difficult to imagine what a UV image of a bird actually looks like.  While researching this area I felt it might be possible to address this with minimal modification to some of my existing camera equipment.  The postings HERE and HERE outline how I have achieved this, and in this posting I will reveal some of my preliminary avian UV imaging results.


Imaging techniques using UV


Firstly, I think it is important to discriminate between two UV imaging techniques which actually have very little to do with each other.  UV fluorescence photography is a technique used mainly in the art world for the forensic analysis of paintings.  The method relies on fluorescence, not UV reflectance.  Fluorescence is a phenomenon in which certain materials absorb UV light and re-emit it as visible light.  So the fluorescence is visible to the human eye and can be photographed with any standard digital camera.

UV reflectance is quite different.  UV light, and therefore UV reflectance, is invisible to the human eye.  Recording it requires a specially modified digital camera, which produces monochromatic UV images.  There is no fluorescence involved.


What are Ultraviolet reflectance Images and how are they made?


A typical ultraviolet image is no different to any other digital image except that it is exposed entirely using light from the ultraviolet (UV) portion of the light spectrum, and it is monochromatic (like a B&W image) rather than a full colour image.  UV consists of wavelengths of light which are shorter than those of the visible spectrum.  As they are outside the range of the visible spectrum they are invisible to the human eye under normal circumstances.  However all digital camera sensors are sensitive to at least some wavelengths of UV and therefore, in theory at least, we should be able to create images from ultraviolet light alone using a standard digital camera.


The illustration above helps to explain how an ultraviolet image is formed using the example of a CCD image sensor.  UV must jump a number of hurdles to reach the sensor and generate the image, and these are depicted below.  Firstly, for comparison, here is a diagram showing how a normal colour image is created using a digital colour sensor in most modern digital cameras.


In the case of a camera modified for UV digital imaging, the configuration looks like this.



Firstly, moving from left to right, the Baader U filter blocks out both the visible and infrared, leaving only the ultraviolet portion of light to pass through to the lens.  Most modern lenses have coatings which absorb UV.  However, provided there are minimal lens coatings and few lens elements, a reasonable amount of  UV light should make it right through the lens.

Normally, at this point an IR/UV blocking filter absorbs any UV that has made it through.  However, in the case of the Sony Handycam I have been using, the camera's IR night-vision feature requires that no IR/UV blocking filter be present, so UV gets a free pass.

The UV light is now reaching the outer layers of the sensor.  On the surface of the sensor are microlenses that help focus the light onto individual colour filters in a mosaic structure called the Bayer filter array.  The Bayer array is a mosaic of tiny green, red and blue colour filters.  The purpose of the array is to give colour to digital images.  Each colour filter will only allow light from its own portion of the visible spectrum through while blocking other wavelengths.  These colour filters do not block IR and UV very well, however, hence the need for a separate, dedicated IR/UV blocking filter.

Light which makes it through to the photoactive region of a CCD (or CMOS) sensor, beneath the Bayer filter array, will now register as an image.  Each tiny photoreceptor or 'photosite' is a photodiode which accumulates an electrical charge as photons of light strike it.  That charge, together with a record of the colour of the Bayer filter sitting above it, is ultimately transmitted as raw image data to the image processor.

Note that each pixel in an image is created from the accumulated data of a small grid of four neighbouring photosites, i.e. an individual pixel's hue, saturation and luminosity values are calculated from multiple data points, not just from one photosite.  This is done through a process called demosaicing, as explained HERE.  It is this interpolation that allows each pixel to take on a full range of colours.


Avian UV Reflectance Images - some preliminary results


Compared with the amazing results obtained with flowers (nectar guides) and butterflies (butterfly UV signalling) HERE and HERE, the avian UV reflectance results obtained so far have been uninspiring to say the least.  Almost invariably, there is very little notable difference between monochrome versions of visible-light images (B&W or greyscale) and monochrome UV images of birds.  It has to be said, however, that with its rather drab avifauna the Western Palearctic may not be the best place on earth to be looking for UV patterns in birds.  Most of the interesting UV finds to date have been in the Neotropics.


One of the few results of note so far - in Moorhen, the yellow tip of the bill is invisible in monochrome UV.  The remainder of the bird matches colour reflectance images more or less exactly.


This Mallard and its surroundings look effectively identical in monochrome in both VIS and UV.  Even the structural iridescence in the secondaries is preserved perfectly in UV.

Rather than put up a whole host of avian UV images that appear to show very little, I think it's best to look beyond.  I have yet to record one of the more celebrated UV-reflective species, the Blue Tit (Parus caeruleus), as they are keeping a low profile, it seems, post-breeding.  The sexes of that species are indistinguishable in VIS but in UV the males have a more reflective blue cap.  It would appear that blue colouration is more likely to be associated with strong UV reflectance than many other colours.

So what is going on with birds and UV?  Presumably UV reflectance and absorption is by design rather than by accident.  If the majority of birds are similarly reflective in VIS and UV it might suggest that the primary purpose of UV reflectance in birds is to simply ensure no net loss of plumage brightness.  Does the story end with that dull discovery, or, are we missing something here?  I think we might be.


Limitations of monochromatic imaging


In the colourful, visual world that we inhabit our eyes are masterfully adapted for studying our environment.   Our eyes have three different colour receptor cells (red, green and blue cone cells) plus one receptor adapted for low light or night-time vision (rod cells).  We are programmed to respond differently to different colours.  Our eyes are most sensitive to green.  Red and yellow are used widely in nature to denote danger and our brains are pre-disposed for registering these colours in our environment.  They appear somewhat more vivid than other colours (even if they are not necessarily so in reality).  It is also known that we have a very good ability to discern subtle tonal gradients and we even modify what we see to improve acutance (edge contrast) eg. the phenomena of Mach bands and Cornsweet Illusion.  We are not typically used to studying the world around us in monochrome, so when it comes to the study of UV reflectance we are not really utilising all of our visual acuity and senses. 

Many birds have three cones, just like us, but some species have a fourth cone that appears to have a function in UV vision.  The big question is, what is this cone registering?  Is it purely used to record UV light intensity and reflectance, like our night-vision rods?  Or does it record UV hues in much the same way as our colour cone cells do?

It is very important to stress a key limitation in our study of UV patterning in birds.  Whereas with colour images we can study hue, saturation and luminance, when it comes to monochrome images, we only have luminance to work with.  Studying images in monochrome is somewhat akin to humans trying to study plumage colouration in birds under moonlight.


Because we cannot currently discriminate between different hues and saturations while imaging in the UV spectrum, we should not discount the possibility that we may be missing some fundamental UV plumage patterning in birds.  There could for example be a fringe on the edge of a feather with a UV hue peaking at around 350nm, while the centre of the feather may have a UV hue peaking at 380nm.  For birds able to visualise and discriminate between UV colour wavelengths, this fringe would appear just as obvious to them as the coloured shapes in the example given above appear to us.  If, however, the reflectance and therefore the luminance of the feather is uniform throughout, this pattern will be completely invisible to the UV camera, as illustrated by the colour example below.
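
The same limitation can be demonstrated with visible colours in a few lines of Python (a toy example; the luminance weights are the standard Rec. 601 ones, while the patch colours are arbitrary choices of mine): two patches with clearly different hues but matching luminance collapse to virtually the same grey.

```python
import numpy as np

def luminance(rgb):
    """Rec. 601 luma - roughly what a greyscale conversion records."""
    return rgb @ np.array([0.299, 0.587, 0.114])

orange = np.array([200.0, 100.0, 50.0])   # one hue...
bluish = np.array([50.0, 150.0, 180.0])   # ...a very different hue, similar luma

print(luminance(orange))   # ~124.2
print(luminance(bluish))   # ~123.5
# In a monochrome image both patches come out as near-identical greys: the hue
# difference is invisible.  A monochrome UV image discards UV "hue" in exactly
# the same way, so a feather fringe peaking at 350nm and a feather centre
# peaking at 380nm could look identical if their overall reflectance matches.
```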


Spectrophotometry offers some advantages over UV photography in that the spectrophotometer can detect peaks at different wavelengths of UV.  However, the method has its own limitations, because the data gathered by such a device is based on point sampling.  If the data is all collected from only one or two points on each feather, chances are that subtle markings around the edges or tip of the feather may be missed.  Scientists have searched for hidden UV patterning in feathers using spectrophotometers, so what I am saying here is not new.  It is just possible that variations in UV within feathers are rare or non-existent.  Have we another way of looking for it?

I think the ultimate UV imaging solution would be a bespoke UV imaging camera with false colour gradients set at discrete UV wavelength increments.  This type of device already exists for far-infrared work, eg. the IR thermographic camera.  Until such a device is readily available for UV we will have to make do with UV reflectance and spectrophotometry, but we should remember that we may only be seeing a small part of the whole picture.  UV imaging is very much at the stage that photography was before the transition from B&W to colour, with the added disadvantage of course that, as humans, we can't see in UV!

Here is how a bespoke UV imaging camera might work if it were based on the same principles as normal digital cameras.


Here is a mock-up of what an image from such a device might look like.  So far I have not found any evidence that a camera like this has been manufactured, but it should certainly be possible.  Unfortunately UV imaging doesn't have the same uses or mass market appeal as an IR thermographic camera.  IR thermography is a growing business thanks to its applications in energy management, fire prevention, preventative maintenance, security and the rescue services.  It is hard to envisage too many uses for a UV imaging camera outside of very specialist industrial applications.  Then again, when such a camera is developed, I have no doubt there will be ingenious uses found for it.
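
Purely as a thought experiment, here is a sketch of how the output of such a device might be assembled in software: three monochrome exposures, taken through hypothetical narrow UV band-pass filters, mapped onto the red, green and blue channels of a false-colour composite.  The band centres (320nm, 350nm and 380nm) and the band-to-channel mapping are entirely my own invention.

```python
import numpy as np

def false_colour_uv(band_320, band_350, band_380):
    """Combine three hypothetical narrow-band UV exposures (H x W arrays, 0-255)
    into one false-colour image, in the spirit of an IR thermographic camera."""
    composite = np.stack([band_380,   # longest UV band shown as red
                          band_350,   # middle band shown as green
                          band_320],  # shortest band shown as blue
                         axis=-1)
    return composite.astype(np.uint8)

# Three flat test exposures stand in for real narrow-band UV images.
bands = [np.full((100, 100), v, dtype=np.uint8) for v in (30, 120, 220)]
img = false_colour_uv(*bands)
print(img.shape, img[0, 0])   # (100, 100, 3) [220 120  30]

# A feather whose fringe and centre reflect equally overall, but in different
# UV bands, would show up as two distinct false colours in such a composite,
# even though a single monochrome UV image would render them identically.
```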


The 'colour' of UV light (pure speculation)


Considering that the UV spectrum (100nm - 400nm) is nearly as wide in range as the visible spectrum (400nm - 700nm), surely animals and birds that see in UV actually see a range of UV hues, and perhaps even perceive colours which we simply cannot comprehend?

Perhaps a clue can be found in the design of the avian eye.  Some birds possess a fourth cone cell and this seems to function in UV vision.  So, if the bird uses three cone cells for the visual spectrum (green, blue and red) and an additional one for UV, it seems reasonable that birds with a UV cone cell should be able to discriminate between various hues of UV light in much the same way as we distinguish subtle hues in the visible spectrum.  Put another way, it would seem like a waste of valuable resources to equip oneself with a receptor for UV only to use it to register UV intensity alone.

If we could see what birds see, what would the colour of UV look like?  Okay, so the next statement has no basis in fact, but bear with me.  Could UV be magenta?

It seems weird that we have this one broad set of hues (the magenta scale) that don't have a place in the visible spectrum and yet we are surrounded by pinks and magenta's in nature.  It also seems highly convenient that magenta sits right where UV should be, beside violet on the majestic colour wheel, the foundation stone of colour theory.  Could it be that while most animals were busy losing the ability to see in UV they were at the same time gaining the ability to see in magenta, and without the need for an additional receptor in the retina?  Lastly, an evolutionary reason.  UV is damaging to life on earth - why waste time on a visual system that encourages us to look at UV, a potentially life-limiting form of radiation.