FAQ: Assessment of empty magnification using GIMP


Bushman.K
Posts: 162
Joined: Wed Aug 26, 2015 5:49 pm
Location: OR, USA

FAQ: Assessment of empty magnification using GIMP

Post by Bushman.K »

Every optical setup intended for macro- or microphotography has its own magnification limit. For example, a lens mounted on macro bellows can be used effectively only within a certain range of extension. Longer extension gives a larger image, but the amount of detail visible in it stays the same, so it is impossible to see details smaller than a certain minimum size. Magnification beyond this limit is often called "empty magnification".

This short article explains how to check any image for empty magnification. It also helps to assess images with focusing issues, because incorrect focus has a similar result: a lack of small (1-2 pixel) details.

Method

The method is based on so-called "wavelet decomposition". This mathematical technique splits an image into scale planes, where each plane contains details of a specific size: 1 pixel, 2 pixels, 4 pixels and so on (powers of 2, starting from 2^0).

With all details sorted by size and split into planes, it is possible to check whether the planes supposed to contain the smallest details are empty or not. Keep in mind that any image also contains a certain amount of noise, originating from the camera sensor, for example. The smallest-details plane will therefore always contain something, but it may be just noise. This plane can also contain far fewer details than the others, which means the optical system is working near its limit, but not beyond it.
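Outside GIMP, the idea can be sketched in a few lines. This is a rough stand-in, not the plugin's actual algorithm: GIMP's plugin uses an a-trous wavelet transform, while this sketch (Python with numpy/scipy assumed) uses differences of Gaussian blurs to produce scale planes that sum back to the original image:

```python
# Scale-plane decomposition sketch: each plane holds detail at roughly
# 1, 2, 4, ... pixels, and the planes plus the residual reconstruct the
# original exactly. Differences of Gaussians approximate wavelet planes.
import numpy as np
from scipy.ndimage import gaussian_filter

def scale_planes(img, n_scales=5):
    """Split `img` into detail planes at ~2**k pixel scales plus a residual."""
    img = img.astype(np.float64)
    planes = []
    previous = img
    for k in range(n_scales):
        blurred = gaussian_filter(img, sigma=2 ** k)
        planes.append(previous - blurred)   # detail at ~2**k px
        previous = blurred
    planes.append(previous)                 # residual (coarse remainder)
    return planes

img = np.random.rand(64, 64)
planes = scale_planes(img)
assert np.allclose(np.sum(planes, axis=0), img)  # planes + residual = original
```

The key property illustrated is the exact reconstruction: because the sum telescopes back to the original image, each plane isolates detail of roughly its own scale and nothing is lost in the split.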

Software

This method uses the free, cross-platform image editor GIMP. Both ready-to-use binary distributions for different operating systems (Windows, Mac, Linux) and source code are available (compiling from source is only recommended for experienced programmers).

You will also need the Wavelet decompose plugin installed.

Windows and Mac OS users can obtain installers from Partha's Place website. That distribution already includes the Wavelet decompose plugin, so you do not need to compile it.

Install GIMP from one of these distributions, make sure it has the Wavelet decompose plugin, and you are good to go.

Instructions

It makes no sense to assess combined (focus-stacked) images, because stacking software obviously distorts the result.

Here, I will use a cropped portion of a photo of spider skin, taken with a lens mounted on bellows:
Image

Open the image in GIMP and go to the Filters - Generic - Wavelet decompose... menu:
Image

Technically, you need only a couple of fine-detail planes, but it makes sense to generate five planes containing details from 1 px to 16 px in size. To do that, use these settings:
Image

You will get six new layers (five planes plus "Residuals"). Turn off all of them except "Wavelet scale 1", the finest-details layer. It will look like this:
Image

You can probably barely see anything there (in this particular case, the optical system is working at its limit, so there are not many details). Let's check whether a larger-details layer contains more.
To do that, turn off "Wavelet scale 1" and turn on "Wavelet scale 2":
Image
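This visual layer comparison can be given a rough numeric companion, under assumptions I should flag: Python with numpy/scipy stands in for GIMP here, and differences of Gaussians stand in for true wavelet planes. If the finest plane has much less spread than the next one, it is mostly empty or noise:

```python
# Compare the spread (standard deviation) of the ~1 px and ~2 px detail
# planes of an image that has been blurred past its "resolution limit".
# The finest plane should carry noticeably less energy than the next one.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
sharp = rng.random((128, 128))
soft = gaussian_filter(sharp, sigma=2.0)   # simulate a lens past its limit

plane1 = soft - gaussian_filter(soft, 1.0)                        # ~1 px details
plane2 = gaussian_filter(soft, 1.0) - gaussian_filter(soft, 2.0)  # ~2 px details

ratio = float(np.std(plane1)) / float(np.std(plane2))
print(ratio < 1.0)   # finest plane carries less energy than the next scale
```

A ratio well below 1 corresponds to the situation described in the tutorial: the scale 1 layer is nearly empty while scale 2 still holds real detail.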

The second layer obviously reveals more details. But let's get back to the finest layer (turn off "Wavelet scale 2" and turn on "Wavelet scale 1"). Using conventional image enhancement methods, it is possible to make the details in "Wavelet scale 1" more pronounced and easier to assess.
To do that, select the "Wavelet scale 1" layer in the list (click on it), go to the Colours - Levels... menu and use the Auto button to apply automatic channel equalization to the currently selected layer:
Image

Here is the resulting enhanced image:
Image

As you can see, it does contain a lot of noise, but there are also details of the legs. This means the image does not suffer from empty magnification.
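For reference, here is a minimal sketch of what an "Auto" levels stretch does to such a layer. GIMP's actual Auto button may also clip a small percentile at each end; this version (Python with numpy assumed) simply stretches min to max:

```python
# Stretch a channel so its darkest and brightest values span 0..255,
# making the faint contents of a scale 1 plane visible for inspection.
import numpy as np

def auto_levels(channel):
    c = channel.astype(np.float64)
    lo, hi = c.min(), c.max()
    if hi == lo:                      # flat channel: nothing to stretch
        return np.zeros_like(channel)
    return ((c - lo) / (hi - lo) * 255).astype(np.uint8)

# A scale 1 plane centered near mid-gray with a tiny amplitude:
rng = np.random.default_rng(1)
plane = np.clip(rng.normal(127, 3, (8, 8)), 118, 136).astype(np.uint8)
stretched = auto_levels(plane)
print(stretched.min(), stretched.max())   # → 0 255
```

The stretch changes only visibility, not information content, which is why it works as an inspection aid rather than a processing step.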

Notes
Adobe Photoshop and ImageJ (Fiji), though widely used for image processing, do not have any similar tool. There are ways to approximate this method in them, using a Fast Fourier Transform or the KPT Equalizer plugin for Photoshop (no longer supported and incompatible with 64-bit versions of Photoshop), but I will most likely be unable to answer questions about using this method with software other than GIMP.

Wavelet decomposition can also be used effectively for detail enhancement: increasing the contrast of a detail plane makes details of the corresponding size more pronounced after fusing the layers back together. Local contrast manipulation (such as applying Gaussian blur to selected empty areas of the "Wavelet scale 1" layer) is an effective method of noise reduction.
Last edited by Bushman.K on Wed Dec 16, 2015 2:11 pm, edited 1 time in total.

Joyful
Posts: 143
Joined: Thu Feb 19, 2015 4:15 am
Location: Cape Town, South Africa

Post by Joyful »

Many thanks Bushman for a clear tutorial !

joyful

Malcolm Storey
Posts: 33
Joined: Tue Oct 11, 2011 5:13 am
Location: Dorset, UK

Does this actually work?

Post by Malcolm Storey »

I have to say I'm not quite convinced by this technique.

After trying a variety of images, there always seems to be information at scale 1 that only becomes visible after extreme manipulation of levels. Sometimes it contains fine detail like your example (which I presume means it's real detail); other times the detail is quite gross (which perhaps means it's mostly mathematical differences [see below]). Anyway, the result is never clear-cut, so it's at best a judgment call.

If you think about the maths behind it: an image with empty magnification contains interpolated values to fill the gaps between the "real" pixels. (I know it's not quite like that, but near enough.) Wavelet transform must make its own assumptions (presumably sine waves) about the shape of the interpolation. If your lens, sensor and camera software happen to generate a different curve, then there will be a scale 1 image composed of the difference between the two curves.

Think of it another way: if you upscale an image in PhotoShop you can choose from several different interpolation techniques. At most only one of these will match what Wavelet Transform expects and the others will presumably show false detail.
You wait all this time for a coincidence, then two come along at once...

Bushman.K
Posts: 162
Joined: Wed Aug 26, 2015 5:49 pm
Location: OR, USA

Post by Bushman.K »

Malcolm,

The nature of a digital photographic image, obtained with current technology, means there will always be something left in the smallest scale plane.

I didn't say anything about a "clear-cut" result; that is impossible, if only because there will always be noise, various artifacts and so on. This method doesn't give a direct binary "yes/no" answer to whether a particular picture was taken with insufficient spatial resolution. It gives you a tool to evaluate the amount of useful detail.

In the picture I was using, the Wavelet scale 1 layer has a minimum value of 118 and a maximum of 136, so the amplitude is 18, or about 7% of the 8-bit range. Strictly, I should then compare this with the noise level in the same image by sampling areas where no details should be found. But I'm too lazy, so let's assume noise makes up half of that amplitude (SNR = 2); the "normalized" amount of detail is then about 3.5%. Simply put, this picture contains fine details whose local contrast is about 3.5% of the full brightness scale of an 8-bit image. From that point it's up to you to decide whether 3.5% of finest details is an improvement over not having them at all, or insignificant.
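That paragraph's arithmetic, spelled out (the 50% noise share is the author's stated rough assumption, not a measurement):

```python
# Local contrast of the finest plane as a fraction of the 8-bit range.
lo, hi = 118, 136                 # min/max of the Wavelet scale 1 layer
amplitude = hi - lo               # 18 levels
percent = amplitude / 255 * 100   # share of the full 8-bit scale
useful = percent / 2              # assume half of the amplitude is noise
print(round(percent, 1), round(useful, 2))  # → 7.1 3.53
```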

An image with empty magnification produced by a "magnification-limited" system (with too much extension for a particular lens, for example) does not contain anything like "interpolated pixels". It simply lacks spatial resolution, because the optical system cannot project an image without "mixing" rays from closely spaced features. The only two effects capable of introducing false details in this case are diffraction and flare (internal reflections); I will say more about them below.

Your further example with a digitally enlarged image is somewhat irrelevant here, because I'm talking only about assessing an optical system for magnification (not about a diffraction-limited system, let alone a digitally manipulated image).

Indeed, diffraction in the same optical system can introduce non-existent fine details, and flare can do that too. But again, it's up to you, as the operator of this tool, to use your knowledge to determine whether certain details are natural, artificial, or introduced in some other way.

rjlittlefield
Site Admin
Posts: 23562
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA

Post by rjlittlefield »

Bushman.K wrote:Image with empty magnification produced by "magnification-limited" system (with too long extension for particular lens, for example) does not have anything like "interpolated pixels".
Sure it does, at least if you're looking at the output of raw --> RGB conversion. In that case each channel necessarily contains interpolated components at each pixel position where the Bayer filter had a different color. In addition there was probably a color profile change that makes each RGB in the image file be some linear combination of the interpolated RGB sensor data. I'd be a little surprised if there were any non-interpolated values at all in a typical TIFF file.

--Rik

Bushman.K
Posts: 162
Joined: Wed Aug 26, 2015 5:49 pm
Location: OR, USA

Post by Bushman.K »

Rik,

It is not the kind of interpolation Malcolm was talking about, as I understand it. Or at least, it is the same interpolation every digital photo has, regardless of the optical system, since almost every camera has a Bayer pattern and so on. But a regular camera lens, moved too far from the camera on bellows, does not "interpolate" anything to fill "empty" pixels on the sensor.

rjlittlefield
Site Admin
Posts: 23562
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA

Post by rjlittlefield »

Bushman.K wrote:It is not that kind of interpolation Malcolm was talking about, as I understand that.
I'm not sure exactly what Malcolm was talking about.

But the essence of his concern, as I understand it, is that the method as described so far simply gives the wrong answer to a newbie who tries to use it to tell whether their setup is operating beyond the diffraction limit.

I share that concern. After all, this thread is currently posted in the FAQs section. That's a place where people go to get solid, reliable, usable answers to their questions. I don't think this one is quite there yet.

Let me illustrate my concerns.

I set up this evening with a Canon T1i camera, using an Olympus 135 mm f/4.5 bellows lens, shooting a picture of a standard USAF resolution chart placed far enough away that about half its bars would be below theoretical sensor resolution even with no diffraction. For GIMP testing, I shot in raw format and processed through Lightroom, setting 0 for sharpening and noise reduction, and saving as 8-bit TIFF in sRGB.

Here's an overview of what the full frame looks like:

Image

At f/11, here's what we see at 100% (actual pixels):

Image

Then I used an external 2 mm aperture to stop down the lens to f/67.

That changed the image to this:

Image

I think we can agree that this f/67 image is well beyond the diffraction limit for a 15 megapixel APS-C sensor.

Then I downloaded a fresh copy of Gimp 2.8.14 from http://www.partha.com/, and simply went through your procedure as described above, following the instructions as precisely as I could.

Here's what I saw.

Original image.
Image

Wavelet scale 1, before levels adjustment.
Image

Wavelet scale 1, during levels adjustment, showing histogram.
Image

Wavelet scale 1, after "Auto" levels adjustment.
Image

Now, to a newbie eye, this last image pretty clearly shows subject detail, leading to an obviously incorrect conclusion that the image is not over-sampled.

Why does subject detail appear in the wavelet scale 1 image?

Even more puzzling, why does that detail appear to be a noisy echo of what appears 2 levels away, up in scale 3?
Image

I do not know the answers to these questions. In standard theory it shouldn't do that, but obviously it does.

In any case, it seems to me that recommending to do an auto levels adjustment is somehow wrong.

Perhaps that bit of the operation might be presented as just a way to call the user's attention to places in the image that ought to then be considered in their original form, without the levels adjust. I can't actually see any detail in my scale 1 image without the adjustment, where with yours, I can see detail once my attention has been drawn to it.

--Rik

Malcolm Storey
Posts: 33
Joined: Tue Oct 11, 2011 5:13 am
Location: Dorset, UK

Clarification

Post by Malcolm Storey »

Sorry I wasn't clear.

An image with a factor-of-two empty magnification still has the full number of pixels. In some sense only 25% of the pixels (i.e. half in each of two dimensions) contain information, and the other 75% are "interpolated" by the physics of the lens. I realise the information is actually smeared across all the pixels, but the mathematics of this smearing is unlikely to exactly match the mathematical surface expected by the scale 2 wavelet transform.

My suggestion was that the differences between the two curves would be more marked in areas with rapid change of brightness so could produce a ghost of the information.
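Malcolm's point can be probed with a toy experiment. To be clear about the assumptions: this is Python with numpy/scipy, nearest-neighbour upscaling via np.kron stands in for "empty magnification", and a difference of Gaussians stands in for the scale 1 plane. The upscaled image holds no true 1 px information, yet its finest plane is clearly nonzero:

```python
# Upscale an image 2x with nearest-neighbor replication, then check the
# finest-scale plane. The interpolation's block edges leave a residual
# that could be mistaken for genuine fine detail.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(2)
small = rng.random((32, 32))
big = np.kron(small, np.ones((2, 2)))        # 64x64, no true 1 px detail
plane1 = big - gaussian_filter(big, 1.0)     # finest-scale residual
print(float(np.std(plane1)) > 0.01)          # residual is clearly nonzero
```

This is consistent with the "ghost of the information" idea: the residual concentrates at block boundaries, i.e. exactly where the underlying image changes fastest.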
You wait all this time for a coincidence, then two come along at once...

Bushman.K
Posts: 162
Joined: Wed Aug 26, 2015 5:49 pm
Location: OR, USA

Post by Bushman.K »

Rik,

I did not recommend auto levels as the final step of the procedure; it is only an enhanced illustration of what the 1-pixel scale layer contains. Auto-adjusting the levels merely helps you see what is there; the user's next step is to think about what he sees and decide whether the "unevenness" visible in the 1-pixel scale layer is useful information or noise/artifacts of some kind.

In the case of your f/67 example, it is enough to compare the 1-pixel and 2-pixel scale layers to see that the ghosts you are talking about are close to the noise level; one would have to abandon common-sense judgement to misinterpret them as useful information.

This was never intended to be an extensive guide for newbies. And if you think only articles of that grade belong here, then let's just delete it, since I'm not claiming I can turn this into such a guide.

Joyful
Posts: 143
Joined: Thu Feb 19, 2015 4:15 am
Location: Cape Town, South Africa

Post by Joyful »

Hi Guys -

Newbie here - Please do not delete it - move it back to where it started originally. I have personally found it very useful, and through it, have discovered Gimp - something I had heard about, but never used.

Although my level of knowledge is nowhere near you math gurus', I have still found it helpful in expanding my knowledge.

Joyful

Malcolm Storey
Posts: 33
Joined: Tue Oct 11, 2011 5:13 am
Location: Dorset, UK

Apologies

Post by Malcolm Storey »

Bushman's right that he used auto-levels just to demonstrate that the detail related to the subject, rather than as the final step of the procedure, but you do have to read it fairly carefully (aka "properly"!) to pick that up. My apologies for reading it too fast and diving in.

What we're really interested in here is the variance at different scales.

Has anybody tried two-dimensional anovar? I'm not even sure that's the correct name; we used to call it Southwood's method (but he came up with so many!). Anyway, I've just downloaded his "Ecological Methods" and gone through it, but couldn't find it.

It was a sums-of-squares method apportioning the variance to a series of square grids, doubling the size in each successive grid (ie combining blocks of 4 squares). The result was a graph of variance (or at least, of sums of squares) against (log) grid size.

If you were prepared to forgo statistical correctness, you could also evaluate the intermediate grid sizes, and use different starting points both to include all the data when the image doesn't exactly match the grid and to reduce any interference between the grid and a regular pattern in the subject.
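Here is a guess at the grid method described above. This is my reconstruction of a Greig-Smith-style block-variance analysis (Python with numpy assumed), not necessarily Southwood's exact procedure: average the image over square blocks of doubling size and record the variance among the block means at each size.

```python
# Variance of block means at block sizes 1, 2, 4, ... pixels. For an
# image whose detail lives at the pixel scale (e.g. white noise), the
# variance falls rapidly as the block size grows; for an image with
# only coarse detail, it stays high until the blocks exceed that scale.
import numpy as np

def block_variances(img, max_power=4):
    """Map block size (1, 2, 4, ...) to variance of the block means."""
    img = np.asarray(img, dtype=np.float64)
    out = {}
    for k in range(max_power + 1):
        b = 2 ** k
        h, w = (img.shape[0] // b) * b, (img.shape[1] // b) * b
        blocks = img[:h, :w].reshape(h // b, b, w // b, b).mean(axis=(1, 3))
        out[b] = float(np.var(blocks))
    return out

rng = np.random.default_rng(3)
noise = rng.random((64, 64))          # pure pixel-scale "detail"
print(block_variances(noise))         # variance falls as block size grows
```

Plotting these variances against log block size gives the graph Malcolm describes, with the drop-off indicating the scale below which the image carries no information.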
You wait all this time for a coincidence, then two come along at once...

rjlittlefield
Site Admin
Posts: 23562
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA

Post by rjlittlefield »

Regarding what was or wasn't written...
Bushman.K wrote:I did not recommend doing auto levels as a final step of the procedure
Malcolm Storey wrote:Bushman's right that he used auto-levels just to demonstrate that the detail related to the subject, rather than as the final step of the procedure
I must be missing some trick about how to read the post. I'm looking at it again right now, and here's what I see:
...
But let's get back to the finest layer (turn off "Wavelet scale 2" and turn on "Wavelet scale 1"). Using conventional image enhancement methods, it's possible to make details from "Wavelet scale 1" more pronounced and easier to assess.
To do that, select "Wavelet scale 1" layer in list (click on it), go to Colours - Levels... menu and use Auto button to apply automatic channel equalization to currently selected layer:

(inline display of https://habrastorage.org/files/cb1/c9b/ ... c88273.jpg)

Here is the resulting enhanced image:

(inline display of https://habrastorage.org/files/f22/946/ ... ac2051.jpg)

As you can see, it does contain a lot of noise, but there are also details of legs. It means, this image doesn't suffer from "empty magnification" effect.

Notes
Adobe Photoshop or ImageJ (Fiji), widely used for image processing, do not have any similar tools.
...
If appearing as the last instruction and inference is not "final step of the procedure", then I don't know what is.

I realize that the writer may not have intended to recommend this as a final step, but the reader learning this technique will not know what's intended, only what's written.

Anyway, back to the content...
Bushman.K wrote:so it's necessary to reject any common sense judgement to misinterpret [Rik's enhanced scale 1 image] as useful information.
The difficulty I have with that argument is that somebody new to wavelet decomposition will not have any "common sense judgement" that applies to it.

So then what criteria should a person new to this technique use to make his or her judgement?

The one I would propose is visual evaluation, using a "flash to compare" technique to see what the wavelet scale 1 layer does or does not contribute to the final image.

For my extreme diffraction test, here is what the first three scales look like, as animated GIF with 1.5 seconds per frame:

Image

To my eye, it seems clear that adding scale 3 contributes new detail, adding scale 2 increases the contrast of that detail without adding anything finer, and adding scale 1 just adds noise. That suggests to me that the diffraction cutoff must lie somewhere between scales 3 and 2, and I can't tell exactly where. (Crosscheck: assuming 450 nm blue light as the relevant wavelength, I calculate cutoff at 6.4 pixels/cycle, reassuringly halfway between the 4 pixels/cycle of scale 2 and 8 pixels/cycle of scale 3.)
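Rik's crosscheck can be worked through numerically. The 450 nm wavelength and f/67 are from the post; the ~4.7 µm pixel pitch for a 15 MP APS-C sensor and the standard cutoff formula (cutoff frequency = 1/(λN)) are my assumptions:

```python
# Diffraction cutoff for an f/67 aperture in 450 nm light, expressed in
# pixels per cycle for a ~4.7 um pixel pitch.
wavelength_mm = 450e-6        # 450 nm in mm
f_number = 67
pixel_mm = 4.7e-3             # ~4.7 um pixel pitch (assumed)

cutoff_cycles_per_mm = 1 / (wavelength_mm * f_number)    # ~33 cycles/mm
pixels_per_cycle = 1 / (cutoff_cycles_per_mm * pixel_mm)
print(round(pixels_per_cycle, 1))   # → 6.4, between scale 2 and scale 3
```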
Malcolm Storey wrote:What we're really interested in here is the variance at different scales.
Without a standardized test pattern, I don't understand how variance is going to help. We could try to standardize a test pattern, but then there's the problem of how to reproduce it over the wide range of sizes needed.

If we want a direct numerical test, then I think it would make more sense to look at the slant-edge technique. It has some practical difficulties too, but at least there a single well-made target can cover a wide range of magnifications.

But I'm not really interested in a direct numerical test.

To me, the most interesting thing about GIMP's wavelet decomposition is all the other stuff you can do with it, notably editing the layers separately so as to target things like cleaning up skin texture.

I had not been aware that this capability was integrated so nicely, so I am greatly appreciating learning about it.

--Rik

Joyful
Posts: 143
Joined: Thu Feb 19, 2015 4:15 am
Location: Cape Town, South Africa

Post by Joyful »

Good Morning folk -

As a newbie who blindly followed the written word, I indeed took the instructions as a "Procedure", but then explored further by switching off all layers except level 1 (auto-adjusted) and the original image, which I then exported, and depending on the image found it nicely sharpened.

I then switched off all layers except level 1 (flattened or not, I can't remember), exported the image, took it into ACR, fiddled with all the sliders, and ended up with my most "artistic" image, which Rik then helped me with.

So, on balance, I have definitely learned from this post, correct or not!

THANKS ALL

Joyful

rjlittlefield
Site Admin
Posts: 23562
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA

Post by rjlittlefield »

Joyful wrote:I then switched off all layers except for level 1 then (flattened or not - can't remember) then exported the image, took it into ACR, fiddled all sliders, and ended up with my most "artistic" image, which Rik then helped me with.
For those who missed that interesting diversion, see HERE. It turns out that for files with very strong detail at the pixel level, Photoshop previews at less than 100% can look wildly different depending on whether the image is layered or not, and whether it's the main body of Photoshop or the ACR module that's doing the preview.

--Rik
