What causes halo?


mlackey
Posts: 24
Joined: Sun Jan 08, 2012 4:48 pm

What causes halo?

Post by mlackey »

Initially I was not sure where to post this question, but since software is now as much a part of imaging as any macro lens, I decided that "equipment" was as good a place as any.

I recently purchased a copy of Zerene Stacker. I am still in the "learning phase". I can now see and identify "halo" in my pics, and am successfully using the "retouching tool" to remove it.

My real question is: what causes "halo"? My perception is that it's a limitation of algorithms in general, in that no algorithm can ever be as good as the human eye. The algorithm functions as designed, based on the latest technology, but sometimes makes a less-than-perfect decision.

As a software developer of over 30 years, I have no complaints whatsoever, and I do not mean to convey negativity. I am just curious.

Mike Lackey
Madison, AL
Best regards,
Mike

rjlittlefield
Site Admin
Posts: 24061
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

Mike, thanks for posting this question so we could cover this publicly.

The first part of the explanation is a fair amount of background information.

For starters, look at three images at the beginning of this section of the Zerene Stacker DMap tutorial:
http://zerenesystems.com/cms/stacker/do ... imitations

As explained there, the basic problem is that when the foreground goes out of focus (OOF), its blur expands to cover nearby parts of the background. As a result, those portions of the background are never seen both focused and uncontaminated: they're either uncontaminated but out of focus (when focus is on the foreground), or focused but contaminated (when focus is on the background).

A live human viewer looking at a scene will not be aware of this basic problem because live humans avoid looking at background near foreground edges. If interested in the background, the human (and any other live critter) will simply move its head so as to get an unobstructed view. If that's not possible, then the situation gets written off as "can't get a clear look".

Similarly, a live human painting a picture of the scene for someone else to view will not actually paint what they see. Instead, they will paint a synthesis that represents what they think someone would see. In that synthesis, the background may be painted sharp and uncontaminated right up to the edge of a foreground object, but that's only possible because either the artist moved their head to see it uncontaminated, or they inferred from other background what that stuff next to the foreground should look like.

It's not hard to imagine an algorithm that would imitate the painter's second approach: infer from the stack of input images what the background should look like, and create an output image showing the result of that inference.

Unfortunately that level of smarts is far beyond any current stacking algorithm.

Instead, our current algorithms sort of just "pick and choose" from information that's explicitly present in the source images. For DMap and similar algorithms, that means choosing pixels or simple averages of pixels plucked from source images that are deemed to be "in focus" at those locations. PMax is more complicated, but still at heart it's plucking out information that is explicitly present in the source images.
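
To make "pick and choose" concrete, here is a bare-bones sketch (in Python, and emphatically not Zerene's actual code) of a DMap-style selection rule: estimate local contrast in every frame and, at each pixel, copy the value from whichever frame scores highest.

Code: Select all
# Bare-bones sketch of a DMap-style selection rule (not Zerene's code).
# At each pixel, copy the value from whichever frame has the highest
# local contrast; nothing new is ever synthesized.
import numpy as np
from scipy.ndimage import gaussian_filter

def naive_depth_map_stack(frames, sigma=2.0):
    """frames: list of 2-D grayscale arrays, all the same shape."""
    stack = np.stack([f.astype(float) for f in frames])        # (n, h, w)
    # Local contrast: smoothed energy of the high-pass residual.
    highpass = stack - gaussian_filter(stack, sigma=(0, sigma, sigma))
    contrast = gaussian_filter(highpass ** 2, sigma=(0, sigma, sigma))
    best = np.argmax(contrast, axis=0)           # index of the "sharpest" frame
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]               # pick pixels, no synthesis

Notice that nothing here ever creates a pixel value that was not already present in some source frame. Near a foreground edge, the "sharpest" choice for a background pixel is often one that is already contaminated by foreground blur, and that is where the halo comes from.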

Now let's go back to the question you asked: "what causes halo?"

My current answer is that halo is introduced by the optics of OOF blurs and is transformed by the software into a variety of forms that no longer look like simple blurring. There are several forms of halo, including obvious remnants of OOF blurs (by DMap), loss-of-detail regions around foreground edges (also by DMap), and inversion halos (by PMax).

Certainly a lot of halos -- maybe most of them -- can be accurately described as the software making less than perfect decisions. If you see a halo around a simple subject in front of a uniform background, that's just a mistake -- perfectly good data was present in the source images and the software didn't select it.

But when you see a halo on non-uniform background, that's not a simple mistake because what you'd like to see actually never appears in the source images. It would have to be constructed by some complicated synthesis, not simple selection, and that sort of synthesis is not present in today's software.

Does this help?

--Rik

mlackey
Posts: 24
Joined: Sun Jan 08, 2012 4:48 pm

Post by mlackey »

rjlittlefield wrote:It would have to be constructed by some complicated synthesis, not simple selection, and that sort of synthesis is not present in today's software.

Does this help?

--Rik
Yes, absolutely. Thanks for the bandwidth, and for taking time to educate.

Mike
Best regards,
Mike

rjlittlefield
Site Admin
Posts: 24061
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

Here's a diagram that illustrates the problem.

What's shown here is a side view and two top views of a white sphere sitting on a dark textured plane, viewed with a roughly NA 0.3 microscope objective.

One of the two top views is "What you want" -- a uniformly contrasty background right up to the edge of the white sphere.

The other top view is "What you get" -- a white sphere surrounded by contaminated background.

Image

This contamination looks like classic flare caused by a dirty lens, but that's not the case at all.

What's really happening is just the effect described earlier -- when the lens is focused on the background near the foreground object, roughly half the entrance cone sees OOF foreground instead of focused background.

The result is what we might call "optical contamination" in which colors and brightness of the foreground bleed into nearby background. The effect is particularly bad when the foreground is bright and the background is dark, because in that case most of the light actually comes from OOF bright foreground even though you're focused on the dark background. In extreme cases the contamination can swamp real detail in the background, resulting in a halo of blur as well as altered colors and contrasts.
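
As a rough numerical illustration (a toy model only, not something any stacking program computes), the recorded value on the background next to a bright edge is approximately a weighted mix of the true background and the blurred foreground, weighted by how much of the entrance cone the foreground blocks:

Code: Select all
# Toy model only: what a "background" pixel records near a bright edge
# when focus is on the background.  f is the fraction of the entrance
# cone blocked by out-of-focus foreground (roughly 0.5 right at the edge).
def contaminated_background(bg_true, fg_blur, f):
    return f * fg_blur + (1.0 - f) * bg_true

# Dark background (0.10) next to a bright sphere (0.90), half the cone
# blocked: the pixel records 0.50, five times the true background value,
# so real background detail is easily swamped.
print(contaminated_background(0.10, 0.90, 0.5))   # -> 0.5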

Sad to say, there are no good cures for this problem.

In a theoretically perfect world, the contamination could be removed computationally by a form of 3D deconvolution, essentially figuring out the 3D structure of the subject, computing the amount of contamination caused by the effect shown here, then subtracting out the contamination so as to restore the original background colors and contrasts.

Unfortunately, it's very difficult to actually do this in practice. I am not aware of good demonstrations even in a research environment.

--Rik

johan
Posts: 1005
Joined: Tue Sep 06, 2011 7:39 am
Contact:

Post by johan »

Gosh, what a good clear explanation. Hat off to you.
My extreme-macro.co.uk site, a learning site. Your comments and input there would be gratefully appreciated.

rjlittlefield
Site Admin
Posts: 24061
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

Um, thanks! And it's only taken me, what, 4+ years (?!) to draw a good diagram of this effect. :? :)

To be fair, I have vague memories of having drawn something like this before, but I can't seem to remember where I posted it, if at all. Earlier text descriptions are easier to find but (I hope) harder to understand. There's one of them HERE.

--Rik

Chris S.
Site Admin
Posts: 4104
Joined: Sun Apr 05, 2009 9:55 pm
Location: Ohio, USA

Post by Chris S. »

rjlittlefield wrote:The effect is particularly bad when the foreground is bright and the background is dark, because in that case most of the light actually comes from OOF bright foreground even though you're focused on the dark background. In extreme cases the contamination can swamp real detail in the background, resulting in a halo of blur as well as altered colors and contrasts.

Sad to say, there are no good cures for this problem.
I'm a bit embarrassed to ask the following question, to which the answer is almost certainly "no," followed by an explanation of why there is no such thing as a free lunch. But here goes anyway.

What if one assembled a stack of images twice--once in the normal fashion, and once again upon negative versions of the images. Retouch both. Then make a positive version of the negative-image stack, and retouch the two together. Any benefit?

(For folks who came to photography in the digital age--way back in the days of film and dinosaurs, some of our film, when processed, became something called a "negative." It was a tonal reverse of whatever you took a picture of--the light bits of your subject were dark on the negative, and the dark bits of your subject were light on the negative. To make a print, you made a negative of the negative, which was a positive. (I'm ignoring a large class of films called "reversal films," which didn't do this, and some complexities added by color film--this description is most accurate for black and white film.) Today, it is very easy to make a negative of a digital image with a few clicks in Photoshop.)

--Chris

spongepuppy
Posts: 87
Joined: Mon Nov 26, 2012 11:03 pm
Location: Sydney, Australia

Post by spongepuppy »

Chris,

In my work as a graphic designer, when retouching photographs we use a technique that in some respects resembles what I think you are describing here.

A common technique called (in my industry) frequency separation splits the image into low and high spatial frequencies: you Gaussian-blur a copy of the original image and subtract the blurred copy from the original to get a high-frequency layer. That layer is then overlaid onto the blurred image to restore the original pixel values.

We use this to retouch things like skin colour without destroying the texture of the skin, because we can work on the low frequency layer independently. At the same time, it is also possible to change textures without the added complication of luminance matching. It's basically the same thing that Photoshop's healing tool does, but with finesse.
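
In code terms, the basic split is roughly this (a floating-point sketch of the arithmetic, not a Photoshop recipe, and the function names are just mine):

Code: Select all
# Floating-point sketch of the basic frequency split (0..1 values,
# grayscale for simplicity); not a Photoshop recipe.
from scipy.ndimage import gaussian_filter

def split_frequencies(img, sigma=10.0):
    low = gaussian_filter(img, sigma)    # structure / colour layer
    high = img - low                     # texture / detail layer
    return low, high

def recombine(low, high):
    return low + high                    # restores the original exactly

# Retouching idea: repaint or smooth 'low' where a halo sits, then
# recombine; the fine texture held in 'high' is left untouched.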

Instead of using a Gaussian blur, you can use any filter to retouch one aspect of an image without impacting another. Using Photoshop's Surface Blur instead of Gaussian Blur effectively separates an image into colour and texture layers, but it is selective to texture boundaries. That way, you can often 'underpaint' the halos away without destroying [too much] detail.

I'll see if I can work up an example this afternoon to demonstrate what I'm talking about.
---
Matt Inman

Chris S.
Site Admin
Posts: 4104
Joined: Sun Apr 05, 2009 9:55 pm
Location: Ohio, USA

Post by Chris S. »

You know, Matt--I once watched a tutorial that showed the process you're describing to remove some discolorations on a model's face, without removing the texture of her skin. I thought the result looked much better than the mannequin-like skin appearance left by many retouchers.

I hadn't thought of using frequency separation in other contexts, including macro. Will be looking forward to what you demonstrate. Great!

--Chris

spongepuppy
Posts: 87
Joined: Mon Nov 26, 2012 11:03 pm
Location: Sydney, Australia

Post by spongepuppy »

Edit: You can see these images at 100% size in this flickr set: http://www.flickr.com/photos/spongepupp ... 094607253/

Here is a 1600px crop from a PMax stack I did this morning to test out my shiny new LOMO 3.7x/0.11 objective. It's a jumping bulldog ant, which should be familiar to most Australians. You can see the characteristic halos around the setae on the margin and uneven shading on the solid background (you'll probably need to view these on Flickr, because the effect is harder to see at small sizes):

Image
Original(1of4) by spongepuppy, on Flickr

After duplicating this layer twice in Photoshop, I take one of the duplicates and apply a Gaussian Blur until most fine detail disappears but the overall structure of the image is still clear:

Image
Blurred(2of4) by spongepuppy, on Flickr

Now we use Image->Apply Image on the second, untouched duplicate layer. This next image is really just to show the dialog - your source layer will probably be named something different.

Image
Apply Image Dialog by spongepuppy, on Flickr

That will give us this:
Image
Subtracted(3of4) by spongepuppy, on Flickr

Move it above the blurred layer, and set the blending mode to Linear Light. The original image reappears!
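
For anyone who would rather see the arithmetic than the screenshot: written in normalized 0..1 values, the usual 8-bit Apply Image settings (Subtract, Scale 2, Offset 128) amount to the sketch below, with 128 playing the role of 0.5, and the Linear Light blend simply undoes the subtraction.

Code: Select all
# Why the original reappears, in normalized 0..1 values.  Photoshop's
# usual 8-bit Apply Image settings (Subtract, Scale 2, Offset 128) are
# the same arithmetic with 128 playing the role of the 0.5 below.
def apply_image_subtract(original, blurred):
    return (original - blurred) / 2.0 + 0.5     # the grey "detail" layer

def linear_light(base, blend):
    return base + 2.0 * blend - 1.0             # Linear Light blend

blurred, original = 0.42, 0.63
grey = apply_image_subtract(original, blurred)  # 0.605
print(linear_light(blurred, grey))              # 0.63, the original value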

Now I can work on the margin. Since only a small amount of the data that comprises the halos ended up on the grey (detail) layer, and because I am lazy, I hit the blurry (structure) layer with a surface blur. This doesn't kill the halos completely, but it does clean things up a lot:

Image
Retouched(3aof4) by spongepuppy, on Flickr

And the end result is:

Image
Result by spongepuppy, on Flickr

Most of the more offensive fringes on the setae on the mandibles are gone, but the actual structures are still [mostly] intact.

Obviously, this isn't ideal - my next step would generally be to mask the original back in to restore some of the '3d' structure to the subject (and some low-contrast detail that may have been lost), and to tidy up the colours on the margin.

Edit: Here's a low-res animation to compare the original and the final (mods: sorry if imgur isn't an approved image host - flickr doesn't support GIFs):
Image

I'm working on a second example to show how you could use this technique to restore some contrast to regions partially obscured. I just haven't worked out how to make it work consistently yet! Will post soon if I have any success.
---
Matt Inman

soldevilla
Posts: 710
Joined: Thu Dec 16, 2010 2:49 pm
Location: Barcelona, more or less

Post by soldevilla »

I have tested another method. Duplicate the layer and set it to "Overlay" mode. Apply a "High Pass" filter to this layer with a rather high value (in your image, 15) to create halos, then desaturate, invert, and apply a Gaussian blur. This produces an image with reduced halos that also looks slightly sharpened.
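
In code, the recipe is roughly this (only a sketch of the idea; the Overlay formula is the standard one and may differ slightly from Photoshop's exact math, and the final blur radius is a guess):

Code: Select all
# Rough sketch of the recipe above (grayscale, 0..1 values).  The Overlay
# blend is the standard formula, which may differ slightly from
# Photoshop's exact implementation; the final blur radius is a guess.
import numpy as np
from scipy.ndimage import gaussian_filter

def overlay(base, blend):
    return np.where(base < 0.5,
                    2.0 * base * blend,
                    1.0 - 2.0 * (1.0 - base) * (1.0 - blend))

def reduce_halos(img, hp_radius=15.0, final_blur=5.0):
    highpass = img - gaussian_filter(img, hp_radius) + 0.5   # High Pass
    inverted = 1.0 - highpass                                 # Invert (already grey)
    soft = gaussian_filter(inverted, final_blur)              # Gaussian Blur
    return overlay(img, soft)                                 # Overlay onto original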

Try it...

Image

spongepuppy
Posts: 87
Joined: Mon Nov 26, 2012 11:03 pm
Location: Sydney, Australia

Post by spongepuppy »

The high-pass / inverted gaussian blur solution is useful in some settings, although not ideal as a foundation for more significant retouching work. It can be great for reducing contrast across a specific frequency domain, and has merit when you have to somehow recover an image that has been heavily oversharpened.

Notice that the dark smearing around the hairs on the bottom right is still present. It is also more difficult to work with from a retouching perspective, because you end up with your original image plus an inverted HP layer that doesn't really correspond to any particular aspect of the image you want to modify. You're still stuck with having to match colour and luminance values at low frequency, and at high frequency you're basically back to square one.

At this stage of processing most retouchers will want to avoid anything that amounts to sharpening. A key benefit to frequency separation is the fact we can basically 'paint' the background to fix margin halos directly onto the layer as one would see it, but without changes to the structure of highly detailed regions.

That said, the Apply Image step used to create the high frequency layer is basically high pass - but you will note that just doing a high pass gives a slightly different result in Photoshop, most likely because the High Pass filter uses a rectangular window for blurring, whereas Gaussian Blur uses a circular one.
---
Matt Inman

rjlittlefield
Site Admin
Posts: 24061
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

Chris S. wrote:I'm a bit embarrassed to ask the following question, to which the answer is almost certainly "no," followed by an explanation of why there is no such thing as a free lunch. But here goes anyway.

What if one assembled a stack of images twice--once in the normal fashion, and once again upon negative versions of the images. Retouch both. Then make a positive version of the negative-image stack, and retouch the two together. Any benefit?
I agree: the answer is almost certainly "no", and I'll try to explain why.

The reason is that Zerene's stacking methods are linear, in the sense that stack(c*images + k) = c*stack(images) + k. (The math describes a calculation that is done pixel-wise, where c and k are just numbers. For example, c = -1 and k = 255 will invert an image that is stored as values 0..255.)

When you invert the source images, stack, then invert the result, what comes out the end is essentially the same as if you had skipped both inverts.
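
Here is a toy demonstration using a stand-in "stacker" that has the same linearity property. It is just a per-pixel weighted average, not PMax or DMap, but the invert-stack-invert behavior is the same:

Code: Select all
# Toy demonstration that invert -> stack -> invert matches stacking
# directly, for any stacker with stack(c*images + k) = c*stack(images) + k.
# The per-pixel weighted average below is only a stand-in with that property.
import numpy as np

def toy_stack(frames, weights):
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()            # weights sum to 1
    return sum(w * f for w, f in zip(weights, frames))

rng = np.random.default_rng(0)
frames = [rng.uniform(0, 255, size=(4, 4)) for _ in range(3)]
weights = [0.2, 0.5, 0.3]

direct = toy_stack(frames, weights)
round_trip = 255.0 - toy_stack([255.0 - f for f in frames], weights)
print(np.allclose(direct, round_trip))           # True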

For PMax the description is not completely accurate because of an HDR-like step at the very end that attempts to bring otherwise blown-out highlights back into range so they can be saved in normal TIFF and JPEG files. Values in the affected range will be treated differently in the normal and inverted representations. However, this difference will have minimal effect on any halos. The normal and invert+invert images will have very similar appearance.

By the way, if you want to run these experiments yourself, be sure to disable brightness adjustment. If you leave that enabled, as it is by default, then you'll definitely see some differences between the normal and invert+invert results. That's because brightness adjustment involves altering gamma, which affects normal and inverted images differently. However, again this will not affect halos significantly, since the adjustment is essentially just curves-and-levels.

--Rik

Edit: to clarify the notation around "linear".
Last edited by rjlittlefield on Mon Feb 16, 2015 9:24 am, edited 2 times in total.

rjlittlefield
Site Admin
Posts: 24061
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

Chris S. wrote:I hadn't thought of using frequency separation in other contexts, including macro.
You may not have thought of it, but I happen to know that you're an ardent fan of the technique and you use it almost every day.

That's because both the PMax stacking method and Zerene Stacker's default retouching brush are based on frequency separation. The original source images are internally decomposed into a "Laplacian pyramid" of progressively smaller difference images that each contain information from only one narrow frequency band. The bands are operated on separately, then recombined to produce a conventional image that can be displayed and saved.

It's the decomposition into frequency bands that allows PMax to avoid needing any adjustable parameters like radius. A single small radius, applied to each frequency band separately, naturally adapts to whatever level of detail is present in the image. The finest hairs on a fly and the largest smooth stripes on a seashell are handled equally well by the same computation, albeit at different levels in the pyramid.

Similarly, it's the decomposition into frequency bands that lets the default retouching brush automatically adjust its hardness to match whatever level of detail is present in the image. Fine detail gets painted with a hard brush and coarser detail gets painted with a softer brush, all as a natural result of working on each frequency band separately.
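
For the curious, here is a bare-bones sketch of that decompose-and-recombine step, using the textbook Laplacian pyramid construction (not Zerene's code; it uses OpenCV for the downsample/upsample):

Code: Select all
# Bare-bones Laplacian pyramid: decompose into per-band difference images,
# then recombine.  This is the textbook construction, not Zerene's code.
import cv2

def build_laplacian_pyramid(img, levels=5):
    pyr, current = [], img.astype('float32')
    for _ in range(levels):
        down = cv2.pyrDown(current)
        up = cv2.pyrUp(down, dstsize=(current.shape[1], current.shape[0]))
        pyr.append(current - up)      # one narrow frequency band
        current = down
    pyr.append(current)               # low-frequency residual
    return pyr

def collapse_pyramid(pyr):
    current = pyr[-1]
    for band in reversed(pyr[:-1]):
        current = cv2.pyrUp(current, dstsize=(band.shape[1], band.shape[0])) + band
    return current                    # reconstructs the original image
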
spongepuppy wrote:I'm working on a second example to show how you could use this technique to restore some contrast to regions partially obscured. I just haven't worked out how to make it work consistently yet!
I expect that making it work consistently will be difficult indeed. That's because the information you would need to do that automatically is not contained in the result image.

A simple way to see this is to imagine my diagram with two white spheres, one big and one small. Then the situation for each sphere is similar, but because the edge of the big sphere sits farther away from the surface, its blur circle is bigger and it contaminates a wider halo.

As a more extreme example, imagine that instead of two spheres we have a single white cone. Then at the narrow end of the cone the contamination halo is narrow, at the wide end it's wide, in between it's in between, and at no point does the stacked image provide any decent information about how wide the halo is.

The width of the halo is determined by the 3D structure of the original subject, combined with the shape of the entrance cone, or more precisely the 3D point spread function of the lens.

Given full knowledge of those things, it's "conceptually straightforward" to correct for the contamination. You infer the 3D structure, figure out how much light came from OOF foreground, subtract that from the total amount originally sensed on focused background, then scale up whatever is left to compensate for the fact that the background was actually seen by only a fraction of the aperture.
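
Written as a sketch, with f for the fraction of the aperture blocked by foreground and the foreground's blurred contribution passed in as a second input; both are exactly the quantities that would have to be inferred from the 3D structure and the point spread function:

Code: Select all
# Sketch of the subtract-and-scale correction.  f (fraction of the aperture
# blocked by foreground) and fg_contribution (the foreground's blurred
# contribution) are exactly the quantities that would have to be inferred
# from the 3D structure of the subject and the lens's point spread function.
def corrected_background(recorded, fg_contribution, f):
    return (recorded - f * fg_contribution) / (1.0 - f)

# A pixel recorded at 0.50 next to a 0.90 foreground, with half the
# aperture blocked, recovers the true background value of 0.10.
print(corrected_background(0.50, 0.90, 0.5))   # -> 0.1 (up to rounding)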

Masking and filtering definitely can work well as editing techniques, to the extent that they can be made to reasonably approximate the subtract-and-scale that would be needed to produce an exact result. The challenge is how to come up with the parameters necessary to do that. I have no idea how to do that computationally. In the hands of a skilled user, they may be powerful tools for implementing human judgement.

--Rik

spongepuppy
Posts: 87
Joined: Mon Nov 26, 2012 11:03 pm
Location: Sydney, Australia

Post by spongepuppy »

rjlittlefield wrote: Masking and filtering definitely can work well as editing techniques, to the extent that they can be made to reasonably approximate the subtract-and-scale that would be needed to produce an exact result. The challenge is how to come up with the parameters necessary to do that. I have no idea how to do that computationally. In the hands of a skilled user, they may be powerful tools for implementing human judgement.
I am aware that frequency separation is, basically, how PMax works. It's clear in the retouching mode, since there is still occasional contamination in the low-frequency domain.

My goal on the retouching side of things is to achieve an aesthetic objective - so technical correctness isn't the goal. If I can come up with a workflow that lets me brush-in improved detail in contaminated areas, then I will consider my quest complete! Anything more complicated than that is likely well above my pay grade :)
---
Matt Inman
