Stacking software: CZP vs. HF - new images
Moderators: rjlittlefield, ChrisR, Chris S., Pau
This is a comparison between CombineZP (CZP) and Helicon Focus (HF) 4.1.
Helicon Focus 4.1 consistently puts halos around my bug subjects, especially noticeable on hairs and setae.
Rik suggested, in an e-mail, that I try CombineZP: specifically, run "Align and Balance Used Frames" and then the "Pyramoid Maximum Contrast" algorithm.
CombineZP is somewhat more difficult to use than HF but it is free and does produce a halo-free image.
Top image is a 17 frame stack using HF 4.1, note the halos around the setae.
Bottom image, CZP of the same frames - halo free setae.
Thanks Rik.
Last edited by NikonUser on Sat Nov 22, 2008 2:30 pm, edited 1 time in total.
NU.
student of entomology
Quote – Holmes on ‘Entomology’
"I suppose you are an entomologist?"
"Not quite so ambitious as that, sir. I should like to put my eyes on the individual entitled to that name. No man can be truly called an entomologist, sir; the subject is too vast for any single human intelligence to grasp."
Oliver Wendell Holmes, Sr., The Poet at the Breakfast Table
Nikon camera, lenses and objectives
Olympus microscope and objectives
In another posting HERE Rik questioned the meaning of my comment that a CZP stack was "noticeably poorer" than the HF stack. Perhaps I should have said that I prefer the HF image despite its halos; purely subjective, but that's OK.
Rik also stated: "It's a known problem that the pyramid algorithms amplify noise. This is a particular problem with CZP because it only accepts 8-bit input. So I'd expect the CZP output to be quite a bit noisier than either of HF's methods. But I wonder, is something else going on or is it just the noise that you're talking about?"
I guess I was talking about noise, noise that degraded the image. I had no idea that pyramid algorithms amplify noise; actually, I have no idea what a pyramid algorithm is.
Top image: 800 px actual-pixel crop from the HF stack
2nd: same from the CZP stack
3rd: 400 px actual-pixel crop from the HF stack, resampled to 800 px
4th: similar from the CZP stack
Images 1 & 2 with levels and USM adjustments.
NU.
- Charles Krebs
NU,
Wow... looks like we are officially into "pixel-peeping" territory now!
But this is a helpful example. Somewhere I once mentioned that I thought the output from HF looked more "photographic" than that of CZP and TuFuse. Although those two do a better job of avoiding the haloes, they accentuate contrast and noise, and give the image a more "digital" look. You are showing exactly what I was trying to describe. To be honest, I don't think that is even really noticeable on a web-size image, but I can really see it when I do the "finishing" work on an image. (Maybe I'm looking too closely.)
Charles: I recognize the absurdity of the last 2 images. In the 'actual pixel' eye images (#1 & 2) I thought I could see a loss of quality in the CZP stack. So I enlarged a section of the images (3 & 4). This convinced me that what I thought I was seeing in #2 was in fact real.
I feel mean to be criticizing CZP considering it is a free program; I'm not totally enthralled with HF either, for which I had to pay.
NU.
- Charles Krebs
NikonUser wrote: Charles: I recognize the absurdity of the last 2 images.

I was sort of joking about the "pixel-peeping". In reality, it is not a pejorative term at all when you want to analyze the performance of these programs. You really do need to examine the output at pixel level to understand the strengths and weaknesses of each program.
- rjlittlefield
NU, I apologize for so casually saying "known problem" and "pyramid methods". Let me take some time here to explain about the various stacking algorithms.
HF Method A is a "weighted average" algorithm. The details are proprietary, but roughly speaking, what it does is to compute a weight for each frame at each pixel position, by estimating how well focused the image is in a small neighborhood around that pixel. Then it computes the final value of each pixel as a weighted sum of the pixel values from every frame. CZP's Do Weighted Average macro is quite similar.
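A minimal sketch of that weighted-average idea in Python with NumPy. The focus measure (local gradient energy) and the weighting are illustrative assumptions on my part, not HF's or CZP's actual proprietary code:

```python
import numpy as np

def local_contrast(img, radius=2):
    """Focus measure: gradient energy summed over a small neighbourhood."""
    gy, gx = np.gradient(img.astype(float))
    energy = gx ** 2 + gy ** 2
    # Crude box filter via shift-and-sum, to stay NumPy-only.
    pad = np.pad(energy, radius, mode="edge")
    out = np.zeros_like(energy)
    h, w = energy.shape
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            out += pad[dy:dy + h, dx:dx + w]
    return out

def fuse_weighted_average(frames, radius=2, eps=1e-8):
    """Blend frames with per-pixel weights proportional to local focus."""
    frames = np.asarray(frames, dtype=float)
    weights = np.stack([local_contrast(f, radius) for f in frames]) + eps
    weights /= weights.sum(axis=0, keepdims=True)
    return (weights * frames).sum(axis=0)
```

A sharp frame dominates the sum wherever it carries detail; frames that are defocused at a given spot contribute almost nothing there.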
HF's Method B is a "depth map" algorithm. The first stage of processing is again to estimate focus by looking in small neighborhoods. But instead of computing a weighted average, the program identifies the single best-focused image at each pixel position where that decision can be made reliably. Then it constructs a single-valued "depth map" function that fits all of the reliable decisions and smoothly interpolates the gaps. Finally it computes the value of each output pixel as the pixel from whichever single input frame is indicated by the depth map, or as a weighted average of a couple of frames surrounding the interpolated depth. CZP's Do Stack takes a similar approach.
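In the same illustrative spirit, a toy depth-map fusion (again a sketch under my own simplifying assumptions, not Method B itself). A real implementation would also smooth and interpolate the depth map wherever the per-pixel decision is unreliable:

```python
import numpy as np

def fuse_depth_map(frames):
    """Per pixel, keep the frame whose local Laplacian response is largest."""
    frames = np.asarray(frames, dtype=float)
    # 4-neighbour Laplacian magnitude as a crude focus measure (wraps at
    # the borders, which a real implementation would handle properly).
    lap = np.abs(4 * frames
                 - np.roll(frames, 1, axis=1) - np.roll(frames, -1, axis=1)
                 - np.roll(frames, 1, axis=2) - np.roll(frames, -1, axis=2))
    depth = lap.argmax(axis=0)   # the "depth map": best frame index per pixel
    fused = np.take_along_axis(frames, depth[None], axis=0)[0]
    return fused, depth
```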
Notice that in both of these methods, the estimation of best focus is done by looking within a fixed-size neighborhood. You can adjust the size of the neighborhood, but it's a global parameter. This is a serious limitation.
Suppose you use either of the above algorithms to process a stack showing the inside of an abalone shell sitting on a pile of netting. The same size neighborhood gets used to evaluate focus of the netting and of the shell --- even though the netting has detail and depth changes at the scale of a few pixels, while detail and depth in the abalone shell change only over much larger areas. This situation causes problems for the fixed-size-neighborhood approaches.
You might imagine, then, an approach based on variable size neighborhoods. Use small neighborhoods where there is fine detail, and large neighborhoods where there is only larger detail.
That is essentially what the pyramid algorithms do, but the actual trick they use is more clever.
What the pyramid algorithms do is to transform each original input image into a collection of "difference" images, each of which contains detail at only one narrow range of scales. The largest difference image is the same size as the original image, but contains detail only at the scale of individual pixels. All detail at larger scales has been subtracted out. Then there is a difference image that is 1/2 the original size (on each axis) and contains detail only at a scale that originally spanned 2 pixels. A third difference image is 1/4 the size of the original, containing detail that originally spanned 4 pixels. This pattern repeats for difference images at 1/8 scale, 1/16, 1/32, and so on.
The collection of all these smaller and smaller images is called a "pyramid" because you can imagine the collection in terms of what you would get if you sliced an ordinary pyramid at points 1/2 of the way down from its apex, 1/4 of the way down, 1/8, 1/16, and so on.
What happens next is simple: run one of the neighborhood algorithms at each level of the pyramid independently. What this produces is a collection of extended-depth-of-field difference images.
Then to finish up, the algorithm reassembles a single extended-depth-of-field output image by reversing the process it used to build the pyramid in the first place. In other words, the differences at various scales get recombined so as to recreate an ordinary image containing ordinary pixel values.
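The whole cycle fits in a few lines of NumPy. This is a toy Laplacian-pyramid fusion under deliberately simple assumptions (2x2 block averaging, nearest-neighbour upsampling, and a bare "maximum contrast" pick per level); CZP's actual Pyramoid macros are more sophisticated:

```python
import numpy as np

def down(img):
    """Halve resolution by 2x2 block averaging."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(img):
    """Double resolution by nearest-neighbour repetition."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid(img, levels):
    """De-compose: one 'difference' image per scale, plus a coarse residual."""
    pyr = []
    for _ in range(levels):
        small = down(img)
        pyr.append(img - up(small))   # detail at this scale only
        img = small
    pyr.append(img)                   # low-frequency residual
    return pyr

def fuse_pyramids(pyrs):
    """Operate: at every level, keep the most contrasty value per pixel."""
    fused = []
    for level in zip(*pyrs):
        stacked = np.stack(level)
        pick = np.abs(stacked).argmax(axis=0)
        fused.append(np.take_along_axis(stacked, pick[None], axis=0)[0])
    return fused

def recompose(pyr):
    """Re-compose: undo the decomposition, coarsest level first."""
    img = pyr[-1]
    for detail in reversed(pyr[:-1]):
        img = up(img) + detail
    return img
```

With these particular up/down operators the decompose/recompose round trip is exact, so whatever the per-level selection keeps is faithfully reassembled into ordinary pixel values.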
I imagine this concept of "de-compose, operate, re-compose" is not very intuitive. It hurt my head a bit, the first time I encountered it. But it is really quite an old approach, first developed in the 1980's and applied then for a variety of purposes, including the construction of extended depth of field images!
The pyramid approach has a couple of advantages.
One advantage is that the algorithm automatically makes decisions using a neighborhood that is appropriately sized for the scale of detail that's present at each point in each image. In my example, on the netting around the shell most of the information gets analyzed using what would be a small neighborhood in the original image. On the smooth part of the shell, most of the information gets analyzed using what would be a large neighborhood in the original image.
This automatic adjustment for size of detail is what lets the pyramid algorithms avoid the most common cases of halo. The halo you see around bristles in your HF output is due to analyzing those regions with a neighborhood that is too large for those areas. If you made the neighborhood smaller, the bristles would come out better. But at the same time, other areas would come out worse, because they do not contain sufficient fine detail to allow making reliable decisions over only a small neighborhood. The pyramid algorithms handle these differences gracefully by automatically using neighborhoods of the appropriate size.
The pyramid algorithms are also quite resistant to "stacking mush", which often occurs in low contrast regions when using the depth map or weighted average approaches. Explaining why this happens is even harder than explaining about halo, so I'm going to preserve both of our sanities and not even try.
The pyramid approach also has a couple of disadvantages.
One of them is contrast buildup. To see why this occurs, consider a uniform area containing a bright spot that is only one pixel big when perfectly focused. In the perfectly focused frame, the information about that bright spot is concentrated at the base of the pyramid. In the next frame, the bright spot is slightly unfocused. That loss of focus looks like detail at a coarser scale, so information about that now unfocused bright spot concentrates in a different level of the pyramid. As the bright spot becomes less and less focused, information about it gets distributed across several levels of the pyramid. The neighborhood algorithm running on each level of the pyramid independently has no way to know that all this information represents the same bright spot just seen in different ways. Instead, it ends up accumulating some brightness from the perfectly focused frame, plus some brightness from a less focused frame, plus some brightness from an even less focused frame, and so on. When the difference images get re-composed to form an output image, the brightnesses at all of these levels get added together. The final result shows a bright single pixel sitting in the center of a less bright blur, and that brightest pixel is brighter in the final result than it was even in the perfectly focused frame. With more complex patterns, the process is harder to imagine but it is essentially the same -- unfocused images end up contributing brightness, even though all the fine detail comes from the focused images.
Contrast buildup can also cause a form of edge enhancement that is sort of like halo but usually not so obvious. It's most visible when you have lots of frames and your subject contains strong edges such as a bright leaf against a dark background. What happens is that in the vicinity of the bright leaf, the dark background gets darker, and in the vicinity of the dark background, the bright leaf gets brighter. These effects tend to be pretty gradual so they're not easily noticed, but in some stacks pixel values can get pushed so far toward black or white that the effects are pretty unpleasant.
The other disadvantage is noise buildup. What happens here is that the current algorithms blindly ignore all sorts of structure in the images except whatever is present in each small neighborhood of each pixel in the pyramid. As a result, they essentially declare any bit of noise in any frame to be "detail" that might be worth showing. If some pixel-level noise in any frame happens to be more contrasty than any real detail at that same position in all the other frames, then the noise propagates into the output. In areas that have no pixel-level detail, the noise you get at each pixel position is always that of the worst frame, even if that portion of that frame is completely OOF. Several ideas come to mind for how to at least reduce this problem, but to the best of my knowledge, nobody has taken time to explore them yet. The challenge is how to avoid noise buildup without simultaneously re-introducing stacking mush.
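The "worst frame wins" behaviour in featureless areas is easy to demonstrate numerically. The selection rule below is a deliberate caricature of per-level maximum-contrast picking, not any shipping algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
flat = 128.0   # a featureless grey region: no real detail anywhere

# Hypothetical 3-frame stack of that region; the last frame is the
# noisiest (say, the most out-of-focus one).
sigmas = [1.0, 2.0, 6.0]
frames = np.stack([flat + rng.normal(0.0, s, (200, 200)) for s in sigmas])

# Caricature of the per-level rule: at each pixel, keep whichever frame
# deviates most from the flat tone ("most contrast wins").
winner = np.abs(frames - flat).argmax(axis=0)
fused = np.take_along_axis(frames, winner[None], axis=0)[0]

print([round(f.std(), 2) for f in frames], round(fused.std(), 2))
```

The fused region ends up at least as noisy as the noisiest frame, whereas a plain average of the three would have been far cleaner.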
Each of the methods has advantages and disadvantages. To summarize:
1. Depth map methods produce the cleanest images for simple geometries having no overlap or sharp changes in depth. They tend to fail catastrophically when confronted with overlapping fibers.
2. Weighted average methods do a better job of handling complex geometries. They suffer from halo effects and stacking mush, but they do not build up contrast at all and generally do not build up much noise.
3. Pyramid methods are good for complex geometries including overlapping fibers, without suffering from halo or stacking mush, but at the cost of contrast buildup, edge enhancement, and noise accumulation.
Despite its length, this note is a simplified picture of the technology. There's a lot going on "under the covers" of all these packages, and the algorithms continue to improve at a pretty good clip.
I'm happy to hear that you're now saving all of your source images. It's quite likely that by this time next year, you may want to reprocess one or two of them using whatever the best tool available then happens to be.
--Rik
- rjlittlefield
NU, your images are very helpful.
They're also a bit puzzling, in that I'm surprised by how very much noise has accumulated in the CZP image. The pyramid algorithms are known to accumulate noise, but I've never seen anything nearly as intense as what you're showing here.
For example, I just now reprocessed my fruit fly portrait, running it through CZP Pyramoid Maximum Contrast and Helicon Focus Method A, default parameters.
After balancing brightness and contrast to make the two images look as much alike as I could, here's what they look like. These are 100% pixels.
I simply don't know how to reconcile these results with what I'm seeing in NU's images. What I'm showing here is what I routinely get from CZP and TuFuse -- a significant increase in noise compared with the original frames, but typically not beyond a level that I would have expected from 35mm film.
Obviously other people are getting different results, and I don't know why.
The source stack for my fruit fly is 167 frames, so plenty of opportunity for noise to pile up. The source images were lowest-quality JPEG from a Canon 300D. This camera is approaching 5 years old, and it's hard to imagine lower quality source images coming from a newer DSLR. Very confusing!
--Rik
rjlittlefield wrote: They're also a bit puzzling, in that I'm surprised by how very much noise has accumulated in the CZP image.

Rik: puzzling is putting it mildly; it's driving me crazy. I didn't make notes or pay too much attention while running the 27 fly head frames through CZP.
So, not really wanting to look at any more of this fly's head I thought I would run another bug through CZP, this time making notes of the process, and compare the output with the HF output. If I got similar results to the fly head we (you) could perhaps work out what was happening. If the results were different I would re-run the fly head through CZP paying particular attention to my methodology.
Looks as though I will have to re-run that fly head.
For the following images (fly head difference in brackets):
ISO 100
reversed 105mm MF Micro Nikkor on bellows (reversed 50/2.8 El Nikkor)
lens set at f/8 (f/5.6)
mag on sensor 1.58x (4.9x)
18 frames @ 0.1mm (27 @ 0.06mm).
image #1: from the HF 4.1 stack of 18 16-bit NEF frames, no processing, = 69.98 MB .tif @ 300ppi
image #2: 800x800 px selection from image #1
Converted each frame to an 8-bit, maximum-quality JPEG using Photoshop Actions.
Stacked the 18 JPEGs with HF = 34.8 MB .tif @ 300ppi
Stacked the 18 JPEGs with CZP = 20.1 MB .tif @ 120ppi (!!)
image #3: 800x800 px selection from the HF stacking of the JPEG frames
image #4: 800x800 px selection from the CZP stacking of the JPEG frames
All converted to sRGB and Save for Web, <200KB
I'm not seeing the major flaws in the CZP image that I saw in the fly's head.
I'm not seeing any halos in the HF image that I see in just about every other HF stack.
NU.
student of entomology
Quote – Holmes on ‘Entomology’
” I suppose you are an entomologist ? “
” Not quite so ambitious as that, sir. I should like to put my eyes on the individual entitled to that name.
No man can be truly called an entomologist,
sir; the subject is too vast for any single human intelligence to grasp.”
Oliver Wendell Holmes, Sr
The Poet at the Breakfast Table.
Nikon camera, lenses and objectives
Olympus microscope and objectives
- rjlittlefield
- Site Admin
- Posts: 23972
- Joined: Tue Aug 01, 2006 8:34 am
- Location: Richland, Washington State, USA
- Contact:
NikonUser wrote: Rik: puzzling is putting it mildly, it's driving me crazy.

Oh good! Somebody besides me is going nuts trying to figure this stuff out!
NikonUser wrote: 18 frames @ 0.1mm (27 @ 0.06mm).

Frame count could matter, though these are pretty small frame counts in both cases.
NikonUser wrote: image #1: from the HF 4.1 stack of 18 16-bit NEF frames, no processing = 69.98 MB TIFF @ 300 ppi
Stacked the 18 JPEGs with HF = 34.8 MB TIFF @ 300 ppi
Stacked the 18 JPEGs with CZP = 20.1 MB TIFF @ 120 ppi (!!)

The change from 300 to 120 ppi doesn't mean anything except that CZP changed the number. The ppi number is essentially an arbitrary scale factor. The pixel counts should still match up (after taking into account the reflected border that CZP adds on).
The reduction in file size between HF and CZP in 8-bit mode is interesting, but probably not relevant to the noise problem. I'd never noticed it before, but now I see that CZP makes smaller TIFF output files when there are large expanses of featureless background. We usually think of TIFF files as exact bit-for-bit representations of the image, but they are permitted to use lossless compression internally. It looks like CZP is doing that, and HF is not.
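To see why featureless background shrinks a losslessly compressed file so much, here's a toy sketch using Python's zlib on raw pixel bytes. This is not CZP's actual code and isn't reading real TIFF files; it just shows that a flat background compresses to almost nothing while fine detail barely compresses at all.

```python
import random
import zlib

random.seed(1)

# A "featureless background": 100,000 identical pixel bytes.
flat = bytes([40] * 100_000)

# "Fine detail everywhere": 100,000 random pixel bytes.
noisy = bytes(random.randrange(256) for _ in range(100_000))

flat_len = len(zlib.compress(flat))
noisy_len = len(zlib.compress(noisy))

# The flat buffer collapses to a few hundred bytes;
# the noisy one stays close to its original size.
print(flat_len, noisy_len)
```

TIFF's Deflate compression scheme is essentially the same algorithm zlib implements, which is why an image that is mostly smooth background can come out of one stacker much smaller than the same pixels written uncompressed by another.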
NikonUser wrote: I'm not seeing any halos in the HF image that I see in just about every other HF stack.

Halo varies a lot with subject geometry, contrast, and lighting. It is most obvious when a high-contrast foreground edge overlaps a background surface that is covered with low-contrast detail. Your fly stack HERE turned out to be a great example. I think this last test image you used in the current thread just happens not to have any of the attributes that produce bad halo.
NikonUser wrote: I'm not seeing the major flaws in the CZP image that I saw in the fly's head.

Me either. Hmmm....
--Rik
-
- Posts: 209
- Joined: Thu Dec 20, 2007 11:22 am
- Location: Swindon, UK
- rjlittlefield
- Site Admin
- Posts: 23972
- Joined: Tue Aug 01, 2006 8:34 am
- Location: Richland, Washington State, USA
- Contact:
Graham Stabler wrote: How do things look if you use the output of CZP and HF as images in a two image stack? Best or worst of both worlds?

That's not an easy test to run, but the answer seems to be "some combination, leaning toward worst".
The reason the test is not easy to run is that the alignment algorithms of CZP and HF produce slightly different results. This effectively warps the geometry a little bit, so the resulting images won't align with each other no matter what you do. This can be worked around by pre-aligning the stack and then telling HF and CZP not to try aligning it themselves.
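The pre-alignment step amounts to estimating, for each frame, the shift (or warp) that best matches a reference frame, then resampling before either stacker sees the images. Real aligners solve for subpixel rotation and scale as well; the sketch below does only a brute-force integer-shift search by sum of squared differences, on tiny grids, just to show the idea. All names are invented for illustration.

```python
def score(ref, img, dy, dx):
    """Mean squared difference between ref and img shifted by (dy, dx),
    computed over the overlapping region only."""
    h, w = len(ref), len(ref[0])
    total, n = 0, 0
    for y in range(h):
        for x in range(w):
            yy, xx = y + dy, x + dx
            if 0 <= yy < h and 0 <= xx < w:
                total += (ref[y][x] - img[yy][xx]) ** 2
                n += 1
    return total / n

def best_shift(ref, img, max_shift=2):
    """Exhaustively search shifts up to max_shift pixels and return
    the (dy, dx) that best matches ref."""
    candidates = ((dy, dx)
                  for dy in range(-max_shift, max_shift + 1)
                  for dx in range(-max_shift, max_shift + 1))
    return min(candidates, key=lambda s: score(ref, img, *s))
```

Once every frame's shift is known, each frame is resampled to the reference geometry and both HF and CZP can be told to skip their own alignment, so their outputs land on identical pixel grids.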
The reason the answer is "some combination, leaning toward worst" is partly that you have to ask, "Who's doing the combining?".
If you combine with HF, then you get HF's preferences. Those lean toward halo, but this is partially overridden by the increased contrast and noise that CZP adds. The result is some reduction in halo but also quite an increase in noise, compared to what HF does on its own with the original stack.
If you combine with CZP, then you get CZP's preferences. Those amplify noise, but halo is a form of lower-frequency detail, so CZP preserves some of that too.
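The "maximum contrast" selection rule under discussion can be boiled down to: for each pixel, keep the value from whichever input has the strongest local detail there. CZP's pyramid method is far more elaborate (it works across multiple frequency bands), but this minimal sketch shows the per-pixel rule, using a crude 4-neighbour Laplacian as the contrast measure. Function names are invented for illustration.

```python
def local_contrast(img, y, x):
    """Absolute difference from the 4-neighbour mean: a crude Laplacian
    that is large wherever the image has sharp local detail."""
    h, w = len(img), len(img[0])
    nbrs = [img[yy][xx]
            for yy, xx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
            if 0 <= yy < h and 0 <= xx < w]
    return abs(img[y][x] - sum(nbrs) / len(nbrs))

def max_contrast_merge(frames):
    """For each pixel, take the value from the frame with the highest
    local contrast at that position."""
    h, w = len(frames[0]), len(frames[0][0])
    return [[max(frames, key=lambda f: local_contrast(f, y, x))[y][x]
             for x in range(w)]
            for y in range(h)]
```

This also makes the noise behaviour plausible: noise *is* local contrast, so a rule that always prefers the highest-contrast source will happily pick up noise from whichever input carries more of it.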
Combining the results of several algorithms is an attractive idea, but it's challenging to get the details right to do it automatically. No doubt that will happen, just not clear when.
If you combine manually, then you the human can pick and choose to make different tradeoffs in different areas. That works OK if you have the time. A variation on that approach is to run HF a couple of times with different settings, and combine those using, say, Photoshop layers. That avoids several difficulties, most notably the geometry problem. Several people have used this approach successfully to manually touch out halo in HF results.
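In code terms, the Photoshop-layers trick is just per-pixel selection between two renderings of the same stack under a hand-drawn mask: wherever the mask is set, take the halo-free rendering; elsewhere keep the sharper one. A minimal sketch, with invented names:

```python
def combine_with_mask(base, alt, mask):
    """Where mask is 1, take the pixel from alt; otherwise keep base.
    This mirrors painting on a Photoshop layer mask."""
    h, w = len(base), len(base[0])
    return [[alt[y][x] if mask[y][x] else base[y][x]
             for x in range(w)]
            for y in range(h)]
```

Because both renderings come from the same aligned frames (e.g. two HF runs with different settings), they share one pixel grid and no re-warping is needed, which is exactly why this sidesteps the geometry problem.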
--Rik
Edit: to add the part about manual combining.
- Charles Krebs
- Posts: 5865
- Joined: Tue Aug 01, 2006 8:02 pm
- Location: Issaquah, WA USA
- Contact:
Graham,
If time is not an issue, the "batch" feature lets you run a variety of setting options. I'll sometimes do this and then, as Rik mentioned, "manually" choose different sections using layers in Photoshop. I mainly do this for low-contrast areas with marginal detail that produce (what Rik calls) "stacking mush". Often the settings that best minimize this "mush" do not do great things for the rest of the image.
I'm still comparing output from HF 4.1 and CZP (Pyramoid Maximum Contrast).
This is a worker Forest Yellowjacket Vespula acadica
I took 34 frames @ 0.1mm at 2.37x mag on sensor, with a reversed 50/2.8 El Nikkor enlarging lens. See HERE for the setup.
The full-frame image is from an HF stack of just 18 frames. The 34-frame stack brought all of the antennae into focus (good) but also brought into focus many hairs on the thorax behind the head, making for a very 'busy' image (bad). The critical ID characters are on the face.
I converted the 18 frames from their original NEFs to TIFFs and stacked them with CZP. I converted the 16-bit HF stack to 8 bits.
2nd image: 400x800 px selection (actual pixels) from: left, HF stack; right, CZP stack. Both selections are from the unprocessed stacks.
3rd image has had some levels and USM adjustment (identical treatment for each half).
The CZP image shows less resolution and has lost detail in the antennal segment.
Bottom line: I will initially use HF but if halos are too great I will do a CZP stack. Possibly not worth buying HF when one can use CZP for free, but as I have HF I will keep using it.
NU.
student of entomology
Quote – Holmes on ‘Entomology’
” I suppose you are an entomologist ? “
” Not quite so ambitious as that, sir. I should like to put my eyes on the individual entitled to that name.
No man can be truly called an entomologist,
sir; the subject is too vast for any single human intelligence to grasp.”
Oliver Wendell Holmes, Sr
The Poet at the Breakfast Table.
Nikon camera, lenses and objectives
Olympus microscope and objectives