Lou Jost wrote:I would think [using more frames than necessary] would add noise to the final image. Every frame has a bit of noise, and especially in PMax, that noise tends to get into the final result.
True, but the impact is a lot less than you're probably imagining. It is certainly a lot less than I imagined when I first looked into the issue of noise accumulation.
Basically what PMax does is to retain in the output, at each pixel position, the worst absolute noise (either positive or negative) that appears in any frame of the stack. In the stacks where I have looked closely, it has turned out that the distribution of noise values at each pixel position has a short tail, with the result that the accumulated noise rises quickly from 2 frames to 4, rather less from 4 to 8, and so on, so that the difference between, say, 50 and 100 frames was detectable but not immediately obvious.
To quickly model this, we can look at the expected value of max(abs(normsamp)), where normsamp is a random normal variable with mean 0 and stdev 1, and the number of frames N varies as 1, 2, 4, 8, and so on. Evaluating this model with Excel and taking the average max value over 1000 runs of the experiment, I get:
Code:
N          1      2      4      8     16     32     64    128    256    512   1024
Noise   0.77   1.14   1.48   1.77   2.07   2.34   2.59   2.82   3.04   3.24   3.45
Here's a graph of accumulated noise versus N: [graph image not reproduced here]
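If you'd rather replicate the experiment in Python than Excel, here's a quick Monte Carlo sketch using NumPy. The N values and the 1000-run averaging match the table above; the seed and variable names are just my own choices for illustration, and of course this models only the "retain the worst noise value" behavior, nothing else that PMax does.

Code:
import numpy as np

rng = np.random.default_rng(0)   # fixed seed, only for repeatability
runs = 1000                      # repetitions of the experiment, as in the table above

for n in [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]:
    # for each run, draw n standard-normal noise samples for one pixel position
    # and keep the worst absolute value -- the "retain the worst noise" model
    worst = np.abs(rng.standard_normal((runs, n))).max(axis=1)
    print(n, round(worst.mean(), 2))

With a different seed the numbers wobble in the second decimal place, but the shape of the curve stays the same.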
If you want a quick heuristic, you can think of the noise growth as happening at a rate of log(N), but it's actually a bit slower than that, and much slower in a perceptual sense. Going from 1 frame to 2 adds 0.37 to the noise value, increasing the single-frame noise by almost 50%, while going from 128 frames to 256 adds only 0.22, less than a 10% increase at that point. So, the more frames you have to start with, the less you care about adding more.
For DMap, noise accumulation is not a significant issue because DMap never increases the noise level; it only reduces it at relatively few pixels, by weighted averaging of consecutive frames on transition boundaries. Noise level for DMap matters only to the extent that noise can get confused with real detail, which you can handle by adjusting the noise threshold.
I used to worry about frame count as an important factor in PMax noise accumulation. These days I do not, and instead I just emphasize getting enough frames to be sure of not losing any detail.
That issue of "losing detail" becomes especially important when the shooting setup has issues with vibration, because vibration can affect focus point just as much as lateral framing.
In the current stack, it's easy to see that the image is bouncing around by 5-6 pixels or so, side to side. EXIF reports this was shot with a Canon 5D Mark III, so that's about 40 microns on the sensor, which at 10X works out to about 4 microns at the subject. Assuming that the vibration affects all axes equally, the focus vibration is something like half of the nominal DOF. In this situation, using a small focus step is a good defense against focus banding caused by environmental vibration. Empirically, when I reprocess the stack with only half as many frames (skip factor 2), it becomes trivial to find places where detail is lost by using fewer frames.
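Just to make that arithmetic explicit, here's the back-of-envelope calculation in Python. The 6.25 micron pixel pitch is the published 5D Mark III spec (36 mm / 5760 pixels); the DOF itself depends on the lens, which I'm not assuming here.

Code:
pixel_pitch_um = 6.25        # Canon 5D Mark III: 36 mm / 5760 px ~= 6.25 um per pixel
magnification = 10.0         # shooting at 10X

lateral_px = 6                                   # observed bounce, roughly 5-6 pixels side to side
on_sensor_um = lateral_px * pixel_pitch_um       # ~37.5 um, call it ~40 um on the sensor
on_subject_um = on_sensor_um / magnification     # ~4 um of movement at the subject

# if vibration hits all axes about equally, expect a similar few microns of focus shift,
# to be compared against the nominal DOF of whatever lens was actually used
print(on_sensor_um, on_subject_um)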
mawyatt wrote:co-adding similar waveforms actually reduces the random noise (non-correlated) and enhances the waveform similarities (correlated)
That's true also, but it's not relevant to current stacking algorithms because nothing except Helicon's Method A does a significant amount of averaging across frames.
Of course you can average source frames outside of any stacking software.
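For example, if you shoot several identical exposures at each focus position, a few lines of Python will average each group before stacking. This is only a sketch with assumed file naming, group size, and 16-bit TIFF input, nothing specific to Zerene or Helicon; averaging k frames with uncorrelated noise reduces the noise standard deviation by about a factor of sqrt(k).

Code:
import glob, os
import numpy as np
import imageio.v3 as iio                    # any image I/O library would do

exposures_per_step = 4                      # assumed: 4 exposures shot at each focus position
files = sorted(glob.glob("stack/*.tif"))    # hypothetical input folder of 16-bit TIFFs
os.makedirs("averaged", exist_ok=True)

for i in range(0, len(files), exposures_per_step):
    group = files[i:i + exposures_per_step]
    # average in floating point to avoid clipping and rounding error, then convert back
    avg = np.mean([iio.imread(f).astype(np.float64) for f in group], axis=0)
    out = os.path.join("averaged", "step_%04d.tif" % (i // exposures_per_step))
    iio.imwrite(out, avg.astype(np.uint16))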
There's also an item someplace on Zerene Stacker's requested-features list to do averaging inside Zerene Stacker, for example to handle multiple exposures at each focus step, or to handle seriously over-sampled stacks such as might result from video input. Either approach could be expected to yield some effective improvement in resolution, certainly from pushing the noise down, and possibly from super-resolution (in combination with up-sampling).
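To illustrate what that last combination means, here's a rough sketch of the general up-sample / align / average idea. This is strictly my own illustration of the technique, with made-up file names and an assumption of grayscale frames; it is not how Zerene Stacker does or would implement the feature.

Code:
import numpy as np
import imageio.v3 as iio
from scipy.ndimage import shift, zoom
from skimage.registration import phase_cross_correlation

# hypothetical file names; assumes grayscale frames of the same scene (e.g. video stills)
frames = [iio.imread(f).astype(np.float64) for f in ("f1.tif", "f2.tif", "f3.tif")]

up = [zoom(f, 2, order=3) for f in frames]    # up-sample 2x so sub-pixel offsets become usable
ref = up[0]
aligned = [ref]
for f in up[1:]:
    offset, _, _ = phase_cross_correlation(ref, f, upsample_factor=10)  # sub-pixel registration
    aligned.append(shift(f, offset))

# averaging pushes random noise down by roughly sqrt(N); with genuine sub-pixel offsets
# between frames it can also recover some detail beyond the single-frame pixel grid
result = np.mean(aligned, axis=0)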
--Rik