Combining images to get better noise or resolution
Admin restructuring...
This topic began as a series of off-topic replies to very basic questions asked in another thread started by TsumbeGuy. Admin has split that thread, to clean up the forum structure and prevent continued confusion between a title of "Hello! + First post, questions" versus esoteric discussions of pixel values. Please see the earlier thread if further context is needed for the start of this one.
Back to original content...
Hi guys,
Maybe I can add some information: the technique of improving resolution by shifting the sensor has been well known in astrophotography for a long time. There it is called "dithering", which means that the sensor is shifted by a random pixel amount between shots.
Then an additional processing step is necessary, which is called "drizzle". This algorithm was developed for the Hubble Space Telescope to improve the weak resolution of the CCDs in its WFPC1 camera. (Please google for more information, because it is too much to explain here.)
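For anyone curious how the shift-and-add idea behind dither + drizzle can be tried, here is a minimal Python/numpy sketch. It only drops each dithered pixel onto its nearest cell of a finer grid and averages; real drizzle additionally shrinks the input "drops" and weights them by their overlap with the output pixels, which this toy version omits. All shapes and offsets here are assumed purely for illustration.

```python
import numpy as np

def shift_and_add(frames, offsets, upscale=2):
    """Toy shift-and-add onto a finer grid (a rough stand-in for drizzle).

    frames  : list of 2-D arrays of equal shape (the dithered exposures)
    offsets : list of (dy, dx) known sub-pixel shifts, in input-pixel units
    upscale : how much finer the output grid is than the input grid
    """
    h, w = frames[0].shape
    out = np.zeros((h * upscale, w * upscale))
    hits = np.zeros_like(out)

    for frame, (dy, dx) in zip(frames, offsets):
        # Fine-grid coordinates where each input pixel lands for this frame.
        yi = np.clip(np.round((np.arange(h) + dy) * upscale).astype(int), 0, h * upscale - 1)
        xi = np.clip(np.round((np.arange(w) + dx) * upscale).astype(int), 0, w * upscale - 1)
        yy, xx = np.meshgrid(yi, xi, indexing="ij")
        np.add.at(out, (yy, xx), frame)   # accumulate the pixel values
        np.add.at(hits, (yy, xx), 1)      # count how many values each cell received

    return out / np.maximum(hits, 1)      # average; untouched cells stay 0

# Example call: four frames dithered by half-pixel steps (frames here are just
# a placeholder scene; real frames would each be re-sampled at their offset).
rng = np.random.default_rng(0)
scene = rng.random((16, 16))
offsets = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)]
print(shift_and_add([scene] * 4, offsets).shape)   # (32, 32)
```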
A similar process is used in the Olympus cameras (and similar cameras) to improve image resolution. But this high-resolution mode is only usable up to about 10x magnification. When I use it with my 20x Mitutoyo, I see only Airy discs with diffraction patterns on all highlights. Maybe there is a way to use it at 20x with very diffuse light and an object with a homogeneous surface...
I have also ordered a 1x, 2x and 5x Mitutoyo, but I am still waiting for them to arrive...
BTW: stacking to improve noise is normally not necessary in photomicrography, because we usually have enough light. It becomes necessary when you work near the detection limit of the camera system, as in astrophotography. Then several noise sources affect the signal in the image and give a poor S/N. In photomicrography, shot noise is the dominating noise source, so all other noise sources are unimportant. And since shot noise arises from the random nature of light (Poisson statistics), the only way to improve it is to illuminate with more light or use longer exposure times. Increasing the ISO is counterproductive, because the noise is amplified along with the signal...
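To put rough numbers on the Poisson argument, here is a small numpy sketch (all photon counts are made up): the signal-to-noise ratio of a photon count grows as the square root of the count, and a pure gain applied after the photons are collected (a simplified stand-in for raising ISO, ignoring read noise) leaves that ratio unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
trials = 100_000

# More light / longer exposure: SNR of the photon count grows like sqrt(N).
for mean_photons in (100, 400, 1600):
    counts = rng.poisson(mean_photons, trials)
    print(f"{mean_photons:5d} photons: SNR ~ {counts.mean() / counts.std():5.1f} "
          f"(sqrt(N) = {np.sqrt(mean_photons):5.1f})")

# A pure gain after capture (an idealized "ISO increase") scales signal and
# shot noise together, so the SNR stays at ~10 for 100 photons.
amplified = 4.0 * rng.poisson(100, trials)
print(f"with 4x gain: SNR ~ {amplified.mean() / amplified.std():5.1f}")
```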
Cheers Markus
Admin edit [rjl, 2/10/2016]: split the thread as described above.
rjlittlefield (Site Admin):
etalon wrote: "Then an additional processing step is necessary, which is called 'drizzle'. This algorithm was developed for the Hubble Space Telescope to improve the weak resolution of the CCDs in its WFPC1 camera. ..."

Direct links: the algorithm is outlined in Wikipedia and described in detail in the paper abstracted at http://adsabs.harvard.edu/abs/2002PASP..114..144F, from which a PDF of the full paper can be obtained.
etalon wrote: "Stacking to improve noise is normally not necessary in photomicrography, because we usually have enough light."

To clarify... This is true assuming that the camera sensor can capture enough light to get the low noise that you want. Small-sensor cameras may not do this, in which case improvement can be made by averaging multiple exposures of the same scene. In the end the results are similar whichever way you capture the same number of photons.
skrylten provided a good example in 2014 and recently revisited the technique HERE.
--Rik
rjlittlefield wrote: "Small-sensor cameras may not do this, in which case improvement can be made by averaging multiple exposures of the same scene. In the end the results are similar whichever way you capture the same number of photons."

Well, small sensors normally have small pixels and therefore a small full-well capacity (FWC) per pixel. At a given photon flux, a smaller pixel also collects fewer photons per unit time, which can mean that shot noise is no longer the dominating noise source. Then other noise sources (e.g. read noise) must be taken into account, and these add noise with each picture you stack of the same scene...
Cheers Markus
Etalon, my tests seem to show that even with lots of light (full sun, fairly wide effective aperture), noise affects APS-C sensors, and simple averaging of exposures removes it. Note the sky in the test photos at:
http://www.photomacrography.net/forum/v ... highlight=
http://www.photomacrography.net/forum/v ... highlight=
Lou, I think that's not a fair comparison, because you have resampled the de-noised picture. With this, the value of one pixel gets distributed over several pixels by the algorithm. In that case you also lose the fixed-pattern noise from the Bayer matrix, etc. For a fair comparison you should stack all pictures without preprocessing. Then you will see whether the S/N increases.
What I would say is that, because of the limited FWC of small pixels, the dynamic range is not sufficient to collect enough photons in the dark areas (and thus be shot-noise limited) to make all other noise sources negligible, while avoiding blown-out highlights at the other end. In that case, read noise, for example, increases the overall noise in the dark areas when several frames are stacked. The statistical noise improvement through stacking only works properly when the single frames are shot-noise limited, even in the darkest areas. Otherwise a single exposure with pixels large enough for a proper FWC gives better noise reduction...
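As a rough illustration of the FWC argument (the electron counts below are assumed, not measurements of any real sensor): dynamic range scales with full-well capacity over read noise, and a shadow region is only "shot-noise limited" once its photon count clearly exceeds the square of the read noise.

```python
import math

# Two assumed pixel types; the numbers are illustrative only.
sensors = {
    "small pixel": {"fwc": 20_000, "read_noise": 3.0},   # full-well (e-), read noise (e- RMS)
    "large pixel": {"fwc": 90_000, "read_noise": 3.0},
}

for name, s in sensors.items():
    dr_stops = math.log2(s["fwc"] / s["read_noise"])
    # Roughly shot-noise limited once the signal N satisfies sqrt(N) >> read noise,
    # i.e. N well above read_noise**2 electrons.
    shadow_floor = 10 * s["read_noise"] ** 2
    print(f"{name}: dynamic range ~ {dr_stops:.1f} stops; "
          f"shadows need roughly > {shadow_floor:.0f} e- to be shot-noise limited")
```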
Cheers Markus
Markus, I don't understand your comment. The pictures in my test were not "de-noised" or otherwise processed prior to stacking, except for upsampling. You can see one of the pictures on the left in my post. The noise is easy to see in the sky. By averaging many such shots, the noise disappears, as seen in the right image.
Markus, the image I posted on the left is an original image, enlarged. I upsampled these using bilinear interpolation before stacking. I see now what you meant, and you are correct that this would reduce noise slightly, if noise is measured per pixel on the new upsampled image. Nevertheless the point is that the original image is noisy, even though there was plenty of light.
You said "read noise would increase the overall noise in the dark areas when several frames are stacked." But the individual images are not summed, they are averaged, and any constant read noise would have that same constant value in the averaged image.
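One way to see that distinction is a quick numpy sketch with invented numbers: a constant offset survives averaging unchanged (it would grow if the frames were summed), while the random scatter shrinks.

```python
import numpy as np

rng = np.random.default_rng(1)
signal, bias, sigma = 50.0, 5.0, 3.0      # assumed signal, fixed offset, random read noise (arbitrary units)
n_frames, n_pixels = 64, 10_000

frames = signal + bias + rng.normal(0.0, sigma, size=(n_frames, n_pixels))
avg = frames.mean(axis=0)                 # averaging, not summing

print(f"single frame: mean {frames[0].mean():.2f}, scatter {frames[0].std():.2f}")
print(f"64-frame avg: mean {avg.mean():.2f}, scatter {avg.std():.2f}")
# The random scatter drops by ~sqrt(64) = 8x, but the averaged image still sits
# at signal + 5: the constant part neither grows nor averages away, so it has to
# be removed by calibration if it matters.
```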
rjlittlefield (Site Admin):
At this point I'm no longer sure who is saying what. So let me take this from the beginning.
1. Averaging reduces all forms of random pixel noise, by a factor of roughly sqrt(N) for N frames.
2. Compared to the shot noise that is unavoidable when counting random photons, other sources of random noise ("read noise") may be relatively larger for smaller sensors than for larger ones. Both sources of noise will be reduced by averaging, but when you're done counting K photons with each sensor, the image from the smaller sensor will still be noisier than the image from the larger one. (That's why I wrote "similar" instead of "identical". It would be a matter for some experiment to see exactly how close "similar" is.)
3. If there is a "fixed noise" pattern, meaning that the expected value for pixel A is darker than pixel B when exposed to the same light level, then simple averaging will not reduce that noise, while averaging with random displacement will reduce it. However, I do not believe that this is a significant issue with most cameras. If it were, then the slightly darker pixels would act like fine dust on the sensor, which in turn would cause PMax to consistently produce a fine zoomburst pattern on every stack where scale changes with focus, and that doesn't happen. I have seen this defect, but only very rarely -- like less than 1 in 1000 support requests.
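A short simulation can illustrate points 1 and 3 (synthetic one-dimensional "frames" with invented noise levels, not data from any camera): plain averaging suppresses the random noise but not a fixed per-pixel pattern, while averaging frames taken with random shifts suppresses both.

```python
import numpy as np

rng = np.random.default_rng(2)
n_pix, n_frames = 256, 32
truth = np.full(n_pix, 100.0)                 # flat scene (arbitrary units)
pattern = rng.normal(0.0, 2.0, n_pix)         # fixed per-pixel sensitivity error (point 3)

def expose(shift=0):
    # One frame: the scene shifted by 'shift' pixels across the sensor, plus the
    # fixed pattern and fresh random noise (sigma = 4) for every exposure.
    return np.roll(truth, shift) + pattern + rng.normal(0.0, 4.0, n_pix)

plain = np.mean([expose(0) for _ in range(n_frames)], axis=0)
dithered = np.mean([np.roll(expose(s), -s)    # register the scene before averaging
                    for s in rng.integers(0, 8, n_frames)], axis=0)

print("single frame rms error :", np.std(expose(0) - truth))   # ~sqrt(2**2 + 4**2)
print("plain average rms error:", np.std(plain - truth))       # ~2: random part gone, pattern remains
print("dithered avg rms error :", np.std(dithered - truth))    # smaller still: pattern also averaged down
```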
--Rik
rjlittlefield (Site Admin):
Lou Jost wrote: "Rik, isn't 'read noise' more or less a constant from shot to shot and from pixel to pixel? If so, then averaging will leave it unchanged."

The distribution is constant, but each pixel in each frame is an independent sample from that distribution.
Quoting from http://qsimaging.com/ccd_noise.html:
CCD Read Noise (On-chip)
There are several on-chip sources of noise that can affect a CCD. CCD manufacturers typically combine all of the on-chip noise sources and express this noise as a number of electrons RMS (e.g. 15e- RMS).
• CCD Read Noise is a fundamental trait of CCDs, from the one in an inexpensive webcam to the CCDs in the Hubble Space Telescope. CCD read noise ultimately limits a camera's signal to noise ratio, but as long as this noise exhibits a Gaussian, or normal distribution, its influence on your images can be reduced by combining frames and standard image processing techniques.

Compare against:

Pixel Non-Uniformity
Today's CCDs are made to exacting standards, but they still are not perfect. Each pixel has a slightly different sensitivity to light, typically within 1% to 2% of the average signal.
• Pixel non-uniformity can be reduced by calibrating an image with a flat-field image. Flat field images are also used to eliminate the effects of vignetting, dust motes and other optical variations in the imaging system.

--Rik
I think that if the noise at a given pixel has a Gaussian distribution with mean m and variance v, averaging a bunch of photos will never get rid of it. It will appear as a signal with strength m and it will have to be subtracted out after averaging. This is probably included in the "standard image processing techniques" alluded to in your quote.
Also, many noise components can't have a Gaussian distribution if the mean is close to zero, because that distribution is symmetric about the mean, and if the mean is close to zero, the distribution would have a significant tail in the negative quadrant. There can never be negative numbers of electrons sent (well, actually there could be back-current, but I don't think that happens in photographic systems), so a real noise distribution will be asymmetric near the origin, and its expected value will be positive. In an extreme case like a Poisson distribution the expected value will be equal to the variance. This will have to be subtracted out of the averaged pictures, it won't go away by averaging, no matter how many frames are averaged.
That's probably one reason why astronomers always take many kinds of "dark frames" to use when correcting their images.
rjlittlefield wrote: "At this point I'm no longer sure who is saying what. So let me take this from the beginning. 1. Averaging reduces all forms of random pixel noise, by a factor of roughly sqrt(N) for N frames. ..."

Yes, I agree completely.
But because read noise affects every single acquisition, it increases the overall noise in each acquisition unless the shot noise is large enough to make the read noise negligible (a shot-noise-limited acquisition). Then you need to stack more acquisitions to get the same S/N as a single acquisition of comparable total exposure time (assuming enough FWC), or as a stack made only of shot-noise-limited acquisitions.
But as I wrote in my first answer, that is normally not a problem in photomicrography. It becomes important in low-light-level applications...
Lou Jost wrote: "isn't 'read noise' more or less a constant from shot to shot and from pixel to pixel?"

Lou, noise (no matter which kind) is never a constant. That is the reason why the S/N can be improved by stacking or by long exposure times. The signal increases linearly, because it acts like a constant (apart from its noise component). All randomly distributed noise components grow only with sqrt(n) of the observed events (photons, acquisitions, etc.). So the fluctuations of the background smooth out with better statistics, while the signal increases linearly.
Lou Jost wrote: "Also, many noise components can't have a Gaussian distribution if the mean is close to zero, because that distribution is symmetric about the mean ... In an extreme case like a Poisson distribution the expected value will be equal to the variance."

That's absolutely correct. The Gaussian distribution is only an approximation, used to avoid calculating with asymmetric distributions. But it only works when enough events are observed; only then does the Poisson distribution take on the same shape as a Gaussian distribution, and the approximation works...
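A quick numerical check of that point (sample sizes and means chosen arbitrarily): at a small mean the Poisson distribution is visibly skewed and never goes negative, while at a large mean it becomes nearly symmetric, so the Gaussian approximation with sigma = sqrt(mean) is adequate.

```python
import numpy as np

rng = np.random.default_rng(3)
for lam in (2, 10, 1000):                                # mean number of detected events
    x = rng.poisson(lam, 200_000)
    skew = ((x - x.mean()) ** 3).mean() / x.std() ** 3   # ~0 for a symmetric (Gaussian) shape
    print(f"lambda={lam:4d}: mean={x.mean():7.1f} var={x.var():7.1f} "
          f"min={x.min()} skewness={skew:+.3f}")
# At lambda=2 the distribution is clearly skewed (skewness ~ 1/sqrt(2)) and never
# negative; by lambda=1000 it is almost symmetric, so the Gaussian approximation
# with sigma = sqrt(lambda) works well.
```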
Well, that was enough off-topic. Sorry for hijacking the thread...
Cheers Markus
etalon wrote: "Lou, noise (no matter which kind) is never a constant."

Yes, I guess I am still learning the terminology in optics. Looking at astrophotography noise pages, I guess you would describe this more accurately as a "bias" with a "noise" component.
But here is my problem: whatever the noise distribution, its mean value is some fixed nonzero value X, and the mean signal is some other fixed value Y, so no matter how many frames you average, you end up with X+Y and you have to subtract the X. Right?
Lou Jost wrote: "That's probably one reason why astronomers always take many kinds of 'dark frames' to use when correcting their images."

No, that's not the reason. All calibration frames (darks, flats, bias) are needed to extract the pure signal from the object. For example, darks are necessary to subtract the dark current, which arises when thermally generated electrons are collected in the pixel wells. To reduce the dark current, scientific-grade CCD cameras have deeply cooled sensors (for silicon, the dark current is roughly halved for every 7 Kelvin drop in temperature). But like the photoelectrons, the dark current is also randomly distributed; that is called "thermal noise".
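To make the cooling rule concrete, here is a tiny sketch using an assumed (illustrative) reference dark-current rate; only the "halves every 7 K" scaling is taken from the statement above.

```python
# Toy dark-current scaling based on the "halves every 7 K" rule; the reference
# rate and temperature are assumed values, not data for any particular sensor.
def dark_current(temp_c, ref_rate=1.0, ref_temp_c=20.0, halving_k=7.0):
    """Dark current in e-/pixel/s at temp_c, relative to ref_rate at ref_temp_c."""
    return ref_rate * 2.0 ** ((temp_c - ref_temp_c) / halving_k)

for t in (20, 0, -20, -40):
    print(f"{t:+4d} C: {dark_current(t):9.5f} e-/pixel/s")
# Cooling from +20 C to -20 C reduces the dark current by 2**(40/7), roughly 52x.
```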
However, with each image-calibration step you also increase the overall noise! It's always a trade-off...
Cheers Markus