Can deconvolution counter diffraction softening?
Moderators: Chris S., Pau, Beatsy, rjlittlefield, ChrisR
I gave a talk about extreme macro last night at a camera club in Kingston, and one person made a point of raising deconvolution software as something that could overcome diffraction softening.
I've come across deconvolution software myself before and used it to restore some focus to out-of-focus pictures, but I wasn't aware that it was possible to overcome what I understand to be a law of physics, namely diffraction softening.
Would anyone be able to shed some light on this question - aye or nay - can deconvolution counter diffraction softening?
Thank you,
-Johan
My extreme-macro.co.uk site, a learning site. Your comments and input there would be gratefully appreciated.
I trawled the web looking for an answer to this some months ago. The short answer I came up with was "In theory, yes. In practice, no".
The problem is that for deconvolution to work effectively it needs to be supplied with an accurate point spread function. Blind deconvolution (no PSF) doesn't cut it because a good general purpose sharpening algorithm will do pretty much as well, in less time.
Having said that, blind deconvolution is a very good sharpening algorithm in its own right. Try RawTherapee (free). It has a version of the Richardson-Lucy deconvolution algorithm that was used on Hubble images until the optics were upgraded. It does a very good job of sharpening up that last bit of softening with few artefacts.
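If you want to experiment with the same idea outside RawTherapee, here is a minimal sketch of Richardson-Lucy deconvolution using scikit-image. To be clear, this is not RawTherapee's implementation: the Gaussian PSF is a crude stand-in for the true Airy pattern (which is exactly why a guessed PSF limits the result), and the file names, kernel size, and sigma are all hypothetical.

```python
# A minimal Richardson-Lucy sketch with an assumed Gaussian PSF.
# Not RawTherapee's implementation; all parameters are guesses.
import numpy as np
from skimage import io, restoration

def gaussian_psf(size=9, sigma=1.5):
    """Gaussian stand-in for the diffraction PSF (the real one is an Airy pattern)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

image = io.imread("stacked_macro.tif", as_gray=True)  # hypothetical input file
sharp = restoration.richardson_lucy(image, gaussian_psf(), num_iter=30)
io.imsave("deconvolved.tif", (np.clip(sharp, 0, 1) * 65535).astype(np.uint16))
```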
Edit: Here's a small example I did. The left pic was as good as I could get using Topaz Detail 3 (the "overall detail light 2" setting). The right pic was further processed with RawTherapee RL deconvolution.

- rjlittlefield
- Site Admin
- Posts: 24319
- Joined: Tue Aug 01, 2006 8:34 am
- Location: Richland, Washington State, USA
- Contact:
Johan, see the discussion here: http://www.photomacrography.net/forum/v ... 198#203198 .
Here's what I got by just adding an ordinary Photoshop USM, 200% at 0.7 pixels, on top of Beatsy's "Original".

The key is to recognize that diffraction cuts the high frequencies by a lot. If you want to restore those, you have to increase them by a lot also. Gentle treatment is not appropriate for this use.
Perhaps the biggest advantage of deconvolution is that people are willing to just let it do its thing without trying to second-guess it, while applying USM at 200% would seem to be a recipe for disaster.
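For anyone who wants to reproduce those settings outside Photoshop, here is a small sketch using Pillow, whose UnsharpMask filter takes Photoshop-style parameters (radius in pixels, percent for amount, threshold in levels). The file names are placeholders.

```python
# USM at 200%, 0.7 px radius, zero threshold, via Pillow's UnsharpMask.
from PIL import Image, ImageFilter

img = Image.open("original.png")  # placeholder file name
usm = img.filter(ImageFilter.UnsharpMask(radius=0.7, percent=200, threshold=0))
usm.save("usm_200pct_0p7px.png")
```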
--Rik
Lou Jost wrote: That's a huge improvement!

I agree. But the comparison is not exactly a great sales pitch for Topaz, or at least for this particular use of it.

--Rik
- enricosavazzi
- Posts: 1562
- Joined: Sat Nov 21, 2009 2:41 pm
- Location: Västerås, Sweden
- Contact:
I think we should qualify what we mean by "improvement". The information lost through diffraction is truly lost, not just "hidden", or "encoded" in the image and recoverable by some mathematical processing (some other types of blurring, like defocusing, behave differently and part of the information is still "present but hidden" and recoverable).
What sharpening algorithms do on a diffraction-blurred image, including deconvolution, is to create the visual impression of a sharper image by making assumptions about the geometry of the subject. In astronomy, for example, deconvolution as ordinarily applied assumes (quite reasonably) that stars are point sources of light, and processes the image to "improve" it in this particular direction. In photomacrography, "improving" an image usually means rendering a surface as "solid" (i.e. a sharp border between air/water and the subject) rather than "fuzzy". This is also a reasonable assumption in many cases, but like all assumptions made when some information has been truly lost, and especially in diffraction-affected images, it should be expected to create detail (or a lack thereof) that is not actually present in the subject.
We therefore may obtain an "improved" image that is more pleasing to the eye and more natural-looking, and at the same time less faithful to the original subject (which is far from an improvement if the goal is accurate documentation).
--ES
- austrokiwi1
- Posts: 350
- Joined: Sun Sep 14, 2014 10:53 am
enricosavazzi wrote: ...The information lost through diffraction is truly lost, not just "hidden", or "encoded" in the image and recoverable by some mathematical processing (some other types of blurring, like defocusing, behave differently and part of the information is still "present but hidden" and recoverable)

It all depends on how much information is lost. Diffraction is, as I understand it, not an on-or-off phenomenon. I assume that once we go beyond the diffraction-limited aperture (DLA), the amount of information destruction depends on how far we move past it: if the DLA is f/8, then at f/9 the information loss is much less than at f/12. When I move into diffraction territory (at low magnifications) I try to balance DOF against diffraction; in other words, I use the largest aperture I can to reduce the information loss while maintaining the DOF/magnification I am using (see the back-of-envelope sketch below).

Deconvolution works best with lenses that have a known profile. In Phase One's Capture One Pro I appear to get better deconvolution results with lenses that transfer the lens profile with the picture. With lenses that don't transfer that information (all of my adapted lenses) I sometimes create an LCC (lens cast calibration) profile. This doesn't have anything to do with diffraction, but it provides information to the software that allows it to compensate for the specific lens cast of the optic I am using. It seems to assist at times.
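To put rough numbers on the "not an on-or-off phenomenon" point, here is a back-of-envelope sketch. The 4.5 µm pixel pitch is a hypothetical example, and comparing the Airy diameter against roughly two pixels is only one of several DLA criteria in circulation.

```python
# Airy disk diameter grows linearly with effective f-number: d = 2.44 * lambda * N.
wavelength_um = 0.55    # green light
pixel_pitch_um = 4.5    # hypothetical sensor
for N in (5.6, 8, 9, 11, 12, 16):
    airy_um = 2.44 * wavelength_um * N
    print(f"f/{N}: Airy diameter = {airy_um:5.1f} um = {airy_um / pixel_pitch_um:.1f} px")
```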
Last edited by austrokiwi1 on Wed Mar 29, 2017 8:17 am, edited 2 times in total.
Still learning,
Cameras: Sony A7R II, Olympus OM-D E-M10 II
Macro lenses: Printing Nikkor 105mm, Sony FE 90mm f/2.8 Macro G, Schneider Kreuznach Makro Iris 50mm f/2.8, Schneider Kreuznach APO Componon HM 40mm f/2.8, Mamiya 645 120mm f/4 Macro (used with Mirex tilt-shift adapter), Olympus 135mm f/4.5 bellows lens, Olympus 80mm bellows lens, Olympus 60mm f/2.8
Very interesting answers, thank you gents. Thought provoking.
My extreme-macro.co.uk site, a learning site. Your comments and input there would be gratefully appreciated.
- enricosavazzi
- Posts: 1562
- Joined: Sat Nov 21, 2009 2:41 pm
- Location: Västerås, Sweden
- Contact:
austrokiwi1 wrote: It all depends on how much information is lost. Diffraction is, as I understand it, not an on-or-off phenomenon. [...]

Absolutely correct as regards the amount of diffraction gradually increasing with effective aperture. We notice the loss of resolution from diffraction increasing rather sharply beyond a certain point, when the diffraction fuzziness exceeds both the circle of confusion and the size of the pixels, but it is present before that point too, merely masked by the CoC and/or the limits on resolution imposed by the pixel nature of the sensor, plus the limitations of demosaicing and anti-aliasing features.
I would guess that the lens profile allows deconvolution or other post-processing to correct some lens-specific aberrations other than diffraction. This would explain the better results.
The main reason why diffraction fuzziness cannot be truly reversed is that diffraction is a quantum phenomenon with an indeterminacy component, unlike the CoC (which obeys the deterministic rules of classical geometric optics).
--ES
- rjlittlefield
- Site Admin
- Posts: 24319
- Joined: Tue Aug 01, 2006 8:34 am
- Location: Richland, Washington State, USA
- Contact:
enricosavazzi wrote: I think we should qualify what we mean by "improvement".

I agree. And as described in the other thread that I linked to (HERE), what I mean is to end up with a system MTF that stays near 1 until fairly near the cutoff frequency imposed by diffraction. Something like this diagram from that other thread:

enricosavazzi wrote: The information lost through diffraction is truly lost, not just "hidden", or "encoded" in the image and recoverable by some mathematical processing (some other types of blurring, like defocusing, behave differently and part of the information is still "present but hidden" and recoverable).

Here again we need to be careful. Information beyond the cutoff frequency is truly lost, but information below that cutoff frequency is still present in the image, just diminished in magnitude and spread across a larger area.
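To put a number on that cutoff: for incoherent imaging it sits at 1/(lambda * N). A trivial sketch, assuming green light:

```python
# No contrast survives above nu_c = 1 / (lambda * N); below it, detail is
# attenuated but present, which is what deconvolution tries to undo.
wavelength_mm = 0.00055  # 550 nm, green light
for N in (4, 8, 16):
    print(f"f/{N}: cutoff ~ {1.0 / (wavelength_mm * N):.0f} cycles/mm")
```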
enricosavazzi wrote: What sharpening algorithms do on a diffraction-blurred image, including deconvolution, is to create the visual impression of a sharper image by making assumptions about the geometry of the subject. [...]

I disagree with the part about "making assumptions about the geometry".
If deconvolution is based on a known or assumed PSF, then nothing about the geometry needs to be assumed. You just run the deconvolution, and whatever pops out the back side is a prediction of what the geometry actually was. That prediction is more or less accurate depending on the accuracy of the PSF model, random noise in the image acquisition procedure, and possibly approximations in the numerical method, but it does not need to assume anything about the geometry.
If the PSF is not known or assumed, but instead is an output of the calculation, then I do agree that assumptions about subject geometry have to be made. Of course this is true with any sort of "sharpening to taste", to use my own favorite description of what I usually do.
enricosavazzi wrote: The main reason why diffraction fuzziness cannot be truly reversed is that diffraction is a quantum phenomenon with an indeterminacy component.

I can't match up that statement with any physics that I know, except that photons arrive randomly so you always have to deal with shot noise. Can you explain in more detail?
--Rik
- enricosavazzi
- Posts: 1562
- Joined: Sat Nov 21, 2009 2:41 pm
- Location: Västerås, Sweden
- Contact:
Rik,
at least in principle, with geometric optics you can ray-trace back from an image point through a refractor/reflector system to the corresponding source point. With diffracted light, you cannot.
Then there are practical problems that substantially reduce the success of deconvolution. One is sensor noise, as you mention. An example of another problem is that the point spread function depends on wavelength. With a Bayer sensor you can only tell that the wavelength falls into one of three (rather wide and partly overlapping) color channels, so there is no precise wavelength to plug into the point spread function, only an estimate that may be off by maybe 30%.
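As a rough illustration of that wavelength dependence: the Airy radius 1.22 * lambda * N differs by about 30% between plausible blue and red channel centers. The wavelengths below are assumptions, not measured filter responses.

```python
# Airy radius (to first minimum) per assumed Bayer channel center, at f/8.
N = 8  # effective f-number, chosen arbitrarily
for name, wl_um in (("blue", 0.46), ("green", 0.53), ("red", 0.60)):
    print(f"{name:5s}: Airy radius ~ {1.22 * wl_um * N:.1f} um")
```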
--ES
-
- Posts: 91
- Joined: Fri Dec 30, 2016 1:59 pm
- Location: Lake Forest, IL, USA
Pau wrote: Comparing the images posted side by side by Rik, I still find the RL deconvolution version better than the USM: clearer detail and less chromatic aberration. Detail fake or not, it seems an excellent algorithm.

Roger Clark, an astrophysicist with extensive digital image processing experience, has published a three-part treatise on image restoration and sharpening in which he discusses Richardson-Lucy deconvolution and the unsharp mask (deconvolution methods are often described as image restoration rather than sharpening, since the former can actually restore image detail). His methods are more complex than many would likely apply, but the articles are well worth reading.
Bill Janes
-
- Posts: 728
- Joined: Thu Dec 16, 2010 2:49 pm
- Location: Barcelona, more or less
I always use deconvolution on my astronomical images. I have tried deconvolution on micro images and, except for reducing noise, I have not achieved much better results than with a high-pass filter or a focus mask.

But in astronomical images the improvement is very important, and for planets it is almost essential to use wavelets or deconvolution. I can show a before-and-after of a sunspot taken this week, where the difference is beyond any doubt.
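For the curious, here is a toy version of multi-scale wavelet sharpening in the RegiStax spirit, using PyWavelets. The wavelet choice, level count, and per-scale gains are arbitrary guesses, and the file names are placeholders; real planetary tools use more refined schemes.

```python
# Toy wavelet sharpening: boost fine detail scales, keep the coarse base.
import numpy as np
import pywt
from skimage import io

img = io.imread("sunspot.png", as_gray=True)   # placeholder file
coeffs = pywt.wavedec2(img, "db2", level=3)    # [coarse, d3, d2, d1]
gains = (1.2, 1.6, 2.2)                        # coarse -> fine detail boosts
boosted = [coeffs[0]] + [
    tuple(g * band for band in detail)
    for g, detail in zip(gains, coeffs[1:])
]
sharp = pywt.waverec2(boosted, "db2")
io.imsave("sunspot_sharp.png", (np.clip(sharp, 0, 1) * 255).astype(np.uint8))
```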
