mawyatt wrote: What first struck me is that I believe the 1st equation 7.1 is incorrect, the Fourier transform equation has omitted the time domain function f(t) within the integral!!

You're right. I noticed that too, but then forgot about it, so I didn't mention it.
Sample density needed to resolve Rayleigh features
Moderators: Pau, rjlittlefield, ChrisR, Chris S.
wpl
Disclaimer: I'm not a signal processing expert, nor a math expert.
Just another quick look at the beginning of this: equation 7.2 seems like nothing more than the discrete inverse Fourier transform of the function f(nΔ), so I'm not sure why this is attributed to Shannon. Which seems strange, because the (incorrect) reference Fourier transform 7.1 is in continuous integral form, and the text then jumps to the discrete inverse summation form rather than the continuous integral form!
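For what it's worth, the summation form being discussed is the Whittaker-Shannon interpolation formula, and it can be checked numerically. Here is a minimal numpy sketch (the test signal, sample spacing, and all variable names are my own, not from the article): a bandlimited signal sampled faster than twice its bandwidth is recovered by sinc interpolation, up to truncation of the infinite sum.

```python
import numpy as np

def sinc_reconstruct(samples, delta, t):
    """Whittaker-Shannon reconstruction:
    f(t) = sum_n f(n*delta) * sinc((t - n*delta) / delta).
    np.sinc is the normalized sinc, sin(pi x)/(pi x)."""
    n = np.arange(len(samples))
    return np.sum(samples[:, None] * np.sinc((t[None, :] - n[:, None] * delta) / delta), axis=0)

# Bandlimited test signal: tones at 3 Hz and 7 Hz, so bandwidth W = 10 Hz is plenty
W = 10.0
f = lambda t: np.sin(2 * np.pi * 3 * t) + 0.5 * np.cos(2 * np.pi * 7 * t)

delta = 1.0 / (2.5 * W)        # sample a bit faster than the Nyquist rate 2W
n = np.arange(1000)
samples = f(n * delta)         # samples cover 0 .. 40 s

# Evaluate well inside the sampled interval, away from truncation edges
t = np.linspace(10, 30, 200)
err = np.max(np.abs(sinc_reconstruct(samples, delta, t) - f(t)))
print(err)   # small; limited only by truncating the infinite sum
```

The residual error here comes purely from using a finite number of terms; with an infinite sample train the reconstruction would be exact for a truly bandlimited signal.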
The quote "In practice, many response functions are not bandlimited (example: a Gaussian line profile). It is common practice in those cases to neglect the Fourier power for frequencies above W, and to reconstruct f(t) from this cutoff Fourier transform. The arguing is that for sufficiently large W, the neglected power in the tail of the Fourier spectrum is small and hence (7.2) gives a fair approximation to the true signal." doesn't seem to agree with my experience.
Generally, in practice we never just ignored frequencies above W. Antialiasing filters were designed to bandlimit signals before analog-to-digital conversion, such that the resulting aliased energy did not corrupt the converted signal beyond a specified amount, usually a least significant bit (LSB) or no more than the equivalent noise power.
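The aliasing those filters prevent is easy to demonstrate. A toy numpy sketch (my own example, not from the article or any post here): once sampled at 100 Hz, a 70 Hz tone is bit-for-bit indistinguishable from a 30 Hz tone, so any energy left above Nyquist lands irreversibly on top of the in-band signal.

```python
import numpy as np

fs = 100.0                      # sample rate in Hz; Nyquist frequency = 50 Hz
t = np.arange(0, 1, 1 / fs)    # one second of sample instants

# A 70 Hz tone is above Nyquist. After sampling, it folds down to
# fs - 70 = 30 Hz (with inverted phase): sin(2*pi*70*n/100) = sin(-2*pi*30*n/100).
x_true = np.sin(2 * np.pi * 70 * t)
x_alias = np.sin(2 * np.pi * -30 * t)

diff = np.max(np.abs(x_true - x_alias))
print(diff)   # essentially zero: the two sample sequences are identical
```

This is why the energy above W must be attenuated below the LSB or noise floor *before* conversion; no amount of post-processing can separate the two tones afterward.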
Since continuous-time "brickwall" filters require an infinite number of components, some reasonable assessment of the required stop-band rejection and transition band, and of how these affect the system, was in order. In many systems these filters were critical to overall performance, and much effort and expertise went into their design.
Generally, to "neglect the Fourier power for frequencies above W" doesn't seem like a good idea unless one knows that those energy levels are below the level that causes issues for the system, for example an LSB or the noise level.
Just getting past the first few paragraphs and equations seems difficult, and that raises some questions about the validity of the rest of the article, which I haven't studied in detail yet.
Can you provide the source reference for this article? Was this a PhD dissertation?
Curious on your thoughts and others on this.
Best,
Research is like a treasure hunt, you don't know where to look or what you'll find!
~Mike
mawyatt,
I don't know enough about this topic to comment. I found this article by doing a google search on "Nyquist binning" I think. By following the prev and up links I found that this is related to a manual on SPEX, a code for spectral Xray analysis. The author appears to be Jelle de Plaa. This is not his thesis (I looked at that).

 Posts: 108
 Joined: Sat Jan 28, 2017 11:19 pm
wpl wrote: To tie up the loose end concerning sampling vs. binning, I found a relevant article: http://personal.sron.nl/~jellep/spex/ma ... lse95.html. The sampling theorem does apply to binning also. If the original image is bandwidth limited, then the original image can be reconstructed from the bin information provided the bin frequency is greater than twice the bandwidth.

One observation: please take note that the visible spectrum spans over 340 THz. For many years ahead of us, it will be impossible to sample a scene at twice that frequency.
But we don't have to do that. Our eyes are far from perfect, and our brains are trained to deal with only a fraction of that spectrum. Our eyes can distinguish only about 10 million frequencies (colors), and our brains are OK with that.
This means that our biological sensors apply a strong information filter, through which a massive amount of information is lost. That's why we don't have to transmit and reconstruct a perfectly lossless message. We only have to transmit the amount of information our eyes and brains are able to cope with.
rjlittlefield wrote: If we can make similar improvements in understanding sampling and image reconstruction, that would be superb!

I don't see why it wouldn't be possible. However, the depth of field example mentioned gives me a question and a corollary which I think are relevant. As context: the only FAQ entry whose title mentions depth of field was last updated in 2009, the top-of-page FAQ link doesn't point to the FAQ forum, the long links list has 26 or so depth of field entries, and the few of those I just sampled seem to route to Zerene Systems. Does a concise and up-to-date statement of the forum's DoF understanding exist? And, if so, how does one discover its location?
As a related example, one of the discussions which just happened in this thread suggests neither of the participants is aware of point spread functions. In my experience, it's unlikely someone will read other threads, look up material in them (such as the 2017 mentions of PSFs linked earlier), and synthesize it before posting. The most common methods of addressing this pattern, higher education systems and journal publication, bear little resemblance to a forum. So I'm a bit curious as to the mechanisms which would improve sampling and reconstruction understanding.
 rjlittlefield
 Site Admin
 Posts: 21400
 Joined: Tue Aug 01, 2006 8:34 am
 Location: Richland, Washington State, USA
 Contact:
palea wrote: I'm a bit curious as to the mechanisms which would improve sampling and reconstruction understanding.

I have thought much about your brief post over the last several days. You make good observations and ask profound questions.
I think my answer must start by saying that I consider "the forum" to be a group of people who carry on discussions.
When I say that the forum now has a good understanding of DOF, what I mean is that any question about DOF is very likely to result in a response that is on point, correct, properly qualified, substantiated by theory and experiment, and accepted by other participants in the forum.
So, the focus is on the back-and-forth in discussions.
Of course the database provides a permanent record of the discussions.
But that record is usually fragmented and rambling, with many missteps and diversions.
You ask "Does a concise and up to date statement of the forum's DoF understanding exist?"
My answer is a clear-cut "No".
Nothing about the forum really encourages anyone to spend the large effort needed to produce self-contained explanations like one would hope to find in a textbook. There's probably no one here who is qualified to write one for DOF, with the possible exception of myself, and every time that I've started to tackle it, I have found the task so difficult that I simply moved on to other tasks that had higher expected value.
Specific questions about DOF are pretty simple to answer. "Help me to understand DOF" is harder. "Point me to something that will teach me all about DOF" is still impossible as far as I know. If you have a reference that will do that job, I would surely like to know it.
So now, to try addressing your question about mechanisms, I think the answer must be "examples and discussions".
A better answer is "probably a lot of discussions, over a long period of time".
In the case of DOF, I first studied the math in the early 1970's when I tried to understand H. Lou Gibson's "Magnification and Depth of Detail in Photomacrography" well enough to make graphs for myself that covered situations not well represented in his paper. At the time, I did not understand that his blur model was based on incorrect physics, and I did not pick up on the fact that he had accidentally confused the symbols "m" and "M" at one point, so his algebra was not correct either. Nonetheless, the calculated values were good enough to survive experimental testing, and I used them for many years.
Within the forum, I started writing about DOF and diffraction about 15 years ago. It quickly became apparent that people had a wide range of opinions about things like how DOF varied with sensor size and little if any idea of how to account for diffraction. Some of the discussions became contentious and were resolved only with difficulty. However, some of the difficult discussions were also very helpful, at least to me, because they helped to improve my own understanding of both the physics and how to think and talk about the problem.
At this point, I'm pretty sure I have a good understanding of DOF, both what matters and what does not. I've incorporated the highlights of that understanding into the DOF pages at zerenesystems.com, which is probably why so many links go there. At the end of that page is the note to "send email ... if you want more information." Hardly anybody sends email about that, so I assume nobody cares much about the "why", just the "what". But within the forum, I think we tend to have more "why" people, so if I were to make another try at writing a self-contained explanation, it would be a lot longer, with many diagrams, and would take a lot of time to write.
Mind you, all that understanding has come at the cost of many hundreds of hours of reading and writing and coding and shooting. Sometimes I wonder if it's been worth the trouble, though I always end up concluding "yeah, it has".
I will be very surprised if image reconstruction does not turn out to be at least as complex as DOF. So I do not expect a good understanding to be quick or easy.
Still, you can't finish what you never begin.
So, how to begin improving the forum's understanding of image reconstruction?
Well, you might begin by convincing people that it's worth their trouble to learn.
At https://www.photomacrography.net/forum/ ... hp?t=33724 , Beatsy presented a nice example where RAWTherapee did a much better job than Topaz. But I for one did not find that example very convincing, because a quick application of Photoshop USM produced almost the same result. Soldevilla chimed in that he regularly uses deconvolutions for his astronomical images, but has not been able to do much better than conventional sharpening for micro images. On page 2 of that thread, TheLostVertex presented nice examples where deconvolution in Focus Magic and Piccure+ did better than ordinary sharpening (in Lightroom?). But these are all "black boxes", and Piccure+ is not even available for long term use (30 day "trial" but cannot be purchased). None of this does much to encourage understanding of what goes on "under the covers".
So...
It would be great if somebody (you?) could work up a convincing example using open-source software, and describe it in enough detail that other people could begin to explore it on their own, with a minimal amount of thrashing around.
Then as questions arose, and they surely would, you could answer them in some style targeted to communicate well to the other person, as best you understand their background.
A couple of rounds of that, well executed, could make a world of difference!
Rik
What's bothering me is that in this thread "reconstruction" is used in two unrelated senses. This thread started by discussing reconstruction of the image from Nyquist samples. This somehow evolved into discussions of reconstruction of blurry images using deconvolution or whatever. The first topic regards signal analysis and the second one wave optics. These are both worthwhile topics but I wish they could be separated more clearly. And how does depth of field fit into all this?
rjlittlefield
wpl wrote: What's bothering me is that in this thread "reconstruction" is used in two unrelated senses. This thread started by discussing reconstruction of the image from Nyquist samples. This somehow evolved into discussions of reconstruction of blurry images using deconvolution or whatever. The first topic regards signal analysis and the second one wave optics. These are both worthwhile topics but I wish they could be separated more clearly.

In my mind these two uses of "reconstruction" are quite closely related. In both cases we wish to recover some original signal that has been altered and sampled. Whether we call the alteration a point spread function or an impulse response makes little difference to the math necessary to do the recovery. Following the point that you raised back on page 1, pixel values have been binned as well as sampled. If we want to recover the optical image that was presented to the sensor, we have to include both aspects in the reconstruction. I suspect that if the signal is noisy (and it always is), then we can get a better reconstruction by considering both aspects simultaneously. But I could be wrong about that, since my understanding in this field is not very strong.
wpl wrote: And how does depth of field fit into all this?

In one sense DOF is just an example of a topic in which the forum's understanding has greatly improved over time. Study the conversation between me and palea for more about that.
In another sense, focus stacking can be viewed as a particularly complicated sort of 3D deconvolution problem. We are trying to figure out what the camera would have seen, if only it were a pinhole not subject to diffraction, given a set of samples that have been degraded by the 3D PSF of the optics combined with occlusion and other complicating factors such as lateral "squirming" of features caused by the interaction of off-axis reflections and lens movement. In a perfect world, focus stacking would mean inferring the 3D structure of the scene being photographed, then generating a synthetic image from that perfect 3D model. I expect that will eventually happen, but I have no great hope of living long enough to see it.
Rik
rjlittlefield wrote: In my mind these two uses of "reconstruction" are quite closely related. In both cases we wish to recover some original signal that has been altered and sampled. Whether we call the alteration a point spread function or an impulse response makes little difference to the math necessary to do the recovery. Following the point that you raised back on page 1, pixel values have been binned as well as sampled. If we want to recover the optical image that was presented to the sensor, we have to include both aspects in the reconstruction. I suspect that if the signal is noisy (and it always is), then we can get a better reconstruction by considering both aspects simultaneously. But I could be wrong about that, since my understanding in this field is not very strong.

I'm afraid I don't follow your argument. The sensor is presented with a blurry image. The Nyquist business can reconstruct the blurry image. Now that we know what the blurry image is, we want to generate a sharp synthetic image (to use your terminology) from that. That to me sounds like a totally orthogonal process. If we can combine those two, that would be interesting. BTW, I agree that noise is a problem. I know that, in other contexts, sophisticated numerical algorithms are useless in practice since they need accurate data, and noise kills that.
I don't really want to start a new discussion, but isn't diffraction deconvolution useless for macro photography? It works for astronomy because the distance is known (infinity). In macro photography the PSF is not really known since it depends on distance, which is different for every point in the object. How do you get around that?
rjlittlefield wrote: In another sense, focus stacking can be viewed as a particularly complicated sort of 3D deconvolution problem. We are trying to figure out what the camera would have seen, if only it were a pinhole not subject to diffraction, given a set of samples that have been degraded by the 3D PSF of the optics combined with occlusion and other complicating factors such as lateral "squirming" of features caused by the interaction of off-axis reflections and lens movement. In a perfect world, focus stacking would mean inferring the 3D structure of the scene being photographed, then generating a synthetic image from that perfect 3D model. I expect that will eventually happen, but I have no great hope of living long enough to see it.

This is really interesting. Sounds like a great problem to work on. I like your point about a pinhole not subject to diffraction. That is how I approach the question: is this feature distortion or not? I compare the image in question to an ideal pinhole image. That's the standard.
Finally, I have a question about focus stacking as deconvolution. Would this problem be any easier if we used geometric optics rather than wave optics? Or is that still extremely difficult? Since defocus effects are so large in macro photography, it seems to me that you would solve much of the problem by using just geometric optics, at least for parts of the problem.
rjlittlefield
wpl wrote: I'm afraid I don't follow your argument. The sensor is presented with a blurry image. The Nyquist business can reconstruct the blurry image.

I disagree. What's being sampled is not the blurry image; it's the convolution of the blurry image and the pixel acceptance profile, the binning, if you will. To recover the blurry image, you have to deconvolve the binning.
For illustration, consider again these two cases that I showed on the first page:
The sample values are shown as the rectangular bars, and they're different between the two graphs. If you do just the Nyquist reconstruction from the samples, you'll recover different signals. But the optical signal is the gray curve, and it's the same in both graphs. To recover the same signal in both cases, you have to back out the binning.
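Backing out the binning can be sketched in one dimension. Here is a toy numpy example (my own construction, not from any post in this thread; it uses circular convolution so the FFT algebra is exact): binning is convolution with a box, so dividing the binned spectrum by the box's transfer function recovers the original bandlimited signal exactly.

```python
import numpy as np

N = 256
x = np.arange(N)
# Bandlimited "optical image": a few low spatial frequencies
signal = np.sin(2 * np.pi * 5 * x / N) + 0.3 * np.cos(2 * np.pi * 11 * x / N)

# Binning: each output value is the average of B adjacent samples,
# i.e. circular convolution with a width-B box
B = 4
box = np.zeros(N)
box[:B] = 1.0 / B
H = np.fft.fft(box)                          # transfer function of the binning
binned = np.real(np.fft.ifft(np.fft.fft(signal) * H))

# Back out the binning: divide by H. The box transfer function has exact
# zeros at multiples of N/B, so leave those (empty) frequency bins alone.
H_safe = np.where(np.abs(H) > 1e-8, H, 1.0)
recovered = np.real(np.fft.ifft(np.fft.fft(binned) / H_safe))

err = np.max(np.abs(recovered - signal))
print(err)   # essentially zero: exact recovery, because the signal has no power where H = 0
```

The recovery is exact here only because the signal's frequencies avoid the zeros of H; a real sensor adds noise, which is why the regularized approaches discussed below the binning step matter in practice.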
wpl wrote: I don't really want to start a new discussion, but isn't diffraction deconvolution useless for macro photography? It works for astronomy because the distance is known (infinity). In macro photography the PSF is not really known since it depends on distance, which is different for every point in the object. How do you get around that?

My main interest is in images that have been focus-stacked. In that case the PSF is almost the same from place to place, and it can be made even more the same by shooting with a smaller focus step.
The value of deconvolution in focus stacked images is that, compared to conventional sharpening, deconvolution is supposed to do a better job of pushing the MTF curve up closer to ideal while simultaneously keeping noise under control.
wpl wrote: I like your point about a pinhole not subject to diffraction. That is how I approach the question: is this feature distortion or not? I compare the image in question to an ideal pinhole image. That's the standard.

To be clear, I'm talking about a mythical pinhole that is magically invulnerable to diffraction. Real pinholes diffract like crazy. But yes, if we imagine a perfect pinhole then we're just thinking about projective geometry, and that would be the ideal result.
wpl wrote: Finally, I have a question about focus stacking as deconvolution. Would this problem be any easier if we used geometric optics rather than wave optics? Or is that still extremely difficult? Since defocus effects are so large in macro photography, it seems to me that you would solve much of the problem by using just geometric optics, at least for parts of the problem.

I don't understand what you're trying to describe. Even in geometric optics, the 3D PSF of a perfect lens is still a couple of cones extending forward and backward from the point of perfect focus. That's the whole basis for the classic DOF equations. It's also what causes most of the difficulties in stacking, such as halos and transparent foreground. I'm not aware of any focus stacking algorithms that are bothered more by diffracted images than they would be by undiffracted ones, if those could be obtained.
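The double cone in the geometric model is just similar triangles: the light cone converges from the full aperture down to a point at perfect focus, so the blur circle grows linearly with defocus on either side. A toy sketch of that relation (my own function and example numbers, image-side quantities):

```python
def blur_diameter(aperture_diameter, defocus, image_distance):
    """Geometric double-cone model: by similar triangles, the blur circle
    diameter is aperture_diameter * |defocus| / image_distance.
    All quantities in the same units (e.g. mm), measured on the image side."""
    return aperture_diameter * abs(defocus) / image_distance

# Example: a 25 mm aperture with the image plane 200 mm behind the lens
b1 = blur_diameter(25.0, 1.0, 200.0)   # 1 mm of defocus
b2 = blur_diameter(25.0, 2.0, 200.0)   # 2 mm of defocus
print(b1, b2)   # 0.125 mm and 0.25 mm: twice the defocus gives twice the blur
```

Setting the blur diameter equal to an acceptable circle of confusion and solving for the defocus is exactly how the classic DOF equations fall out of this cone geometry.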
Rik
Rik,
Thanks for responding to my comments and questions. On the first point, I think I did not make myself clear. I think we are in agreement, actually. When I said "Nyquist business," I meant that to include the deconvolution of the binning. That is what is discussed in the article I referenced earlier (that mawyatt had reservations about). My point concerned what comes next. How do we generate a sharp image from our nowknown blurry image? That's a separate problem.
Your response concerning PSF in stacking is really helpful. That makes sense now. Thanks.
Concerning the last point, it seems that you are saying that stacking involves mostly, if not completely, geometric optics. I think that answers my question.

Walter