rjlittlefield Site Admin
Joined: 01 Aug 2006 Posts: 19634 Location: Richland, Washington State, USA

Posted: Tue May 27, 2014 9:56 pm Post subject: Draft document analyzing defocus & diffraction 


I could use some independent review of a document that I've been struggling with.
The discussion is tentatively titled "What's the depth of field (DOF) for a diffraction-limited lens?"
It's organized around a collection of data like this:
The dataset shows a combination of captured images and computed graphs showing what happens as a high-quality lens is slowly shifted from perfect focus to hopelessly blurred, in fine focus steps that are only 1/8 the DOF given by the most common formula.
The purpose is to help in understanding and explaining how image quality changes as a function of defocus distance.
The dataset can be downloaded in three different forms:
- A .zip archive containing 34 .png files, 1 containing the annotation as shown above and 33 others without the annotation. You can load this archive directly into Zerene Stacker and use its "press and drag" display facility to quickly scroll back and forth through the images as if you were running a focus slider.
- A Photoshop-format .psd file containing the same images as layers. You can load this directly into Photoshop or GIMP, then click the little "eye" icons that control layer visibility, to flash back and forth between any two selected amounts of defocus.
- A .mov movie file. In QuickTime Player, you can press-and-drag on the timeline slider to quickly scroll back and forth through the images as if you were running a focus slider. (Sadly, this feature is not available in most other movie players.)
The draft for review is HERE (PDF, 16 pages with a lot of pictures). With luck, it will explain what is shown in the graphs.
Be aware that at this point exactly zero other people have had a chance to read the beast, so goodness knows what egregious errors it contains.
Feel free to comment by PM, email, or public posting, whatever works best for you. Thanks!
Rik 



DQE
Joined: 08 Jul 2008 Posts: 1653 Location: near Portland, Maine, USA

Posted: Thu May 29, 2014 3:58 pm Post subject: 


Rik,
On first read, it looks like a very informative "essay", and one that should help the understanding of this and related topics.
I'm not sure I understand the upper right graph. Would you please clarify the name and units of the vertical axis?
I suspect that my difficulty with this graph is related to my somewhat unusual career (image quality in medical radiography) where we generally didn't plot these variables, but instead were happy with MTF or OTF vs spatial frequency and PSF or LSF vs distance. Although this imaging specialty copied optical image quality principles and terminology, there were some aspects of optical image quality that didn't become widely used in x-ray image quality R&D. _________________ Phil
"Diffraction never sleeps" 



rjlittlefield Site Admin

Posted: Thu May 29, 2014 5:09 pm Post subject: 


Phil, thanks for taking a look.
I expected the upper right graph to be problematic, because its axes and method of construction are different from anything I've ever seen plotted.
This is a summary graph that shows how badly a lens loses resolution depending on how much it is defocused. The curves stay the same for all amounts of defocus. Only the vertical indicator line moves across, showing the amount of defocus that is illustrated in other ways in the other graphs and images.
The X axis (horizontal) indicates the extent of defocus, from perfectly focused on the left (wavefront error = 0) to badly defocused on the right (wavefront error = 1.0).
The Y axis (vertical) is in lines per mm, with 0 at the bottom and the diffraction-limited cutoff very near the top.
Here's the way the graph is constructed:
1. Pick a value for wavefront error.
2. At that wavefront error, find the values of SPF (spatial frequency, lines/mm) where MTF = 0.1, 0.2, 0.3, 0.5, and 0.8.
3. Plot those values of SPF on the graph, associated with the wavefront error.
4. Sweep wavefront error from 0 to 1 repeating this process, to draw five curves, one for each value of MTF.
Graphically, you can think of it this way:
1. Pick a value for wavefront error.
2. For that wavefront error, draw the graph at upper left showing MTF versus SPF.
3. Flip that graph so as to swap its axes, putting SPF vertical and MTF horizontal.
4. At each of MTF = 0.1, 0.2, 0.3, 0.5, 0.8 on the horizontal axis, trace upward until you hit the bold black curve, then sideways until you hit the vertical axis. You have now just found the SPF corresponding to each of those five MTFs, and those are the five points on the summary graph associated with that wavefront error.
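For readers who think in code, the construction steps above can be sketched in a few lines of Python. Fair warning: the real curves require the Hopkins wave-optics MTF; the toy defocus model below (the in-focus diffraction MTF times an ad-hoc exponential sag) is a stand-in I invented purely so the sweep-and-threshold procedure can run, and its numbers are illustrative only.

```python
import math

def mtf_infocus(nu, nu_c):
    """Diffraction-limited MTF of an ideal circular aperture (standard result)."""
    if nu >= nu_c:
        return 0.0
    x = nu / nu_c
    return (2.0 / math.pi) * (math.acos(x) - x * math.sqrt(1.0 - x * x))

def mtf_toy(nu, nu_c, w):
    """STAND-IN for the Hopkins defocus MTF: the in-focus curve times an
    ad-hoc sag factor that grows with wavefront error w (in lambdas).
    Illustrative only -- not the wave-optics computation used in the draft."""
    return mtf_infocus(nu, nu_c) * math.exp(-4.0 * w * (nu / nu_c))

def spf_at_mtf(target, nu_c, w, mtf=mtf_toy):
    """Step 2: spatial frequency where the MTF falls to `target`,
    found by bisection (the curve decreases monotonically in nu)."""
    lo, hi = 0.0, nu_c
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if mtf(mid, nu_c, w) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Steps 3-4: sweep wavefront error and trace one curve per MTF value.
nu_c = 1600.0                         # cutoff, lines/mm (hypothetical)
ws = [i / 32.0 for i in range(33)]    # wavefront error, 0 .. 1.0 lambda
curves = {t: [spf_at_mtf(t, nu_c, w) for w in ws]
          for t in (0.1, 0.2, 0.3, 0.5, 0.8)}
```

Plotting `curves[t]` against `ws` reproduces the shape of the summary graph: the MTF=0.1 curve sits on top, and all five curves sag toward the right.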
As it is written in the document:
Quote: In reading this graph, the sense is "higher is better". Finer detail is always resolved with lower MTF, and if you can live with lower MTF then you can resolve more lines per mm. That's why smaller values of MTF are on top. As you move away from perfect focus, the lines go downward, meaning that the resolution at which MTF reaches some particular value gets lower and lower. That's bad because it means you're losing sharpness.
The key point about this graph is that for small amounts of defocus, there is relatively little effect on image sharpness: the lines are pretty flat near the left side. As you defocus farther, sharpness is lost increasingly quickly: the curves not only go downward, they bend downward. At some point around 0.4 lambda wavefront error, there is a really painful loss of contrast, especially for fine detail. It's best not to operate there or any farther out.
To make further sense of the graph, play with one of the animation methods that gives you a "focus slider". Notice how as you defocus farther, the MTF curve sags farther, so that it falls to any specified value of MTF at lower and lower lines/mm. Simultaneously the indicator line in that upper right graph is moving farther to the right, where the curves are lower.
Does this help?
Rik 



DQE

Posted: Sat May 31, 2014 6:33 pm Post subject: 


Rik,
Thanks for the explanation of the graph; I'm glad I asked!
I very much enjoyed reading your essay in a "first pass" mode, and I look forward to studying it in more detail now that I understand the graph you explained.
A thought: would it be possible and/or helpful to replace or supplement the time variable (now conveyed by the focus slider) with an axis in a 3D or contour plot? Since I think the time variable in the movie works fine, there is probably no reason to do this, but it might reveal some additional insights in some situations, if not this specific one.
More later.



DQE

Posted: Sat May 31, 2014 6:59 pm Post subject: 


Just to clarify about contour plots/3D mesh graphs, etc:
What I'm thinking of is a plot of (for example) MTF(spatial frequency, lambda). Although I don't offhand expect it to reveal any new information, it may be helpful to some readers, depending on how their brains process various types of graphical information.
Again, just a thought.



DQE

Posted: Wed Jun 04, 2014 10:05 am Post subject: 


Rik,
Below are a few more thoughts for your consideration, after further studying your article and a few of the tutorials you reference, plus of course a few other references.
1. Would it be helpful to include the equation for the diffraction-limited OTF? I suspect that having to include a Bessel function might not help most readers, but perhaps for completeness, and for a few readers, it would be worthwhile?
2. A few of my personal biases, as background: I have not paid too much attention to the issues with COC, since I have not been able to obtain a firm, clear conceptual quantitative basis for most of its uses. Also, in part it seems to be an attempt to put a single number on a complex curve shape, something I've usually resisted. Not being derived from electromagnetic theory (Maxwell's equations) also makes it suspect to me.
3. As you clearly describe, I agree that the COC method of describing image blur is terribly inaccurate and disinformative to say the least. It's very helpful and informative to see this in detail and with a test pattern demonstration to accompany the text.
4. What do you think about using a Gaussian approximation to the central lobe of the Airy disk, as described here? At least it would better fit the target curve for many or most purposes. I assume that the single parameter we would then use would be the Gaussian equation's single parameter ("sigma") instead of the diameter of the COC.
http://en.wikipedia.org/wiki/Airy_disk#Approximation_using_a_Gaussian_profile
What would a Gaussian model predict for Total DOF? Is such a prediction acceptably accurate for this purpose?
While a Gaussian doesn't have the oscillations and phase reversals produced by other functions and by reality, at least it approximates the more important lower-frequency MTF defocus losses reasonably well. I wonder why the Gaussian approximation hasn't caught on, and why the less informative COC model has remained so popular? Sometimes it seems that photography is especially prone to continuing unfortunate trends and ideologies.
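For what it's worth, the Gaussian's appeal at perfect focus is easy to demonstrate: a Gaussian PSF has a Gaussian MTF, so the MTF never oscillates or goes negative. A minimal sketch, assuming green light at f/8 and taking the fitted width as roughly sigma = 0.42*lambda*N from the linked Wikipedia section (treat that constant as approximate):

```python
import math

def gaussian_mtf(nu, sigma_mm):
    """MTF of a Gaussian PSF with standard deviation sigma_mm:
    the Fourier transform of a Gaussian is again a Gaussian."""
    return math.exp(-2.0 * (math.pi * sigma_mm * nu) ** 2)

lam, N = 0.00055, 8.0      # assumed: 550 nm, f/8
sigma = 0.42 * lam * N     # Gaussian fit to the Airy central lobe (approximate)

freqs = [0.0, 50.0, 100.0, 200.0, 400.0]   # lines/mm
vals = [gaussian_mtf(f, sigma) for f in freqs]
# vals is strictly decreasing and strictly positive: no zeros and no
# phase reversals, unlike the defocused wave-optics OTF.
```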

Regardless of these comments, I very much enjoyed and benefited from studying your article. Thanks for all the efforts, especially the graphics and accompanying images.



rjlittlefield Site Admin

Posted: Sun Jun 08, 2014 12:00 am Post subject: 


Phil, thank you for your continued consideration of this topic. The questions and suggestions are very helpful.
You've touched on several of the reasons why I wrote that this was "a document that I've been struggling with".
I originally undertook this study for two main reasons: 1) to improve my own understanding of how and why an image degrades as you defocus the lens, and 2) to help convey some of that improved understanding to other people.
Of course those goals are not as independent as they might seem at first glance. Simple models help me just as much as the next person, so if I can accomplish (2) then I will automatically also accomplish (1). And a good case can be made that if I can't explain something then I don't really understand it myself, so failing to accomplish (2) could mean that I've also failed to accomplish (1). To a large extent they're just two views of the same problem.
I'm not completely satisfied with what I've done so far. It's definitely nice to have some experimental confirmation & demonstration of what may have previously seemed to be abstract theory. It's also nice to have the captured images and several different graphical representations of the theory pulled together in one place so that they can be played with as if you were just running a focus control. And it's nice to have an explanation/demonstration of why the quarter-lambda rule for DOF makes a reasonable recommendation, while also explaining/demonstrating why you might get an even better stacked result from using a shallower focus step. These are good things.
On the other hand, I am still not comfortable with the relationship between wave optics and the classic ray optics COC model of DOF. The COC model is clearly not complete, but it has a long history of being "good enough" and it has the great charms of being simple to explain, simple to compute, and ubiquitous in the literature. Such advantages are not to be discarded lightly!
This doesn't bother me so much for micro/macro work, which always seems to end up diffraction limited. But DOF is a concern at all levels of magnification, and it would be really nice to have some sort of graceful approach to covering the whole spectrum in some uniform way.
Last spring I discussed a "DOF Two Ways" approach. That basically said "Look, there are two very different ways to calculate DOF: the classic COC approach if you're sensor-limited and the wave optics 1/4 lambda approach if you're diffraction-limited. Calculate DOF both ways and use the one that's larger. It's great if they're about the same, because then you've made a near optimum choice of aperture given the sensor you're using."
I still like that approach, but lately I've been thinking harder about that transition region where the MTF curves are just different.
Wave optics says that in that transition region you're resolving finer detail than ray optics predicts, but in exchange you're getting less contrast for coarse detail.
What does that mean in terms of subjective "sharpness"?
It seems like a good question, but I don't know the answer yet, I'm still trying to make sense of it, and it probably doesn't matter for the audience here at PMN. 'Nuf said for now.
Regarding your questions and thoughts...
Quote:  1. Would it be helpful to include the equation for the diffraction limited OTF? 
For completeness, I should at least make clear that I'm using the computational procedures laid out in the references, for example http://photo.net/learn/optics/lensTutorial#part5, which in turn follows the formulation derived by Hopkins. Archiving copies of those references would be a good bit of scholarly insurance also.
Quote:  2. A few of my personal biases, as background: I have not paid too much attention to the issues with COC, since I have not been able to obtain a firm, clear conceptual quantitative basis for most of its uses. Also, in part it seems to be an attempt to put a single number on a complex curve shape, something I've usually resisted. Not being derived from electromagnetic theory (Maxwell's equations) also makes it suspect to me.
3. As you clearly describe, I agree that the COC method of describing image blur is terribly inaccurate and disinformative to say the least. It's very helpful and informative to see this in detail and with a test pattern demonstration to accompany the text. 
Well, I certainly don't want to demonize the COC method. Close to focus, it clearly does not make good predictions. And everywhere it is definitely an attempt to put a single number on a complex curve shape. But again, it has a long history of being "good enough" over a wide range of conditions. Some explanation for why that is may be found in the wisdom of Imatest, who write that
Quote:  Experience has shown that the best indicators of image sharpness are the spatial frequencies where MTF is 50% of its low frequency value (MTF50) or 50% of its peak value (MTF50P). 
It turns out that MTF50 as computed by the blur circle model is within a few percent of MTF50 as computed by the wave optics model for any amount of defocus beyond about 0.4 lambda wavefront error  around the middle of what my draft document identifies as the "transition region". So, if we imagine that all this time people have been using "acceptable COC" as a shorthand for "acceptable MTF50", then the blur circle model has been giving them the same answer that wave optics would in that wide range of defocus values.
Below 0.4 lambda the standard blur circle model becomes less accurate. At 0.25 lambda the MTF50 discrepancy between wave optics and standard blur circle reaches about 17%, and below that it grows rapidly as the blur circle MTF50 erroneously goes to infinity while the wave optics MTF50 converges to that of the Airy disk. However (unless I've blown some work today), it turns out that there's a simple way to inject the fixed Airy disk diameter into the focus-dependent blur circle computation so as to reproduce the wave optics MTF50 within 3% for all amounts of defocus. Would that be a good thing to do? For myself I don't know, but I have some friends who write DOF calculators that I suspect would be pretty happy with that approach, because their users come from a background where blur circles and acceptable COC are trusted friends.
Quote:  4. What do you think about using a Gaussian approximation to the central lobe of the Airy disk 
Mostly I think I don't understand the question. The Gaussian approximation does a nice job of reproducing the effects of diffraction at perfect focus. But I don't recall having seen a Gaussian approximation to the wave optics PSF at any specified defocus, and if there were one, it seems to me that the resulting MTF curve could not possibly match the wave optics MTF curve, because it would just retain its same shape but scale downward to some lower cutoff frequency. Can you clarify for me what you had in mind?
Rik 



DQE

Posted: Sat Jun 14, 2014 12:59 pm Post subject: 


Quote:
4. What do you think about using a Gaussian approximation to the central lobe of the Airy disk
Mostly I think I don't understand the question. The Gaussian approximation does a nice job of reproducing the effects of diffraction at perfect focus. But I don't recall having seen a Gaussian approximation to the wave optics PSF at any specified defocus, and if there were one, it seems to me that the resulting MTF curve could not possibly match the wave optics MTF curve, because it would just retain its same shape but scale downward to some lower cutoff frequency. Can you clarify for me what you had in mind?
=========================
Rik,
Thanks for your detailed and very insightful reply.
After rethinking these and assorted related issues for a few days, I now realize that my comment about possibly using a Gaussian approximation to the low-frequency OTF (MTF50) failed to take into account that one would need to supply a different Gaussian parameter for each amount of defocus. While this could presumably be done using empirical curve-fitting techniques, it's not clear to me that this would buy very much in the way of a simplified or better understanding. In addition, you'd have inaccuracies due to using such a simple mathematical model of the diffraction OTF. With modern computers and software, it's easy to use the full diffraction-based OTF equation and make good diffraction-based predictions.
I look forward to learning more about your ideas for very accurately approximating MTF50 performance. I believe we share an opinion that low spatial frequencies are usually or often more important for assessing image quality than high frequencies. I sometimes sense that the photography community is much too preoccupied with parameters such as the cutoff frequency. Fortunately, improving the performance at the cutoff frequency often improves performance at low frequencies too, but optical systems may do complex things with their MTF curves!
This thread reminds me of a discussion I had with a senior lens designer early in my career in R&D at Kodak. Being fresh out of graduate school and full of enthusiasm for MTF, OTF, and related concepts, I asked him why he didn't just pick the lens design alternative that provided the best OTF. He smiled briefly and picked up a thick notebook of MTF graphs related to one of his in-progress lens design projects. His OTF notebook was several inches thick! He then showed me some of these graphs for various f-stops, focus distances, tangential vs radial MTFs, what happens in case of common manufacturing errors, etc, etc, and then said that while he used MTF, OTF, PTF, etc, regularly, having such a plethora of curves and data made it very difficult to optimize a lens design or to understand whether it would adequately work for an application. The issue of having competing system MTF curves that crossed at some frequency was also discussed. Yet at the end of the project, he and his team had to pick a single lens design and commit the company's business unit to spending the money to manufacture and sell the associated system. I was very much taken aback by his real-world experience with OTF, since my much simpler work in ordinary medical projection radiography didn't have anything like this degree of complexity and ambiguity.
Perhaps macrophotography and microphotography optics can also become very complex, limiting how much one can usefully do with models and analyses.
Even if we can't fully depend on simplified analyses, at the same time we can't do without them!



andreah
Joined: 18 Jun 2014 Posts: 9 Location: Italy

Posted: Thu Jun 19, 2014 3:03 am Post subject: 


Hi Rik,
I have been following this forum for a long time and have learned many things here, but my poor knowledge of English does not allow me much active participation.
I use the OTF diagrams to determine the step length in focus stacking, and for this purpose I have written my own calculator. It computes the step length in two different ways and plots the OTF diagram for the selected aperture and step length.
For the OTF diagrams I used the same sources as you, but to determine the DoF and aperture I used a different approach, and I would like your opinion on whether I have done it properly.
Mode 1
You can choose the desired CoC (diameter in microns or in sensor pixels) and the program calculates the aperture/depth-of-field pair so that the diffraction CoC equals the defocus CoC. This is the condition that minimizes the contribution of the individual CoCs to the size of the total CoC, where:
CoC_tot = sqrt (CoC_diff ^ 2 + CoC_def ^ 2)
I calculated this in the following way:
the results of equations 4 and 6 for DoF and F, with a CoC_tot of 30 microns, are very close to those estimated by Zerene
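If it helps other readers, the Mode 1 condition can be sketched numerically. The relations below (Airy diameter = 2.44*lambda*N, and equal contributions CoC_diff = CoC_def = CoC_tot/sqrt(2)) are standard textbook stand-ins, not necessarily the actual equations 1, 2, 4, and 6 from the calculator:

```python
import math

LAMBDA_MM = 0.00055     # assumed green light, 550 nm

def mode1_aperture(coc_tot_mm):
    """Mode 1: equal contributions CoC_diff = CoC_def under the rsumsq
    rule, so each equals coc_tot / sqrt(2); then invert the Airy-disk
    diameter formula CoC_diff = 2.44 * lambda * N_eff for the f-number."""
    coc_each = coc_tot_mm / math.sqrt(2.0)
    n_eff = coc_each / (2.44 * LAMBDA_MM)
    return coc_each, n_eff

coc_each, n_eff = mode1_aperture(0.030)   # the 30-micron CoC_tot mentioned above
# coc_each is about 21 microns; n_eff comes out near f/16 (effective)
```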
Mode 2
In this mode, CoC_def is calculated individually using equation 1 for a selected DoF, and CoC_diff via equation 2 for a selected F; the OTF diagram is then used to judge whether the chosen values are acceptable.
I use this method at high magnifications, because at those magnifications there is no aperture open enough to satisfy the condition imposed by method 1.
A test of mode 2 can be seen here:
https://www.flickr.com/photos/andreah45/13911120895/in/set72157625338698384
I hope that what I have written can be understood, and that I don't get too many beatings from those who understand it better.




rjlittlefield Site Admin

Posted: Sun Jun 22, 2014 1:24 pm Post subject: 


Andrea, thank you for the formulas and questions. I have seen some information about STACKSTEP in your Flickr stream, but it's very helpful to have this extra information about how STACKSTEP works.
Quote:  I would like your opinion if I operated properly. 
This is a deeper question than it might seem at first glance.
On the one hand, it looks to me like you have operated in accordance with standard analysis and your values will work fine for practical application. So for that much, my answer is "Yes."
On the other hand, it turns out that there's a fundamental flaw in standard analysis that introduces significant error into the calculation. So on that basis, I'll say "No, but you're in good company, and anyway it works out OK in the end."
The fundamental flaw is in that little formula at the top of your post:
CoC_tot = sqrt (CoC_diff ^ 2 + CoC_def ^ 2)
It turns out that diffraction blur and geometry blur simply do not combine according to that formula. The formula is widely used, but it has no basis in physics and it's not a very accurate approximating function either. (Aside: the formula would be correct if the two blurs were independent and Gaussian, but they're not.)
For the benefit of other readers, let me switch notation a little bit to make more clear what the symbols mean. I will use these:
AiryDiameter: diameter of the Airy disk, measured to the first zero
CoC_defocusOnly: diameter of the classic ray optics blur circle, ignoring diffraction
CoC_total: diameter of the total blur circle, including both diffraction and defocus
In this notation, the formula that you're using for CoC_tot corresponds to this:
CoC_total = sqrt (AiryDiameter ^ 2 + CoC_defocusOnly ^ 2)
To illustrate how inaccurate that formula is, let me show you three curves of MTF50 as a function of focus. MTF50 is the spatial frequency at which MTF = 0.5 = 50%. As discussed by Imatest, this is arguably the best single indicator of sharpness. So, these curves show how much sharpness is lost due to diffraction and increasing amounts of defocus.
The middle curve, shown in blue, comes from the wave optics computation. That's the one from Hopkins, which combines diffraction and defocus in a unified and accurate manner.
The upper curve, shown in red, comes from the classic ray optics blur circle calculation that completely ignores diffraction. It's quite accurate beyond about 0.4 lambda wavefront error. Below that, it becomes inaccurate, and in the limit approaching perfect focus it's just plain wrong. No surprise there.
The lower curve, shown in green, comes from using the standard formula which pretends that CoC_total = sqrt (AiryDiameter ^ 2 + CoC_defocusOnly ^ 2), and then doing a blur circle computation on CoC_total. I've labeled this "rsumsq", for "root of summed squares".
You can see in the graph that the green curve is too low everywhere. Worse, its greatest errors occur in the region close to perfect focus: exactly where we want to be operating to get the best results from our lenses. Essentially, the standard rsumsq calculation overestimates the effect of diffraction. It produces a "total blur circle" that is consistently too large, resulting in an MTF50 estimate that is consistently too small when compared to an accurate calculation.
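To make the "blur circle computation" concrete: treating the blur as a uniform disk of diameter CoC gives the standard MTF 2*J1(x)/x with x = pi*CoC*f, so MTF50 scales as 1/CoC, which is exactly why an oversized CoC_total depresses the MTF50 curve. A sketch (the curves in this post come from the full wave-optics model, not this geometric one):

```python
import math

def disk_mtf(f_lpmm, coc_mm):
    """MTF of a uniform blur disk of diameter coc_mm: 2*J1(x)/x with
    x = pi * coc * f.  J1 is evaluated by its power series."""
    x = math.pi * coc_mm * f_lpmm
    if x == 0.0:
        return 1.0
    term = x / 2.0                      # m = 0 term of the J1 series
    j1 = term
    for m in range(1, 25):
        term *= -(x * x / 4.0) / (m * (m + 1))
        j1 += term
    return 2.0 * j1 / x

def mtf50(coc_mm):
    """Spatial frequency (lines/mm) where the disk MTF falls to 0.5, by
    bisection; the MTF is monotone down to its first zero at x ~ 3.83."""
    lo, hi = 0.0, 3.83 / (math.pi * coc_mm)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if disk_mtf(mid, coc_mm) > 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A larger CoC_total always means a lower MTF50:
m30, m20 = mtf50(0.030), mtf50(0.020)
```

With these numbers, a 30-micron blur disk gives an MTF50 of roughly 23 lines/mm, and shrinking the disk raises the MTF50 in exact inverse proportion.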
The problem is easily fixed. All we have to do is use some different method of computing a CoC_total value that gives a better approximation to the wave optics result.
We can get arbitrarily close to the wave optics result by using table lookups and interpolation, but there's a simpler method that both appeals to intuition and produces good accuracy.
That method is to compute as follows:
p = 3.14
k = 0.715
CoC_total = ((k*AiryDiameter) ^ p + CoC_defocusOnly ^ p) ^ (1/p)
I'll call this the "pkf" method. It's a generalization of rsumsq, with the addition of parameters p and k that can be used for curvefitting. Choosing p=2 and k=1 would give rsumsq. In the values that I've given, k is chosen to match MTF50 at perfect focus, and then the value of p is chosen to give minimum relative error over the rest of the curve.
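The pkf method is easy to drop into code. Here's a direct transcription of the formula above; the Airy diameter value is just a hypothetical example (2.44*lambda*N at f/8, lambda = 550 nm):

```python
def coc_total(airy, coc_def, p=3.14, k=0.715):
    """pkf combination of diffraction and defocus blur diameters.
    Choosing p=2, k=1 reduces it to the standard rsumsq formula."""
    return ((k * airy) ** p + coc_def ** p) ** (1.0 / p)

airy = 2.44 * 0.00055 * 8.0        # Airy diameter in mm, hypothetical f/8 example
for coc_def in (0.0, 0.005, 0.010, 0.020):
    rsumsq = coc_total(airy, coc_def, p=2.0, k=1.0)
    pkf = coc_total(airy, coc_def)
    # pkf's smaller "total blur" is what lifts its MTF50 curve above rsumsq's
```

Running the loop shows pkf below rsumsq everywhere, with the biggest gap at coc_def = 0, matching the observation that rsumsq overestimates the effect of diffraction near perfect focus.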
Using the pkf method, the curves look like this:
Asked to choose between the green rsumsq curve and the dashed black pkf curve, I think it's pretty clear that we should go with pkf. It's a much better fit to the wave optics MTF50, for almost no additional cost in complexity.
Now that we have a more accurate formula for CoC_total, we can repeat the classic analysis, concluding that the optimum aperture occurs when:
COC_defocusOnly = k*AiryDiameter
Using k=0.715, this means that the two methods rsumsq and pkf disagree by only about half an f-stop, with pkf recommending the smaller aperture.
In terms of wavefront error, the difference is 0.305 lambda for rsumsq and 0.217 lambda for pkf. That sounds like a pretty large difference, but if we use wave optics to compute the MTF curves and plot them together, a more nuanced picture appears:
Here we see that the curve with the larger aperture has a higher cutoff frequency, but it also sags more. For most of the curve, these competing effects almost exactly cancel each other, with the result that the defocused MTF curves are virtually identical for all frequencies where MTF is 25% or more.
As predicted, the MTF50 is slightly higher using pkf than using rsumsq, but that difference is too small to matter.
Instead, the major differences are that 1) the pkf-recommended optimum aperture gives much less variation in sharpness between points of perfect focus and maximum defocus, but 2) this is achieved by making the in-focus areas less sharp.
So, between these two conditions that are essentially the same based on MTF50, we would prefer different apertures depending on how much we weight uniformity versus average sharpness. Mathematics can formalize this difference, but the preference is a matter of taste. Both rsumsq and pkf make completely reasonable recommendations.
Now let's think about accuracy of the DOF estimate.
As shown in the first graph, the MTF50 estimate provided by rsumsq is significantly low compared to the wave optics and pkf estimates. In the range of recommended apertures, the discrepancy is roughly a ratio of 3:4.
On the surface, this 3:4 discrepancy in MTF50 can produce arbitrarily large errors in the DOF estimate. For example, if we choose a COC corresponding to the "Required MTF50" as shown in the graph below, then using pkf would predict almost three times as much DOF as rsumsq.
Carrying this farther, if the required MTF50 were a little higher than shown, then rsumsq would say "No, sorry, you've reached the diffraction limit and you simply can't get that MTF50," while wave optics and pkf would say "Sure, no problem."
But here's the catch: all you have to do is adjust the required MTF50 in proportion to the error provided by rsumsq, and then you're back to getting the same DOF estimate. Here's the graph showing that adjustment:
The reason I say "here's the catch" is that COC specifications and the MTF50s that go along with them are very subjective in practice. People generally have no idea what COC = 5 pixels should actually look like, and they don't need to. Their problem is easily worked in the other direction: experiment to determine what COC has to be specified in order to get predictions that match their preferences.
When we include the human user, the whole system becomes selfcorrecting in the sense that errors in the approximating function are compensated by adjustments in the threshold. Should COC for full frame be 0.030 mm or 0.022 mm? Who knows, who cares? Just pick the one that "looks good" in one case and it'll look good for other cases too, even if under the covers the approximating function isn't so great.
You mention that your curves "are very close to those estimated from Zerene". Well, sure. That's because they're both calibrated against similar criteria for what "looks good". That's where your 30 microns CoC comes from, and also it's how I decided which numbers to print in boldface at http://zerenesystems.com/cms/stacker/docs/tables/macromicrodof.
Bottom line: the standard approximation for combining diffraction and defocus is flawed, but the system as a whole works fine in practice. It's unclear how much practical improvement could be made by using more accurate math.
Rik 



rjlittlefield Site Admin

Posted: Sun Jun 22, 2014 5:37 pm Post subject: 


One more comment...
It's important to be clear about what those words "optimum aperture" really mean.
When we say that the "optimum" occurs when COC_defocusOnly = k*AiryDiameter, what we really mean is either:
A. Maximum sharpness given a specific defocus distance,
or equivalently,
B. Greatest allowable defocus distance, given a specific minimum sharpness.
In terms of focus stacking, the equivalent conditions are:
A. Sharpest stack given a specified number of frames (best result for specified amount of work), or
B. Minimum number of frames, given a specific minimum sharpness (least amount of work for a specified quality of result).
If you simply want the highest quality result, without paying much attention to the amount of work involved, then COC_defocusOnly = k*AiryDiameter is not the appropriate condition.
Instead, you should:
1. Choose the sharpest aperture for whatever lens you happen to be using, then
2. Use a very fine focus step, much shallower than is indicated by the "optimum" condition.
Even though larger steps may produce no obvious focus banding, there is always some loss of contrast associated with defocus. At the 0.305 lambda wavefront error suggested by rsumsq, there's almost a 34% reduction in MTF50 compared to perfect focus. There's a significant improvement at the 0.217 lambda suggested by pkf, but still it's over 20% loss of MTF50. To suffer only a 10% loss of MTF50, you have to get clear down to 0.15 lambda, and 5% loss requires 0.104 lambda.
There's no limit to the rule that "smaller is better" for step size. It's just a matter of diminishing returns.
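To put numbers on that tradeoff, here is a rough Python sketch (my illustration, not taken from the draft) using the standard paraxial relation W020 ≈ dz/(8·N²) between a longitudinal focus error dz and the peak defocus wavefront error, expressed in waves by dividing by the wavelength:

```python
# Convert between a longitudinal focus step and the peak defocus
# wavefront error, using the standard paraxial relation
# W020 ~ dz / (8 * N^2).  All lengths are in micrometers.

def wavefront_error_waves(dz_um, f_number, wavelength_um=0.55):
    """Peak defocus wavefront error, in units of lambda."""
    return dz_um / (8.0 * f_number**2 * wavelength_um)

def defocus_for_error(waves, f_number, wavelength_um=0.55):
    """Inverse: the focus step (um) that produces a given error in waves."""
    return waves * 8.0 * f_number**2 * wavelength_um

if __name__ == "__main__":
    # Example: at f/8, how big a focus step stays within the
    # 0.104-lambda error quoted above for a 5% loss of MTF50?
    print(defocus_for_error(0.104, 8.0))
```

The function names and the f/8 example are just for illustration; only the dz/(8·N²) relation itself is standard.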
Rik 



rjlittlefield Site Admin
Joined: 01 Aug 2006 Posts: 19634 Location: Richland, Washington State, USA

Posted: Mon Jun 23, 2014 1:21 pm Post subject: 


And on the other hand...
One of the games I like to play might be called "Justify Tradition". That is, figure out under what conditions the traditional thing to do is arguably the best thing to do.
Playing that game, I have now found a good justification for the traditional formula that
CoC_total = sqrt (AiryDiameter ^ 2 + CoC_defocusOnly ^ 2)
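In code, the traditional combination is just a root-sum-square of two blur diameters. Here is a minimal sketch (mine, not from the draft), using the common approximations AiryDiameter = 2.44·λ·N and CoC_defocusOnly = dz/N:

```python
import math

def airy_diameter(f_number, wavelength_um=0.55):
    """Airy disk diameter (first zero to first zero), in micrometers."""
    return 2.44 * wavelength_um * f_number

def coc_defocus_only(dz_um, f_number):
    """Geometric blur-circle diameter for a longitudinal defocus dz."""
    return dz_um / f_number

def coc_total_rsumsq(dz_um, f_number, wavelength_um=0.55):
    """Traditional root-sum-square ("rsumsq") combination."""
    return math.hypot(airy_diameter(f_number, wavelength_um),
                      coc_defocus_only(dz_um, f_number))
```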
Yesterday I pointed out that the traditional formula (which I designated "rsumsq") is not a good approximating function for MTF50, which is often presented as the best single indicator of sharpness.
But there's an interesting question: is it perhaps a good approximating function for something else that's important?
The answer to that question turns out to be "Yes. It's excellent for predicting MTF75."
Here's the graph to justify that claim.
Looking at the actual numbers, the wave optics curve and the rsumsq curve agree here at MTF75 to within 3.5% relative error.
Of course now I have to think about why MTF75 might be a better thing to look at for this application than MTF50 is. I think I see how to argue that case, but writing a coherent explanation will have to wait for later since I have errands to go do right now. In the meantime, don't scurry off to rewrite those calculators.
Rik 



andreah
Joined: 18 Jun 2014 Posts: 9 Location: Italy

Posted: Wed Jul 02, 2014 11:40 am Post subject: 


Thanks, Rik, for your detailed reply, which stimulated me to do some checking. I don't have your theoretical knowledge of the subject, so I have to settle for empirical evidence.
I wanted to see how the resolution in lp/mm at OTF 25%, 50%, and 75% varies with the ratio CoC_DefocusOnly / AiryDiameter.
As one might predict, the lower OTF values are more affected by changes in this ratio.
In these graphs, the vertical line at 1 corresponds to the "rsumsq" solution and, if I understand correctly, the line at 0.75 to the "pkf" solution.
Looking at these curves, there seem to be no substantial differences between the two methods at OTF 25% and 75%, while a small difference in favor of "pkf" appears for a ratio near 0.75 at OTF 50%.
The difference between the two methods is very small, about 1.5% at the point of maximum difference; I believe it is not noticeable in practice.




rjlittlefield Site Admin
Joined: 01 Aug 2006 Posts: 19634 Location: Richland, Washington State, USA

Posted: Thu Jul 10, 2014 10:48 am Post subject: 


Andrea,
Thank you for the further analysis and comments.
I agree, for what you are doing there is no significant difference between rsumsq and pkf.
If we choose dz first, and then compute the "optimum" aperture for that dz, then rsumsq and pkf give results that have the same sharpness to within a couple of percent. Your graphs show, in a different way, the same thing that I showed for two specific aperture values at http://janrik.net/MiscSubj/2014/DefocusAnalysis201405/MTFatRsumsqAndPkfApertures.png.
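To make the "choose dz first" direction concrete: the condition CoC_defocusOnly = k*AiryDiameter can be solved directly for the aperture. A quick sketch (my own, under the same standard approximations dz/N for the defocus blur and 2.44·λ·N for the Airy diameter; k = 1 corresponds to rsumsq and k = 0.75 to pkf, as above):

```python
import math

def optimum_f_number(dz_um, k=1.0, wavelength_um=0.55):
    """Aperture N at which the geometric defocus blur dz/N equals
    k times the Airy diameter 2.44 * lambda * N."""
    return math.sqrt(dz_um / (k * 2.44 * wavelength_um))

if __name__ == "__main__":
    dz = 100.0  # hypothetical focus step, micrometers
    print(f"rsumsq (k=1.00): f/{optimum_f_number(dz, 1.00):.1f}")
    print(f"pkf    (k=0.75): f/{optimum_f_number(dz, 0.75):.1f}")
```

Note that pkf's smaller k gives a larger f-number (a slightly smaller aperture) for the same focus step.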
However, I am thinking about a different problem: what happens if we choose the aperture first and then ask about the resulting DOF, given a constraint on sharpness? This is the question that is supposed to be answered by a standard DOF calculator. The user chooses a total COC, plugs in the lens FL, the aperture setting, and the focus distance, and the calculator gives the resulting near and far DOF limits.
For this application, which is different from yours, rsumsq and pkf can give quite different results. But for the reasons discussed earlier, it's unclear whether the difference actually matters to users. I suspect that users will simply adjust their specifications to match their judgment of what looks good. In that case rsumsq versus pkf again does not matter, except that it takes a different COC on the input to produce the same estimates on the output.
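For reference, the calculator direction looks like this in code. This is a minimal sketch of the usual thin-lens hyperfocal-distance formulas (a standard textbook approximation, not anything from the draft; the input values are hypothetical):

```python
def dof_limits(focal_mm, f_number, coc_mm, focus_dist_mm):
    """Near and far DOF limits from the usual thin-lens approximation.
    All distances in millimeters, measured from the lens."""
    hyperfocal = focal_mm**2 / (f_number * coc_mm) + focal_mm
    s, f, H = focus_dist_mm, focal_mm, hyperfocal
    near = s * (H - f) / (H + s - 2.0 * f)
    far = float("inf") if s >= H else s * (H - f) / (H - s)
    return near, far

if __name__ == "__main__":
    # Hypothetical inputs: 100 mm lens, f/8, CoC 0.030 mm, focused at 2 m.
    near, far = dof_limits(100.0, 8.0, 0.030, 2000.0)
    print(f"near {near:.0f} mm, far {far:.0f} mm")
```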
Rik 



andreah
Joined: 18 Jun 2014 Posts: 9 Location: Italy

Posted: Fri Jul 11, 2014 5:17 am Post subject: 


Rik,
I agree with you that CoC_tot is a value that depends on the method by which it is calculated, but I give it only relative importance; I consider it merely a useful parameter for setting the initial conditions based on the camera sensor's cutoff frequency.
If we read the OTF graph based on Hopkins' work correctly, it depends only on the aperture F and CoC_defocusOnly, not on the method by which they are combined; in other words, they are independent parameters that we can calculate separately with good precision.
So if the values of CoC_Airy and CoC_DefocusOnly are "true", the OTF graph will also be "true", and that is what I need to determine the desired working conditions.
I arbitrarily use a CoC_tot of 30 microns for full-frame sensors and 20 microns for APS sensors. Probably the "true" CoC_tot is different, but the OTF chart will be "true" for the given values of CoC_Airy and CoC_DefocusOnly, regardless of the method used to calculate them.
I believe that the conditions resulting from the "rsumsq" method, with CoC_Airy = CoC_DefocusOnly, are very close to the area that you have defined as the "Transition Zone"; this is the solution provided by MODE 1 of STACKSTEP.
Then in MODE 2, where I can vary the aperture and DoF arbitrarily, I can refine this first solution to reduce the difference in sharpness between the in-focus and defocused areas, choosing whether to stop down the aperture (reducing sharpness in the in-focus areas) or to reduce the step (increasing sharpness in the defocused areas).
rjlittlefield wrote: 
However, I am thinking about a different problem: what happens if we choose the aperture first and then ask about the resulting DOF, given a constraint on sharpness? This is the question that is supposed to be answered by a standard DOF calculator. The user chooses a total COC, plugs in the lens FL, the aperture setting, and the focus distance, and the calculator gives the resulting near and far DOF limits.

This is partially resolved by MODE 2 of STACKSTEP: if you fix the aperture and vary the DoF, then as the DoF decreases you can see how the curve for the defocused areas approaches the theoretical diffraction limit.
In this MODE the sharpness can't be given as an input, but a solution can be sought iteratively.
andrea 



