## A new way of thinking and calculating about DOF

**Moderators:** ChrisR, Chris S., Pau, rjlittlefield

- rjlittlefield
- Site Admin
**Posts:** 20850 · **Joined:** Tue Aug 01, 2006 8:34 am · **Location:** Richland, Washington State, USA

> **ChrisR wrote:** Do we know what "Pupil Ratios" microscope objectives run at?

It doesn't matter. The spreadsheet assumes that the objective is used at or near its rated magnification, so that the manufacturer's NA rating is correct. In that case the camera-side effective f-number is determined from the subject-side NA and the magnification, and we're off to the races.

--Rik

To confirm then, in a similar area: if we use an objective at other than its intended magnification, the subject-side NA stays the same, so the effective aperture only changes in the ratio

(M_employed)/(M_intended)

and not (M_employed + 1)/(M_intended + 1)?

I'm thinking that for a finite objective (with altered tube length and subject distance) the latter might be right, but it doesn't really matter, because such objectives don't normally cover enough field when used like that (e.g. 4x NA 0.1 used at 2x becomes f/12 instead of f/10).
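The two candidate scaling rules can be sketched numerically. This is an illustrative sketch only; the function names are mine, not the spreadsheet's, and the small-angle relation N_eff = m/(2·NA) is assumed.

```python
# Effective f-number of an objective used away from its rated magnification.
# Subject-side NA is fixed, so the camera-side effective f-number scales with
# the magnification actually employed.

def n_eff_rated(m_rated, na):
    """Camera-side effective f-number at rated magnification: m / (2*NA)."""
    return m_rated / (2 * na)

def n_eff_scaled(m_employed, m_rated, na, pupil_style=False):
    """Scale the rated effective f-number to another magnification.

    pupil_style=False uses the simple ratio M_employed/M_intended;
    pupil_style=True uses the (M+1)/(M+1) variant discussed for
    finite objectives.
    """
    base = n_eff_rated(m_rated, na)
    if pupil_style:
        return base * (m_employed + 1) / (m_rated + 1)
    return base * m_employed / m_rated

# 4x NA 0.1 used at 2x: f/10 by the simple ratio, f/12 by the (M+1) variant.
print(round(n_eff_scaled(2, 4, 0.1), 2))        # 10.0
print(round(n_eff_scaled(2, 4, 0.1, True), 2))  # 12.0
```

This reproduces the f/10 vs f/12 example in the question.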

Only one thing: according to the spreadsheet, the step size for 10X with the Mitutoyo 10/0.28 on the NEX-5N is 5.6 µm. On the 5D Mk II it suggests the same 5.6 µm step size.

I find those steps too short; I have been using more like 7-8 µm steps on the NEX-5N and 9-10 µm steps on the 5D Mk II.

I guess the formula assumes the sensor is perfect and has no AA filter on it. Also, I do not understand why the step size is the same on both cameras. Is there something I am missing?

Regards

Javier

- rjlittlefield
- Site Admin

> **seta666 wrote:** Only one thing, according to the spreadsheet the step size for 10X with the Mitutoyo 10/0.28 on NEX-5N is 5.6 µm. On the 5D Mk II it suggests the same 5.6 µm step size. I find those steps too short; I have been using more like 7-8 µm steps on the NEX-5N and 9-10 µm steps on the 5D Mk II. I guess the formula assumes the sensor is perfect and has no AA filter on it; also I do not understand why the step size is the same on both cameras. Is there something I am missing?

The wave optics calculation ignores the sensor, which essentially assumes that the sensor is perfect and has no AA filter. The classic calculation in the spreadsheet uses CoC = 2.5 pixels, which is pretty stringent.

At 10X NA 0.28, the 1/4 lambda DOF is 7.0 µm.

The classic calculation goes like this. On the NEX-5N (23.4 mm, 4912 pixels), 2.5 pixels gives CoC = 0.0119 mm. On the Canon 5D Mark II (36 mm, 5616 pixels), 2.5 pixels gives CoC = 0.0160 mm. 10X NA 0.28 is effective f/17.86. From there the classic formula gives NEX-5N DOF = 4.2 µm and 5D Mark II DOF = 5.7 µm.
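For reference, both numbers follow from the standard formulas. This sketch assumes λ = 0.55 µm for green light and the small-angle relation N_eff = m/(2·NA); the function names are mine.

```python
# Reproduce the classic and wave-optics DOF figures for 10X NA 0.28.
# Classic DOF uses CoC = 2.5 pixels; wave optics is the quarter-lambda depth.

LAMBDA_UM = 0.55  # assumed green-light wavelength

def classic_dof_um(sensor_mm, pixels, m, na, coc_pixels=2.5):
    coc_mm = coc_pixels * sensor_mm / pixels  # CoC on the sensor
    n_eff = m / (2 * na)                      # camera-side effective f-number
    return 2 * coc_mm * n_eff / m**2 * 1000   # subject-side DOF in um

def wave_dof_um(na, lam_um=LAMBDA_UM):
    return lam_um / na**2                     # quarter-lambda total DOF

print(round(classic_dof_um(23.4, 4912, 10, 0.28), 2))  # NEX-5N: 4.25
print(round(classic_dof_um(36.0, 5616, 10, 0.28), 2))  # 5D II:  5.72
print(round(wave_dof_um(0.28), 2))                     # 7.02
```

Both classic values fall below the 7 µm wave-optics figure, which is why the spreadsheet reports the same diffraction-limited step for both cameras.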

According to these calculations, the system is diffraction-limited on both sensors so the DOF is the same.

Your value of 10 µm for the 5D Mark II corresponds to a larger CoC, 4.4 pixels. 8 µm on the NEX-5N corresponds to a CoC of 4.7 pixels. With these larger CoCs, the system calculates to be sensor-limited, so the DOFs would be different.

Using the commonly mentioned value of 3 pixels per CoC, the 5D Mark II with its big pixels calculates out to be "near optimum" while the NEX-5N is still diffraction limited.

I have added lines to the spreadsheet to allow setting the number of pixels per CoC and also to allow setting the overlap percentage. Your values should be reproducible now using 0% overlap and the per-camera CoC values that I mentioned.

I am reminded again that choice of step size varies depending on personal preference. I'm pretty sure the forum has other members who would consider 10 µm at 10X NA 0.28 to be leaving detail on the table.

--Rik

> **rjlittlefield wrote:** The wave optics calculation ignores the sensor, which essentially assumes that the sensor is perfect and has no AA filter. The classic calculation in the spreadsheet uses CoC = 2.5 pixels, which is pretty stringent.

We all know that sensors are still a long way from being perfect, though it is true that each new generation brings some improvements.

Normally I have been using a CoC of 0.30 mm for full frame and 0.20 mm for APS-C cameras in the Lefkowitz formula, and on the Nikon microscopy calculator, pixel size times 2.

On a normal Bayer sensor, would a value of 4 pixels be wise? That is counting the two green plus one red and one blue pixels.

I will quote what Enrico Savazzi says on his DOF calculator site:

> In Bayer sensors, each sensel records light in one of the three base colors. The in-camera data processor subsequently interpolates information from adjacent sensels to generate (in a way, to "fake") information in the other two colors, thus producing one full-color pixel from each sensel. This means that the camera cannot actually record true detail in the final image at one-pixel scale, but rather, something closer to an average between a square of four to nine adjacent sensels of data. In my experience, detail resolution is actually better than a mathematical weighted average among a square of adjacent sensels, and I quantify it by saying that the smallest visually resolvable detail is approximately three pixels across (considering not only orthogonal lines but also oblique ones, which are an unfavorable case), or about six pixels in terms of a line-pair.

I am not too strict with step size; sometimes I would use a 7 µm step and sometimes 9 µm. Since I bought the StackShot I have been using sub-10 µm steps for lenses such as the Mitutoyo 10/0.28, but before, when using manual actuators, I would use a 10 µm step size just because it is more convenient. Normally I would use a minimum overlap of 10%, but 20-30% seems to give better quality.

Again, thank you for your time developing this tool, which I am going to use a lot for sure.

Regards

- rjlittlefield
- Site Admin

> **seta666 wrote:** I will quote what Enrico Savazzi says on his DOF calculator site: [...]

My images HERE clearly show good resolution at a level of 3 pixels per line pair using my equipment and test protocol. I have no idea why Enrico saw so much worse results.

--Rik

I have been thinking about why I get good results if my values are so wrong, and I thought of the following:

Could we use the size of the Airy disk in the formula instead of the CoC?

For example:

At f/11 effective, the Airy disk size is 14.7 µm, so we could use that 3-pixel value you mention with the NEX-5N.

However, at f/16 (close to the f/18 with the Mitutoyo 10/0.28 at 10X) the Airy disk measures 21.3 µm; even if the camera can resolve 3 pixels per line pair, there is no such detail to be resolved.

At f/22, close to what you get with the Mitutoyo 20/0.42 at 20X, the Airy disk is 29.3 µm.
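Those Airy-disk figures follow from the standard first-dark-ring formula. As a check (the wavelength λ = 0.546 µm, the mercury e-line, is my assumption — it matches the quoted numbers):

```python
# Airy disk diameter to the first dark ring: D = 2.44 * lambda * N_eff.

def airy_diameter_um(n_eff, lam_um=0.546):
    """Airy disk diameter in microns at effective f-number n_eff."""
    return 2.44 * lam_um * n_eff

for n in (11, 16, 22):
    print(n, round(airy_diameter_um(n), 1))  # 14.7, 21.3, 29.3
```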

Also, searching for "CoC vs Airy disk" on the web, I came to this site http://www.rags-int-inc.com/PhotoTechStuff/DoF/ where I found this table.

So, my questions are:

Should we use that 3-pixel CoC value when the Airy disk is equal to or smaller than 3 pixels?

Could we get good results if we use the Airy disk size as the CoC when the Airy disk is greater than those 3 pixels?

Regards

- rjlittlefield
- Site Admin

> **seta666 wrote:** it does resolve 3 pixels but I guess the shot was not diffraction limited.

Yes, and that's why the spreadsheet makes two separate calculations.

If you're limited by sensor resolution or your requirements allow more blur, then it makes more sense to use the classic CoC formulation.

If you're limited by diffraction, then it makes more sense to use the wave optics formulation.

In each case, "makes more sense" also means a larger step size than when computed the other way.

> **seta666 wrote:** Could we use the size of the Airy disk in the formula instead of the CoC?

Sure, but you don't want to use it directly. Most of the energy in an Airy disk goes into a circle that is smaller than the customary "diameter" = 2*1.22*lambda*fnumber, which refers to the first black ring.

If you want to match the two DOF calculations, just set CoC to be 1/1.22 (=0.8197) times the Airy disk diameter. Then the numbers produced by the classic and wave optics calculations will be exactly the same.
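That equivalence can be checked in a few lines. This sketch assumes λ = 0.55 µm and the small-angle relation NA ≈ m/(2·N_eff) used throughout the thread; the function name is mine.

```python
# Verify that setting CoC = Airy/1.22 (which equals 2*lambda*N_eff) makes
# the classic geometric DOF identical to the quarter-lambda wave-optics DOF.

LAM = 0.55e-3  # wavelength in mm, assumed green light

def check(m, n_eff):
    airy = 2.44 * LAM * n_eff
    coc = airy / 1.22                  # = 2 * LAM * n_eff
    classic = 2 * coc * n_eff / m**2   # geometric DOF, mm
    na = m / (2 * n_eff)               # subject-side NA, small-angle
    wave = LAM / na**2                 # quarter-lambda DOF, mm
    return classic, wave

c, w = check(10, 17.86)
print(c, w)  # the two values agree
```

Algebraically, both reduce to 4·λ·N_eff²/m², which is why the match is exact rather than approximate.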

> **seta666 wrote:** I am not too strict with step size; sometimes I would use a 7 µm step and sometimes 9 µm. Since I bought the StackShot I have been using sub-10 µm steps for lenses such as the Mitutoyo 10/0.28, but before, when using manual actuators, I would use a 10 µm step size just because it is more convenient.

Well, 7 µm is what's calculated as the 1/4-lambda error at NA 0.28. 10 µm really is too big for NA 0.28, but you would not notice the difference unless you were specifically looking for it. At http://www.photomacrography.net/forum/v ... p?p=104108, a step size that is 10/7 larger than the 1/4-lambda value would be 3.14 µm. You can see there that a step size of 3 µm is clearly distinguishable from finer spacings by flash-to-compare, but the blurred sections would not call attention to themselves if they were presented by themselves.

> **seta666 wrote:** I am thinking why I get good results if my values are so wrong

I wonder, have you carefully compared your results against a finer step size, using some method like the one at http://www.photomacrography.net/forum/v ... p?p=104108?

If not, then I suspect you have just not been noticing that some contrast or detail was left on the table. The differences would not be huge, and the larger-step results would look good by themselves. It is only in comparing against a best-possible result that you would see the difference.

Of course this reopens the age-old question: how good is good enough? The whole concept of CoC is based on the idea that at some size the viewer "can't tell the difference". Why 0.030 mm for full-frame format? That's 4.7 pixels even on Canon 5D Mark II, and over 6.1 pixels on Nikon D800. Where would you like to draw the line? Likewise the 1/4 lambda criterion is based on a worst case contrast loss of about 20% compared to perfect focus. Why 20%? Where would you like to draw the line?

--Rik

- rjlittlefield
- Site Admin

> **ChrisR wrote:** To confirm then: if we use an objective at other than its intended magnification, subject-side NA stays the same, so effective aperture only changes in the ratio (M_employed)/(M_intended), and not (M_employed + 1)/(M_intended + 1)?

Correct. In the latter case the pupil ratio actually does play a part, so the formula isn't exactly (M_employed + 1)/(M_intended + 1) either, but the differences are all too small to matter.

--Rik

> **rjlittlefield wrote:** Yes, and that's why the spreadsheet makes two separate calculations.

Thank you Rik, I think now I understand. The spreadsheet uses the higher DOF value as correct, whether it is sensor limited (classic DOF) or diffraction limited (wave optics).

> **rjlittlefield wrote:** I wonder, have you carefully compared your results against a finer step size, using some method like the one at http://www.photomacrography.net/forum/v ... p?p=104108?

You are right, I did not do any serious testing, but as there was no evident banding I thought it was right.

I think it would be nice if you made a sticky linking to all these DOF/resolution tests you have done so far.

Regards

- rjlittlefield
- Site Admin

I've run some numbers today, in response to recurring questions about what step sizes to use for macro work with conventional macro lenses such as Canon 100 mm and MP-E 65.


These calculations are all based on using a circle of confusion = 2.5 pixels, and setting an aperture size (if possible) so that the classic DOF and wave optics DOF are perfectly matched. The tables are limited to a maximum aperture of f/2.8. These calculations also assume that the lens and camera follow the rule "effective = setting*(m+1)" for determining the effective aperture from the aperture setting. Users of newer Nikon cameras and macro lenses should directly set the "optimum effective aperture" value instead.

---------------------------------------------------

15 megapixel APS-C (e.g. Canon T1i, 4752 pixels across 22.3 mm sensor)

Optimum effective aperture = f/10.7


```
Mag   Field Width   Aperture Setting   Step Size
0.2   111  mm       f/8.9              6.25  mm
0.3   74.3 mm       f/8.2              2.78  mm
0.4   55.8 mm       f/7.6              1.56  mm
0.5   44.6 mm       f/7.1              1.00  mm
0.6   37.2 mm       f/6.7              0.69  mm
0.8   27.9 mm       f/5.9              0.39  mm
1     22.3 mm       f/5.3              0.25  mm
1.5   14.9 mm       f/4.3              0.11  mm
2     11.2 mm       f/3.6              0.062 mm
3     7.4  mm       f/2.8              0.030 mm
4     5.58 mm       f/2.8              0.027 mm
5     4.46 mm       f/2.8              0.025 mm
```

25 megapixel full-frame (e.g. Canon 5D Mark III, 5760 pixels across 36 mm sensor)

Optimum effective aperture = f/14.2


```
Mag   Field Width   Aperture Setting   Step Size
0.2   180 mm        f/11.8             11.10 mm
0.3   120 mm        f/10.9             4.93  mm
0.4   90  mm        f/10.1             2.77  mm
0.5   72  mm        f/9.5              1.78  mm
0.6   60  mm        f/8.9              1.23  mm
0.8   45  mm        f/7.9              0.69  mm
1     36  mm        f/7.1              0.44  mm
1.5   24  mm        f/5.7              0.20  mm
2     18  mm        f/4.7              0.11  mm
3     12  mm        f/3.6              0.049 mm
4     9   mm        f/2.8              0.027 mm
5     7.2 mm        f/2.8              0.025 mm
```
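The "optimum effective aperture" values and the table rows above can be reproduced from the stated assumptions. This is a sketch, not the spreadsheet itself; λ = 0.55 µm and the function names are mine, and it ignores the f/2.8 cap applied at the highest magnifications.

```python
# Matching classic DOF (CoC = 2.5 pixels) to quarter-lambda wave-optics DOF
# gives the "optimum" effective aperture N_eff = CoC / (2 * lambda).

LAM_MM = 0.00055  # assumed wavelength, mm

def optimum_n_eff(sensor_mm, pixels, coc_pixels=2.5):
    coc_mm = coc_pixels * sensor_mm / pixels
    return coc_mm / (2 * LAM_MM)

def table_row(sensor_mm, pixels, m):
    """Aperture setting and step size (mm) at magnification m."""
    n_eff = optimum_n_eff(sensor_mm, pixels)
    setting = n_eff / (m + 1)               # "effective = setting*(m+1)" rule
    step_mm = 4 * LAM_MM * n_eff**2 / m**2  # matched DOF at this aperture
    return round(setting, 1), round(step_mm, 2)

print(round(optimum_n_eff(22.3, 4752), 1))  # 10.7 (Canon T1i)
print(round(optimum_n_eff(36.0, 5760), 1))  # 14.2 (5D Mark III)
print(table_row(22.3, 4752, 0.2))  # (8.9, 6.26) -- vs f/8.9 and 6.25 mm above
```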

There are three caveats in using these numbers:

- Most lenses do not satisfy the assumptions.
- There is no precise cutoff. These numbers will produce very slight focus banding that I can detect only by flash-to-compare against another stack shot with half this step size. Doubling the step size will produce focus banding that is easily seen in side by side comparison but might be acceptable if the larger step result were viewed by itself.
- These numbers do not necessarily run the lens at its sharpest aperture. At higher magnifications, the indicated f-number may be too small; at lower magnifications, it's almost certainly too large.

These numbers are "**for guidance only**". For critical work, you should follow standard advice:

**1. Determine aperture:** run a set of test exposures, at the desired magnification, to determine the smallest aperture that will provide the sharpness you need. For highest quality, go for the sharpest aperture.

**2. Determine step size:** at the chosen aperture, test to determine the best step size for your application. One good way to do this is to shoot a test stack with a much finer step size than you think you need, for example 1/5 or 1/10 as large. Then process that stack multiple times using only a subset of the frames, to determine the step size that's needed to avoid focus banding.

In Zerene Stacker, the latter test is easily done by first running the stack with all frames, then in the same run reprocessing with Options > Preferences > Preprocessing > "Stack every N'th Frame" set to 2, 3, 4, etc. This guarantees that all results will have exactly the same alignment so they can be easily compared.

I hope this helps. All questions welcome, of course.

--Rik

Ok, you're calculating DOF by two methods.

At high mags, the blur from diffraction adds to the diameter of the tapering cone of the traditional maths method so you get more DOF than it predicts, yes?

At lower mags, the wave method is pessimistic about DOF, though I'm not sure why. (At 1:1, f/2.8, it's about half.)

> **rjlittlefield wrote:** setting an aperture size (if possible) so that the classic DOF and wave optics DOF are perfectly matched

I'm not clear what you're doing here. Are you saying that you're choosing the aperture where it just so happens that the predicted blur calculated by the classic method **matches** the predicted blur from the wave method? If so, I can't see why that's a useful aperture to use.

Or are you selecting the method which is appropriate for the magnification in the table, and working out the aperture (for 2.5 pixels CofC) from that method? That would seem to make sense -?

---

We need to sort out the thing about Nikon apertures. I have/had one *very* old compensating, four less-old non-compensating Micro MF, one Micro AF zoom, also non-compensating (lost), but nothing G/newer.

- rjlittlefield
- Site Admin

> **ChrisR wrote:** I need to go back a step.

No problem -- repetition is good. I say that a lot.

> **ChrisR wrote:** Ok, you're calculating DOF by two methods.

Correct. One method ignores diffraction; the other method ignores geometry.

> **ChrisR wrote:** At high mags, the blur from diffraction adds to the diameter of the tapering cone of the traditional maths method

For the sake of clarity, I'll disagree with that.

The two blurs certainly combine in some sense. The word "add" is problematic because when you add the numbers 1 and 1 you get 2, but when you add blur radii of 1 diffraction and 1 geometry, you don't get 2 total. Some people model the combination of blurs like addition of orthogonal vectors: |a+b| = sqrt(|a|^2+|b|^2). That's not correct either, but it's mathematically tractable so it has a long history. (When in doubt, appeal to Pythagoras.)

Let's think harder about this, in one particular case.

We know from wave optics that focus is considered acceptable at 1/4-lambda wavefront error, and we know how to calculate the defocus distance at which that occurs, given only the effective aperture of the lens.

At the 1/4-lambda defocus distance, geometry blur is irrelevant. The perfectly focused image has resolution determined by its Airy disk, and the defocused image is identical to that except for having its MTF curve pushed slightly downward while retaining the same cutoff frequency.

What we want for focus stacking is to know the biggest step for which no part of the image will degrade noticeably from perfect focus. Assuming that the system is diffraction-limited and that we're using something close to the best aperture to meet our resolution requirements, then that "biggest step" corresponds directly to the 1/4-lambda defocus distance, maybe adjusted a bit upward or downward according to taste.

So, my basic method is to do a two-step calculation: find the effective aperture by any method, then use wave optics to calculate the 1/4-lambda DOF at that aperture.

The wave optics calculation of DOF fails when the aperture is so wide that the Airy disk is much smaller than the acceptable resolution. In that case a better estimate can be made from the classic geometry calculation using circle of confusion.
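The two-regime rule described in the last few paragraphs can be sketched as: compute DOF both ways and take whichever is larger, since that is the regime that actually limits you. This is my sketch of the idea, not the spreadsheet's internals; λ = 0.55 µm is assumed.

```python
# Combined step-size rule: geometric (sensor-limited) vs quarter-lambda
# (diffraction-limited) DOF; the larger value indicates the applicable regime.

LAM_UM = 0.55  # assumed wavelength, um

def step_size_um(m, n_eff, coc_mm):
    classic = 2 * coc_mm * 1000 * n_eff / m**2  # geometry, with CoC in mm
    wave = 4 * LAM_UM * n_eff**2 / m**2         # quarter-lambda DOF
    return max(classic, wave)

# Mitutoyo 10x NA 0.28 on the 5D Mark II (CoC = 2.5 px = 0.0160 mm):
# wave optics wins, giving ~7 um rather than the classic ~5.7 um.
print(round(step_size_um(10, 17.86, 0.0160), 1))  # 7.0
```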

> **ChrisR wrote:** At lower mags, the wave method is pessimistic about DOF, though I'm not sure why. (At 1:1, f/2.8, it's about half.)

I suspect you're looking at a situation where the Airy disk is small compared to the classic circle of confusion and/or sensor resolution. In that case the 1/4-lambda criterion doesn't apply, because you've decided to accept lots more degradation than just pushing down the MTF curve a bit.

If you show me the analysis or describe the experiment, maybe I could tell better.

> **ChrisR wrote:** Are you saying that you're choosing the aperture where it just so happens that the predicted blur calculated by the classic method matches the predicted blur from the wave method? [...] Or are you selecting the method which is appropriate for the magnification in the table, and working out the aperture (for 2.5 pixels CoC) from that method?

If the second explanation makes sense, feel free to use that.

But your first explanation is more correct. The resulting aperture is useful because if you made it smaller you would fail to meet your resolution target because of diffraction, and if you made it larger you would lose DOF. If you're a fan of Pythagoras, then observe that sqrt(x^2 + (k/x)^2) is minimized when x = k/x, i.e. when both components are equal. The resulting number is also the same as you'd get by starting from the optimum effective aperture and working backward from that.
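The Pythagoras aside can be verified numerically. A quick illustrative scan (the constant k is arbitrary):

```python
# sqrt(x^2 + (k/x)^2) is smallest where the two components are equal,
# i.e. x = k/x, so x = sqrt(k). Brute-force scan to confirm.

import math

def total_blur(x, k=4.0):
    return math.sqrt(x**2 + (k / x)**2)

# Scan x over [0.5, 5) in steps of 0.001; the minimizer is sqrt(4) = 2.
best_blur, best_x = min((total_blur(i / 1000), i / 1000) for i in range(500, 5000))
print(best_x)              # 2.0
print(round(best_blur, 4)) # 2.8284
```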

Summary is that I'm choosing an aperture for which you get the most DOF while not exceeding 1/4-lambda defocus and (if possible) also meeting the resolution requirement as described by the COC. At the highest magnifications in the tables that would require an aperture wider than f/2.8, so the resulting images will be softer than ideal.

--Rik