Effective Aperture
Moderators: rjlittlefield, ChrisR, Chris S., Pau

 Posts: 476
 Joined: Fri Apr 13, 2012 3:54 pm
 Location: Canada
 Contact:
Effective Aperture
In several places on and off this forum, I've seen effective aperture used in discussions of the image quality of macro lenses and microscope objectives, suggesting that these effective apertures lead to diffraction in the same way that physical aperture does. This has led me to a several-hour-long Google hunt, producing absolutely no clear explanation of what is going on here. I am still quite skeptical of the claim that effective aperture is anything more than a clever trick to represent the reduction in light when close focusing.
When I have my MPE 65 stopped down to f/8 at 3x magnification, I have an "effective aperture" of f/32. There's a little chart/equation provided by Canon to calculate this. The same equation can be used to roughly estimate the effective aperture of any lens. However, these calculations are used purely in the context of adjusting exposure. I have not been able to find a single authoritative source (yet) expanding this to discussions of diffraction. Yes, I can find forum posts and blog articles that do this... but I am skeptical of all of these so far.
Can someone point me in the right direction?
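For reference, the exposure-correction formula behind Canon's chart can be sketched in a few lines of Python. This is a minimal sketch of the textbook relation N_eff = N x (1 + m/p); the pupil ratio p is assumed to be 1 (a symmetric lens), which is an idealization rather than a property of any particular lens:

```python
def effective_f_number(nominal_f, magnification, pupil_ratio=1.0):
    """Effective f-number at a given magnification.

    For a symmetric lens (pupil_ratio = 1) this reduces to the
    familiar exposure-correction rule N_eff = N * (1 + m).
    """
    return nominal_f * (1.0 + magnification / pupil_ratio)

# MPE 65 set to a nominal f/8 at 3x magnification:
print(effective_f_number(8, 3))  # 32.0
```

Whether that same N_eff also governs diffraction is exactly the question being asked here.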
 rjlittlefield
 Site Admin
 Posts: 23828
 Joined: Tue Aug 01, 2006 8:34 am
 Location: Richland, Washington State, USA
 Contact:

 Posts: 476
 Joined: Fri Apr 13, 2012 3:54 pm
 Location: Canada
 Contact:
Interesting...
I think I understand. Interference isn't a difficult concept (as long as I keep thinking about bath tubs and water waves...), but I think the last piece of the puzzle would be to try to picture in my mind how close focusing could serve to restrict the angles reaching the sensor while the aperture remains unchanged. I think what I need to see is an optics diagram of a lens showing possible angles of light at both close and infinity focus.
 rjlittlefield
 Site Admin
 Posts: 23828
 Joined: Tue Aug 01, 2006 8:34 am
 Location: Richland, Washington State, USA
 Contact:
The focus distance really doesn't matter. Anytime the lens is perfectly focused, the light going to the sensor is a spherical wavefront, confined to an exit cone whose angular width depends on the effective f-number. The cone angle and the wavelength tell you everything there is to know about how fine the interference pattern can be. That's why effective aperture is such an effective tool for thinking about diffraction blurring.
There's another useful relationship dealing with magnification, which is that the effective f-number on the camera side is always just the effective f-number on the subject side, multiplied by the magnification. For numerical aperture it's the inverse: NA on the camera side is NA on the subject side divided by the magnification. This relationship holds for any well-corrected lens or combination of lenses, which for our purposes is all of them. The nice thing here is that it puts diffraction blur on an equal footing on both sides of the lens: you can think of diffraction blur as occurring on the subject side (traditional in microscopy) or occurring on the camera side (traditional in photography), and the results are the same, because the blur on the camera side is just the blur on the subject side multiplied by the magnification. It's a nice unification.
Rik
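A quick numeric sketch of the relationship described above, in Python. The subject-side f-number here is an assumed example value, and the Airy diameter assumes green light at 0.55 µm:

```python
def airy_diameter_um(f_number, wavelength_um=0.55):
    # Diameter to the first minimum of the Airy pattern: 2.44 * lambda * N
    return 2.44 * wavelength_um * f_number

m = 3.0            # magnification
f_subject = 8.0    # subject-side effective f-number (assumed example value)
f_camera = f_subject * m   # camera-side effective f-number

blur_subject = airy_diameter_um(f_subject)  # blur referred to the subject
blur_camera = airy_diameter_um(f_camera)    # blur referred to the sensor

# The two viewpoints agree: camera-side blur = subject-side blur * m
assert abs(blur_camera - blur_subject * m) < 1e-9
```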

 Posts: 476
 Joined: Fri Apr 13, 2012 3:54 pm
 Location: Canada
 Contact:
Actually, what I'm trying to figure out is *why* the effective f-number determines the light cone's angular size.
Take the MPE 65. As I understand it, this is a fixed-focal-length, fixed-focus lens with a built-in bellows. So increasing magnification means putting more distance between the lens elements and the sensor, like so:
Well, it's poorly drawn, but you can see that as the distance between the lens group and the sensor increases, the angular size of the light cone decreases. Great. This jibes with your explanation.
This also works fine to explain bellows, extension tubes, etc.
But what about using a 1:1 macro lens with variable focus? I can back away from the subject or I can get in really close. This changes magnification. In this case, it's harder to visualize what the optics are doing in the lens and how that affects the light cone. That's what I'm trying to wrap my head around now.
 rjlittlefield
 Site Admin
 Posts: 23828
 Joined: Tue Aug 01, 2006 8:34 am
 Location: Richland, Washington State, USA
 Contact:
Rylee Isitt wrote: Actually, what I'm trying to figure out is *why* the effective f-number determines the light cone's angular size.
That relationship is really by definition.
All the formulas for how to compute effective f-number from focal length, nominal f-number, added extension, pupil ratio, teleconverter magnification, etc., end up being what they are so that the computed effective f-number matches the angular size.
In a few simple cases, it's easy to see directly. As the optics get more complicated, working through the details gets more complicated too. Before very long, the details end up getting unmanageable and/or depend on things you don't know and can't measure directly, and you just have to take it on faith that if only you could work through all the details, the relationships would work out as I've described. At least that's been my experience.
Rylee Isitt wrote: Take the MPE 65. As I understand it, this is a fixed-focal-length, fixed-focus lens with a built-in bellows.
Good example. The MPE is not fixed focal length. Focal length is easy to measure indirectly by watching the increase in magnification as you add empty extension between the lens and the camera. It turns out that the MPE is 65 mm at 1:1, but at 5:1 it's more like 40 mm. You can quickly see that something like that must be happening because the physical lengthening of the lens, while pretty large, is not as large as the 260 mm necessary to go from 1:1 to 5:1 at 65 mm focal length.
The MPE 65 is complicated in other ways too. The entrance pupil actually stays fixed while the focal length shortens, which means that the wide-open nominal aperture based on diameter and focal length grows from f/2.8 to something like f/1.7 as the lens extends. But at the same time, the pupil ratio changes from about 1 to something much smaller, so that the effective aperture ends up doing pretty much what you'd expect if you didn't know about the focal length shortening. What a mess, eh?
There's also no guarantee, by the way, that the published tables for effective f-number are actually correct. You have to measure to be sure. A good rough measurement can be obtained by comparing exposure times for matched histograms between the system under test and a normal lens at infinity focus, looking at a uniformly illuminated surface. I say "rough" because that measurement doesn't include differences in transmission due to absorption and reflections within the lens. If you really want to get it dead on, you'd have to do something like focus a pinhole and then directly measure the angular width of the light cone. That's what I sometimes call "conceptually straightforward", meaning that it seems simple enough but turns out to be fiddly to set up and difficult to measure accurately. See HERE for one I set up several years ago to measure subject-side aperture, while trying to understand differences between ordinary lenses and microscope objectives.
By the way, that thread sort of whimpers to an unsatisfying end with some discrepancies unresolved. I've since gone back and reanalyzed the results in terms of pupil factor, and indeed everything is consistent once all the important aspects are thrown in. It just gets confusing because, for example, the Olympus 38 mm f/2.8 lens doesn't act like a (simple) f/2.8 lens under any circumstances in which you can actually focus it. It specs as f/2.8 based on focal length and entrance pupil diameter, but due to its asymmetric design the effective f-number grows faster than it would for a simple lens. In practice it acts more like an f/3.2 simple lens.
Rik
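The extension trick for measuring focal length can be written down directly. In a thin-lens model, adding empty extension Δe increases the magnification by Δe/f, so f = Δe / Δm. The measurement numbers in the example are hypothetical:

```python
def focal_length_from_extension(delta_extension_mm, m_before, m_after):
    """Estimate focal length from the magnification gained by adding
    empty extension (thin-lens model: delta_m = delta_extension / f)."""
    return delta_extension_mm / (m_after - m_before)

# Hypothetical measurement: a 25 mm tube raises magnification 1.00 -> 1.38
print(focal_length_from_extension(25, 1.00, 1.38))  # ~65.8 mm
```

Real lenses with floating elements (like the MPE) break the thin-lens assumption, which is exactly why the measured value drifts with magnification.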
Well Rylee;
First thing you should think about is at what aperture your camera starts to be diffraction limited.
On a 5D Mk II it starts to lose detail from f/12.3; on a 7D from f/7.1.
I try to keep the aperture below f/18 when possible, and as low as possible when doing high-magnification work.
Second, the MPE is not exactly a fixed lens on bellows but a variable-focal-length lens with floating elements.
It is a 65mm when doing 1:1 and around 38mm when doing 5:1 work.
If it were a 65mm lens at 5:1, it would need around 300mm of extension, maybe more.
EDIT: Well, Rik was faster than me, again ;)
Regards

 Posts: 476
 Joined: Fri Apr 13, 2012 3:54 pm
 Location: Canada
 Contact:
Okay, one last question:
I take it that the size of the Airy disk is more correctly related to the effective aperture, so an approximation would be something like 1.35 x f-number, where f-number is the effective aperture?
And I can sort of compare this to the pixel pitch of the sensor to decide whether I'm diffraction limited or not? I'm guessing that in many cases with high-magnification work, the aperture that just reaches the threshold of being diffraction limited will, in theory, be a good balance between reducing lens aberrations and keeping diffraction down? Since I'd normally be focus stacking for such work, selecting the aperture to find this balance between diffraction and aberration seems ideal. And it's not as simple as "f/8 and be there"...
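That comparison can be sketched in Python, assuming green light (0.55 µm) and the common, but rough, rule of thumb that diffraction starts to matter when the Airy disk spans about two pixels:

```python
def airy_diameter_um(effective_f, wavelength_um=0.55):
    # First-minimum Airy diameter: 2.44 * lambda * N (~1.35 * N um in green)
    return 2.44 * wavelength_um * effective_f

def visibly_diffraction_limited(effective_f, pixel_pitch_um, pixels=2.0):
    """Rough rule of thumb, not a hard law: flag when the Airy disk
    spans more than `pixels` pixels on the sensor."""
    return airy_diameter_um(effective_f) > pixels * pixel_pitch_um

# MPE 65 at nominal f/8, 3x -> effective f/32, on an assumed 6.4 um pitch:
print(airy_diameter_um(32))                    # ~43 um
print(visibly_diffraction_limited(32, 6.4))    # True
```

The two-pixel criterion is only one possible threshold; as the replies below this post note, real sweet spots are best found by testing.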
 rjlittlefield
 Site Admin
 Posts: 23828
 Joined: Tue Aug 01, 2006 8:34 am
 Location: Richland, Washington State, USA
 Contact:
Correct, the size of the Airy disk is directly related to the effective f-number.
Also correct that there's a sweet spot at which diffraction blur is balanced against other issues like aberration to yield the best image.
But to tell the truth, I've never found calculation to be very helpful in determining where the sweet spot might be. Calculation is especially troublesome because there are so many factors that go into determining what the effective aperture really is.
In addition to the ones mentioned above, there's the huge factor that Canon cameras always read out the nominal f-number, while Nikon cameras equipped with modern lenses read out the effective f-number, already corrected for magnification. So it's completely possible that a Canon user and a Nikon user could buy the same model of macro lens in two different mounts, stick them on cameras with sensors of the same size and pixel density, run a series of experiments on each setup, and have the Canon user observe that "f/8" was best while the Nikon user observed "f/16". The results look crazy until you realize that the numbers mean different things.
Other issues can kick in as well. Charles Krebs once ran a careful test series with his MPE in one setup and observed sharpest at one stop down, while I ran an equally careful series in a different setup and observed sharpest wide open. The location of the sweet spot is also affected by your criteria. If the lens is aberrated, then the whole shape of the MTF curve can change as it stops down. It's completely possible for a lens to resolve finer detail wide open, but look "sharper" stopped down. In terms of contrast, that happens when stopping down raises the contrast for medium-sized detail (by reducing aberrations) while lowering the contrast for fine detail (by increasing diffraction). All the same setup, different criteria, different results.
These complexities are why I always recommend running an aperture test series and looking carefully at the resulting pictures to see which ones give the best result, for your own equipment and your own definition of "best". Even that turns out to be not quite as simple as you'd hope, because a lot of lenses shift focus slightly as they stop down. (The focus shift is due to under- or over-corrected spherical aberration.) So if you shoot single exposures, you're liable to not have exactly corresponding features to compare. Better is to run a short focus stack. That's especially true if you're taking the measurement in anticipation of stacking. Some lenses have aberrations that are not noticeable in single frames, but stack up to make highly visible artifacts.
Rik
 rjlittlefield
 Site Admin
 Posts: 23828
 Joined: Tue Aug 01, 2006 8:34 am
 Location: Richland, Washington State, USA
 Contact:
Yep, it's very handy when you know about it.
Unfortunately it's also confusing. I've seen too many squabbles along the lines of:
User A: I can only go down to f/11 or my images get fuzzy.
User B: Hah! I can go clear down to f/22. You really ought to consider switching manufacturers so you can get some decent DOF!
That sort of advice makes me twitch something awful, 'cuz I hate to see people spend lots of money based on a misconception.
Rik

 Posts: 476
 Joined: Fri Apr 13, 2012 3:54 pm
 Location: Canada
 Contact:
Yeah, sometimes technical debates get interesting. Especially when it comes to the megapixel debates!
Anyway, I did some testing on my new 100mm f/2.8 L macro (I can partially blame you for convincing me to pick up this lens). According to my calculations for 1x magnification, the maximum size of the Airy disk beyond which diffraction would be noticeable is somewhere around 8.7 µm (twice the pixel pitch), which corresponds to an effective aperture of f/6.4. This should mean that I'd need to use f/3.2 to avoid noticeable diffraction. Yet, in real-world tests, I subjectively prefer the output at f/5.6. Diffraction becomes noticeable around f/8, but isn't a big deal until f/11. All these assumptions being made in the calculations add up to a pretty error-prone approximation, it seems.
And this is all at 100% on a computer screen, which probably isn't the best way to test: all of my work gets displayed reduced down to 1080p (or smaller) or printed no larger (so far) than 11x17. Given that, I can probably get away with f/11 no problem, although local contrast would likely suffer.
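Those numbers do check out arithmetically, using the ~1.35 x N rule of thumb from earlier in the thread (the 8.7 µm threshold and 1x magnification are from the post; the lens is assumed to be symmetric):

```python
AIRY_UM_PER_F_STOP = 1.35   # ~2.44 * 0.55 um of Airy diameter per unit of N

max_airy_um = 8.7           # twice the pixel pitch, per the post
m = 1.0                     # magnification

n_eff = max_airy_um / AIRY_UM_PER_F_STOP   # effective f-number threshold
n_nominal = n_eff / (1 + m)                # nominal (marked) f-number at 1x

print(round(n_eff, 1), round(n_nominal, 1))   # 6.4 3.2
```

That the subjective sweet spot lands at f/5.6 rather than f/3.2 is consistent with the two-pixel criterion being conservative, not with the arithmetic being wrong.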
A possibly off-topic reply:
This thread reminds me a little of a meeting I had with a very highly regarded optical scientist at Kodak, where I worked for about 30 years as a staff scientist. The meeting was in the early 80s, about 100 lifetimes ago.
At that time I was naively enthusiastic about MTF (and related concepts) for understanding image blur in optics; almost all of my work was in medical radiography, which is in many ways much simpler than photographic optics. After listening to my enthusiastic support for MTF as the "one, true answer", the optical scientist pulled out a thick book containing some calculated MTF graphs for a new lens design he was working on. The book of graphs was hundreds of pages thick, where a given page was for a specific aperture and for specific regions of the lens, specific focal lengths, and (I think) also included radial and tangential MTFs. He agreed with my enthusiasm for using MTF, etc., but said it was essentially impossible to create a best-balanced lens design by directly using the zillions of MTFs required to provide a reasonably complete description of a complex lens design. And there were also wavelength dependences for MTF, and so many other dependencies, of course. (An aside: most real-world engineering is like this, IMO. Just try to design an airliner or a nuclear reactor or an automobile!)
I came away from this discussion with a new appreciation for the simplicity of using MTF in my own narrow research field of medical radiography. I was (and am) quite overwhelmed by the inherent complexity of a lens designer's job. End-user evaluation of camera lenses is even more overwhelming to approach in a quantitative manner.
During this time, I became more deeply skeptical about trying to calculate or measure one number to properly represent the quality of a lens.
Yet we must typically select a single aperture when using a given lens and camera for a specific task, hoping to obtain a reasonably balanced photograph. And we must and will perform enduser tests and evaluations (hopefully using a scientific method with a control!).
Perhaps the best we can do is to avoid oversimplification and gross errors of understanding and to remain humble while learning as much as we can from both our experiments and our understanding of theory and mathematical modeling.
Phil
"Diffraction never sleeps"
"Diffraction never sleeps"