Hi there!
This is my first post, and I have a very beginner macro gear question that may be quite dumb. Still, since I'm relatively familiar with the effects of diffraction and such in other fields, I can't help but ask it.
So the general question is: at what pixel density, for a given sensor size and while shooting at 1:1 or 2:1 magnification (image size on the sensor relative to subject size), am I "wasting" pixels due to diffraction?
The more particular and elaborate version of the question:
For multiple reasons, which may or may not boil down to "because I want to", I'm thinking about changing my current camera from a Sony a6400 (APS-C, 24 Mp), which I've been using for the past 3 years, to an a7IV (full frame, 33 Mp). Although I'd like to use the a7IV's full sensor for many subjects (big wildlife, family and friends, architecture, etc.), I'm wondering how much resolution I would actually lose by using it in crop mode (14 Mp) for macro photography, given that I've never been able to focus well with the a6400 at apertures wider than f/8, which is well within diffraction territory at 1:1 or 2:1 magnification. So, since my images are already relatively soft with 24 Mp on APS-C, will the lower resolution of the cropped a7IV really degrade my images beyond what my poor technique and small aperture are already doing?
Perhaps a couple of comments:
1. I shoot almost exclusively handheld, with a flash and an on-lens diffuser.
2. Yes, I know I should try to improve my technique, but it's not happening. My pulse is simply what it is, and age is not helping either. Perhaps I could improve with massive amounts of practice, but my workload and family life simply do not allow for it.
3. My main goal is therefore to make my obviously amateurish experience even more enjoyable than it already is by purchasing a camera with better ergonomics, and better resolution within the full sensor. Since a large part of the enjoyment is marveling at the beautiful details of insects and other tiny living beings, seeing a significant downgrade in the level of detail I can see in the pictures would not, in fact, be an upgrade to my experience.
4. I would love to test this, but I don't know anybody here with an a7IV, and the return policy in Taiwan is absolutely terrible. Basically, if you open the box, you keep it. Therefore my chances of just getting an a7IV for a couple of days and testing it are very very slim.
I would really appreciate some comments on this, and I'm sorry if this is not a very clever question. Still, I feel that if I'm going to spend a significant amount of money on this hobby, I should be certain that I am not wasting it.
Thanks!
Very beginner question re. diffraction vs sensor resolution
Moderators: rjlittlefield, ChrisR, Chris S., Pau

 Posts: 3
 Joined: Mon Feb 07, 2022 10:19 pm
 Location: Taiwan

Re: Very beginner question re. diffraction vs sensor resolution
OK, I've been reading about resolution in microscopy, and I may be able to answer my question myself. Still, it'd be awesome if some of you could check my assumptions and math to see if they are correct, or if I've made some mistakes.
The long story short is that I think the 14 Mp APS-C crop of the a7IV will be sufficient to resolve any details produced by my lens at f/8. In other words, I don't think that the a6400's more densely packed 24 Mp APS-C sensor is bringing anything to the table for my shooting style.
Here comes the long version:
If I rephrase my question, what I want to know is: when recording an image through a lens at f/8 onto either a 14 Mp or a 24 Mp sensor of the same size (i.e. APS-C), am I limited by the lens's resolving power, or by the resolution of the sensor?
My first assumption is that, under ideal conditions and at 1:1 magnification (on the APS-C sized sensor of the a6400, or the APS-C crop of the a7IV), the resolving power of the sensor is the pixel pitch, i.e. the distance between individual photosites. Any details smaller than that distance will not be detected on their own, but will instead contribute to the total signal of the corresponding pixel.
So, checking the specs of each of these cameras, I find that the pixel pitch of the a6400 is 3.92 um, while for the a7IV it is 5.12 um. So basically, if the resolving power of the lens at f/8 is 3.92 um or better, the a6400 will provide more detail, while if it is coarser than 5.12 um, there will be no appreciable difference between the two sensors.
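Those pitch numbers can be sanity-checked from published sensor dimensions and pixel counts. A quick sketch (the sensor widths and horizontal pixel counts below are assumptions taken from spec sheets, not measured values):

```python
# Rough pixel-pitch check. Sensor widths and horizontal pixel counts
# are assumptions from the spec sheets.
def pixel_pitch_um(sensor_width_mm: float, horizontal_pixels: int) -> float:
    """Pitch in microns = sensor width / number of pixels across that width."""
    return sensor_width_mm / horizontal_pixels * 1000

a6400_pitch = pixel_pitch_um(23.5, 6000)  # a6400: APS-C, 24 Mp
a7iv_pitch = pixel_pitch_um(35.9, 7008)   # a7IV: full frame, 33 Mp

print(f"a6400: {a6400_pitch:.2f} um/pixel")  # ~3.92
print(f"a7IV:  {a7iv_pitch:.2f} um/pixel")   # ~5.12
```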
Now for the resolving power of the lens. For this, I read the descriptions below of lens resolving power, the definition of numerical aperture, and the relationship between the two and the f-stop:
http://zeisscampus.magnet.fsu.edu/arti ... ution.html
https://www.microscopyu.com/microscopy ... resolution
https://www.eckop.com/resources/optics/ ... fnumber/
I assume that, under super-ideal conditions, i.e. perfectly parallel beams entering the lens fully perpendicular to the sensor, my lens would behave like a microscope objective.
Thus, via the formula NA = 1/(2*f), I can calculate that the numerical aperture corresponding to f/8 is 0.06 in the best-case scenario, or 0.03 if we consider an effective f-value of f/16.
From the formulas I saw, the best-case lens resolution for a given wavelength is R = Wavelength/(2*NA), or basically R = f*Wavelength.
Therefore, at f/8 and for the center of the visible spectrum, i.e. 550 nm, we get a resolution power of 4.4 um for f/8 or 8.8 um for an effective aperture of f/16.
In conclusion, the ideal resolving power of the lens at 1:1 magnification and f/8 is between 4.4 and 8.8 um, which is very close to, or a bit worse than, the resolving power of the 14 Mp sensor (5.12 um). Under both conditions it is worse than the resolving power of the 24 Mp sensor (3.92 um), and therefore I should see essentially no difference in the amount of detail collected by either sensor.
Further, with this math, I'd need to go to ~f/3.5 to clearly see improved detail on the 24 Mp sensor (i.e. where the resolution of the lens would be between 2 and 4 um).
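The numbers above can be reproduced in a few lines, assuming the simple diffraction-limited relation R = f*Wavelength from earlier:

```python
# Diffraction-limited resolution sketch: R = f-number * wavelength,
# equivalent to R = wavelength / (2*NA) with NA = 1/(2*f).
WAVELENGTH_UM = 0.55  # green light, centre of the visible spectrum

def diffraction_resolution_um(f_number: float) -> float:
    return f_number * WAVELENGTH_UM

print(diffraction_resolution_um(8))    # 4.4 um at nominal f/8
print(diffraction_resolution_um(16))   # 8.8 um at effective f/16
print(diffraction_resolution_um(3.5))  # ~1.9 um at nominal f/3.5
```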
Does this make sense? Where am I thinking wrong?
Any comments, pointers, criticisms or suggestions would be really appreciated!
Thanks!
 rjlittlefield
 Site Admin
 Posts: 23360
 Joined: Tue Aug 01, 2006 8:34 am
 Location: Richland, Washington State, USA
 Contact:
Re: Very beginner question re. diffraction vs sensor resolution
Sirverdemer, welcome aboard!
Your questions and analysis are far beyond what we usually see for beginners. They are also in an area where it seems like all the issues are gray and muddy, even when fully understood. But I will try to help.
First, I find that when people speak of "resolution" in units of microns, the most likely result is confusion. This is because of ambiguity in what is being measured. If we are dealing with a grid of black and white lines, some people will measure the width of one line, while other people will measure the width of two lines (a "line pair"). Obviously there is a 2X difference between those measurements, so we cannot afford to ignore the difference!
For myself, the most reliable approach is to think in terms of cycles of a sinusoidal wave, so I will take that approach here. The formulas that I use are recounted at viewtopic.php?p=124831#124831 .
The spatial "cutoff frequency" for a lens that is limited by diffraction can be calculated as:
nu_0 = (2*NA)/lambda = 1/(lambda*fnumber)
where fnumber and NA are describing the same light cone. Since we're talking about the light cone as it strikes the sensor, this fnumber is the photographer's effective fnumber, often approximated as
effective fnumber = nominal fnumber * (magnification+1)
So then, plugging in lambda = 0.00055 mm (green light) and effective f/16, the corresponding nu_0 is 113.6364 cycles/mm, or exactly 8.8 µm per cycle.
Of course this is the same number that appears in your sentence that "we get a resolution power of 4.4 um for f/8 or 8.8 um for an effective aperture of f/16".
But saying "per cycle" instead of "resolution power" makes the meaning much more clear. Nyquist sampling theory tells us that we need at least two samples per cycle to capture all the information in a bandwidth limited signal. The implication here is that we need at least two samples in each 8.8 µm, and that leads to pixel size 4.4 µm or smaller, at effective f/16.
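Rik's cutoff-frequency and Nyquist numbers can be checked with a short sketch (the formulas follow his post; the function names are mine):

```python
# Diffraction cutoff frequency and the Nyquist-limited pixel size.
# nu_0 = 1 / (lambda * effective f-number), per the formulas above.
WAVELENGTH_MM = 0.00055  # green light

def cutoff_cycles_per_mm(effective_fnumber: float) -> float:
    return 1 / (WAVELENGTH_MM * effective_fnumber)

def max_pixel_um(effective_fnumber: float) -> float:
    # Nyquist: at least 2 samples per cycle, so pixel <= half a cycle.
    cycle_um = 1000 / cutoff_cycles_per_mm(effective_fnumber)
    return cycle_um / 2

print(cutoff_cycles_per_mm(16))  # ~113.64 cycles/mm at effective f/16
print(max_pixel_um(16))          # ~4.4 um maximum pixel pitch
```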
At MicroscopyU: Matching Camera to Microscope Resolution, Nikon is essentially using the calculation above as a rule that sensor and optics are matched when the sensor pitch is exactly 2 pixels per cycle, at the cutoff frequency of the optics. Depending on how you look at it, this rule can be either a bare minimum or a reasonable recommendation, or both at the same time. On the one hand, it barely meets the Nyquist sampling criterion, at the spatial frequency where the lens MTF drops to zero anyway. On the other hand, it gives 4 pixels per cycle at half that frequency, where the diffraction MTF is only about 39%, so the rule also gives lots of pixels per cycle for all except the finest detail where the lens is giving out anyway.
Summarizing to this point: no, the A7IV does not have small enough pixels to capture all the information in an f/16 optical image. Given a nice clean f/16 image, you would definitely notice the difference in pixel-peeping 24 MP versus 14 MP.
Is this helping?
(If so, then my next question will be why you don't just shoot at higher magnification so as to use more of the larger sensor? If your lens goes to 1X, then any subject that needs less than 0.6X on APS-C can be imaged with the same framing on full-frame with the whole 33 Mp.)
Rik

 Posts: 3
 Joined: Mon Feb 07, 2022 10:19 pm
 Location: Taiwan
Re: Very beginner question re. diffraction vs sensor resolution
Hi Rik!
Thanks for the very warm and informative welcome. Your answer has very clearly explained where I was misunderstanding the concept of resolution, and indeed, working with cutoff frequencies makes it much easier to comprehend. Also, I did not take into account sampling frequency!
Regarding your question, it was rather the implicit million-dollar question. I currently own the APS-C Laowa 65 mm macro lens, which can go to 2X magnification. Of course, its image circle will not cover the entire a7IV sensor, so if I decided to change to the a7IV for non-macro reasons, I'd have to decide whether to (a) keep the lens and shoot in crop mode at 14 Mp, or (b) sell the 65 mm lens and buy the 100 mm 2X full-frame Laowa lens. The key deciding criterion between path (a) and path (b) was whether I'd see any image degradation at 14 Mp compared to 24 Mp. Now that I know that (b) is probably the way to go, I need to think about whether my list of a7IV pros still outweighs the cons, which now include having to allocate a budget for an extra lens.
Still, I'm extremely happy to be digging into this, because it is teaching me a lot about optics as they apply to photography and visible light.
Thanks again!
