The lenses we use


ray_parkhurst
Posts: 3417
Joined: Sat Nov 20, 2010 10:40 am
Location: Santa Clara, CA, USA
Contact:

Post by ray_parkhurst »

If a fixed pitch and size sensor is used, and the aperture is reduced, then diffraction gradually reduces the amount of information contained in the image. But in Rik's example, if the aperture and sensor size are fixed, and the sensor pitch is gradually reduced, the information contained in the image will gradually increase until the sensor pitch reaches the Nyquist limit. Starting at ~90MP (where the DLA, the diffraction-limited aperture, is breached), the information continues to increase all the way to ~360MP (where Nyquist is breached).
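To put numbers on those two megapixel figures, here is a minimal Python sketch, assuming f/5.6 and lambda = 0.55 um; the two pixel pitches come from Rik's calculation later in the thread:

```python
# Minimal sketch: megapixels on a 24x36 mm sensor at the two pixel pitches
# discussed here (f/5.6, lambda = 0.55 um assumed).
def megapixels(pitch_um, width_mm=36.0, height_mm=24.0):
    return (width_mm * 1000 / pitch_um) * (height_mm * 1000 / pitch_um) / 1e6

print(megapixels(3.08))   # ~91 MP: one pixel per cutoff cycle (the "DLA" reading)
print(megapixels(1.54))   # ~364 MP: Nyquist, two pixels per cutoff cycle
```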

Now, the thing that interests me is the effect of this oversampling on a Bayer matrix. I believe a 2x oversample (and subsequent 2x downsize) ensures "real" G channel information is in every output pixel, while a 4x oversample ensures R and B info in every output pixel. Thus the concepts of oversampling by having very high pixel density, and oversampling by pixel shifting, overlap.
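To make that Bayer bookkeeping concrete, here is a small Python sketch; the RGGB layout and simple integer downsize are my assumptions, not anything established above:

```python
import numpy as np

# Sketch: count raw Bayer samples of each color inside the footprint of one
# output pixel after an integer downsize by `factor` (RGGB layout assumed).
def samples_per_output_pixel(factor):
    bayer = np.array([["R", "G"],
                      ["G", "B"]])
    block = np.tile(bayer, (factor, factor))[:factor, :factor]
    return {c: int(np.sum(block == c)) for c in ("R", "G", "B")}

print(samples_per_output_pixel(2))  # {'R': 1, 'G': 2, 'B': 1}
print(samples_per_output_pixel(4))  # {'R': 4, 'G': 8, 'B': 4}
```

At a 2x downsize each output pixel already covers two real G samples; getting four real R and four real B samples per output pixel takes the 4x case.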

rjlittlefield
Site Admin
Posts: 23563
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

mjkzz wrote:IMHO, stepping back a bit, even with your (Rik's) calculation, the diffraction-free resolution is 91.25MP, so it really does not matter how many samples we take, we will NOT gain any more detail than 91.25MP; we are only getting "more accurate" pixel values AFTER down-sampling.
Unfortunately, your opinion is based on inference from a calculator that you don't really understand, and its output does not mean what you think it does.

Let me step through the calculation again, highlighting some points of confusion.

Suppose we have effective f/5.6 and take lambda = 0.55 um.

The Airy disk diameter, measured across the first zeros, is 2.44*lambda*f_number [ref].

That value is 2.44*0.55*5.6 = 7.5 um. This is the same value shown by the cambridgeincolour calculator, so I agree with the calculator to that point.

However, the minimum resolvable feature separation is often considered to occur when the peak of one Airy disk is lined up with the first zero of the other Airy disk. This is called the "Rayleigh criterion". In this case the two peaks will be separated by only 3.75 um.

The resulting optical image will have two bright spots separated by a slightly less bright spot. In order to capture the bright/dark/bright pattern, we need one pixel on each bright spot, and another one on the darker space between them.

As a result, the maximum pixel spacing that will completely capture the f/5.6 optical image in this scenario is 3.75/2 = 1.875 um.

Now, revisiting the calculation that I did earlier, nu_0 at f/5.6 is 325 cycles/mm, or 3.08 um per cycle, which by the sampling theorem requires 1.54 um pixels.

The difference between 1.875 and 1.54 um is due to a slight difference in assumptions. The key difference is that the Rayleigh criterion corresponds to a spatial frequency that is somewhat lower than nu_0, giving an MTF value around 9% by standard formulas. Capturing the higher frequencies with MTF < 9% is what results in the finer pixel pitch of 1.54 um. In general, capturing the lowest-contrast information clear out to nu_0 requires about 4.88 pixels per Airy disk diameter.
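For anyone who wants to replay the arithmetic, here is a minimal Python sketch of the numbers above (f/5.6 and lambda = 0.55 um, as assumed throughout):

```python
lam_um = 0.55                        # wavelength, micrometers
N = 5.6                              # effective f-number

airy_um = 2.44 * lam_um * N          # Airy disk diameter across first zeros: ~7.5 um
rayleigh_um = airy_um / 2            # Rayleigh separation, 1.22*lambda*N: ~3.75 um
pitch_rayleigh_um = rayleigh_um / 2  # two samples per Rayleigh cycle: ~1.875 um

nu0 = 1000.0 / (lam_um * N)          # diffraction cutoff: ~325 cycles/mm
cycle_um = 1000.0 / nu0              # finest resolvable cycle: ~3.08 um
pitch_nyquist_um = cycle_um / 2      # Nyquist pitch at the cutoff: ~1.54 um

print(airy_um, rayleigh_um, pitch_rayleigh_um)  # 7.5 3.75 1.88 (approx)
print(nu0, cycle_um, pitch_nyquist_um)          # 324.7 3.08 1.54 (approx)
print(airy_um / pitch_nyquist_um)               # ~4.88 pixels per Airy diameter
```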

I am always puzzled, on the cambridgeincolour page, to see the comment that "an airy disk can have a diameter of about 2-3 pixels before diffraction limits resolution (assuming an otherwise perfect lens)". I assume that phrase means something, but I have never figured out exactly what.* It clearly does not mean capturing all the detail that is present in the optical image. That said, even taking their value of 3 pixels per Airy disk, an Airy disk of 7.5 um would imply a pixel size of 2.5 um, which on a 24x36 mm sensor would be 14400x9600 = 138 MP.
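For comparison, taking their "3 pixels per Airy disk" figure at face value works out as sketched here:

```python
airy_um = 7.5                  # Airy disk diameter at f/5.6, lambda = 0.55 um
pitch_um = airy_um / 3         # their reading: 2.5 um pixels
mp = (36000 / pitch_um) * (24000 / pitch_um) / 1e6
print(mp)                      # 14400 * 9600 -> ~138 MP
```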

--Rik

Edited to add: See musings at http://www.photomacrography.net/forum/v ... 180#101180, about what the cambridgeincolour comment may mean. That post also includes some experimental evidence that may help to explain why I'm so confident that in fact large pixel counts are required.

It's probably also worth noting, given the discussion earlier in this thread, that 2.44 pixels per Airy disk diameter will put 2 pixels per cycle at 39% MTF with a perfect lens. So, if you decide to not care about anything below 39% MTF, and you decide to not care about sampling artifacts due to only 2 pixels per cycle, then the cambridgeincolour calculator makes perfect sense even in theory.
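The MTF figures quoted in this thread (about 9% at the Rayleigh spacing, 39% at half the cutoff) come from the standard diffraction-limited MTF formula for an ideal lens; a quick sketch to verify them:

```python
import math

# Diffraction-limited MTF of an ideal lens; s = spatial frequency / cutoff nu_0.
def mtf(s):
    return (2 / math.pi) * (math.acos(s) - s * math.sqrt(1 - s * s))

print(f"{mtf(1 / 1.22):.1%}")  # Rayleigh frequency: ~9.0%
print(f"{mtf(0.5):.1%}")       # half the cutoff (2 px/cycle at 2.44 px/Airy): ~39.1%
```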

JohnyM
Posts: 463
Joined: Tue Dec 24, 2013 7:02 am

Post by JohnyM »

"an airy disk can have a diameter of about 2-3 pixels before diffraction limits resolution (assuming an otherwise perfect lens)"
I can imagine that they have in mind other factors, like damage done by low-pass filters and debayering. Reading some papers on AA filters used in cameras, they can blur detail across anywhere from 1 to 3-4 pixels.
For example, I find images from the A7RII (4.5 um pixels) to be clearly more detailed than those from the Canon 80D (3.9 um). The A6300 (also 3.9 um) does have an advantage over the A7RII, but it's very small.

The A7RII lacks an AA filter, the A6300 probably has a weak AA filter, and the Canon 80D has a rather strong one. This assumption is based on pixel-level resolution and moiré occurrence.

So what they might mean is that the sensor can pretty much ignore differences below 2-3 pixels per Airy disk diameter. And what they really mean is sharpness, not resolution.

rjlittlefield
Site Admin
Posts: 23563
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

JohnyM wrote:I can imagine that they have in mind other factors, like damage done by low-pass filters and debayering.
I agree. Copying here what I wrote in the old posting that I linked:
I've spent quite some time this evening looking at images and reading the cambridgeincolour page. I think I now understand what's going on.

Cambridgeincolour is concerned with a question commonly posed by users: "What apertures can I use, given the sensor I have?" For example, their number labeled "Diffraction Limits Standard Grayscale Resolution" reflects the aperture at which artifact-free resolution begins to be impacted.

In contrast, I'm answering a different question: "What sensor would I need, to capture all the detail in the image formed by my optics?" The answer to that question is 3 pixels per line pair for the finest lines resolved in the optical image.

When you work the numbers, it turns out that the answer to my question is a much higher pixel count than you'd get by working backward from cambridgeincolour's answer to their question.
For the current discussion, I've slacked off that "3 pixels per line pair for the finest lines resolved in the optical image" to the absolute minimum 2 pixels of Nyquist-Shannon. The result will be some extra artifacts in the very finest detail, but 2 pixels per line pair involves no judgement call, so that number is easier to defend.

--Rik

ray_parkhurst
Posts: 3417
Joined: Sat Nov 20, 2010 10:40 am
Location: Santa Clara, CA, USA
Contact:

Post by ray_parkhurst »

rjlittlefield wrote:... For the current discussion, I've slacked off that "3 pixels per line pair for the finest lines resolved in the optical image" to the absolute minimum 2 pixels of Nyquist-Shannon. The result will be some extra artifacts in the very finest detail, but 2 pixels per line pair involves no judgement call, so that number is easier to defend.
Good stuff Rik, and all making sense. I was a bit worried when I read "3 pixels" but your clarification put me at ease.

I had used the word "valid" to criticize the effort to get absolutely every speck of information from the image projected onto the sensor, but of course the effort is a valid one. I suppose the word "practical", or even "possible" (assuming current available equipment) would have been better.

As I think I said before, it seems your discussion centers more on pixel size than on sensor size. My original question of what was "optimum" (another mushy term) related more to sensor size, and I would now add "with current commonly available equipment". I have not fully integrated all of your inputs into this question, so I am not sure it is "valid" (correctly used this time). I suppose a few examples would be the best start, so I will think about that further.

Now, back to the original question, though it may be far too late in this thread to pick it up again... What are your go-to lenses across the mag ranges?

rjlittlefield
Site Admin
Posts: 23563
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

ray_parkhurst wrote:As I think I said before, it seems your discussion centers more on pixel size than on sensor size.
It actually focuses more on pixel count.

Sensor size becomes a limitation only when it is not possible to obtain lenses that will resolve the required number of pixels on a smaller sensor.

To continue our current example, it is certainly feasible though challenging to resolve 11,000 line pairs (22,000 pixels) across a full frame sensor, 24x36 mm. That requires only f/5.6 at minimal contrast. But the f-number scales with sensor size, so one would need f/2.8 on 12x18 mm, f/1.4 on 6x9 mm, and f/1.0 on about 4.3 x 6.4 mm.

The required f-number continues downward with even smaller sensors, reaching an absolute minimum of f/0.5 at around 2.1 x 3.2 mm. Below that you're completely stuck unless you switch to oil immersion on the sensor side, which will get you another factor of 1.4 or so.
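The scaling can be sketched in a few lines of Python, assuming lambda = 0.55 um and the 22,000-pixel width from the example above:

```python
lam_mm = 0.00055   # wavelength, mm
pixels = 22000     # Nyquist-sampled pixels across the sensor width

# Effective f-number whose diffraction cutoff nu_0 = 1/(lambda*N) just
# matches Nyquist sampling (two pixels per cycle) at the given width.
def required_f_number(width_mm):
    pitch_mm = width_mm / pixels   # required pixel pitch
    cycle_mm = 2 * pitch_mm        # Nyquist: two pixels per cycle
    return cycle_mm / lam_mm       # invert nu_0 = 1/(lambda * N)

for width in (36.0, 18.0, 9.0, 6.4):
    print(f"{width:4.1f} mm wide -> f/{required_f_number(width):.1f}")
# ~f/6, f/3, f/1.5, f/1.1 -- and since NA <= 1 in air caps the lens at
# f/0.5, the sensor width bottoms out around 3 mm here.
```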

--Rik

ray_parkhurst
Posts: 3417
Joined: Sat Nov 20, 2010 10:40 am
Location: Santa Clara, CA, USA
Contact:

Post by ray_parkhurst »

rjlittlefield wrote:
ray_parkhurst wrote:As I think I said before, it seems your discussion centers more on pixel size than on sensor size.
It actually focuses more on pixel count.
I took your 24x36mm field as arbitrary, but from your response it seems that size (and thus the resulting pixel count) is somehow important. Can you elaborate? The underlying physics discussion did not depend on field size, so I don't see why you say pixel count is the focus.
rjlittlefield wrote:Sensor size becomes a limitation only when it is not possible to obtain lenses that will resolve the required number of pixels on a smaller sensor.
Yes, in theory, but at present I am not sure a 365MP FF sensor is available, so this seems more of a limitation than the availability of optics.
rjlittlefield wrote:To continue our current example, it is certainly feasible though challenging to resolve 11,000 line pairs (22,000 pixels) across a full frame sensor, 24x36 mm. That requires only f/5.6 at minimal contrast. But the f-number scales with sensor size, so one would need f/2.8 on 12x18 mm, f/1.4 on 6x9 mm, and f/1.0 on about 4.3 x 6.4 mm.
The f-number scales because the sensor pixel size scales, correct? But again, this is still a theoretical discussion, because no 365MP 12x18mm sensors are yet available. And in practical terms, they probably never will be. Certainly the 0.8um pitch is not the ultimate limiter, given Moore's law, but how many folks would want to pay for such technology? A handful perhaps, but not enough to pay the development costs.

But the trend is well-taken. Smaller sensors require bigger apertures, but they also require lower magnification. More optics with high NA are available with small image circles than with large ones. So my question is evolving...

rjlittlefield
Site Admin
Posts: 23563
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

ray_parkhurst wrote:I took your 24x36mm field as arbitrary, but from your response it seems that size (and thus the resulting pixel count) is somehow important. Can you elaborate?
At one point, you asked the question: "Please help me out guys...for macro work, what is the lure of large format sensors?"

From my standpoint, the most compelling advantage of large format sensors is that you can get more pixels in a single exposure. So I was motivated to make that point in a way that would appeal to your sense of relevance.

The need for big pixel counts stems from the ratio between area and spot size, and I happen to know that the biggest ratio typically occurs with large areas at low magnification. It is not hard to find landscape applications where real lenses and real scenes can put a gigapixel of information onto a large piece of film, but obviously those are not macro.

So, responding to your question, I simply chose the largest subject field that almost everybody would consider "true macro" -- 1:1 onto 35-mm film. In that sense 24x36mm is not arbitrary, but it's driven by the words, not the physics.

That size field also happens to correspond to the sweet spot for your 105PN on a fullframe sensor, and it's a convenient size for coin photography, so I hoped that you would feel an immediate connection to it.
The f-number scales because the sensor pixel size scales, correct?
That is certainly one correct way to look at it, but it's not the one I usually use.

The way I look at it, the f-number scales because that's the way that light works.

In any lens system that just bends the light, mag*NA has the same value in all focus planes. (This very simple property is an implication of the Lagrange invariant, which in its usual form I find quite inscrutable.)

So, if you keep the same subject-side NA so as to retain the same resolution on subject, and you change the magnification so as to fit the same FOV on different size sensors, then the sensor-side NA and corresponding effective f-number have no choice but to scale with the sensor size.

If you have agreed to keep megapixels constant, then pixel size scales the same way. They're all tied together, and all linked to the fundamental question of how much resolution do you want, over how large of an area.

But in my favorite line of reasoning, the key thing is that bit about mag*NA being constant. That's what immediately tells me what the sensor-side NA is going to be, and from there the conclusions about pixel size needed to capture all the information are forced.

Everything works out so that the lens system does retain the same information content regardless of magnification.
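That bookkeeping can be sketched numerically; the subject-side NA of 0.089 below is just an assumed value, chosen so that 1:1 lands at f/5.6 as in the running example:

```python
NA_subject = 0.089       # assumed subject-side NA (gives ~f/5.6 at 1:1)
subject_fov_mm = 36.0    # subject field width, held constant

for sensor_mm in (36.0, 18.0, 9.0):
    m = sensor_mm / subject_fov_mm   # magnification that fits the same FOV
    NA_sensor = NA_subject / m       # mag*NA invariant: m * NA_sensor is constant
    N_eff = 1 / (2 * NA_sensor)      # sensor-side effective f-number
    print(f"{sensor_mm:4.1f} mm sensor: m = {m:.2f}, f/{N_eff:.1f}")
# 36 mm: f/5.6   18 mm: f/2.8   9 mm: f/1.4 -- same information content
```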

But emotionally, there's no reason why that has to be the case. In naive physics, it is easy to imagine that larger or smaller images have different information content. So, trying to just posit that they're the same is often a non-starter when trying to explain how things work.
But again, this is still a theoretical discussion, because no 365MP 12x18mm sensors are yet available.
I disagree -- I think it's a very practical discussion.

Several years ago you and I had a phone conversation regarding possible ways to photograph coins at high resolution. I suggested at the time that you should just buy a Nikon D800e, 36 megapixels with no AA filter, and stick your 105PN on it. That could have been done with no more effort than writing a check. But at the time, it was not possible to get that many pixels on any smaller sensor.

The numbers have changed today, but there's the same overall pattern. You can get 20 megapixels on micro 4/3, 50 megapixels on fullframe, and 100 megapixels on medium format.

So IF the criterion is to capture as much information as possible in one exposure, then larger is better, at least in the upper size ranges of "macro". I don't see any correct way to argue otherwise.

That said, it's very clear that most people don't buy medium format. I'm pretty sure that's for economic reasons, not technical ones, but in any case what it says is that for most people the criterion is not to capture as much information as possible in one exposure. Instead the criterion must be to optimize some more complicated function that includes (at least) cost and operating characteristics too.

In the end, I think that large sensors must have a "lure" for some people more than others because some people care more about resolution and pixel noise than other people do.

--Rik

ray_parkhurst
Posts: 3417
Joined: Sat Nov 20, 2010 10:40 am
Location: Santa Clara, CA, USA
Contact:

Post by ray_parkhurst »

Rik...for sure that conversation several years ago has stuck in my mind. It is why I purchased the A7RIII, and the 5DSR, but neither of those cameras met expectations. I would have purchased a D850 if it had tethering support. Today, I believe it does, and yet there is always the next best thing on the horizon that keeps me from just writing that check...

My goal is still the same for "closeup" work, and I still use the 105PN with the HRT2i to do the best I can. Perhaps a D850 or a Z-Mount will be in my future.

That said, going back to my OP, you can quickly glean that my system is finite. I have a Mitty 5x and 10x, and yet I rarely use them since they don't fit well into the system. I am loath to create a second system for high mag work, mostly for desk space reasons, so flexibility wins over ultimate performance.

I bring this up because I really don't know of many objectives other than the Mittys that have high IQ and can cover FF with long working distances. The MM series I use have good enough IQ but won't do FF. So if I go with FF, I will almost certainly have to figure out how to swap my system back and forth easily between infinite and finite. I'm not looking forward to that, though I of course have ideas on how to do it.

But simply moving to FF won't give me the improved results you are describing. I must also increase my entire magnification range by ~60% in order to maintain the FOV to take advantage of the bigger sensor with more pixels. So my 5x really needs to be a 7.5x, and I don't even know about the 10x. As we discussed long ago, the 105PN, now being operated at the outside edge of its range around 0.72x on APS-C, goes to 1.1x on FF, nearly perfect for the lens.

So while the move to FF may be "interesting" and involve some new high mag optics, I can't even contemplate the move to LF, which would require nearly a tripling of magnification and a tripling of coverage vs APS-C. To achieve the same FOV as my 5x and 10x, I'd need 15x and 30x, with 55mm image circles. Do these optics exist?

Seems daunting, which is why I asked the question about the lure of large format for macro. I know you tried to answer the question by skirting right at the edge of the macro range, and indeed the 105PN at 1:1 is a good example of an optic that will do the job on FF, and its good friend the 95PN can do similarly on LF, but what can be done deeper into the magnification range? Is this the quest that so many folks are on?

mjkzz
Posts: 1681
Joined: Wed Jul 01, 2015 3:38 pm
Location: California/Shenzhen
Contact:

Post by mjkzz »

rjlittlefield wrote:... However, the minimum resolvable feature separation is often considered to occur when the peak of one Airy disk is lined up with the first zero of the other Airy disk. This is called the "Rayleigh criterion". ...
Reading through the calculator page on Cambridge In Color, it seems they address and are aware of the "Rayleigh criterion", a fact I was aware of, too.

Anyways, essentially you are saying I did not understand the assumptions in that calculator (Cambridge In Color) and I did not understand the result it produces, right? OK, I will stop using that calculator :D

I will read through your elaboration and learn more. Thanks.

rjlittlefield
Site Admin
Posts: 23563
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

mjkzz wrote:Reading through the calculator page on Cambridge In Color, it seems they address and are aware of the "Rayleigh criterion", a fact I was aware of, too.
Yes, that's what is so confusing. They present the Rayleigh criterion and explain that diffraction sets a fundamental limit to resolution, but then without warning they switch gears and use a completely different analysis for the remainder of the page, which does not relate to Rayleigh. (OK, to be more precise it does sort of relate to Rayleigh, but it's different by a factor of around 2X, which is why your number for pixel count is different from mine by a factor of about 2*2.)

--Rik

mjkzz
Posts: 1681
Joined: Wed Jul 01, 2015 3:38 pm
Location: California/Shenzhen
Contact:

Post by mjkzz »

rjlittlefield wrote:... The resulting optical image will have two bright spots separated by a slightly less bright spot. In order to capture the bright/dark/bright pattern, we need one pixel on each bright spot, and another one on the darker space between them.
I agree completely up to here; the minimum separation of the images of the two objects on the sensor (film) is 1.22*lambda*f_number.
Rik wrote:As a result, the maximum pixel spacing that will completely capture the f/5.6 optical image in this scenario is 3.75/2 = 1.875 um. ... The difference between 1.875 and 1.54 um is due to a slight difference in assumptions. ...
Can I interpret the rest of your post as a difference in assumptions about how "separate" the Airy disks should be in order to get a "clear sharp" image? It is clear that we cannot simply try to sample two Airy disks when the first zero of one is SO CLOSE to the peak of the other (therefore simply using 1.22*lambda*f_number). There has to be some kind of separation to be acceptable. If so, could this factor be more or less empirical and accepted by the industry?
Last edited by mjkzz on Fri Aug 03, 2018 8:38 pm, edited 1 time in total.

mjkzz
Posts: 1681
Joined: Wed Jul 01, 2015 3:38 pm
Location: California/Shenzhen
Contact:

Post by mjkzz »

Rik wrote: I am always puzzled, on the cambridgeincolour page, to see the comment that "an airy disk can have a diameter of about 2-3 pixels before diffraction limits resolution (assuming an otherwise perfect lens)". I assume that phrase means something, but I have never figured out exactly what.* It clearly does not mean capturing all the detail that is present in the optical image. That said, even taking their value of 3 pixels per Airy disk, an Airy disk of 7.5 um would imply a pixel size of 2.5 um, which on a 24x36 mm sensor would be 14400x9600 = 138 MP.
So, this is the part that you do not know either. And it seems you do not agree with Cambridge In Color. Maybe someone can write to Cambridge In Color, or invite them over here to explain, as I think a lot of people are using their calculator.

mjkzz
Posts: 1681
Joined: Wed Jul 01, 2015 3:38 pm
Location: California/Shenzhen
Contact:

Post by mjkzz »

rjlittlefield wrote:
mjkzz wrote:Reading through the calculator page on Cambridge In Color, it seems they address and are aware of the "Rayleigh criterion", a fact I was aware of, too.
Yes, that's what is so confusing. They present the Rayleigh criterion and explain that diffraction sets a fundamental limit to resolution, but then without warning they switch gears and use a completely different analysis for the remainder of the page, which does not relate to Rayleigh. (OK, to be more precise it does sort of relate to Rayleigh, but it's different by a factor of around 2X, which is why your number for pixel count is different from mine by a factor of about 2*2.)

--Rik
Yes, thanks Rik, you are absolutely right. I did not know what they were doing there; I just took their calculation for granted. As in my last post, maybe some well-known figure here can invite them over (I do not think they would respond to my invitation, as I am nobody :D).

rjlittlefield
Site Admin
Posts: 23563
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

ray_parkhurst wrote:It is why I purchased the A7RIII, and the 5DSR, but neither of those cameras met expectations.
I don't recall how the 5DSR failed to meet expectations. Can you point me to that discussion?
That said, going back to my OP, you can quickly glean that my system is finite. I have a Mitty 5x and 10x, and yet I rarely use them since they don't fit well into the system.
That's interesting. I often treat the Mitty+Raynox as a unit that acts like a finite objective with M42 threads on the back, which I stick on bellows adjusted to provide the proper back focus for the Raynox. For practical purposes, that lens combo is just the same as any other finite conjugate lens, for example a Componon for use at lower magnification.
But simply moving to FF won't give me the improved results you are describing. I must also increase my entire magnification range by ~60% in order to maintain the FOV to take advantage of the bigger sensor with more pixels.
Or take advantage of the larger FOV at same nominal magnification. Lots of people have shown that Mittys hold up to full frame at nominal magnification, and the 7.5X is even better.
but what can be done deeper into the magnification range? Is this the quest that so many folks are on?
The first question depends on how much deeper you go. At some point, the information content becomes so small that lots of different sensors capture everything there is in the optical image. As an extreme example, 50X NA 0.55 on full frame only needs 2880 pixels per field width, 5.5 megapixels total, to meet the criterion of Nyquist at MTF=0. Any other sensor covering the same FOV at the same NA 0.55 will also only need 5.5 megapixels -- pretty simple these days!
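A sketch of that 50X arithmetic, for anyone who wants to replay it:

```python
lam_mm = 0.00055                 # wavelength, mm
mag, NA_subj = 50.0, 0.55        # 50X objective, subject-side NA 0.55
sensor_w, sensor_h = 36.0, 24.0  # full frame, mm

NA_sensor = NA_subj / mag        # image-side NA: 0.011
N_eff = 1 / (2 * NA_sensor)      # effective f-number: ~f/45
nu0 = 1 / (lam_mm * N_eff)       # diffraction cutoff: ~40 cycles/mm
px_w = 2 * nu0 * sensor_w        # Nyquist, 2 px/cycle: ~2880 pixels
px_h = 2 * nu0 * sensor_h        # ~1920 pixels
print(px_w, px_h, px_w * px_h / 1e6)  # 2880 x 1920 -> ~5.5 megapixels
```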

The answer to the second question is I don't know. Most of what I do is just 15 megapixel APS-C, frequently cropped from there, because that combination is cheap, convenient, and lets me see what I want to see. Beyond that level of description, I can't really say what my criteria are. Obviously I'm not going for maximum pixels per output image or I'd be stitching. I'm not even going for maximum pixels per single stack or I'd be using my D800E all the time. In the wasp antennae shots that I posted this evening, I didn't even use the highest resolution optics that would have covered the subject. That would have been 20X NA 0.42, but I chose to use 10X NA 0.28 because it was good enough and looked likely to make the stacking easier. Lots of tradeoffs!

--Rik
