The lenses we use



ray_parkhurst
Posts: 3413
Joined: Sat Nov 20, 2010 10:40 am
Location: Santa Clara, CA, USA

Post by ray_parkhurst »

rjlittlefield wrote:I don't recall how the 5DSR failed to meet expectations. Can you point me to that discussion?
I did not publish at the time, but take a look at the top of page 2 of this thread...
rjlittlefield wrote: That's interesting. I often treat the Mitty+Raynox as a unit that acts like a finite objective with M42 threads on the back, which I stick on bellows adjusted to provide the proper back focus for the Raynox. For practical purposes, that lens combo is just the same as any other finite conjugate lens, for example a Componon for use at lower magnification.
Yeah, I figured I would do something like that.
rjlittlefield wrote: Or take advantage of the larger FOV at same nominal magnification
So I'd be giving up any advantage the bigger sensor has in terms of resolution improvement by downsizing? And if the pixels on the larger format are larger, my captured image with the same FOV would be smaller. Seems counterproductive to this discussion.
rjlittlefield wrote:At some point, the information content becomes so small that lots of different sensors capture everything there is in the optical image.
Exactly, hence my original question.

clarnibass
Posts: 130
Joined: Fri Jun 10, 2016 11:33 pm

Post by clarnibass »

ray_parkhurst wrote:Please help me out guys...for macro work, what is the lure of large format sensors?
Probably the same as for non-macro, I guess, with the possible exceptions of shallower DOF, an even blurrier background, etc. Not very different from medium format vs. 35mm film, maybe. I guess you'd have to test with the specific setup to know for sure.

For example, if a medium format sensor is four times the area of a FF sensor (double on each side) with the same pixel density, then it's more or less the same as combining four photos from the FF. I just did something like that, and for reference also took one full-frame photo. I'm not sure why it would be very different with macro. The 3m-wide poster was much better than it would have been as a single frame, but if the closest viewing distance were much farther away it wouldn't make any difference (for "quality"; perspective would be different in the photo, which was the second reason for combining).

I happen to have both FF and APSC with the same number of pixels. The APSC is a few years newer, so any technology advantage goes to that one. Comparing as identically as possible, with the same frame and conditions on both sensors, the FF is better. I would bet that a four-times-larger sensor, whether downsampled or even with the same number of (larger) pixels, would be better too.

Regarding whether to downsize the larger image or enlarge the smaller one when comparing, both are relevant depending on the situation. If you need to upscale the photo for your final result, then the latter makes more sense.

I recently needed to print a bunch of photos at Super A3 size. Some of them, which I had to crop, were borderline, depending on how far away you looked. Full screen they all look fine.
Last edited by clarnibass on Sat Aug 04, 2018 9:36 pm, edited 1 time in total.

clarnibass
Posts: 130
Joined: Fri Jun 10, 2016 11:33 pm

Post by clarnibass »

ray_parkhurst wrote:I did not publish at the time, but take a look at the top of page 2 of this thread...
I'm not sure I understand how you compared them (the 5D and T2i). If the results at 100% are the same, then how do they compare at the same final size? Was the 5D not better? Or was it not better by enough to be significant for you? Comparing at 100% for both, the 5D is magnified much more (for the same frame).

If an APSC crop from the 5D was practically the same as a full photo from the T2i, then the question is how much better your result is when magnified more and having the higher resolution. That can depend on the lens and other factors too.
ray_parkhurst wrote:So I'd be giving up any advantage the bigger sensor has in terms of resolution improvement by downsizing? And if the pixels on the larger format are larger, my captured image with the same FOV would be smaller. Seems counterproductive to this discussion.
I think you meant cropping and not downsizing? That's what you described here.

I think he meant you could use e.g. 5x on the FF vs. (approx) 3x on the APSC and get the same frame (the frame you want), with possibly a better result on the FF.

Downsizing would be reducing the size of a higher-resolution photo, which you could get from the FF even if the pixels are larger, while still keeping the resolution higher (for example, the Nikon 36MP FX vs. the Nikon 24MP DX).
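
(For concreteness, here is my own back-of-envelope comparison of those two, assuming roughly 36x24 mm FX and 23.5x15.6 mm DX sensors at a 3:2 aspect ratio -- just a sketch, not anything official:)

Code:

# Rough pixel-pitch comparison, 36MP FX vs 24MP DX (assumed sensor sizes)
fx_w_mm, fx_mp = 35.9, 36.3
dx_w_mm, dx_mp = 23.5, 24.2

fx_px_wide = (fx_mp * 1e6 * 3 / 2) ** 0.5   # pixels across the long side
dx_px_wide = (dx_mp * 1e6 * 3 / 2) ** 0.5

print(fx_w_mm * 1000 / fx_px_wide)   # ~4.9 um FX pixel pitch
print(dx_w_mm * 1000 / dx_px_wide)   # ~3.9 um DX pixel pitch
# Larger pixels on the FX, but about 50% more of them, so the FX file can be
# downsized and still keep more resolution than the DX file.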

mjkzz
Posts: 1681
Joined: Wed Jul 01, 2015 3:38 pm
Location: California/Shenzhen

Post by mjkzz »

Rik wrote:As a result, the maximum pixel spacing that will completely capture the f/5.6 optical image in this scenario is 3.75/2 = 1.875 um.

Now, revisiting the calculation that I did earlier, nu_0 at f/5.6 is 325 cycles/mm, or 3.08 um per cycle, which by the sampling theorem requires 1.54 um pixels.

The difference between 1.875 and 1.54 um is due to a slight difference in assumptions. The key difference is that the Rayleigh criterion corresponds to a spatial frequency that is somewhat lower than nu_0, giving an MTF value around 9% by standard formulas. Capturing the higher frequencies with MTF < 9% is what results in the finer pixel pitch of 1.54 um. In general, capturing the lowest-contrast information clear out to nu_0 requires about 4.88 pixels per Airy disk diameter.
I am not sure I can agree with the above, particularly "The difference between 1.875 and 1.54 um is due to a slight difference in assumptions."

I think the difference is that your number, 3.08 um, is calculated from the spatial cutoff frequency -- the smallest detail resolvable by a perfect optical system. On the other hand, the 3.75 um comes from the point spread function, the Airy disk. These are fundamentally different and NOT the result of different assumptions.

Using the spatial cutoff frequency to determine resolution is kind of extreme, because at that frequency the image would be nearly contrastless. And by applying the sampling theorem we end up sampling beyond this cutoff frequency (an even smaller pixel size); can we really benefit from it?

Resolution determined by spatial frequency analysis can be misleading, because ultimately the PSF of the optical system limits the result -- hence the term diffraction-limited.

All of the above has nothing to do with the calculator from Cambridge In Colour, but since I blindly trusted its calculation (not knowing their assumptions, etc.), I am curious how they calculated their numbers, to validate my 96MP maximum.

mjkzz
Posts: 1681
Joined: Wed Jul 01, 2015 3:38 pm
Location: California/Shenzhen

Post by mjkzz »

Rik wrote:I am always puzzled, on the cambridgeincolour page, to see the comment that "an airy disk can have a diameter of about 2-3 pixels before diffraction limits resolution (assuming an otherwise perfect lens)". I assume that phrase means something, but I have never figured out exactly what. It clearly does not mean capturing all the detail that is present in the optical image. That said, even taking their value of 3 pixels per Airy disk, an Airy disk of 7.5 um would imply a pixel size of 2.5 um, which on a 24x36 mm sensor would be 14400x9600 = 138 MP.
I think I have now figured out what Cambridge In Colour means by "an airy disk can have a diameter of about 2-3 pixels before diffraction limits resolution (assuming an otherwise perfect lens)".

To calculate the circle of confusion, there are a lot of things involved: acceptable sharpness, depth of focus (image-side depth of field), etc. So their calculation of the circle of confusion is 2.5 * pixel size (the midpoint of 2-3). See how complicated it is to calculate CoC:

[edit] (image: the CoC calculation) [/edit]

For a 96MP full-frame sensor the pixel size is 3 um, so according to their formula the CoC is 7.5 um, just equal to the diameter of the Airy disk at f/5.6 with lambda = 550 nm.
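
(A quick sanity check of that arithmetic in Python; the wavelength, f-number, and sensor dimensions are the same numbers as above:)

Code:

sensor_w_mm, sensor_h_mm, pixel_um = 36.0, 24.0, 3.0
mp = (sensor_w_mm * 1000 / pixel_um) * (sensor_h_mm * 1000 / pixel_um) / 1e6
print(mp)                        # 96 MP

coc_um = 2.5 * pixel_um          # their CoC rule, midpoint of 2-3 pixels
airy_um = 2.44 * 0.550 * 5.6     # Airy disk diameter at f/5.6, lambda = 550 nm
print(coc_um, airy_um)           # 7.5 vs ~7.52 um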

Though I think this is more empirical than an actual theoretical value, I tend to trust it, and I think the inference I have drawn from that calculator is valid.

As far as the sampling theorem goes, it is either already factored into the circle of confusion calculation/estimate, or it somehow does not apply. My head hurts now with all these PSFs and MTFs, Fourier transforms, convolutions of PSFs, etc.

The bottom line is, I do NOT agree with the analysis using the spatial cutoff frequency (as Rik did), because ultimately it is convolution with the PSF that limits maximum resolution (for a diffraction-free image).
Last edited by mjkzz on Sat Aug 04, 2018 3:40 am, edited 1 time in total.

mjkzz
Posts: 1681
Joined: Wed Jul 01, 2015 3:38 pm
Location: California/Shenzhen

Post by mjkzz »

So, ultimately, going back to
Ray wrote:Please help me out guys...for macro work, what is the lure of large format sensors?
If the calculator by Cambridge In Colour is right and their assumptions are right, then since large format sensors tend to have larger pixels, the circle of confusion tends to be larger (2.5 * pixel size), and we are less likely to be diffraction limited.

Of course one can argue that a large format camera has more megapixels. Well, in that case, I think it all comes down to pixel size.
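
(As a rough illustration of that argument -- my own sketch, using made-up example pixel sizes together with the 2.5*pixel CoC rule and Airy diameter = 2.44*lambda*N:)

Code:

# f-number at which the Airy disk reaches the 2.5*pixel CoC, for two example pixel sizes
lam_um = 0.550
for label, pixel_um in [("smaller pixel, 4.5 um", 4.5), ("larger pixel, 5.3 um", 5.3)]:
    coc_um = 2.5 * pixel_um
    n_limit = coc_um / (2.44 * lam_um)   # solve 2.44 * lambda * N = CoC for N
    print(label, "-> ~f/%.1f" % n_limit)
# Larger pixels mean a larger CoC, so this rule flags diffraction only at a higher f-number.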

ray_parkhurst
Posts: 3413
Joined: Sat Nov 20, 2010 10:40 am
Location: Santa Clara, CA, USA

Post by ray_parkhurst »

In all these discussions, there is an elephant in the room: Bayer demosaicing. When we talk about a 50MP Bayer sensor, it is not "really" 50MP. It's not really 12.5MP either, even though only 50% of the luminance info is captured, because of the way the G photosites are interleaved. But the reduction in luminance detail, and the upscaling/interpolation that occurs in order to fill in that missing 50%, result in an output resolution that is certainly less than full. Does anyone know the effective luminance MTF of a Bayer matrix? I can't find that info anywhere. And could this built-in blurring be part of the reason why Cambridge allows more diffraction before saying it degrades the image?

ray_parkhurst
Posts: 3413
Joined: Sat Nov 20, 2010 10:40 am
Location: Santa Clara, CA, USA

Post by ray_parkhurst »

I'm curious about the recommendation of the 7.5x Mitty lens on the X1D MF camera discussed in another recent thread. How much better, and in what way, is this system (which seems to be generating such jealousy) versus using the same objective with a shorter tube lens on a D850? Is it just a noise improvement issue?

I also see that I made a scaling error in my MF vs. APS-C estimate: I said 3x, while it is really only about 2x.

rjlittlefield
Site Admin
Posts: 23561
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA

Post by rjlittlefield »

mjkzz wrote:My head hurts now with all these PSFs and MTFs, Fourier transforms, convolutions of PSFs, etc.
I sympathize. It has taken me years to become comfortable with this material. But I have spent those years -- including experimental validation -- and I am comfortable with the material. So I do not mind admitting that I feel some slight annoyance when somebody who does not understand the material apparently tries to argue with me about it.

That said, let me now thank you for prompting me to simplify the presentation.

First, please note that the method I'm using is exactly the same as what Nikon uses on their Matching Camera to Microscope Resolution web page.

So if we're going to appeal to authority, then I'm appealing to Nikon and you're appealing to cambridgeincolour. I know which one I would trust more regarding scientific issues, but if you feel differently, I will not try to change your mind.

I would, however, like to make clear what the technical issues are. For that, I'm going to use math first, then follow up with a bit of experimental validation.

Assuming a circular aperture and an aberration-free lens, here are the governing equations:
lambda: wavelength of the light we're working with
Feff: effective F-number = 1/(2*NA) in the same focus plane

Airy disk diameter = 2.44*lambda*Feff
cutoff frequency nu_0 = 1/(lambda*Feff)
cutoff wavelength = 1/nu_0 = lambda*Feff
Nyquist sampling interval at cutoff frequency = cutoff wavelength / 2 , that is, two samples per cycle.
(Note: I'm using the "quote" tag here only for its formatting, not to imply that I'm copying letter for letter material that appears someplace else. If the forum provided an [indent] tag, I would use that.)

Now just do the algebra:
cutoff wavelength = lambda*Feff = Airy disk diameter / 2.44
Nyquist sampling interval at cutoff = cutoff wavelength / 2 = (Airy disk diameter / 2.44) / 2 = Airy disk diameter / 4.88
The conclusion, in English, is that you need at least 4.88 pixels per Airy disk diameter to meet the Nyquist criterion for capturing the finest detail that is present in the optical image.
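
(If it helps, here is the same algebra as a small Python sketch -- nothing beyond the formulas above, just something to plug numbers into:)

Code:

# Diffraction-limited sampling for a circular aperture, aberration-free lens
def airy_disk_um(lam_um, f_eff):
    return 2.44 * lam_um * f_eff            # Airy disk diameter, microns

def nyquist_pixel_um(lam_um, f_eff):
    cutoff_wavelength_um = lam_um * f_eff   # 1 / nu_0
    return cutoff_wavelength_um / 2         # two samples per cycle

lam, feff = 0.550, 5.6
print(airy_disk_um(lam, feff))                                 # ~7.52 um
print(nyquist_pixel_um(lam, feff))                             # ~1.54 um
print(airy_disk_um(lam, feff) / nyquist_pixel_um(lam, feff))   # 4.88 pixels per Airy disk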

Any sampling rate that is less dense will result in losing information, and will likely result in aliasing some of the high frequency information so that it looks like lower frequency information.

Barring typos, these statements are facts of physics. If you disagree with the statements, then please point out the errors in my text.

To understand the effects of magnification, we need one more simple fact: mag/Feff is constant at all focus planes in the system.

Combining that fact with the equation relating NA and Feff in the same focus plane, we get the relationship for a microscope that
Feff_sensor = magnification / (2*NA_subject)
We now have enough formulas in hand to step through the calculations on the Nikon web site.

Example: for magnification 10 and NA 0.25, Nikon quotes a required pixel size of 5.5 microns. Stepping through the calculation...
Feff_sensor = 10/(2*0.25) = 20
Airy disk diameter = 2.44 * lambda * Feff = 2.44 * 0.55 * 20 = 26.84 microns.
Cutoff wavelength = Airy disk diameter / 2.44 = 11 microns
Nyquist sampling interval at cutoff = cutoff wavelength / 2 = 11/2 = 5.5 microns
To summarize: Nikon's table says 5.5 microns is the minimum required pixel size; the formulas say that 5.5 microns is Nyquist limit, two pixels per cycle, at the diffraction-limited cutoff frequency.

You can repeat the calculation for yourself, for any other row in Nikon's table. The numbers all say the same thing: the required pixel size works out to 4.88 pixels per Airy disk diameter, to capture all the information that is present in the optical image.
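
(For anyone who wants to repeat that stepping programmatically, here is a minimal sketch; the 10x / NA 0.25 case is the worked example above, and other rows would come from Nikon's table:)

Code:

# Nyquist-required pixel size at the sensor for a given objective
def required_pixel_um(mag, na_subject, lam_um=0.550):
    feff_sensor = mag / (2 * na_subject)    # Feff at the sensor
    airy_um = 2.44 * lam_um * feff_sensor   # Airy disk diameter at the sensor
    return airy_um / 4.88                   # = lam_um*feff_sensor/2, two samples per cutoff cycle

print(required_pixel_um(10, 0.25))          # 5.5 um, matching the worked example above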

Now, going back to cambridgeincolour's text, what their words say is that "an airy disk can have a diameter of about 2-3 pixels before diffraction limits resolution".

Clearly there is a discrepancy between 2-3 pixels per Airy disk (cambridgeincolour) and 4.88 pixels per Airy disk (Nikon).

The challenge is how to understand that discrepancy.

The way I make sense of the discrepancy is that cambridgeincolour and Nikon are simply addressing different questions:
  • Nikon is addressing the question of what sensor do you need, to capture all the information in the optical image for the aperture you have in hand.
  • Cambridgeincolour is addressing the question of what aperture do you need, before something bad happens to the image captured by the sensor you have in hand.

mjkzz wrote:The bottom line is, I do NOT agree with the analysis using the spatial cutoff frequency (as Rik did), because ultimately it is convolution with the PSF that limits maximum resolution
I see some confusion in this statement, given that you're arguing against my application of Nikon's method. For an aberration-free lens, the PSF is just the Airy disk, and it is convolution with that PSF which establishes the cutoff frequency. So on the one hand it sounds like you believe in the relevance of convolution with the PSF, but on the other hand you're arguing against the conclusions that pop out of that analysis.

If you want experimental confirmation of the numbers, go to http://www.photomacrography.net/forum/v ... 164#101164 , look at the f/11 images, and read the accompanying text.

Reproducing the images here, on the left is a blowup of an f/11 optical image, and on the right is that optical image captured by direct projection onto a camera sensor:

(images: left, a blowup of the f/11 optical image; right, the same image captured by direct projection onto the camera sensor)

The accompanying text reads as follows:
rjlittlefield wrote:What we have here is an f/11 image that is clearly not fully captured by a 15 mp APS sensor (pixel size 4.7 microns).


The Airy disk at f/11 is 14.7 microns. Using Nikon's method, an f/11 image requires pixel size 3 microns, so the experimental result at 4.7 microns is no surprise. In comparison, using cambridgeincolour's guideline of 2-3 pixels per Airy disk diameter would suggest a pixel size around 4.9 to 7.4 microns. That criterion is met by the particular camera that I'm using, and clearly it does not capture all the line pairs that are present in the optical image.
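
(The same arithmetic for this f/11 case, as a quick sanity check; lambda = 550 nm is assumed here:)

Code:

lam_um, feff = 0.550, 11
airy_um = 2.44 * lam_um * feff
print(airy_um)                    # ~14.8 um Airy disk (~14.7 with a slightly shorter lambda)
print(airy_um / 4.88)             # ~3.0 um pixels required by the Nyquist/Nikon method
print(airy_um / 3, airy_um / 2)   # ~4.9 to 7.4 um per cambridgeincolour's 2-3 pixel rule
# The 4.7 um pixels of the 15 MP APS sensor satisfy the 2-3 pixel rule,
# yet the capture still misses line pairs that are visible in the optical image.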

The conclusion seems pretty clear: Nikon's method gives a good number for Nikon's question, while cambridgeincolour's method does not. Presumably cambridgeincolour's method does give a good number for whatever issue it is that they're addressing. It's just not clear exactly what that issue is.

--Rik

[Edits: tweak wordings about aliasing & what question is being addressed by each model. ]
Last edited by rjlittlefield on Sat Aug 04, 2018 4:22 pm, edited 2 times in total.

rjlittlefield
Site Admin
Posts: 23561
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA

Post by rjlittlefield »

ray_parkhurst wrote:In all these discussions, there is an elephant in the room: Bayer demosaicing. When we talk about a 50MP Bayer sensor, it is not "really" 50MP. It's not really 12.5MP either, even though only 50% of the luminance info is captured, because of the way the G photosites are interleaved. But the reduction in luminance detail, and the upscaling/interpolation that occurs in order to fill in that missing 50%, result in an output resolution that is certainly less than full.
With a monochrome target in white light, the resolution loss should not be much, if any. This is because all photosites collect almost equivalent information about the image. For the red sites, there is an issue that the red sample comes from a lower-resolution optical image because of the longer wavelength. But that's only one photosite out of 4, and the red filter probably passes colors near green anyway.
Does anyone know the effective luminance MTF of a Bayer matrix? I can't find that info anywhere.
I don't recall ever seeing that.
And could this built-in blurring be part of the reason why Cambridge allows more diffraction before saying it degrades the image?
One problem: Cambridge actually allows less diffraction, at least as I see the numbers. Cambridgeincolour's rule allows the Airy disk to be only 2-3 pixels "before diffraction limits resolution", whatever those words mean. But we see (in my previous post) that the Airy disk can be up to 4.88 pixels wide before it contains less information than the sensor can capture.

--Rik

rjlittlefield
Site Admin
Posts: 23561
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA

Post by rjlittlefield »

ray_parkhurst wrote:I'm curious about the recommendation of 7.5x Mitty lens on the X1D MF camera discussed in another recent thread. How much better, and in what way, is this system (which seems to be generating such jealousy) versus using the same objective with shorter tube lens on a D850? Is it just a noise improvement issue?
That, plus a slightly higher pixel count. For me it's mostly a social pat on the back. If somebody handed me a check for the price of the Hasselblad, for sure I would spend it on other things.

--Rik

ray_parkhurst
Posts: 3413
Joined: Sat Nov 20, 2010 10:40 am
Location: Santa Clara, CA, USA

Post by ray_parkhurst »

rjlittlefield wrote: With a monochrome target in white light, the resolution loss should not be much if any.
That is an excellent point that I had not considered. I suppose that would point me toward using an Ag coin, which is essentially monochrome, rather than a Cu coin, which is shifted heavily toward red, as a test subject. I was already considering doing that in order to better assess CA issues. The Cu coin's red shift tends to obscure CA a bit.

ray_parkhurst
Posts: 3413
Joined: Sat Nov 20, 2010 10:40 am
Location: Santa Clara, CA, USA

Post by ray_parkhurst »

So, question to everyone...what lenses do you use for your work across different magnification ranges, and why? Have you settled on particular lenses, or do you still use multiples? Maybe different ones for different applications, cameras, or subjects?

rjlittlefield
Site Admin
Posts: 23561
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA

Post by rjlittlefield »

ray_parkhurst wrote:what lenses do you use for your work across different magnification ranges, and why?
For microscope objectives, mostly Mitutoyo M Plan Apo. Sometimes on a Raynox DCR-150 for near-rated magnification, sometimes on a Canon 100 mm L IS USM as tube lens for half magnification. For demos, usually a Nikon CFI BE because the quality is good and it freaks people out less than a Mitty.

For intermediate magnifications, usually a reversed 50 mm f/2.8 Schneider-Kreuznach Componon S. (By the way, that lens handily out-resolves my Nikon D800E, 36 MP full frame, at 1:1 and f/4 nominal, f/8 effective. See http://www.photomacrography.net/forum/v ... 540#163540 for the proof shots. The same is true even for a basic EL Nikkor 50 mm f/2.8 at f/5.6; see one post up from the previous. Taking the time to look closely at what your lenses are actually putting out can be a real eye-opener about the limitations of sensors.)

For less than 1:1, typically Canon macro 100mm f/2.8L IS USM.

--Rik

rjlittlefield
Site Admin
Posts: 23561
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA

Post by rjlittlefield »

Ray, I'd like to go back to a point about sensors that I didn't take time to respond to earlier.
rjlittlefield wrote:
ray_parkhurst wrote:It is why I purchased the A7RIII, and the 5DSR, but neither of those cameras met expectations.
I don't recall how the 5DSR failed to meet expectations. Can you point me to that discussion?
You pointed me to this:
ray_parkhurst wrote:I did not publish my results and impressions, so here they are...I had fairly high expectations from the 5DSR. In particular, the "Fine Detail" mode (or whatever it is called) seemed to offer perhaps an improved processing of the image at pixel level, better than my old HRT2i. The pixels are similar size vs the T2i, so what I expected from the 5DSR was a combination of improved 100% pixel detail and a FF sensor. My initial impressions were favorable, and indeed the 100% detail was good. What intrigued me was the new revision of demosaicing/editing software that came with the camera, which included a new sharpness function that offered better fine detail adjustments. I tried using the same settings on an image from my HRT2i, and got virtually the same output. Looks like all they did was to improve the sharpening algorithms, calling this "Fine Detail" mode. I can do this with my existing camera!
Obviously I don't understand what "this" means, in that last sentence.

As far as I can see, you confirmed that the 5DSR has the same sensor characteristics as your HRT2i (a T2i with the AA filter removed, right?). That's great, because the 5DSR also has 2.8 times as many pixels.

So, if you had shot the same subject at the same FOV (that is, using magnification proportional to the sensor size), with any decent lens (which will outresolve both sensors on something the size of a coin), and you had compared resolution on subject, you would have seen about a 1.67X improvement in resolution on subject with the 5DSR (1.67 ≈ sqrt(2.8)).
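
(For the record, the arithmetic behind that number -- assuming the published pixel counts of 50.6 MP for the 5DSR and 18 MP for the T2i:)

Code:

mp_5dsr, mp_t2i = 50.6, 18.0   # published pixel counts (assumed here)
ratio = mp_5dsr / mp_t2i
print(ratio)                   # ~2.8 times as many pixels
print(ratio ** 0.5)            # ~1.68x linear resolution on subject, at the same FOV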

I do not follow the reasoning that caused you to return that camera. Can you explain?

--Rik
