How to decide outresolving?



Post by Lou Jost »

I don't know. It does not have standard markings (except "5x"): no focal length, no aperture markings, no NA. I will measure the effective aperture by comparing exposure values against a known lens.

Edited to add: It's a Nikon Engineering lens, one of the next-generation Ultra-Micro-Nikkors, and nothing is written about them anywhere.
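
For anyone curious, the arithmetic behind that comparison is just the inverse-square law for exposure. A quick Python sketch (the f/8 reference and the 2-stop difference are placeholder numbers, not measurements):

Code: Select all

# Estimate an unknown lens's effective f-number by comparing metered
# exposure against a lens of known effective f-number, with the same
# light, ISO, and shutter speed. Exposure scales as 1/N^2, so a
# difference of delta_ev stops gives N_unknown = N_known * 2**(delta_ev/2).

def effective_f_number(n_known, delta_ev):
    # delta_ev: stops MORE exposure the unknown lens needs (positive
    # if the unknown lens is dimmer than the reference lens)
    return n_known * 2 ** (delta_ev / 2)

# Placeholder example: a lens needing 2 stops more exposure than a
# reference metering at effective f/8 is at about effective f/16.
print(effective_f_number(8.0, 2.0))  # -> 16.0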


Post by mjkzz »

After digging a bit more, I am surprised that I had not read the Wikipedia article :-(. Wiki link:

https://en.wikipedia.org/wiki/Airy_disk

From what I gathered from the Wiki article, it seems clear that sampling the Airy disk is the right way to determine pixel size.

[Image: screenshot of the Wikipedia article's Airy disk sampling calculation]

Note that in the calculation, the 4 µm figure is based on a wavelength of 420 nm and is the radius of the Airy disk.
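
For reference, that number falls out of the standard first-zero formula r = 1.22 * lambda * N. A quick check in Python (the f/8 is my assumption for the Wiki example):

Code: Select all

# Reproduce the Airy disk size used in the calculation above:
# first-zero radius r = 1.22 * wavelength * f-number.
wavelength_um = 0.420  # 420 nm, as stated above
f_number = 8.0         # assumed f-stop for the Wiki example

radius_um = 1.22 * wavelength_um * f_number
print(f"radius ~ {radius_um:.1f} um, diameter ~ {2 * radius_um:.1f} um")
# -> radius ~ 4.1 um, diameter ~ 8.2 um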

So it clearly states that making the pixel size smaller than the RADIUS of the Airy disk will not increase resolution; hence two pixels for each Airy disk should be enough, which also conforms to the Nyquist rule.

To further support the idea, take a look at the "Real Airy Disk" image created by a laser beam on the right-hand side of the article: ignoring the higher orders of diffraction, one pixel per radius is enough to adequately sample out to the first zero under the Nyquist rule.

There is some Gaussian approximation and a discussion of "diffraction limited", and I quote: "In optical aberration theory, it is common to describe an imaging system as diffraction-limited if the Airy disk radius is larger than the RMS spot size determined from geometric ray tracing (see Optical lens design)."

Without doing the math, I can only GUESS that MAYBE the Gaussian analysis contributes to the "2-3 pixel" calculation for CoC by Cambridge in Colour. This is just a guess; I have some idea in my head, but I am not quite sure yet.

What does it mean? It means that, sure, you can have pixels smaller than the radius of the Airy disk, but you will not increase resolution, and a system designed that way will have non-zero MTF.


Post by Lou Jost »

Not so. Airy disks have a central peak; they are not "all or nothing".


Post by rjlittlefield »

mjkzz wrote:What does it mean?
What it really means is that the Wiki article is not written clearly, at best. More likely it is simply wrong, with the author's fingers running faster than his brain. Notice, in the Wiki article, that it says "where x is the separation of the images of the two objects on the film". So we have an object, a space, and an object, and the wiki article then goes on to somehow conclude that we only need one pixel per object, with none for the space between them!

Look, this is not hard.

Again, consider a line of bright points in a dark field, that are spaced according to the Rayleigh criterion. The center of the Airy disk for each bright dot is lined up with the first zero of the Airy disk for the next bright dot. The superposition results in a line of peaks and slight dips, with one peak and one dip per Airy disk radius. It takes at least two pixels to capture the peak and the dip, hence 2 pixels per radius, 4 pixels per diameter, to capture a grid spaced at the Rayleigh criterion.
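
If anyone wants to see the peaks and dips rather than take my word for it, here is a small numpy/scipy sketch of that line of points (positions in units of the Airy radius; the 6-source count is arbitrary):

Code: Select all

# A line of incoherent point sources spaced at the Rayleigh criterion:
# one peak and one dip per Airy RADIUS, so Nyquist demands at least
# 2 pixels per radius = 4 pixels per Airy disk diameter for this grid.
import numpy as np
from scipy.special import j1

def airy_intensity(x):
    # 1-D cut through an Airy pattern; x measured in first-zero radii
    u = np.where(x == 0, 1e-12, x) * 3.8317  # first zero of J1 at u = 3.8317
    return (2 * j1(u) / u) ** 2

x = np.linspace(-0.5, 5.5, 2001)            # positions, Airy-radius units
profile = sum(airy_intensity(x - c) for c in range(6))  # sources 1 radius apart

peak = profile[np.argmin(np.abs(x - 2.0))]  # on a source
dip = profile[np.argmin(np.abs(x - 2.5))]   # midway between two sources
print(f"dip/peak = {dip / peak:.2f}")       # ~0.76: one peak + one dip per radius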

This spacing will still not capture quite all the detail that can be resolved by the lens. The bright points can be spaced slightly closer and still be resolved, with lower and lower contrast, until finally a limit is reached where the peaks and valleys merge into a single level and stay that way for all finer spacings. That limit occurs at the diffraction-limited cutoff frequency of the lens, nu_0 = (2*NA)/lambda = 1/(lambda*f-number), whose corresponding spatial period is lambda*f-number. Apply Nyquist to that, and you get a minimum pixel size = (1/2) * lambda * f-number, which corresponds to Nikon's "matching" value from their https://www.microscopyu.com/tutorials/m ... resolution . It occurs at 4.88 pixels per Airy disk diameter.
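
Here are those numbers in runnable form; the NA and wavelength are just example values, and the 4.88 ratio is independent of both:

Code: Select all

# Cutoff frequency, Nyquist-matched pixel size, and pixels per Airy disk
# diameter. NA and wavelength below are arbitrary examples; the final
# ratio 2.44/0.5 = 4.88 does not depend on them.
wavelength_um = 0.55            # green light (example)
na = 0.14                       # example NA; f-number = 1/(2*NA)
f_number = 1 / (2 * na)

nu0 = 1 / (wavelength_um * f_number)           # cutoff, cycles per micron
pixel_um = 0.5 * wavelength_um * f_number      # Nyquist: 2 pixels per cycle
airy_diam_um = 2.44 * wavelength_um * f_number

print(f"cutoff {nu0:.2f} cy/um, min pixel {pixel_um:.2f} um")
print(f"pixels per Airy diameter = {airy_diam_um / pixel_um:.2f}")  # 4.88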

Finer spacing than 4.88 pixels per Airy disk diameter will not capture more detail, but will more accurately capture whatever detail is resolved by the optics. But coarser spacing than 4.88 pixels per Airy disk diameter will definitely fail to capture all the detail that can be resolved by the optics.

Surprising as it may seem to some participants in this thread, Nikon really does understand the physics and math quite well.

--Rik


Post by Justwalking »

rjlittlefield wrote: Surprising as it may seem to some participants in this thread, Nikon really does understand the physics and math quite well.

--Rik
From the article accompanying that Nikon calculator, there is this note:

Adequate resolution of a specimen imaged with the optical elements of a microscope can only be achieved if at least two samples are made for each resolvable unit, although many investigators prefer three samples per resolvable unit to ensure sufficient sampling.

As an example:
It must be noted that, although the Nyquist theorem needs the minimum resolution to be divided in order to determine the pixel size, monochrome sensors will resolve small dimensions more efficiently if the smallest dimension is divided by three; and by four for color sensors, because of the Bayer filter pattern’s layout on the pixels.
https://www.azooptics.com/Article.aspx?ArticleID=1152


Post by rjlittlefield »

Justwalking wrote:From the article accompanying that Nikon calculator, there is this note:

Adequate resolution of a specimen imaged with the optical elements of a microscope can only be achieved if at least two samples are made for each resolvable unit, although many investigators prefer three samples per resolvable unit to ensure sufficient sampling.
I agree completely with this aspect.

4.88 pixels per Airy disk diameter is the minimum requirement to satisfy Nyquist at the diffraction limit: 2 pixels per cycle at nu_0.

But as walked through at http://www.photomacrography.net/forum/v ... php?t=2439 , having 3 or more pixels per cycle, instead of only 2 as required by Nyquist, is more robust.

My pushback is against the recurring claim that 2 pixels per Airy disk diameter is adequate.

--Rik


Post by Justwalking »

So what practical conclusion can we take from this?

Higher NA and lower magnification require a smaller pixel on the sensor to achieve the best resolution.

As an example, for a 1X lens with 0.05 NA (typical), in green light the smallest resolvable dimension will be 0.61 × 0.55/0.05 = 6.71 µm, and the projected size on the sensor without a coupler is also 6.71 µm.
So the pixel on a color sensor must be no larger than 6.71/4 ≈ 1.67 µm, and for blue light about 1.34 µm. Any pixel size above that means loss of lens resolution on the sensor.
(imho)
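
The same arithmetic in Python, so the numbers can be checked for other lenses; the divide-by-3/divide-by-4 rule is the one from the azooptics article quoted above:

Code: Select all

# Largest allowable pixel pitch: Rayleigh spot d = 0.61 * lambda / NA,
# projected onto the sensor by the magnification, then divided by 3 for
# monochrome or 4 for Bayer color sensors (per the azooptics rule).
def max_pixel_um(wavelength_um, na, magnification, divisor):
    d_um = 0.61 * wavelength_um / na       # smallest resolvable dimension
    return d_um * magnification / divisor

print(max_pixel_um(0.55, 0.05, 1, 4))  # green, color sensor -> ~1.68 um
print(max_pixel_um(0.44, 0.05, 1, 4))  # blue, color sensor  -> ~1.34 um
print(max_pixel_um(0.55, 0.05, 1, 3))  # green, monochrome   -> ~2.24 um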


Post by lonepal »

Hi all;

Again, thanks for the very useful information.

I now clearly understand the light vs. sensor-size relation.
I also investigated a bit myself after asking the questions, and saw that the f-number is not related to sensor size but only to the lens specifications.

I think I will get more DOF with an m4/3 sensor than with APS-C, right?
Regards.
Omer


Post by Pau »

lonepal wrote:I think I will get more DOF with an m4/3 sensor than with APS-C, right?
Sorry, this is wrong again if you shoot at the same on-sensor magnification and aperture; always think in terms of the crop concept. It has been debated recently, for example at http://www.photomacrography.net/forum/v ... hp?t=37669 and, mixed in with other subjects, at http://www.photomacrography.net/forum/v ... p?p=235326
Pau


Post by rjlittlefield »

Pau wrote:
lonepal wrote:I think I will get more DOF with an m4/3 sensor than with APS-C, right?
Sorry, this is wrong again if you shoot at the same on-sensor magnification and aperture; always think in terms of the crop concept. It has been debated recently
The term "aperture" is vague and leads to much confusion.

More precisely, if you shoot the same FOV with the same NA on the subject side, then you get the same DOF and the same amount of diffraction blur relative to the subject, on all sizes of sensor. When the resulting images are finally displayed at same size, they look just the same.

An equivalent condition is to shoot the same FOV on the subject, with effective f-number on the sensor side scaled to match the linear dimensions of the sensors.

In other words, at same FOV there is no escaping the one master curve that trades off DOF versus diffraction blur. Different setups can lie at different points on the curve, but if you adjust for same DOF then you get same diffraction blur, and vice versa.

The statement that m4/3 gives more DOF than APS-C comes from decisions like "same f-number", which cause the m4/3 system to have smaller NA on the subject side due to the shorter focal length needed to match the different sensor sizes. It's the smaller NA that gives the greater DOF for m4/3, at the cost of greater diffraction blur. Adjust appropriately, and the differences disappear.
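
To make the equivalence concrete, here is a small sketch under the usual small-NA, diffraction-limited approximations; the sensor widths, FOV, and 0.05 NA are just example numbers:

Code: Select all

# Same subject FOV, same subject-side NA, two sensor sizes: the blur and
# DOF measured on the SUBJECT come out identical; only the magnification
# and the sensor-side effective f-number scale with the sensor size.
def subject_side(fov_mm, sensor_mm, na_subject, wavelength_um=0.55):
    m = sensor_mm / fov_mm                        # magnification to fill sensor
    n_eff = m / (2 * na_subject)                  # sensor-side effective f-number
    blur_um = 1.22 * wavelength_um / na_subject   # diffraction blur on subject
    dof_um = wavelength_um / na_subject ** 2      # wave-optics DOF on subject
    return m, n_eff, blur_um, dof_um

for name, width_mm in [("m4/3", 17.3), ("APS-C", 23.5)]:
    m, n, blur, dof = subject_side(fov_mm=10, sensor_mm=width_mm, na_subject=0.05)
    print(f"{name}: m={m:.2f}, eff f/{n:.1f}, blur={blur:.1f} um, DOF={dof:.0f} um")
# Both formats: blur = 13.4 um and DOF = 220 um on the subject.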

--Rik


Post by rjlittlefield »

rjlittlefield wrote:In other words, at same FOV there is no escaping the one master curve that trades off DOF versus diffraction blur. Different setups can lie at different points on the curve, but if you adjust for same DOF then you get same diffraction blur, and vice versa.
In re-reading, I realize that the above wording will probably cause some more debate. Let me see if I can avoid some of that.

In some sense there is not "one master curve", but rather a whole family of curves that depend on how you choose to define DOF. For example, allowing a larger CoC gives a bigger number for DOF, even though the underlying image hasn't changed.

The idea the post was intended to convey is that once you've picked a curve based on how much blur you're willing to accept at the edge of the DOF zone, all setups lie somewhere on that same curve.

--Rik


Post by Justwalking »

rjlittlefield wrote:
More precisely, if you shoot the same FOV with the same NA on the subject side, then you get the same DOF and the same amount of diffraction blur relative to the subject, on all sizes of sensor. When the resulting images are finally displayed at same size, they look just the same.

--Rik


Rik, can you say that their DoF looks absolutely just the same?
http://resourcemagonline.com/2014/02/ef ... eld/36402/


Post by mjkzz »

rjlittlefield wrote:Surprising as it may seem to some participants in this thread, Nikon really does understand the physics and math quite well.
Yes, I actually do think the Wiki article is right :D.

Wikipedia is more democratic, and a topic like this is viewed by many experts in the field; any mistake will be corrected by expert viewers. Of course, it could be wrong; in that case I think we, members of this community, should edit that wiki article and correct it.

Another reason is this: at the cutoff frequency, it is meaningless to subsample further with pixels. Why? Because the laws of physics say that at the cutoff frequency the MTF is zero, and any frequency higher than the cutoff also has MTF equal to zero. So instead of subsampling the cutoff period, we should use it as the basic sampling unit.

Take the Mitty 5X as an example: its resolving power is 2 µm (which just happens to be the cutoff period). This means any two point light sources on the specimen spaced less than 2 µm apart will read the same value, even if there is black-and-white detail between them. Any two point light sources on the specimen spaced more than 2 µm apart, say at 2.1 µm, produce a frequency component below the cutoff, thus with non-zero MTF, and will be detected if we have a minimum of two pixels of size 5 × 2 µm = 10 µm on the sensor.
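
To put a number on "non-zero MTF", here is the standard incoherent diffraction MTF formula in Python; the NA and wavelength are my assumptions, picked so the cutoff period comes out near 2 µm:

Code: Select all

# MTF of an ideal diffraction-limited lens: zero at and above the cutoff,
# small but non-zero just below it. NA/wavelength chosen (assumption) so
# the cutoff period is ~2 um, matching the example above.
import math

na, wavelength_um = 0.14, 0.55
nu0 = 2 * na / wavelength_um            # cutoff ~0.51 cy/um, period ~1.96 um

def mtf(spacing_um):
    s = (1 / spacing_um) / nu0          # frequency normalized to the cutoff
    if s >= 1:
        return 0.0                      # at/above cutoff: no contrast at all
    return (2 / math.pi) * (math.acos(s) - s * math.sqrt(1 - s * s))

print(f"cutoff period {1 / nu0:.2f} um")
print(f"MTF at 2.1 um spacing: {mtf(2.1):.3f}")  # ~0.02, tiny but non-zero
print(f"MTF at 4.0 um spacing: {mtf(4.0):.3f}")  # ~0.40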

OK, I think this sounds a bit crazy as it goes against Nikon, but in reality Nikon is NOT wrong, as it basically recommends oversampling. Sure, I might be accused of "knowing more" than Nikon, but please think about it -- it is meaningless to use half the cutoff wavelength (times magnification) as the pixel size, because at the cutoff frequency the MTF is zero; therefore the previous equation might be wrong if we look at things in the frequency domain.

Basically, what I am saying is to use (at least) the resolving power as the sampling unit instead of half of it (which will give the same result, due to oversampling).

I will see if I can find my old version of MATLAB and produce some visual illustration by computing the Fourier matrix (viewed as an image). I think the sequence is: image -> Fourier transform -> apply the Airy disk convolution (a pointwise product in the frequency domain) -> inverse Fourier transform on the resulting matrix to get the convolved image. Of course, only if I have time for this.
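
In case someone wants to try it without MATLAB, here is the pipeline as a minimal Python/numpy sketch; the grid size, pixel pitch, NA, and wavelength are all placeholders:

Code: Select all

# Image -> FFT -> multiply by the diffraction OTF -> inverse FFT.
# Multiplication in the frequency domain = convolution with the PSF in space.
import numpy as np

n, pixel_um = 512, 0.5                  # grid and pitch (placeholders)
na, wavelength_um = 0.14, 0.55          # example NA and wavelength

img = np.zeros((n, n))
img[n // 2, n // 2 - 4] = img[n // 2, n // 2 + 4] = 1.0  # two points, 4 um apart

# Incoherent OTF of an ideal circular pupil, from its analytic form:
f = np.fft.fftfreq(n, d=pixel_um)                 # cycles per micron
fx, fy = np.meshgrid(f, f)
s = np.clip(np.hypot(fx, fy) / (2 * na / wavelength_um), 0.0, 1.0)
otf = (2 / np.pi) * (np.arccos(s) - s * np.sqrt(1 - s ** 2))

blurred = np.fft.ifft2(np.fft.fft2(img) * otf).real
print(blurred.max())                    # blurred image ready to plot/inspect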


Post by JH »

I like to think of the camera sensor as a kitchen floor with square tiles and the Airy disks as round plates. If I place some plates on the floor at random, most plates will touch more than one tile. This means that, if I match the size of disks and tiles, most disks/plates will need more than one tile.
Best regards
Jörgen Hellberg
Jörgen Hellberg, my website www.hellberg.photo


Post by Pau »

Justwalking wrote:
rjlittlefield wrote:
More precisely, if you shoot the same FOV with the same NA on the subject side, then you get the same DOF and the same amount of diffraction blur relative to the subject, on all sizes of sensor. When the resulting images are finally displayed at same size, they look just the same.

--Rik


Rik, can you say that their DoF looks absolutely just the same?
http://resourcemagonline.com/2014/02/ef ... eld/36402/
Evidently DOF is not the same in both pictures. Why?
© 2013 Robert OToole Photography | Lens: Sigma Macro 150mm F2.8 EX DG OS HSM | Camera: NIKON D800E | ISO: 100 | f8 | Shutter speed: 1/250 sec | single SB-R200 flash. Same camera settings and lens, with only camera distance and sensor format changed.
The camera distance is different, so the angle of light is also different, and therefore the effective aperture is different: smaller in DX mode.
The article's author doesn't take this into consideration, so the whole statement is wrong.
Pau
