Camera and Nyquist Questions


Moderators: rjlittlefield, ChrisR, Chris S., Pau

germpore
Posts: 3
Joined: Wed Nov 02, 2011 7:03 pm

Camera and Nyquist Questions

Post by germpore »

I have some admittedly technical questions about microscope cameras and Nyquist optimization that I was hoping the knowledgeable folks here could help me out with.

1) Could someone point me to a good introduction to microscope cameras and mounts? Either a website or book? Specifically, I know very little about what chip sizes are suited for mounting on particular microscopes, how to pick out the intermediate lens magnification factor (if any), and other aspects of c-mounts and intermediate lenses. I can’t find anything about this in current microscopy textbooks I have.

2) Dynamic range on cameras is usually described in decibels (dB). How do I interpret that? I usually think of dynamic range in terms of bit depth, and how wide a histogram of the initial capture can be relative to all grey levels.
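
If I understand correctly, dynamic range in dB is usually computed as 20·log10(full-well capacity / read noise), and each photographic stop (a factor of 2) is about 6.02 dB. A quick sketch of that conversion, with made-up sensor numbers:

```python
import math

def dynamic_range_db(full_well_e, read_noise_e):
    """Engineering dynamic range: ratio of brightest to dimmest usable signal, in dB."""
    return 20 * math.log10(full_well_e / read_noise_e)

def db_to_stops(db):
    """Each photographic stop (factor of 2) is 20*log10(2) ~= 6.02 dB."""
    return db / (20 * math.log10(2))

# Hypothetical sensor: 30,000 e- full well, 3 e- read noise
dr = dynamic_range_db(30000, 3)
print(round(dr, 1))               # 80.0 dB
print(round(db_to_stops(dr), 1))  # ~13.3 stops
```

So an "80 dB" sensor has roughly 13 stops of dynamic range, regardless of the bit depth used to digitize it.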

Concerning Nyquist calculations –

3) Some books give the formula for resolution limit as 0.5 x (lambda)/NA, while others give it as 0.61 x (lambda)/NA. I’m assuming the former one is under optimal conditions while the latter is what is realized under more typical conditions. What are the assumptions behind each formula?

4) For calculating resolution limits for brightfield microscopy, does one assume the “peak” visibility of 550 nm? Or a lower limit where you’re still going to get a large amount of visible light – say, 500 nm for a halogen light source or 450 nm for an LED one? (I realize chromatic aberration is always going to be a factor limiting resolution in full-spectrum brightfield.)

5) Finally, for optimal resolution, I know the formula is (total lens magnification) x (resolution limit)/2, and that you try to match your pixel size to that. However, pixel dimensions are typically given as the square dimensions of the sides of the pixels. But shouldn’t you be using the diagonal measurement of the pixel size as your target, to best capture resolution limit objects that are as much as 45 degrees off of the lines of the pixel array?
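
To make question 5 concrete, here is how I have been computing the target pixel size; the k = 0.61 vs 0.5 choice from question 3 is a parameter, and the 40X/0.65 objective is just an example:

```python
def nyquist_pixel_um(na, total_mag, wavelength_nm=550.0, k=0.61):
    """Pixel pitch (um) that puts 2 pixels per cycle at the resolution limit.

    k=0.61 is the Rayleigh factor; k=0.5 corresponds to the MTF cutoff.
    """
    d_um = k * (wavelength_nm / 1000.0) / na  # object-side resolution, um
    return total_mag * d_um / 2.0             # project to sensor, halve for Nyquist

# e.g. 40X/0.65 objective with no additional relay magnification
print(round(nyquist_pixel_um(0.65, 40), 2))         # ~10.32 um (Rayleigh)
print(round(nyquist_pixel_um(0.65, 40, k=0.5), 2))  # ~8.46 um (cutoff)
```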

Pau
Site Admin
Posts: 6038
Joined: Wed Jan 20, 2010 8:57 am
Location: Valencia, Spain

Post by Pau »

Hi germpore, welcome aboard!

I can't respond to all your questions, so just some points:

- Camera mounts: Most dedicated microscope cameras use a C-mount thread and small sensors, usually with relay lenses ranging from 0.25X to 0.6X.
(Inexpensive C-mount or eyepiece-mount cameras are usually of poor quality.)
Many amateurs use DSLRs or mirrorless cameras because they have excellent sensors and are more affordable than good-quality dedicated cameras.
In any case you want to match the sensor size to the microscope image circle, which is often about the eyepiece image circle (field number).
In some cases direct projection can be used without relay optics; in other cases you need relay lenses to match the size. This can be done with projection eyepieces, afocally (normal eyepiece + camera lens), or with dedicated adapters.

Sensor sizes:
https://en.wikipedia.org/wiki/Image_sensor_format

- Fundamentals of microscope digital imaging; take a look at the tables about midway down each page:
https://www.microscopyu.com/digital-ima ... al-imaging
https://www.microscopyu.com/tutorials/m ... resolution
and
https://www.microscopyu.com/techniques/ ... quirements
germpore wrote:For calculating resolution limits for brightfield microscopy, does one assume the “peak” visibility of 550 nm?
Yes, at least that is the most standard choice.
Pau

rjlittlefield
Site Admin
Posts: 23543
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Re: Camera and Nyquist Questions

Post by rjlittlefield »

A hearty "welcome aboard" from me also!

One of your questions allows a definite answer:
germpore wrote:What are the assumptions behind each formula?
The formula 0.5 x (lambda)/NA relates to the spatial frequency at which MTF drops clear to zero. It can also be written as nu_0 = (2*NA)/lambda, where nu_0 is the cutoff frequency. At this frequency, alternating bright/dark lines with a sinusoidal intensity profile will create an image that is indistinguishable from what you would get by looking at a uniform gray subject.

The formula 0.61 x (lambda)/NA relates to the spacing of point sources for which an Airy disk centered on one point will have its first minimum on the other point. For equal intensity point sources, the resulting image will be an oblong shape that slightly necks and dims in the middle. A sinusoidal grid with this frequency will be rendered with something like 9% MTF.
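
If it helps, that 9% figure can be checked numerically with the standard incoherent diffraction-limited MTF for a circular aperture, MTF(s) = (2/pi)(arccos(s) - s*sqrt(1-s^2)) with s = nu/nu_0. The 40X/0.65 numbers here are just an example:

```python
import math

def diffraction_mtf(nu, nu0):
    """Incoherent diffraction-limited MTF of a circular aperture at frequency nu."""
    s = nu / nu0
    if s >= 1.0:
        return 0.0  # beyond the cutoff, contrast is exactly zero
    return (2 / math.pi) * (math.acos(s) - s * math.sqrt(1 - s * s))

na, wavelength_um = 0.65, 0.55
nu0 = 2 * na / wavelength_um               # cutoff frequency, cycles/um
nu_rayleigh = na / (0.61 * wavelength_um)  # 1 / (Rayleigh spacing)
print(round(diffraction_mtf(nu_rayleigh, nu0), 3))  # ~0.089, i.e. ~9%
```

Note that the Rayleigh frequency is nu_0/1.22, so the ~9% falls directly out of the MTF curve.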

Your other questions are more complicated, and I think the best answer is to go clear back to basics.

You write that "for optimal resolution, I know the formula is (total lens magnification) x (resolution limit)/2, and that you try to match your pixel size to that."

That statement is at best an oversimplification. If you feel compelled to believe it, then you should understand that it reflects one particular definition of "optimal", and I would argue that it's not the best definition.

Let me note that there are two places where things go completely bad. The first is that nu_0 spatial frequency that I talked about earlier, where MTF goes clear to zero. The second is at 2 pixels per cycle, which is the Nyquist sampling criterion for the point where higher frequencies are guaranteed to alias.

Unfortunately, things start to degrade long before they reach those two points of complete failure. If you read through "On the resolution and sharpness of digital images...", you'll see that anything approaching 2 pixels per cycle gives severe sampling artifacts. Something like 3 pixels per cycle is needed to avoid the worst of them, 4 pixels per cycle is better yet, and even more pixels per cycle produces even cleaner renditions, albeit with diminishing returns as the pixels/cycle count rises.

Now, Pau has referenced you to https://www.microscopyu.com/tutorials/m ... resolution.

That is an excellent resource, highly regarded, and it has been around for a long time. It's a reliable source. But when I slogged through the math, I discovered that the numbers produced by that calculator correspond to putting 2 pixels per cycle at nu_0. In other words, the calculator ends up specifying a pixel size that fails the Nyquist criterion at exactly the same frequency where MTF drops to zero anyway. Not being satisfied by one unavoidable failure, the calculator goes for two at once!

Now, I've written that to sound crazy, but it's not as bad as it seems. Putting 2 pixels per cycle at nu_0 means that you'll get significant sampling artifacts at frequencies near nu_0, where the MTF is necessarily low because of diffraction. But it also puts 4 pixels per cycle at half that frequency, where diffraction still leaves MTF around 40%.

So, one could equally say that their calculator gives a good rendition of detail in the range where the MTF is high, and sacrifices accuracy because of lesser sampling only in the range where the MTF is low because of diffraction.

Now, if that's your definition of "optimal", then they're using the optimal calculation.

But just personally, I'd rather optimize for image quality versus budget, and that approach tends to give a lot finer pixel spacing, more like 4 pixels per cycle even at nu_0. (That is, twice the pixel count indicated by the calculator.)
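
To put numbers on that, here is a quick sketch comparing the pixel pitch implied by 2 versus 4 pixels per cycle at nu_0. The 40X/0.65 objective with no extra relay magnification is just an example, and this simple version lumps everything into total magnification:

```python
def pixel_size_um(na, total_mag, pixels_per_cycle, wavelength_um=0.55):
    """Sensor-side pixel pitch that puts the given sampling rate at nu_0."""
    nu0 = 2 * na / wavelength_um  # cutoff frequency, cycles/um (object side)
    cycle_um = total_mag / nu0    # length of one cycle at the sensor, um
    return cycle_um / pixels_per_cycle

# 40X/0.65, direct projection
print(round(pixel_size_um(0.65, 40, 2), 2))  # ~8.46 um: 2 px/cycle at nu_0
print(round(pixel_size_um(0.65, 40, 4), 2))  # ~4.23 um: 4 px/cycle at nu_0
```

Halving the pixel pitch quadruples the pixel count for the same field, which is where "twice the pixel count indicated by the calculator" comes from in each linear dimension.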

Adding to the amusement, sometimes it matters that a color camera with a typical Bayer filter has only 1 blue cell for every 4 pixels, giving it 1/2 the spatial resolution for blue that it does for green.

The bottom line is that more pixels are generally better than fewer, and you should go for fewer only if you can get something valuable in return.

I'll stop now and wait to hear if this is helping or hurting.

--Rik

mawyatt
Posts: 2497
Joined: Thu Aug 22, 2013 6:54 pm
Location: Clearwater, Florida

Post by mawyatt »

Rik,

Something somewhat related to the "Nyquist Sampling Criterion" mentioned. In communications, at least, it has always been interpreted as a "barrier" not to be broken without dire consequences. I remember designing Brick Wall Filters (BWF) to help keep all the energy within the Nyquist limits; these filters were exceedingly difficult to design and only approximated a true BWF to varying degrees.

The Nyquist limit is actually being exceeded today by 20~25% in some very advanced, highly experimental communication systems. So it may not be the "barrier" we were all taught in grad school, just as the semiconductor folks have long since left the diffraction limits behind :D

I'll report more on this in the future when things become more public, but suffice to say that the barrier has been broken! Maybe this will spill over into the optics world with the use of Metamaterial lenses :roll:

Best,
Research is like a treasure hunt, you don't know where to look or what you'll find!
~Mike

germpore
Posts: 3
Joined: Wed Nov 02, 2011 7:03 pm

Post by germpore »

Thanks, Rik!

Yes, I realized after I wrote this that I had failed to take the Bayer filter array in color cameras into account: a color camera uses a 2x2 square of pixels capturing at different wavelength peaks, so in effect the pixel size is doubled, whether measured linearly or diagonally.

I'm not familiar with thinking of Airy disks in terms of the frequency domain, and MTF is a new concept for me. What does it mean to refer to 2 or more pixels "per cycle"?

As to the different resolution limit factors, 0.50 vs 0.61, you're referring to assumptions about the relative intensity of the maxima and the contrast between the maximum and the minimum points, correct?
