On the resolution and sharpness of digital images...


Post by rjlittlefield »

Have you ever worried that your digital picture doesn't look sharp enough?
Or maybe that it looks too sharp?
Or wondered how sharp you should expect it to be?

I have. And I've seen quite a bit written about the topic of sharpness. But somehow what I've seen written hasn't exactly fitted what I wanted to know.

Finally I decided to run a few careful experiments and try to make sense of the results myself. (Sometimes that helps, sometimes it doesn't. You know how the game goes. :wink: )

After a bit of thrashing around, I got some results that are worth writing about. This post includes a good sampling of them.

The "big picture" consists of three small printed USAF resolution charts taped to a whiteboard and lit by a couple of desk lamps. Let's take a picture of the resolution charts and see what comes out.

Here is the full frame and a 1:1 ("actual pixels") crop, from a Canon 300D.

[Figure 1: full frame and 1:1 ("actual pixels") crop of the USAF resolution charts, Canon 300D]

Notice that the picture shows very good separation of the lines in Group 0, Element 4. (That's the little block of 3 lines with the red arrow pointing to it.) Those are clearly resolved. Separation drops off from there down. Element 5 still looks OK, although a bit blurred on the 45 degree chart (right side). Element 6 is marginal (check out the 45 degree chart, at the green arrow). Group 1, Element 1, is hopeless (yellow arrow).

How well does this compare with what "could" be done?

I want to come at that question from a couple of different directions.

First, let's take the camera out of the picture and do some playing with Photoshop. That way we don't have to worry about lenses, anti-aliasing filters, demosaicing the Bayer pattern, and probably a host of other issues that haven't even occurred to us.

So here's the test. We generate a grid of evenly spaced lines -- black/white/black/white -- and see what happens when we just try to render it at various resolutions. (Photoshop CS, version 8.0, using image resize with cubic interpolation, starting from a grid that alternates 10 white pixels, 10 black, and so on -- 20 pixels per line pair.)
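If you want to reproduce this yourself, here is a rough Python sketch of the same kind of test. It uses PIL's bicubic resize rather than Photoshop's cubic interpolation, so the exact gray values will differ a bit, but the behavior is the same.

Code:

    # Rough sketch: build a 20-pixels-per-line-pair grid and downsample it with
    # bicubic interpolation, in the spirit of the Photoshop resize described above.
    import numpy as np
    from PIL import Image

    def make_grid(pairs=40, px_per_pair=20, height=100):
        # one row of alternating 10 white / 10 black pixels, tiled vertically
        row = np.tile([255] * (px_per_pair // 2) + [0] * (px_per_pair // 2), pairs)
        return Image.fromarray(np.tile(row, (height, 1)).astype(np.uint8))

    grid = make_grid()
    for target in (2.05, 2.55, 3.05, 3.55):
        w, h = grid.size
        small = grid.resize((round(w * target / 20.0), h), Image.BICUBIC)
        line = np.asarray(small, dtype=float)[0]
        print(f"{target} px per line pair: min {line.min():.0f}, max {line.max():.0f}")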

[Figure 2: the black/white test grid rendered at various pixels per line pair]

At this point, it's tempting to invoke Moiré and Nyquist, say "well, of course", and move on to something else. But I think that misses an opportunity to develop some useful understanding, so I'd like to take a more intuitive approach. (It's my understanding we're talking about developing here, OK? You folks get to help or just to come along for the ride or even ignore me, as you choose.)

In brief, the output image -- the one we're seeing -- is constructed by making each output pixel be some weighted average of input pixels (the 20 pixels-per-line-pair grid). The exact weights depend on how the output pixel spatially maps to the input pixels.

For example, if the center of the output pixel falls exactly on a black-white transition of the input, then we would expect the output pixel to be a 50% gray. If the center of the output pixel falls exactly in the middle of a white block of input, then we'd expect the output to be, well, maybe not exactly white, but pretty close to it if the average covers mostly white pixels of the input.

Therein lies the rub. A couple of them, actually. First, when the pixels-per-line-pair gets small enough, every weighted average starts including a significant number of both white and black input values, so you never get pure white or pure black output, only lighter and darker grays. Second, even when the pixels-per-line-pair is large enough that the weighted average goes to pure white or pure black for some alignments of the pattern, there are other alignments of the pattern for which it will still be lighter or darker grays.
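To make that concrete, here is a little Python sketch that averages a black/white square wave into output pixels at a chosen pixels-per-line-pair and phase offset. It uses plain box weights rather than cubic interpolation, but the alignment effect is the same.

Code:

    # Average a fine black/white square wave into output pixels at a chosen
    # pixels-per-line-pair and phase, to see how the result depends on alignment.
    import numpy as np

    def sample_square_wave(px_per_line_pair, phase, n_out=12, oversample=1000):
        out = []
        for i in range(n_out):
            # positions covered by output pixel i, measured in line pairs
            x = (np.arange(oversample) / oversample + i + phase) / px_per_line_pair
            wave = (np.floor(2 * x) % 2 == 0).astype(float)   # 1 = white, 0 = black
            out.append(wave.mean())
        return np.round(out, 2)

    for phase in (0.0, 0.25, 0.5):
        print("2.0 px/lp, phase", phase, "->", sample_square_wave(2.0, phase))

With the pattern aligned on the pixels (phase 0.0) you get full black/white; shift it a quarter pixel and the contrast drops; shift it half a pixel and every output value collapses to 0.5 gray.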

So, to figure out how many pixels-per-line-pair yield acceptable contrast, we might want to worry not only about best case, but also average case, and maybe even worst case!

What the illustration shows, to my eyes, is that 2.05 pixels-per-line-pair clearly doesn't cut it. That's no surprise, since the Nyquist-Shannon sampling theorem says that 2.00 would be the absolute limit.

But I'm not very happy with 2.55 either -- when I look closely at the pattern there are a whole bunch of places where what started off as black/white/black/white/black ends up being black/gray/gray/gray/black -- essentially losing the middle line!

At 3.05 pixels-per-line-pair, none of those B/W/B/W/B transitions actually get lost, although the rendition of them varies quite a bit depending on how they happen to line up with the pixels. (Some are mostly dark, some are mostly bright, some look more contrasty than others.)

3.55 pixels-per-line-pair looks pretty good, and it gets better from there on out.

These results completely support the common "safe" strategy of allocating 4 pixels per line pair. I'd say you can get by with 3, if you're more tolerant of variable appearance depending on how features happen to line up with the pixels. But pushing below that definitely strikes me as dicey -- sometimes you see a feature, sometimes you don't!

With that as background, let's look at some more real images.

At this point, I have to explain to you that I have taken some care to remove lens resolution as a factor. I'm using an excellent quality prime lens, in the center of its field of view, and to make very sure that the optical image is much better than the resulting digital sample, I have 1) stepped focus by small increments to make sure I have the best focused frame, and 2) checked the optical image by itself to be sure that it contained much more detail than what appears in the digital rendition.

In the montage that you see below, the upper left corner shows the optical image presented to the camera sensor, while the upper right corner shows the digital image recovered from that sensor. The optical image was captured by essentially focusing a microscope on the aerial image formed by the lens. The mechanics were a bit odd -- a microscope lens on bellows -- but I don't think there will be any serious doubt that the image projected by the lens contains a lot more detail than is captured by the camera.

[Figure 3: montage -- upper left, the optical (aerial) image presented to the sensor; upper right, the digital image recovered from the sensor]

OK, so we're presenting a high resolution optical image to the camera, and it's capturing a rather lower resolution digital image for us to do with as we please.

How good is the digital image, compared with how good it could be?

At this point, we know how good it could be -- 3 pixels per line pair. Any less than that is pushing our luck.

It turns out, when you measure the images, that Group 0 Element 4 is 3.5 pixels per line pair -- less than 20% worse than our 3 pixels per line pair standard. That's pretty good!

But is that the end of the story? Not really.

At first I thought it was. But then I took the aerial image, as captured through the 10X objective, and ran it through Photoshop resizing and rotation to match the digital images recovered from the camera. Those results are interesting because I see in them that Group 1, Element 3 is resolved in the Photoshop rescaled image, versus Group 0, Element 6, for the digital camera image.

Now, it turns out that Group 1, Element 3, is almost exactly 2.0 pixels per line pair in these images. But wait --- I decided from the grid experiment that 2 pixels per line pair was unacceptable, and now it looks fine?! The same eyes and the same brain are making both decisions -- what's going on?!

The answer (I think) has to do with how my perceptions are affected by luck and attention.

When I look at the grids, I ask the question "How well can I see the detail that I know generated this image?"

But when I look at the resolution charts, I ask a different question: "What's the smallest block in which I can see lines?"

In retrospect, it should not be surprising that these different questions produce different answers. In the first case, I'm demanding to see all of the lines, but in the second, I'm content to see any of them.

Suppose it turns out, due to luck of how the test pattern is positioned, that I cannot see lines in some particular block A, but I can see them in some finer block B. In that case, do I say that the image resolves at the fine level of block B, or not even at the coarser level of block A?

This strikes me as being an interesting question that I don't recall reading about.

The issue only arose with digital photography. In traditional analog media, it simply doesn't matter exactly where you place the test chart -- if Group 0 Element 6 resolves at one position, it will also resolve at all nearby positions. But in digital, it can matter a lot. As we see in the grids, detail at 2 pixels per line pair either resolves wonderfully, or not at all, or something in between, depending on exactly how it happens to line up with the pixel grid.

OK, I think I understand this well enough now for scientific work. 4 pixels per line pair is safe, 3 is OK if I can accept some unevenness, and 2 is throwing dice.

Throwing dice is not always a bad thing, by the way. If all I want to know is whether the dice have any 3's on them, then throwing them a bunch of times and counting what comes up is one perfectly reasonable strategy. In the same way, if I have a sufficiently sharp digital camera, I can reliably expect to see some detail at a scale of 2 pixels per line pair if there's any such detail in the subject. I just can't be guaranteed to see all of it.

But what about aesthetics? What's required for a digital image to be perceived as "sharp"?

We're talking about subjective impression now, but I'd like to try understanding what that impression means in objective terms.

Based on introspection -- always dangerous, but hey, it's cheaper than dealing with human subject committees -- I'll speculate that a digital image looks "sharp" when it "shows detail at the level of individual pixels", which in turn means seeing dark/bright/dark (or bright/dark/bright) in adjacent pixels.

Hhmm...

Assuming that's right, then we've stumbled onto a bit of a Catch-22.

A digital image will only contain dark/bright/dark patterns in adjacent pixels if either (a) it's representing real detail near the level of 2 pixels per line pair, or (b) the pattern does not represent real detail, but instead is some sort of artifact due to noise, excessive USM, or something similar.

Let's presume that (b) is not the case, since nobody likes stuff that looks like detail but isn't. (OK, that's not strictly true. Some people just love stuff that looks like detail but isn't, but I'd like to stay away from fractal compression in this discussion.)

OK, so it's (a) -- our digital image looks "sharp" because it contains dark/bright/dark patterns that are due to real detail near the level of 2 pixels per line pair.

But we've seen from the grid demonstrations that detail at that level gets captured only if it's favorably positioned.

That's the Catch-22.

In order for our digital image to "look sharp", we have to shoot it or render it at a resolution that virtually guarantees some of the detail in the optical image will be lost. If you see some tiny hairs just barely separated at one place in the digital image, it's a safe bet that there are quite similar tiny hairs at other places that did not get separated, just because they happened to line up differently with the pixels.

Conversely, in order to guarantee that all the detail in the optical image gets captured in the digital image, we have to shoot and render at a resolution that completely guarantees the digital image won't look sharp.

So, there's "sharp" and there's "detailed" -- pick one or the other 'cuz you can't have both. What a bummer!

Still, I am perversely pleased to have reached this understanding. I have regularly gone through the exercise of choosing what percentage to rescale some fuzzy high magnification image so that it looked as sharp as possible while not losing any detail. Somehow the result never ended up looking as sharp as I would like. Now I understand (or think I do!) why that is, and why it has to be that way. That's a good thing, because it will free my mind to obsess over something else.

OK, now back to the one remaining question: how well does the Canon 300D image compare with what "could" be done? That is, how well does the image that the camera actually captures, compare to one from a "perfect" sensor with no anti-aliasing filter, no Bayer demosaicing, and so on?

One reasonable way to address that question is by using Photoshop to simulate the perfect sensor. We can do that because using the microscope setup, we've captured a very high resolution digital version of the optical image that would be presented to the sensor. The idea is simple enough, though a bit tedious, and for once I won't bore you with the details. You just replicate the image into a whole bunch of slightly different positions, resize to match the camera's pixel size, and see what resolves.
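In outline, the procedure looks something like this in Python with PIL; the filename and scale factor here are just placeholders, not the actual values used.

Code:

    # Outline of the "simulated perfect sensor": shift the high-resolution capture
    # of the aerial image by sub-pixel amounts, then downsample each copy to the
    # camera's pixel scale and see which chart elements survive at every offset.
    import numpy as np
    from PIL import Image

    aerial = Image.open("aerial_capture.tif")   # placeholder filename
    scale = 0.2                                 # placeholder: camera pixels / aerial pixels
    w, h = aerial.size

    for dx in np.linspace(0.0, 1.0 / scale, 5, endpoint=False):  # shifts spanning one camera pixel
        shifted = aerial.transform((w, h), Image.AFFINE, (1, 0, dx, 0, 1, 0),
                                   resample=Image.BICUBIC)
        shifted.resize((round(w * scale), round(h * scale)),
                       Image.BICUBIC).save(f"simulated_dx{dx:.2f}.png")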

The answer turns out to be that the 300D's image has around 3 USAF chart elements less usable resolution than the simulated perfect sensor does. In figure 3 above, you can see that the simulated perfect sensor fairly reliably resolves Group 1 Element 3, while the actual camera does Group 0 element 6.

In the USAF charts, there's a 2X difference in size between each group, and the step between adjacent elements is the sixth root of that. A difference of 3 elements therefore means a factor of about 1.4X.
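For reference, the standard USAF 1951 chart formula makes this easy to check:

Code:

    # USAF 1951 chart: group G, element E corresponds to 2**(G + (E - 1)/6)
    # line pairs per unit length, so each element step is a factor of 2**(1/6).
    for g, e in [(0, 6), (1, 1), (1, 2), (1, 3)]:
        print(f"Group {g} Element {e}: relative frequency {2 ** (g + (e - 1) / 6):.3f}")
    print("three-element step:", round(2 ** (3 / 6), 2))   # ~1.41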

In absolute terms, the simulated perfect sensor works down to roughly 2 pixels per line pair, and the camera bottoms out at 3 pixels per line pair. This is comfortably better than the 4 pixels per line pair that's "safe" for capturing all the detail present in an optical image.

This seems like a good stopping point. My brain has been working hard to get this stuff straight, and it's time to take a hike.

Thanks for listening -- you've been a big help!

--Rik


Post by Mike B in OKlahoma »

This is something I don't think about much, but I'm trying to translate it into something applicable to printing. I haven't come up with much, though I did come up with something that sounds consistent with your results. What I've read as a rule of thumb for a high-quality print is that it takes 300 pixels per inch. So your 300D, producing a picture 3072 by 2048 pixels, would be good for about a 7 by 10 inch picture. This is consistent with ANOTHER rule of thumb I've read (and experienced with my EOS 10D): that a 6 megapixel DSLR can make an 8 1/2 by 11 inch print (sorry for those outside the Cro-Magnon USA, I don't remember how that converts to A3 or A4 or whatever the "standard" print size is) that will look good at close inspection. 8 1/2 by 11, subtracting some for a border, is close enough to 7 by 10.

Using your result that four pixels are needed per line pair, this means that at 300 pixels per inch we'd be able to represent 75 line pairs (or maybe just points) per inch accurately with a 6 megapixel camera. That makes about 3 per millimeter to "look good" at close inspection. I know that's WAY fuzzier than the absolute resolution of the human eye (isn't a hair about 1/10 millimeter in diameter? we see those readily at close range in good lighting), but it sounds reasonable to say that a print would look good with three dots of color in each millimeter. In my limited experience at trying to ID hair color, I think it is fairly tough to see shades of color, rather than light or dark, in a single hair, unless it is backlit. So ten dots per millimeter might be too fine to reliably see color. For a print to look good, each separate point would have to be big enough to show resolvable color. Anyway, this sounds consistent with your results, though I'm just spinning moonbeams and making unverified statements.
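Written out, that arithmetic goes like this (treating the "dots" as line pairs):

Code:

    # The back-of-envelope numbers above, written out (treating "dots" as line pairs).
    ppi = 300                      # rule-of-thumb print resolution, pixels per inch
    w_px, h_px = 3072, 2048        # Canon 300D frame
    print(w_px / ppi, "x", h_px / ppi, "inches")     # ~10.2 x 6.8
    lp_per_inch = ppi / 4          # four pixels per line pair
    print(lp_per_inch / 25.4, "line pairs per mm")   # ~3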

It just occurred to me that you are only talking about black/white/gray, and not about shades of color, in your test. After pondering that, I'm not so sure applying your result to a person viewing a color print is reasonable.

Oh well, I had some amusement thinking about it, and maybe it will inspire someone else to build something more solid on this! Thanks for getting me thinking!
Mike Broderick
Oklahoma City, OK, USA

Constructive critiques of my pictures, and reposts in this forum for purposes of critique are welcome

"I must obey the inscrutable exhortations of my soul....My mandate includes weird bugs."
--Calvin


Post by Bruce Williams »

An interesting and informative posting Rik.

I've read it through twice (so far) and found myself grappling with a few interesting new concepts -- not initially intuitive, in my case, but the sort of information you file away as "common sense" once someone else has drawn your attention to it :) .

Your results are interesting but I still need time (and another couple of careful reads) before I will fully comprehend them - entirely down to my tired ol' brain, not the clarity of your posting :? .

Now I haven't even started to think this next question through so it may well be completely nonsensical...

I just wondered if (considering the use of Bayer arrays and Bayer demosaicing) the results would be different if the lines were of different colours (e.g., a mix of RGB and/or non-primary colours)?

Thanks for an interesting and thought provoking posting.

Bruce

*postscript (after a 3rd read) - just picked up on your point re Bayer arrays etc. I was of course thinking of the colour filter array having twice as many green photosites as red or blue. So would alternating green and black lines produce the same result as alternating red and black lines?


Post by rjlittlefield »

Thanks for the comments/questions, guys.

Some of these can be disposed of in short order.

About printing, yes, the standard spec is that you need 6 line pairs per mm, which means 12 pixels per mm minimum. If you have only 12 pixels per mm, which is what everybody assumes, then as discussed above you'll be showing some of the 6 line-pairs-per-mm detail that was in the original optical image, but not all of it -- only the parts that happened to be favorably positioned with respect to the pixel grid. This is different from traditional analog media, which would have shown all such detail if it showed any of it.

However, the psychologists have also pretty well established (in situations much more broad than photo evaluation) that most observers simply do not notice what's not there. So if some of the detail is missing, that's no problem at all -- your 12 pixels-per-mm print still looks sharp!

Besides that, there's considerable leeway about the 6 line pairs per mm spec. That number was selected as being the physiological limit of "average" human vision. People are often willing to accept stuff that's quite a bit coarser than that and still call it "sharp". That's particularly important in photomacrography and photomicroscopy, where slacking off on lateral resolution allows disproportionately more DOF by stopping down. Ted Clarke, who has studied this stuff more than anybody else that I know, notes at http://www.modernmicroscopy.com/main.as ... =65&page=4 that
John Gustav Delly preferred a 4.5 lines/mm resolution requirement, which gives the greatest depth of field, when required, reaching a minimum of 3 lines/mm resolution. The test images did not appear to be blurred until their resolution was less than 3 lines/mm. These results are consistent with Abbe’s criterion that useful magnification ranges between 6 lines/mm and 3 lines/mm.
Supporting this, I vaguely recall (but no longer have the test prints) that when I sent to my favorite 12 pixels/mm printer (the Lightjet 5000 at http://www.calypsoinc.com/) a test file with black/white/black/white pixels, what came back was pretty dang close to uniform gray. Sending them a black/black/white/white pattern produced some clean whites and blacks. I don't know what their effective resolution is. Clearly it's greater than 3 lp/mm, but less than 6 lp/mm. To my old eyes, even with reading glasses, the prints look very sharp. I have to haul out a magnifier before they look otherwise.

On the other end, Roger N. Clark (no relation to Ted Clarke), at http://www.clarkvision.com/imagedetail/ ... ution.html, argues that you actually need more like 600 ppi (24 pixels per mm) even at 20 inches viewing distance. I have no trouble believing his experimental results, but I'm not sure how they fit into this discussion. There's a big difference between saying that 600 dpi has a lot better WOW! factor than 300 dpi, and saying that the observer can extract proportionally more information from it. My interests are mostly in extracting information, not exclaiming WOW!

Regarding hairs, the ones on my arms measure around 0.0015", which is about 1/26 of a mm. They're very easy to see, as various people have pointed out since I first started growing the stuff. But it's important to not confuse detection with resolution. A single hair even 1/4 that size would still be easy to see against a bright background. It would be even easier to see if it were side-lit, and viewed against a dark background. (Think of dust motes floating in a sunbeam.) Those are "detection" tasks. "Resolution" is whether you can distinguish two hairs separated by a hair-sized gap, versus two hairs with no gap. That's a much harder problem, and it's that problem for which you need 1/12 mm "hairs" separated by 1/12 mm gaps.

As to color, well, color is a bit witchy in several regards.

It turns out that people have a lot higher spatial resolution for intensity than for color, so if there's a human in the loop, the resolution requirements are set by intensity. If you do each color to the same standard as intensity, you're in great shape.

Bruce, you ask:
So would alternating green and black lines produce the same result as alternating red and black lines?
The answer to that is very simple: no -- they are much different.

The issue there is that on the sensor, the density of green-sensitive photosites is much higher than the density of red-sensitive photosites. That increase in density maps directly to increased resolution for green versus red and blue. I actually have some experimental data about that too -- see http://www.janrik.net/MiscSubj/CanonDig ... ution.html .
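The density argument in one tiny sketch: a 2x2 Bayer tile has two green photosites for every one red and one blue, so green is sampled at twice the areal density, roughly 1.4X the linear density.

Code:

    # A 2x2 Bayer tile: two green photosites per one red and one blue, so green
    # gets twice the areal sampling density (about 1.4x the linear density).
    import numpy as np
    tile = np.array([["R", "G"],
                     ["G", "B"]])
    counts = {c: int((tile == c).sum()) for c in "RGB"}
    print(counts)                                              # {'R': 1, 'G': 2, 'B': 1}
    print("green linear advantage ~", round((counts["G"] / counts["R"]) ** 0.5, 2))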

You may be interested to hear what prompted me to do this investigation in the first place. The issue of resizing is one that has bothered me for a long time, but by itself, that probably wouldn't have prompted this work. No, what mostly did it was some discussion on the Yahoo Microscope group, regarding how many pixels are required to catch all the information that's in some particular microscope image.

It's not difficult to calculate the minimum feature size based on wavelength, numerical aperture, and magnification, but you're still left asking "OK, but how many pixels do I need?". The standard answers are that you need at least 2 pixels per cycle, and 4 pixels per cycle is "safe". I got curious about a) what happens between those two limits, and b) just how well these cameras actually do. That prompted the investigation, which consumed probably 20-30 hours including a great deal of head-scratching intermixed with composing and decomposing (that's writing and erasing, if you want to be literal).

It really did take me a very long time to settle into a clean way to think about the problem. When I read what I've written above, it answers a lot of questions that I've had in the distant past, and much more so over the past few days. If it makes sense to somebody else too, I'm very happy to have contributed to the group understanding.

--Rik
Edit: corrected typo, "even at 20 mm" changed to "even at 20 inches viewing distance".


Post by Bruce Williams »

Thanks Rik.

A very interesting link to your earlier experimental work on colour related resolution. I'm pleased that (in this instance at least) my intuitive expectation proved to be pretty much along the right lines (sorry, couldn't resist :D ). Seeing the results so clearly demonstrated helps to consolidate an appreciation of the associated technology - so both informative and helpful too.

Bruce


Post by Mike B in OKlahoma »

rjlittlefield wrote: That prompted the investigation, which consumed probably 20-30 hours including a great deal of head-scratching intermixed with composing and decomposing (that's writing and erasing, if you want to be literal).
Rik writes a long, informed, thoughtful dissertation on a highly technical topic.....And what does Mike grab onto? The memory of a phrase reminding him of an old Far Side cartoon:

"Shhh! The Master is decomposing!"


Post by rjlittlefield »

Mike B in OKlahoma wrote: Rik writes a long, informed, thoughtful dissertation on a highly technical topic.....And what does Mike grab onto? The memory of a phrase reminding him of an old Far Side cartoon:

"Shhh! The Master is decomposing!"
And where, pray tell, do you think that I got it? :smt003

--Rik


Post by Mike B in OKlahoma »

http://www.ddisoftware.com/sd14-5d/

This review compares the Sigma SD-14 and Canon 5D cameras and their very different ways of capturing color and pixels. The beginning part talks about issues similar to what Rik brought up, but dealing specifically with color. I don't know enough about the subject to evaluate it, but it sounds relevant here.


Post by rjlittlefield »

Thanks for the link, Mike. That's a very good article comparing the performance of a Foveon-based camera that actually senses RGB at each of a relatively few pixel positions, versus a traditional Bayer filter-array camera that only senses one color at each of a lot more positions. I'm really looking forward to the end of Bayer, but it'll take a few more years.

Right now the Foveon still has some infancy problems, one of which (though certainly not the worst) is the complete absence of any anti-aliasing filter. That's why the Sigma camera exhibits the intense Moiré effects seen in the article's first illustration and called out in the animation about half-way down. Sadly (and contrary to what's claimed in the article), that type of aliasing can not be corrected in post-processing either -- you're stuck with the Moiré patterns just like you're stuck with Bayer artifacts. Arguably the best solution would be to integrate an anti-aliasing filter with the Foveon chip, optimized to that chip's performance. The result (I predict) would be just a tiny bit of blurring -- enough to greatly reduce Moiré while not causing much falloff in the response to pixel-level detail. They'll get there...but hopefully not until they've fixed really intrusive problems like getting the colors right and having the cameras focus properly and not lock up!

--Rik


Post by Mike B in OKlahoma »

If Sigma is the one to run with Foveon, I hope they also get some cameras out that will use Canon (preferably) or at least Nikon lenses. They aren't going to make the big time till they get unlocked from the Sigma lens mount, particularly until they get some long telephotos with image stabilization available.


Post by Epidic »

Interesting read. But it brings up more questions than it answers. First, why not just relate this whole experiment to permissible circles of confusion? After all, regardless of pixel pitch, it is the format size that determines sharpness -- just like with film. This is naturally related to the human visual system and would simply be an angular resolution problem based on viewing distance and image size; the 300 dpi standard does not make much sense, as it seems to be an absolute figure with no relation to either of those.

Also, resolving power and target contrast are related. High contrast targets show a maximum, so with lower contrast targets you will get different results. This also seems to ignore sensor size, which dictates the resolution that the optical system must deliver. The smaller the chip, the greater the need for better optics. This is important because the MTF limit is determined by diffraction. At 100 l/mm, the theoretical MTF limit at f/2.8 is 80%, and at f/8, 45%. At 200 l/mm, that changes to 60% at f/2.8 and 5% at f/8. This means that cameras with different sensor sizes are not going to act the same, even with the same pixel coverage.
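Those percentages follow from the standard diffraction-limited MTF for an ideal lens with a circular aperture, assuming green light at about 0.55 µm:

Code:

    # Diffraction-limited MTF, incoherent light, circular aperture:
    # MTF(v) = (2/pi) * (acos(s) - s*sqrt(1 - s^2)), s = v/vc, cutoff vc = 1/(lambda*N).
    # With lambda = 0.55 um this comes out very close to the percentages quoted above.
    from math import acos, sqrt, pi

    def diffraction_mtf(v_lp_per_mm, f_number, wavelength_mm=0.00055):
        vc = 1.0 / (wavelength_mm * f_number)
        s = v_lp_per_mm / vc
        return 0.0 if s >= 1.0 else (2.0 / pi) * (acos(s) - s * sqrt(1.0 - s * s))

    for v in (100, 200):
        for n in (2.8, 8):
            print(f"{v} l/mm at f/{n}: {diffraction_mtf(v, n):.0%}")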

I think it is an important topic. You have made some very good observations. It is easy to come along and make comments (like I have), but this is very complex and putting all the pieces together will take some work.
Will


Post by rjlittlefield »

Will,

Thanks for the feedback & comments. Please see if these responses help.

1. For most of the effects that I'm concerned with here, circles of confusion are irrelevant and confusing. Except for printing, I'm presuming that the human either has or can get enough magnification to examine the individual pixels if necessary. The question is, how much of the information that's present in the optical image gets captured in the digital image? That answer varies from "almost all of it" at 4 pixels per line pair, down to "maybe half of it" at 2 pixels per line pair. This is scale independent. You're right that cameras with different sensor sizes are not going to act the same because they'll be presented with images of different optical quality. However, those differences become less important for photomacrography and especially for photomicrography because at high magnifications the quality of optical image depends mostly on the relationship between lens and subject, not between lens and sensor.

2. If the 300 dpi standard seems strangely absolute, it's probably because there are a lot of qualifiers on it that are usually not stated. 300 dpi applies only to prints that are examined by normal unaided eyes at closest comfortable viewing distance -- around 12 inches. See http://www.diax.nl/pages/perception_pri ... ty_uk.html .

3. Target contrast versus resolution is an interesting issue, but mostly irrelevant here. If you have high enough pixel density to capture the finest discernable detail, which is high contrast, then you will also have high enough pixel density to capture detail that is lower contrast but coarser.

The issue of target contrast versus resolution is near and dear to my heart because it was largely responsible for me deciding to switch from 35mm film to DSLR.

The critical observation occurred in early 2004, when a colleague showed me a pair of aerial photographs shot from a low-altitude plane, using 35mm film and some DSLR. When I closely compared the photos, I discovered that I could discern finer high contrast details in the film photo, but finer low contrast details in the DSLR photo. This made absolutely no sense to me, so (of course) I set out to try understanding it mathematically.

What I finally realized is this. If you are allowed to magnify the images as much as you want, then ultimately your ability to discern detail is limited by gray-scale noise. To discern detail, you must be able to see the difference between shades of gray. But your ability to do that is limited by statistics -- the difference between two gray levels must be significantly larger than the uncertainty in either level. When an image is noisy, and the difference between nominal gray levels drops, you must average over larger and larger areas in order to reduce the uncertainty by a corresponding amount. The relationship turns out to be really simple. Uncertainty drops as the square root of the number of samples, which means that it drops in proportion to the linear dimension of the sampled area.

When noise dominates, resolution drops linearly with contrast.

Now that was an eye-opener. Suddenly the relationship between the film and digital aerial photos made perfect sense. For high contrast detail, digital resolution was limited by pixel density. For medium contrast detail, the film image became limited by noise (film grain) while the digital photo was still limited by pixel density. For low contrast detail, both images became limited by noise (film grain versus electronic), but the film noise was so much larger that the corresponding effective resolution was less.
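A quick numerical check of that square-root relationship (just a sketch with synthetic noise, not the actual aerial-photo data):

Code:

    # Averaging an n x n block of noisy pixels cuts the gray-level uncertainty by
    # a factor of n, so the detectable contrast step shrinks with the linear size.
    import numpy as np

    rng = np.random.default_rng(0)
    sigma = 10.0                                   # per-pixel noise, in gray levels
    for n in (1, 2, 4, 8):
        means = rng.normal(0.0, sigma, size=(100000, n * n)).mean(axis=1)
        print(f"{n}x{n} block: std of mean = {means.std():.2f} (predicted {sigma / n:.2f})")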

I agree, the issues are very complex. Further comments and discussion will be most appreciated.

--Rik


Post by Charles Krebs »

Great stuff Rik

Just back from some travel and did a quick read... which is not enough to digest all this. I'll need to look this over more carefully.

'til then one question...
A digital image will only contain dark/bright/dark patterns in adjacent pixels if either (a) it's representing real detail near the level of 2 pixels per line pair
Conversely, in order to guarantee that all the detail in the optical image gets captured in the digital image, we have to shoot and render at a resolution that completely guarantees the digital image won't look sharp
So is this second quote what some refer to as "over-sampling"?
The ill effects of over-sampling are something I have not quite wrapped my head around yet. With microscope optics it is possible to calculate the smallest resolvable detail, and then set up the relay optics so that this detail will "fall across" as many pixels as desired. (Possible, but problematic, since it would change with different objectives and condenser aperture settings.) Until now, my "gut" sense has been that it is too much of a crap-shoot to realistically expect things to line up desirably at 2 pixels per line pair (and things get even uglier if you start to consider Bayer patterns and highly colored details :shock:). And while I have a sense of how things can get mucked up (from the "sharp" perspective) when barely enough pixels are used to record all detail, my off-the-cuff impression would be that as you "oversample" to a larger extent, you should reduce the effects of misalignment between pixels and subject detail, and it would seem that you should move past your Catch-22. I really need to look at the numbers... but where do I go astray in thinking that a small amount of "over-sampling" would give you the detail but hurt "sharpness", while very large over-sampling would give you the detail with a much smaller hit on the sharpness?


Post by rjlittlefield »

Charlie,

Good questions. Let me be slow and careful in responding.

When I wrote "and render", I was thinking of resizing for the web. More precisely, I was thinking of final output where the observer can clearly see individual pixels, either all the time or if they work at it.

Suppose my optical image is a sine wave of intensity. Consider two extreme cases.

Case 1: I shoot and/or render (resize) to get 2 pixels per cycle. In this case my image looks "sharp" because at some places it has light/dark/light/dark adjacent pixels. But that only happens where the peaks and valleys of the sine wave line up with the pixels. At places where the peaks and valleys line up between the pixels, I get uniform gray. So the image looks sharp, but some details have probably been lost.

Case 2: I shoot and render to get 20 pixels per cycle. In this case the sine wave is accurately captured no matter how it lines up with the pixels. But in exchange, adjacent pixels are highly correlated everywhere in the image -- there simply are no instances of light/dark/light/dark. So, all of the detail is retained, but the image definitely does not look sharp.

This effect is not apparent in picture 2 above, because my starting "optical" image was a square wave of just black and white -- no grays. You may be able to imagine what it would look like if I had started with a sine wave instead. In that case, the output at 2.05 and 2.55 pixels per line pair would look much the same as in picture 2 (i.e., they'd look sharp, but with a Moire pattern), while the output at 6.05 pixels per line pair would not look sharp -- it would show gray on all the transitions.
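Here is a small Python sketch of those two cases, using point samples of the sine wave (so it ignores pixel-area averaging, but the phase effect is the same):

Code:

    # Point-sample a sine wave at 2 vs 20 pixels per cycle and report the
    # peak-to-peak contrast of the recorded values for a few phase offsets.
    import numpy as np

    def sampled_contrast(px_per_cycle, phase, n=200):
        i = np.arange(n)
        v = np.sin(2 * np.pi * (i / px_per_cycle + phase))
        return v.max() - v.min()

    for px_per_cycle in (2, 20):
        for phase in (0.0, 0.125, 0.25):
            print(f"{px_per_cycle:2d} px/cycle, phase {phase:5.3f}: "
                  f"contrast {sampled_contrast(px_per_cycle, phase):.2f}")

At 2 pixels per cycle the recorded contrast swings from 0 to full scale depending on phase; at 20 pixels per cycle it stays essentially at full scale no matter where the peaks fall.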

Printing introduces a different wrinkle, by potentially allowing me to output pixels that are too small to see individually.

Again, consider two extreme cases.

Case A. I shoot and/or render (resize) to get 2 pixels per cycle, which I print at 300 pixels per inch. I get 150 cycles per inch. The image looks sharp because people can just barely see 150 cycles per inch (under standard assumptions of viewing distance etc.). As before, details will be lost if they line up badly with pixels.

Case B. I shoot and render to get 20 pixels per cycle, which I print at 3 thousand pixels per inch. Again, I get 150 cycles per inch, but now, none of those 150 cycles-per-inch details get lost. Not only does the image look sharp, but all the details are still present.

Summary: if the observer can see individual pixels, then oversampling preserves the detail, but makes the image look not sharp. If the observer cannot see individual pixels, then oversampling preserves more detail without impacting apparent sharpness.

There's no free lunch, of course. My camera only has a certain number of pixels. Suppose it's 3000 pixels across. If I shoot at 3 pixels per cycle, I can get 1000 cycles with fairly even quality. If I shoot at 4 pixels per cycle, the quality gets more even, but I only get 750 cycles. At 6 pixels per cycle, the quality is even better, but I only get 500 cycles. And so on. In this case, the tradeoff is between covering a large area (1000 cycles) at moderate quality, or smaller areas (750 cycles or 500 cycles) at higher quality.

Does something in here answer your question?

--Rik


Post by Charles Krebs »

One aspect I need to grasp better is the limited resolution you have at the sensor with the microscope. Let me offer something of a "real world" situation.


Let's say you are using a Canon Rebel 350D body on a microscope in conjunction with a 2.5X photo-eyepiece. Let's look at some numbers with three different objectives...

a) With a 40/0.65 objective you will have a maximum resolution at the sensor of 19 lp/mm. So 19 line pairs will be "spread" over 156 pixels on the sensor. You will have 8.2 pixels per lp.

b) With a 10/0.30 objective you will have a maximum resolution at the sensor of 36 lp/mm. So 36 line pairs will be "spread" over 156 pixels on the sensor. You will have 4.3 pixels per lp.

c) With a 4/0.16 objective you will have a maximum resolution at the sensor of 48 lp/mm. So 48 line pairs will be "spread" over 156 pixels on the sensor. You will have 3.25 pixels per lp.

In all three cases the diagonal of the recorded rectangle will be about 11mm of the (20mm diameter) image circle made by the objective. And obviously we are not dealing with a b/w square wave. The ultimate end use could be anything from a large print to a small web image.
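For reference, those figures follow roughly from the Rayleigh criterion at the subject, projected to the sensor by the objective and relay magnification, assuming green light at about 0.55 µm and the 350D's roughly 6.4 µm pixel pitch (about 156 pixels per mm):

Code:

    # Rough reconstruction of the numbers above: Rayleigh resolution at the subject,
    # projected to the sensor by objective x relay magnification.  Assumes 0.55 um
    # light and a ~6.4 um pixel pitch (about 156 pixels per mm) on the 350D.
    lam = 0.55e-3                 # wavelength, mm
    px_per_mm = 156
    relay = 2.5                   # photo-eyepiece magnification

    for mag, na in [(40, 0.65), (10, 0.30), (4, 0.16)]:
        delta_subject = 0.61 * lam / na             # smallest resolved spacing, mm
        delta_sensor = delta_subject * mag * relay  # projected onto the sensor
        lp_per_mm = 1.0 / delta_sensor
        print(f"{mag}/{na}: {lp_per_mm:.0f} lp/mm at sensor, "
              f"{px_per_mm / lp_per_mm:.1f} pixels per line pair")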


Now...
This is how the images will be recorded.
Based on our discussion here, what might be the trade-off ("detail"-"sharpness") to be expected in each situation?
What, if anything, could be modified to "improve" the recorded images?
