How to decide outresolving?

Have questions about the equipment used for macro- or micro- photography? Post those questions in this forum.

Moderators: rjlittlefield, ChrisR, Chris S., Pau

Justwalking
Posts: 137
Joined: Sun Jun 10, 2018 3:54 pm
Location: Russia

Post by Justwalking »

Pau wrote:
Justwalking wrote:
rjlittlefield wrote:
Rik, can you say that their DOF looks exactly the same?
http://resourcemagonline.com/2014/02/ef ... eld/36402/
Evidently the DOF is not the same in the two pictures. Why?
© 2013 Robert OToole Photography | Lens: Sigma Macro 150mm F2.8 EX DG OS HSM | Camera: NIKON D800E | ISO: 100 | f8 | Shutter speed: 1/250 sec | single SB-R200 flash. Same camera settings and lens, with only camera distance and sensor format changed.
The camera distance is different, so the angle of light is too, and therefore the effective aperture is also different: smaller in DX mode.
The article's author doesn't take this into consideration, so the whole statement is wrong.
Of course it is changed, to keep the same FoV. It is simple geometry. But who promised that it must be exactly the same?
The main thing is that we are seeing different DOF with the same FoV of the subject.

What's wrong if one DOF is larger than another at the same FoV?
Absolutely nothing. You can trust your eyes.
You just need the correct math to understand why it happens with magnification. If the author did not explain why it happens in theory, that does not mean the fact is incorrect.

So lonepal was correct when he wrote:
I think I will get more DOF with a m4/3 sensor than the APS-C, right?
And the answer must be YES.

rjlittlefield
Site Admin
Posts: 23543
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

Justwalking wrote: So lonepal was correct when he wrote:
I think I will get more DOF with a m4/3 sensor than the APS-C, right?
And the answer must be YES.
Sorry, but the answer is that it depends entirely on how you set the optics.

Here are two images that I shot recently. One of them was shot on a 1/2.5" sensor, 5.744 x 4.308 mm, aspect ratio 4:3. The other was shot on an FF sensor and cropped on the long axis to also be aspect ratio 4:3.

FOV is 6.4 x 8.5 mm in both cases, and both sets of optics were set to give NA 0.04 on the subject side.

One sensor was 5.57 times larger in linear dimension.

Can you tell which is which? How?

Image Image

The smaller sensor on this camera happens to be 3072x2304 pixels. (It is an old camera.) So I resampled the FF image to be 3072x2304 pixels also. Here are 100% crops from both images.

Can you tell which is which? How?

Image Image

All images are single shot, no stacking.

Make your browser window wide, or zoom out, to see the images side by side.
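A quick numeric check of why this comparison works (an illustrative sketch, not from the original post; the 550 nm wavelength is an assumption, the NA is the one stated above): with equal FOV and equal subject-side NA, the classical wave-optics DOF depends only on the subject-side geometry, so sensor size drops out.

```python
# Sanity check (illustrative, not the author's calculation): with equal FOV
# and equal subject-side NA, the subject-side blur geometry is identical, so
# the classical wave-optics DOF formula gives one number for both cameras.
wavelength_mm = 0.00055   # assumed green light, 550 nm
na = 0.04                 # subject-side numerical aperture, as stated above

# Wave-optics depth of field on the subject side: DOF ~ lambda / NA^2
dof_wave_mm = wavelength_mm / na ** 2
print(f"DOF ~ {dof_wave_mm:.3f} mm on the subject side, for any sensor size")
```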

--Rik

Justwalking
Posts: 137
Joined: Sun Jun 10, 2018 3:54 pm
Location: Russia

Post by Justwalking »

rjlittlefield wrote:
Justwalking wrote: So lonepal was correct when he wrote:
I think I will get more DOF with a m4/3 sensor than the APS-C, right?
And the answer must be YES.
Sorry, but the answer is that it depends entirely on how you set the optics.

Here are two images that I shot recently. One of them was shot on a 1/2.5" sensor, 5.744 x 4.308 mm, aspect ratio 4:3. The other was shot on an FF sensor and cropped on the long axis to also be aspect ratio 4:3.

FOV is 6.4 x 8.5 mm in both cases, and both sets of optics were set to give NA 0.04 on the subject side.
I think that to set the same subject-side NA on both, you need completely different lenses.

On the 1/2.5" it was not even in the macro range.
An 8.5 mm field on a 1/2.5" sensor gives a magnification of about 0.67:1, versus about 3.75:1 on FF, and you are saying that the DOF stays the same in both cases. Cool. So regardless of magnification you can get the same DOF in a single shot, just with the appropriate optics )).

Then it should be no problem for you to take an 11X single shot on FF, without cropping, with the same DOF as my 2X on a 5.5x-crop sensor, whole frame.
Very interesting! I want to see it.

Image

So that was completely different optics, differing not only in distance but probably also in NA, unlike the earlier example with the flower.
And what about the resolution of the sensors? Was it the same for both, or close?
A simple calculation gives 4.56 CoC*F(eff) on the crop sensor, while on FF it must be only 0.14 CoC*F(eff).
The difference in DOF is 32X (!) in terms of their CoCs. To put it in absolute terms, we need to know how different the sensor resolutions were.

The lower (right) crop shot looks to me like it was made with the 1/2.5", and the 1/2.5" is the first picture.
Am I right, Rik?

It seems you set the aperture for the 1/2.5" too high to see good DOF and resolution. Why is there so much CA?
But if you set it lower, to f/8 for example, your FF image dissolves completely into diffraction debris.

mjkzz
Posts: 1681
Joined: Wed Jul 01, 2015 3:38 pm
Location: California/Shenzhen
Contact:

Post by mjkzz »

JH wrote: I used to think of the camera sensor as a kitchen floor with square tiles and the Airy disks as round plates. If I place some plates on the floor at random, most plates will touch more than one tile. This means that if I match the size of the disks and tiles, then for most disks/plates I will need more than one tile.
Best regards
Jörgen Hellberg
Thanks Jörgen, I agree, we need more than one tile, but how many is "good" enough is the question here: 2.44 vs 4.88, I believe. :D

Anyways, I think Wiki was edited, now it reads like this :D :

Image

There was a post on Cambridge in Colour with exactly the same question, yet with no resolution. The same person (I think) posted a repeat of it on a physics-related website.

I could not get the MATLAB going, so I am going to give up. However, I am inclined to agree with Rik without proving it myself. Essentially, a lens is like a low-pass filter and the sensor samples the output of this low-pass filter, so I do not see why the Nyquist rule would not apply here.

rjlittlefield
Site Admin
Posts: 23543
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

mjkzz wrote:I think Wiki was edited, now it reads like this
I don't know anything about the cambridgeincolour post or the Physics related website.

But I'm the fellow who fixed the Wikipedia article. I also explained the edit on their Talk page, as follows:
Size as it pertains to digital camera resolution

This article is the first place I stopped when researching the topic of theoretical resolution of digital cameras, but I see some problems. First the derivation of the figure of 4 µm is very unclear. Additionally, such a derivation should be based on optimal, not typical, conditions; as a good photographer will use equipment to exploit its strengths and is more concerned with the performance limits than "typical use." A f1.4 aperture is much larger than one of f8 and so would seem to make for significantly better resolving power. Third, the math does not seem to take into account that in the case of digital photography, blue light is not sampled at the same spatial resolution as green light, which is much more important to the eye; and furthermore, the distance between sensor elements of like color is not the same as the element pitch. A 4000-element-wide sensor has only 2000 elements of a given color across its width, making the Airy resolving power only part of the equation. This is in contrast to film, which samples all wavelengths of light at roughly equal spatial resolutions. If digital (or any) photography is to be discussed in this article, I recommend it be done by someone who is versed in both physics and photography. — Preceding unsigned comment added by 184.153.114.71 (talk) 23:21, 7 January 2012 (UTC)

Indeed, it's quite complicated, as the Airy disk is only one of many mechanisms that blur and limit resolution. It gets combined with aberrations, anti-aliasing filter, and the area of pixel microlens, and then sampled in unequal pattrens, making it very hard to put numbers on things. And at f/1.4 it's almost certainly irrelevant, as it's hard to make a diffraction-limited f/1.4 lens; aberrations will dominate there. You have a good point about the comment in the article. If the Airy disk is 4 microns, you can probably win a bit by making pixels in a Bayer sensor as small as 2 microns; and at f/4, maybe even smaller. Better look for sources if you want to say anything useful, but it's hard to find much sensible about it in sources, so good luck. Dicklyon (talk) 01:53, 8 January 2012 (UTC)

It occurs to me that you're not going to find much freely available information on the subject. Digital camera technology is a jealously guarded arms race and its protection from the eyes of competitors is a billion dollar poker game. The Eastman Kodak bankruptcy problems are a good example. Trilobitealive (talk) 16:53, 8 January 2012 (UTC)

I just now fixed a long-standing glitch in this section. To resolve objects spaced at the Rayleigh criterion, you need two pixels per object: one for the object and one for the space between. The earlier wording implied one pixel per object, none for the spaces. I made a minor wording change to correct the error. The wording now implies 4 pixels per Airy disk diameter. The minimum requirement is actually 4.88 pixels per Airy disk diameter, to meet the Nyquist minimum of 2 pixels per cycle at the diffraction-limited cutoff frequency. 4.88 pixels per Airy disk diameter is the value that ends up getting used by Nikon in their explanation at https://www.microscopyu.com/tutorials/m ... resolution . But it requires some math to ferret out that fact, so rather than complicate the discussion here in Airy disk I just noted that pixels smaller than 2 per radius (4 per diameter) would not give "significant" improvement. RikLittlefield (talk) 00:01, 25 August 2018 (UTC)
As you can see, that section of the article has, in fact, not received much attention from experts in the field. Dick Lyon is the fellow who wrote "Depth of Field Outside the Box", which I have previously referenced. He was also one of the co-founders of Foveon and holds about 70 US patents. I've talked with Dick several times, mostly in 2006 and 2009, regarding NA and DOF and such, and I'm quite confident he would agree with my interpretation of these things.

--Rik

lonepal
Posts: 322
Joined: Sat Jan 28, 2017 12:26 pm
Location: Turkey

Post by lonepal »

Thanks again for the information.

I will experiment with this myself using the same lens at the same aperture on both APS-C and 4/3 cameras, then let you know.
I will try to keep the working distance fixed too.
Maybe with a 5X Mitty or an Apo Rodagon 50.
Regards.
Omer

rjlittlefield
Site Admin
Posts: 23543
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

lonepal wrote: I will experiment with this myself using the same lens at the same aperture on both APS-C and 4/3 cameras, then let you know.
If "same aperture" means for example "f/8" on both cameras, then at same FOV for sure you'll get more DOF with the smaller sensor.

That's because the smaller sensor goes with a shorter lens, so "f/8" gives a smaller diameter hole in the lens. It's the smaller diameter hole that gives more DOF. Everything depends on the angular size of the aperture, as seen by the subject.
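This point can be sketched numerically. The numbers below (sensor widths, subject distance, field width) are assumed for illustration; only the f/8 comes from the discussion. With the framing fixed, the shorter lens needed by the smaller sensor has a smaller entrance pupil at the same f-number:

```python
# Illustrative sketch (assumed numbers): same subject distance, same framing,
# f/8 on both cameras. The smaller sensor needs a shorter focal length, so at
# the same f-number its entrance pupil (the "hole in the lens") is smaller.
f_number = 8.0
sensor_widths_mm = {"APS-C": 23.5, "4/3": 17.3}
fov_width_mm = 100.0     # assumed subject field width
distance_mm = 500.0      # assumed subject distance (thin-lens model)

pupils_mm = {}
for name, w in sensor_widths_mm.items():
    m = w / fov_width_mm                   # magnification for this framing
    focal_mm = distance_mm * m / (1 + m)   # thin lens: f = d*m/(1+m)
    pupils_mm[name] = focal_mm / f_number  # entrance pupil diameter at f/8
    print(f"{name}: f = {focal_mm:.1f} mm, pupil = {pupils_mm[name]:.2f} mm")
```

The smaller pupil subtends a smaller angle at the subject, which is exactly the "more DOF" effect described above.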

--Rik

Justwalking
Posts: 137
Joined: Sun Jun 10, 2018 3:54 pm
Location: Russia

Post by Justwalking »

lonepal wrote:Thanks again for the information.

I will experiment with this myself using the same lens at the same aperture on both APS-C and 4/3 cameras, then let you know.
I will try to keep the working distance fixed too.
Maybe with a 5X Mitty or an Apo Rodagon 50.
For a given magnification, DOF is independent of focal length. In other words, for the same subject magnification, at the same f-number, all focal lengths used on a given image format give approximately the same DOF.

Better to use not such a high magnification. Take the Rodagon and set it up at about 2X.
Take two thin coins and stack them one on top of the other. Get a very sharp view of the bottom coin and see how blurred the upper one is.
Set the same FoV for both sensors (which means you need less magnification for 4/3) and compare.

Image
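The suggested coin experiment can be previewed with the standard close-up formula DOF = 2*N*c*(m+1)/m^2. The sensor widths and CoC values below are assumed illustrative numbers (CoC scaled with sensor size so both are judged at the same print size); the roughly 2X magnification follows the suggestion above.

```python
# Sketch of the coin experiment (assumed numbers, not measured results), using
# the standard close-up formula DOF = 2*N*c*(m+1)/m^2, with the circle of
# confusion c scaled to the sensor so both are judged at the same print size.
def dof_mm(f_number: float, coc_mm: float, m: float) -> float:
    return 2 * f_number * coc_mm * (m + 1) / m ** 2

fov_mm = 12.0   # assumed subject field width, about 2X on APS-C
dofs = {}
for name, width_mm, coc_mm in [("APS-C", 23.5, 0.019), ("4/3", 17.3, 0.014)]:
    m = width_mm / fov_mm   # same FOV means lower magnification on 4/3
    dofs[name] = dof_mm(8.0, coc_mm, m)
    print(f"{name}: m = {m:.2f}, DOF at f/8 = {dofs[name]:.3f} mm")
```

At the same f-number and the same FOV, the 4/3 combination comes out with somewhat more DOF, consistent with lonepal's expectation earlier in the thread.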

mjkzz
Posts: 1681
Joined: Wed Jul 01, 2015 3:38 pm
Location: California/Shenzhen
Contact:

Post by mjkzz »

rjlittlefield wrote:
mjkzz wrote:I think Wiki was edited, now it reads like this
I don't know anything about the cambridgeincolour post or the Physics related website.

[remainder of Rik's post, including the quoted Wikipedia Talk page discussion, snipped; quoted in full above]
Oh wow! I was wondering what happened to Wiki :D

Here are links I mentioned:

https://www.physicsforums.com/threads/p ... re.516322/

https://www.cambridgeincolour.com/forum ... 1619-2.htm

They were left unanswered.

Justwalking
Posts: 137
Joined: Sun Jun 10, 2018 3:54 pm
Location: Russia

Post by Justwalking »

About correct sampling of Diffraction Limited Images

http://wiki.astro.cornell.edu/twiki/pub ... 111212.pdf

rjlittlefield
Site Admin
Posts: 23543
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

Justwalking wrote:About correct sampling of Diffraction Limited Images

http://wiki.astro.cornell.edu/twiki/pub ... 111212.pdf
A good paper, though easily misunderstood in places.

The key point is summarized in its last sentence (emphasis added here): "should sample ... with at least 3 samples per beam, where one beam is defined to be the radius of the airy disk".

The paper also demonstrates that 2 samples per radius is not enough. It does not evaluate any other sampling densities.

Both of the stated results are completely consistent with what I have now written several times, that the minimum is 4.88 pixels per diameter = 2.44 pixels per radius, to just barely meet the Nyquist criterion at the lens's cutoff frequency.
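The 4.88 figure follows from two textbook facts: the Airy first-zero diameter is 2.44*lambda*N, and the incoherent diffraction cutoff is 1/(lambda*N). A short check (the wavelength and f-number are assumed for illustration; the ratio is independent of both):

```python
# Verifying the 4.88-pixels-per-Airy-diameter figure. For a diffraction-limited
# lens at effective f-number N, the incoherent MTF cutoff is 1/(lambda*N)
# cycles/mm; Nyquist then requires a pixel pitch of at most lambda*N/2.
wavelength_mm = 0.00055   # assumed 550 nm green light
N = 8.0                   # assumed effective f-number

airy_diameter_mm = 2.44 * wavelength_mm * N      # first-zero Airy diameter
cutoff_per_mm = 1.0 / (wavelength_mm * N)        # diffraction cutoff frequency
nyquist_pitch_mm = 1.0 / (2.0 * cutoff_per_mm)   # = lambda*N/2

# The ratio is 2.44 / 0.5 = 4.88 regardless of wavelength or f-number.
print(f"pixels per Airy disk diameter: {airy_diameter_mm / nyquist_pitch_mm:.2f}")
```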

Thank you for providing the reference.

--Rik

rjlittlefield
Site Admin
Posts: 23543
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

Got it. Those threads are from 2011, long dormant. I am not inclined to revive them. The thread at cambridgeincolour is particularly confusing because once again it does not clearly distinguish between choosing an aperture to not degrade an image captured by a sensor, and choosing a sensor to capture all the image passed by an aperture. They are very different questions, with correspondingly different answers.

--Rik

mjkzz
Posts: 1681
Joined: Wed Jul 01, 2015 3:38 pm
Location: California/Shenzhen
Contact:

Post by mjkzz »

Yes, right, it is not a good idea to revive an old dormant thread on CiC, plus most of the replies there seem to be concerned with practical use instead of theoretical discussion.

mjkzz
Posts: 1681
Joined: Wed Jul 01, 2015 3:38 pm
Location: California/Shenzhen
Contact:

Post by mjkzz »

Justwalking wrote:About correct sampling of Diffraction Limited Images

http://wiki.astro.cornell.edu/twiki/pub ... 111212.pdf
Thanks Justwalking. What the author did in the paper is exactly what I meant to do using MATLAB: apply the convolution, take the Fourier transform, and see what happens. It is very hard to visualize (at least for me).

But the same idea, trying to set it up in MATLAB, made me realize that a lens is just like a low-pass filter from its mathematical formulation. This shows how little I know about optics, because I bet everybody here knows it, and I bet I have read it all along but just did not make the connection (stubborn).
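The experiment described above can be sketched in Python with NumPy (a one-dimensional toy, with sinc^2 standing in for the incoherent line-spread function of an ideal slit aperture; the lambda*N value is an assumption, 550 nm at f/8). Its transform is a triangle that reaches zero at 1/(lambda*N), which is the low-pass behavior in question:

```python
import numpy as np

# 1-D toy of the MATLAB experiment: build a diffraction-style line-spread
# function (sinc^2) and look at its transform. The triangle-shaped MTF that
# hits zero at 1/(lambda*N) is what makes the lens a low-pass filter.
lam_N = 0.0044            # lambda*N in mm (assumed: 550 nm at f/8)
dx = 0.0001               # sample spacing in mm, fine enough to avoid aliasing
x = (np.arange(8192) - 4096) * dx
lsf = np.sinc(x / lam_N) ** 2          # np.sinc is the normalized sinc
lsf /= lsf.sum()                       # unit gain at DC

mtf = np.abs(np.fft.rfft(np.fft.ifftshift(lsf)))
freqs = np.fft.rfftfreq(len(x), d=dx)  # cycles per mm
cutoff = 1.0 / lam_N                   # predicted cutoff, ~227 cycles/mm
print(mtf[freqs > cutoff].max())       # essentially zero past the cutoff
```

Since the MTF is essentially zero past the cutoff, no detail finer than that ever reaches the sensor, and the Nyquist rule applies to the band-limited image.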

Justwalking
Posts: 137
Joined: Sun Jun 10, 2018 3:54 pm
Location: Russia

Post by Justwalking »

It needs to be said that in astronomy there is a specific reason to take 3 pixels. There one usually works with bright point sources on a dark field, and artifacts must be avoided so that the flux from stars of very different brightness can be counted very accurately.
Although these aliasing artifacts are at a low level, they would be significant if they came from a bright point source and impinged on a nearby dim source.
The question is: does the continuous surface of a macro subject need such a correct image of each point, when there are no bright sources contrasting with a dark field and no such brightness differences as in astronomy?
