Sensor size: how does it matter?


rjlittlefield
Site Admin
Posts: 23561
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA

Sensor size: how does it matter?

Post by rjlittlefield »

A recent posting in the macro forum posed the following suggestion:
cactuspic wrote:I haven't thought through all the benefits and disadvantages of using the small-sensored G9 versus the full-size sensor on the 1Ds MarkII, but I think that would be an interesting post for the technical section.
I agree. There's been a lot written about sensor size, both as postings in our forum and as some long web pages appearing on other sites. But the information is scattered around, most of it's hard to read and interpret correctly, and there's a fair smattering of outright errors.

Here is my version of the short story, for your consideration and review.

Sensor size does not affect image quality if you're talking about equivalent images. That means same illumination, same camera position, same field size, same exposure time, and same DOF. That is, the pictures look the same even if the subject is moving.

The larger sensor makes it possible to collect more light, giving less noise, but only by changing to a non-equivalent image: using a wider aperture, exposing longer, or using brighter illumination. The larger sensor also requires longer lenses (to get the same field size at the same camera position), which allow the use of a larger diameter aperture to reduce DOF and increase sharpness.

On the other hand, the smaller sensor naturally comes with a shorter lens, which makes it easy to get in closer; that gives more of a "wide-angle macro" appearance and also works better with auto-focus.

Which one works better depends on what you're doing. If you have time to set up and can afford either a longer exposure or brighter light, then you can get a quieter picture at the same DOF from the larger sensor. If the smaller sensor gives you too much DOF even when it's wide open, then you need the larger sensor with its longer/wider lenses. If you need to work fast and easy, and the DOF and noise of the smaller sensor are acceptable, then the smaller camera is better.

There are other issues such as ability to change lenses and ability to shoot through eyepieces.

Perhaps the most confusing aspect is that to get equivalent images, different sensor sizes require different settings for ISO and f-number. The smaller camera's f/8 is roughly equivalent to the larger camera's f/22, while the smaller camera's ISO 64 roughly matches the larger camera's ISO 400. This confusion often leads to false hopes that the larger camera will give more DOF because it provides bigger f-numbers. It won't.
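
To make the arithmetic concrete, here is a quick Python sketch of the conversion. The ~2.7x crop factor is just an assumption picked to match the f/8-vs-f/22 example; substitute the actual size ratio of the two sensors you are comparing:

    # Equivalent settings across sensor sizes (illustrative sketch only).
    # crop = ratio of the larger sensor's width to the smaller sensor's width.

    def equivalent_settings(f_number_small, iso_small, crop):
        f_number_large = f_number_small * crop   # same entrance pupil diameter -> same DOF
        iso_large = iso_small * crop ** 2        # same light spread over crop^2 times the area
        return f_number_large, iso_large

    print(equivalent_settings(8, 64, 2.7))       # -> (21.6, 466.6): roughly f/22 and ISO 400-500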

Does this help?

--Rik

[Edit: to replace the incorrect phrase "equivalent exposures" with "equivalent images", as discussed below.]
Last edited by rjlittlefield on Fri Jan 18, 2008 8:39 am, edited 1 time in total.

Epidic
Posts: 137
Joined: Fri Aug 04, 2006 10:06 pm
Location: Maine

Post by Epidic »

Sensor size does not affect image quality if you're talking about equivalent exposures. That means same illumination, same camera position, same field size, same exposure time, and same DOF. That is, the pictures look the same even if the subject is moving.
Actually, with the same exposure time and same DOF, they will have different exposures as the relative aperture will be different:

Exposure = time X intensity

I would say your characterization of what photography and exposure are reflects a personal bias - I do not mean any offence, I just think you are holding to some conclusions about what is necessary for photographic quality that don't really apply universally. Your position that DOF is an absolute equivalent is certainly not true. And there are ways of controlling the plane of focus that are not limited by aperture.

Sorry, Rik.
Will

rjlittlefield
Site Admin
Posts: 23561
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA

Post by rjlittlefield »

Will,

I'm sorry too, but everything I've said is just a summary of what's discussed in the long writeup (30+ pages) at http://www.josephjamesphotography.com/e ... /index.htm.

I don't plan to take the time and space here to repeat the analysis that's documented there.

I presume your point about controlling the plane of focus refers to the use of tilt/shift lenses.

If so, then I agree completely. It's not really a matter of sensor size, but certainly that's another advantage of cameras that support interchangeable lenses.

--Rik

Epidic
Posts: 137
Joined: Fri Aug 04, 2006 10:06 pm
Location: Maine

Post by Epidic »

Well Rik, I am going through this essay and all I can say is it is awful. It is starting myths that do not need to be started - f-number is the intensity of light, but entrance pupil is the amount of light?? No clear understanding of display size or viewing distance either. It seems this "system" is just a matter of holding enough variables constant to make a conclusion. Something like saying exposure depends only on exposure time because intensity should always be the same.

I will read more later, but I am not hopeful that this is anything useful.
Last edited by Epidic on Tue Jan 15, 2008 11:42 am, edited 1 time in total.
Will

Epidic
Posts: 137
Joined: Fri Aug 04, 2006 10:06 pm
Location: Maine

Post by Epidic »

I have been reading more. I think the author's hypothesis is flawed (his knowledge is certainly incomplete) and has no practical value. What is wrong with teaching the old way, by telling folks what effect each parameter has? Linking all of this into a "theory of everything" makes no sense. It does not even have value when comparing systems, as it is so limiting that the systems may not be operating in a practical or realistic way. While this is interesting in that it shows where some parameters intersect, it is rather an academic exercise.
Will

Epidic
Posts: 137
Joined: Fri Aug 04, 2006 10:06 pm
Location: Maine

Post by Epidic »

This "test" requires equal entrance pupil and equal shutter speed. This makes for valid comparisons between systems. Let us take the author's assumption.

Entrance pupil determines the amount of light used. The area of the pupil is the collection area, and so it determines how much energy enters the system. Since photons are packets of energy, the number of photons is set by the collection area. Seems reasonable if light were only a particle, but it also has wave properties.

Exposure is not equal to energy X time, it is equal to INTENSITY X time. Magnification (focal length) matters. If you have a 50mm entrance pupil, you may collect the same amount of energy with a 100mm focal length as with a 200mm, but the image intensity is not the same: the Airy disk contains the same energy at both focal lengths, but its size changes, and with it the amplitude, which is proportional to intensity. Image intensity is fundamental in imaging systems. After all, photography is light dependent.
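
To put rough numbers on this, here is a small sketch (my own illustration, assuming a fixed 50mm pupil and green light):

    # Same pupil, same collected energy, different focal lengths: the Airy
    # disk grows with the f-number, so the peak image intensity drops.
    wavelength_mm = 550e-6                       # green light

    pupil_mm = 50
    for focal_mm in (100, 200):
        n = focal_mm / pupil_mm                  # f-number
        airy_um = 2.44 * wavelength_mm * n * 1000
        intensity = (100.0 / focal_mm) ** 2      # fixed energy over a disk area ~ f^2
        print("f=%d mm: f/%.0f, Airy %.1f um, relative peak intensity %.2f"
              % (focal_mm, n, airy_um, intensity))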

So the author, by maintaining a constant entrance pupil and shutter speed, is creating a difference in image intensity at the image plane - the larger the sensor, the less light per unit area. This is not how photography works. It naturally gives an advantage to smaller sensors and a disadvantage to larger ones, which contradicts the claim that this approach makes a level playing field for valid comparisons. The author's move of varying the response of the sensor to compensate for the lower exposure is simply sleight of hand. Why is a comparison of two systems with two different exposures and two different sensor responses any more valid than comparing two systems with equal exposures and equal responses?
Will

rjlittlefield
Site Admin
Posts: 23561
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA

Post by rjlittlefield »

Sorry for the delay in responding to Will's concerns. My life has been a bit busy lately.

First, I would like to apologize for using an incorrect expression.

In my original posting, I wrote "equivalent exposures" when I should have written "equivalent images". That error led Will off in the wrong direction, and could make it difficult for anyone else to relate what I said to James's paper. I have edited the original posting to correct the error for the benefit of later readers.

Now, to revisit the concept, here is what James writes:
Equivalent images are images from two different cameras that look as similar as they possibly can. The definition of "equivalent images" is as follows:

1) Same perspective (subject-camera distance)
2) Same FOV (field of view or framing)
3) Same DOF (depth of field)
4) Same shutter speed
5) Same output size (same number of pixels)
Why is this useful? Why does he make this definition? Well, consider an example. Suppose we take a picture of some interesting subject, say a flower being visited by a butterfly.

The perspective of the picture determines the relationship of foreground to background. It is how a viewer tells the difference between a shot taken from 1 foot away using a wideangle lens, and a shot taken from 10 feet away using a telephoto.

The FOV determines whether we see just the flower and butterfly, or a larger piece of garden.

The DOF determines whether only the butterfly's head will be sharp, or its wings and the flower also.

The shutter speed determines how blurred the butterfly's wings are, as a result of their motion.

The number of pixels determines sharpness, assuming of course that the lens quality is good enough to use all the pixels.

Changing the perspective, the FOV, the DOF, or the shutter speed will change the images in ways that a viewer can immediately recognize. The results are simply different pictures.

Sometimes it makes sense to compare such images, particularly to show what one camera can do that another cannot. An example is the wonderfully shallow DOF and widely blurred backgrounds that can be produced by large lenses, normally found only on large format cameras. Similarly, if you're doing distant landscapes or flat copy, then again, DOF is not an issue and the photographer can freely choose to use larger formats and larger lenses to get arbitrarily high resolution. God bless landscapes and flat stuff.

However, in closeup & macro work, photographers often struggle to get enough DOF. Since this is photomacrography.net, I think it's pretty reasonable to allow ourselves, as James does, to consider DOF as being an important issue. Those who disagree need only stop reading -- I won't be offended.

OK, having established that we care about perspective, FOV, DOF, and shutter speed, let us now ask, "What aspects of the camera determine these things?"

Perspective -- what lines up with what -- is actually determined by the entrance pupil location. (Recall that the entrance pupil is just where the aperture appears to be, looking through whatever lens elements are in front of it.) James's characterization as "subject-camera distance" is a pretty good approximation. But let's be precise and say that it depends on entrance pupil location.

FOV -- the field size -- is determined by the relationship between sensor size and lens focal length. The required relationship is a bit complicated, particularly in the macro focusing range. Let's bypass the issue by simply presuming that either we have zoom lenses, or whatever two cameras we're comparing already come equipped with the proper ratios.

DOF -- depth of field -- is determined by the size of the entrance pupil, more specifically by its angular diameter as seen by the subject. With most cameras, that diameter is adjusted by changing the f-number, but thinking in terms of f-number is a bit of a trap. That's because the f-number bundles together the entrance pupil diameter and the lens focal length. We care about the entrance pupil diameter because it determines DOF, but (for purposes of analysis here) we really do not care about lens focal length once it produces the correct FOV.
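
As a first-order sketch of that idea (thin-lens, small-angle assumptions, with purely illustrative numbers), the DOF can be computed from the "outside the box" quantities alone:

    # Total DOF from the entrance pupil's angular size, with no reference
    # to focal length or f-number (first-order approximation).
    def total_dof(blur_at_subject_mm, subject_dist_mm, pupil_diam_mm):
        angular_pupil = pupil_diam_mm / subject_dist_mm   # radians (small-angle)
        return 2 * blur_at_subject_mm / angular_pupil

    # 0.05 mm acceptable blur at the subject, subject 200 mm away,
    # 10 mm entrance pupil -> 2.0 mm total DOF.
    print(total_dof(0.05, 200, 10))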

Shutter speed -- how long the exposure lasts -- needs no further explanation.

Notice that sensor speed (ISO rating) does not appear in this list. That's because sensor speed is a derived requirement. Once you know the illumination level, the entrance pupil size, the lens focal length, and the shutter speed, then you can determine the sensor speed needed to produce a proper exposure.

It turns out, of course, that we can say something about sensor speed quite easily. Assume that the illumination level is fixed, along with the entrance pupil size and the shutter speed. Under those conditions, the amount of light that goes through the entrance pupil during the exposure is constant -- completely independent of the sensor size. But that constant amount of light gets spread over an area that depends on sensor size. The relationship is simple -- to produce a proper exposure, the sensor that is X times larger (on axis) must have an ISO rating that is X*X larger. For example, a 21 mm sensor must be rated 9X faster than a 7 mm sensor.

People who are steeped in film technology are likely to balk at this concept, and rightly so. It's difficult at best to buy films that meet this requirement, and maybe they cannot even be made. I don't know, but in any case that difficulty is not relevant here, since we're specifically talking about digital cameras in this thread. If film doesn't fit the model, then the model doesn't apply to film. Simple as that, no problem.

Digital sensors, however, have no trouble meeting the requirements about sensor speed. That's because digital sensors are essentially photon counters. When exposed to the same total amount of light, two digital sensors built using the same technology will capture the same number of photons. Assuming that they also have the same number of pixels (James's last criterion), then the number of photons captured for corresponding pixels will also be the same. That implies that the statistical uncertainty of the photon counts will also be the same.

It turns out that statistical uncertainty in the photon counts is the dominant cause of pixel noise in modern digital cameras, so all of this analysis ends up with a remarkably simple result: for equivalent images, all sizes of digital sensors that use the same technology and the same pixel counts, also have the same noise level.
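
Here is a quick Monte-Carlo sketch of that result -- my own illustration, not from James's paper, and it assumes shot-noise-limited pixels:

    import numpy as np

    rng = np.random.default_rng(0)

    def pixel_snr(mean_photons, trials=100_000):
        counts = rng.poisson(mean_photons, trials)   # shot noise is Poisson
        return counts.mean() / counts.std()

    # Equivalent images: both sensors collect the same photons per pixel.
    print(pixel_snr(1000))       # ~31.6, for the small sensor AND the large one
    # Non-equivalent: let a 3x-larger sensor capture 9x the light.
    print(pixel_snr(9 * 1000))   # ~94.9, the larger sensor's real advantage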

Larger sensors definitely have the potential to produce less noisy images, but that potential is achieved only when they are allowed to capture more light, either by increasing the illumination level, widening the entrance pupil, increasing the exposure time, or any combination of those.

I will not take more space right now to go through the analysis, but it turns out that for equivalent images, larger and smaller sensors also suffer equally from diffraction effects. The reason is that once the FOV and entrance pupil are fixed, the f-number varies directly with the sensor size. With equivalent images, a sensor that is X times larger will be running with an f-number that is also X times larger. Because diffraction blur (size of the Airy disk) depends directly on f-number, the Airy disk will also be X times larger, retaining the same proportion with respect to the image size. Everything scales in proportion.
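
A quick numeric check of that scaling, assuming green light and reusing the 7 mm / 21 mm sensor pair from the ISO example above:

    wavelength_mm = 550e-6                                   # green light
    for sensor_mm, f_number in ((7.0, 8.0), (21.0, 24.0)):   # sensors and f-numbers 3x apart
        airy_mm = 2.44 * wavelength_mm * f_number            # Airy disk diameter at the sensor
        print("%g mm sensor at f/%g: Airy disk is %.5f of the frame"
              % (sensor_mm, f_number, airy_mm / sensor_mm))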

Hopefully this discussion has filled in some of the gaps, inconsistencies, and misconceptions raised by my first posting. Again, I apologize for using an incorrect term, which has now been corrected. But I believe that with that correction, all the other points stand as stated.

--Rik

Supplemental reading:

Roger N. Clark's article at http://www.clarkvision.com/photoinfo/dof_myth/ performs essentially the same analysis in more detail. Quoting briefly from the article:
Roger N. Clark wrote:...if one keeps aperture of the larger camera the same as that in the smaller camera, the two cameras record the same image with the same signal-to-noise ratio and the same depth of field with the same exposure time.
The treatment of DOF versus entrance pupil size rather than f-number is explained further in Dick Lyon's article, "DOF Outside the Box", currently available at http://www.dicklyon.com/tech/Photograph ... d-Lyon.pdf. My life has been greatly simplified by this paper.
Dick Lyon wrote:It is not necessary to know the focal length and f-number of a camera lens to compute a depth of field, and indeed formulas that use the "outside the box" parameters, field of view and entrance pupil diameter, may be easier to understand and reason with.

DaveW
Posts: 1702
Joined: Fri Aug 04, 2006 4:29 am
Location: Nottingham, UK

Post by DaveW »

As sensor size and size of pixel sites are being discussed, you might find this table handy to find your camera:-

http://www.digitaldingus.com/reference/ ... rsizes.php

DaveW

rjlittlefield
Site Admin
Posts: 23561
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA

Post by rjlittlefield »

Thanks, Dave. Good ref.

The site I usually use to make sense of sensors is http://www.dpreview.com/news/0210/02100 ... rsizes.asp .

It has a nice table that translates archaic notations like 1/2" (meaning outside diameter of a vidicon tube!) into 6.40 mm x 4.80 mm (meaning actual sensor size).

--Rik

Epidic
Posts: 137
Joined: Fri Aug 04, 2006 10:06 pm
Location: Maine

Post by Epidic »

Rik:

Before I post my reply, I wonder if you can clarify something for me:

1) Same perspective (subject-camera distance)
2) Same FOV (field of view or framing)
3) Same DOF (depth of field)
4) Same shutter speed
5) Same output size (same number of pixels)
Is it actually physically possible to keep the same perspective (camera-to-subject distance), FOV, DOF, shutter speed, and the same number of photons reaching the pixel sites? At infinity, it is possible. But if the magnification on a 35mm sensor is 1x, what is happening with a smaller sensor? Is it possible to make equivalent images? To simplify matters, simply think of two 6mp sensors, one being twice the linear dimensions of the other, for example 1" and 2".
Will

rjlittlefield
Site Admin
Posts: 23561
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA

Post by rjlittlefield »

Epidic wrote:Is it actually physically possible to keep the same perspective (camera-to-subject distance), FOV, DOF, shutter speed, and the same number of photons reaching the pixel sites? At infinity, it is possible. But if the magnification on a 35mm sensor is 1x, what is happening with a smaller sensor? Is it possible to make equivalent images? To simplify matters, simply think of two 6mp sensors, one being twice the linear dimensions of the other, for example 1" and 2".
No problem.

If the magnification on the 2" sensor happens to be 1X, then the magnification on the 1" sensor will just be 0.5X. There's always a 2:1 ratio of magnifications, corresponding to the 2:1 ratio of sensor sizes.

The ratio of lens focal lengths will not be 2:1, however, except at infinity focus. Continuing the example, say you're using a 100 mm lens on the 2" sensor. Then (assuming thin lens model), the lens-to-subject distance for 1X will be 200 mm. The lens for the smaller sensor must also be 200 mm from subject, to preserve the same perspective. But to get 0.5X magnification, the smaller sensor must be only 100 mm away from the lens. Hence the lens for the smaller sensor will have focal length 1/(1/200+1/100) = 66.67 mm.

So, the ratio of lens focal lengths (and thus the ratio of marked f-numbers) will be only 3:2 in this case.

But when you're done taking into account lens extension for focusing, everything works out to be consistent. At the same aperture diameter, the ratio of effective f-numbers will be 2:1, same as the ratio of sensor sizes.

The same diameter aperture delivers the same number of photons to both sensors, just spread out over different size areas.

Again, this is a case where "outside the box" greatly simplifies the analysis. Once you've established the illumination, positioned the lens, set the diameter of the aperture, established the field of view, and determined how long to leave the shutter open, then you've established how many photons are going to go through the aperture and hit the sensor. Where else could they go?

Working out exactly what focal length and f-number to use, based on focusing distance and sensor size, now that's definitely a harder math problem. But it's always doable, down to the point that the smaller sensor needs a focal length that's too short to go with the aperture you picked.
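
For the record, here is the thin-lens arithmetic from the example above, written as a small sketch so the same question can be checked for any pair of sensor sizes:

    # Thin-lens model (sketch only): given the larger sensor's lens and
    # magnification, find the smaller sensor's focal length and compare
    # effective f-numbers at the same aperture diameter.
    def smaller_sensor_setup(focal_large_mm, mag_large, size_ratio, pupil_mm):
        subject_dist = focal_large_mm * (1 + 1 / mag_large)    # lens-to-subject distance
        mag_small = mag_large / size_ratio                     # same FOV on the smaller sensor
        image_dist = subject_dist * mag_small                  # thin-lens magnification
        focal_small = 1 / (1 / subject_dist + 1 / image_dist)  # thin-lens equation
        eff_large = (focal_large_mm / pupil_mm) * (1 + mag_large)
        eff_small = (focal_small / pupil_mm) * (1 + mag_small)
        return focal_small, eff_large, eff_small

    # 100 mm lens at 1X on the 2" sensor, 25 mm aperture, 2:1 sensor ratio:
    print(smaller_sensor_setup(100, 1.0, 2.0, 25))
    # -> (66.67, 8.0, 4.0): a 66.67 mm lens, effective f-numbers in 2:1 ratio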

--Rik

elf
Posts: 1416
Joined: Sun Nov 18, 2007 12:10 pm

Post by elf »

Again, this is a case where "outside the box" greatly simplifies the analysis. Once you've established the illumination, positioned the lens, set the diameter of the aperture, established the field of view, and determined how long to leave the shutter open, then you've established how many photons are going to go through the aperture and hit the sensor. Where else could they go?
Don't you also have to calculate how much of the image circle the sensor covers? It would seem a 4/3 sensor would capture more photons than a 3/2 sensor.

rjlittlefield
Site Admin
Posts: 23561
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA

Post by rjlittlefield »

elf wrote:Don't you also have to calculate how much of the image circle the sensor covers? It would seem a 4/3 sensor would capture more photons than a 3/2 sensor.
No. The part about "same FOV" addresses this.

--Rik

Epidic
Posts: 137
Joined: Fri Aug 04, 2006 10:06 pm
Location: Maine

Post by Epidic »

Rik, I am not sure I have time to do this justice tonight, but I will make a start. I have been busy - and still am. But I want to put down a few thoughts I have had about this.

All imaging systems are photon counters. Given equal response, photo sites will give the same signal and noise based on the number of photons striking them. Film is the same, except there the measure is density above base-plus-fog, and quantum sensitometry deals with exposure based on photon count per photo site.

The concept of "equivalent images" is just fixing the object space qualities. It also basing it on the same photon count and sensor response. As you have shown, with two fixed focal lengths, equivalent images can only be made at one magnification. As a system for an individual to compare two camera systems at one specific shooting condition (if the optics exist), it can be useful. As an indicator of how a system works, it is too inflexible and cumbersome to have much value. (Some claims about vignetting and viewing conditions are simply false, but that seem to be outside your point.)

Do sensors with the same number of pixels have the same noise? Under the limits of equivalent images they will, given the strict criteria. But then you are handicapping the larger sensor, with its larger wells. Given equal exposure (intensity x time), the larger sensor has much better s/n. So it would be true to say that noise both is and isn't the same as sensor size changes.
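
To put numbers on that, here is a sketch assuming shot-noise-limited pixels (the exposure figure is arbitrary):

    import math

    photons_per_mm2 = 5.0e7                     # fixed by the (equal) exposure
    for pitch_um in (2.0, 6.0):                 # same pixel count, sensors 3x apart
        n = photons_per_mm2 * (pitch_um / 1000) ** 2   # photons per pixel ~ pixel area
        print("%g um pixel: %.0f photons, s/n ~ %.0f" % (pitch_um, n, math.sqrt(n)))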

And here is the problem with this system - I can only take pictures as the system dictates, and not how I actually may use the equipment. The goal of photography is not to standardize techniques, but to take what the photographer thinks is the best picture. Also, you are assuming equal object and illumination conditions. But why would I want to do that? Why would I not employ techniques that use the strengths of my system? I use many formats, and I do not fix DOF or shutter speed, so the assumption that changing format results in the same approach to shooting a subject is simply not true. (And since stacking is such a popular approach, you surprise me: I would have thought you would optimize your aperture for resolution and take a different number of frames in a stack, rather than fixing a particular DOF and shutter speed to standardize a process. Surely you would want to optimize a particular system under these conditions.)

The usual way to test a system is based on equal image space, and for good reason - you cannot assume the shooting conditions nor the technique of the photographer. This is a far better way to gauge the performance of the system, as changes to any of the shooting parameters are easy to predict. It is also a better way to handle sensor exposure, since a sensor "speed" (and hence its response) can be known, whereas sensor performance based on photon count cannot. The concept of equivalent images was not needed with film technology; there is nothing special about digital technology that requires it either, as nothing has really changed.

Note: photo site size is difficult to determine, as you cannot simply take sensor dimensions and divide by total pixels - that is pixel pitch. Nor can you assume the gap between pixels stays the same as sensor size changes. Equivalent images looks precise, but neither lens transmittance nor shutter efficiency is taken into account. You simply cannot know how many photons are hitting the photo site with a given exposure time. And even if you can determine that the physical technology is the same, the signal and noise processing cannot be known to be the same.

My wife and dog are waiting. Gotta go.
Will

rjlittlefield
Site Admin
Posts: 23561
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA

Post by rjlittlefield »

Will,

I’m having a little trouble understanding the thrust of your posting.

Someplace in the middle of this conversation, it seemed you were getting ready to challenge whether "equivalent images" were even possible.

Now it seems that you’ve accepted the mathematical correctness of the model, and you’re down to haggling over whether it’s useful for anything.

What I don't understand is that you seem to be under the impression that "equivalent images" analysis says you can't change your techniques to take advantage of the larger sensor. Quite the contrary, its major message is that you have to!

I personally find this model and this message to be quite helpful -- it tells me when I’m likely to see some advantage from a larger sensor, and when I’m not, and how I might change the situation to exploit the larger sensor’s potential.

There’s no reason that you or anybody else has to share my valuation -- different strokes for different folks, and all that.

But I do like to understand where the differences come from, and in this case I'm puzzled.

I’ve carefully read this last post of yours several times. As far as substance is concerned, what I see looks like just fancier language to raise exactly the same points that I did in my very first posting in this topic.

Perhaps I’ve overlooked something.

Can you go back and re-read my as-corrected first posting, please?

I’d really like to know if there’s something I said in there that’s wrong, or that failed to cover in a general sense the same points that you’re making in your last post.

If there’s not, then I think we must now be in furious agreement about everything except perceived usefulness of the model.

--Rik
