First post and hi-res lens on eBay

rjlittlefield
Site Admin
Posts: 23561
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

Small horizontal shifts wouldn't do this. If your source images march horizontally across the frame, then any sensor dust or defect could be turned into a trail like this. But if they just tweak a little bit left and right but stay basically centered and roughly the same size, the trail wouldn't look like this. Oh, wait a second. If you have a single dark pixel, then tweaking left and right will produce a single dash. If you have several equally spaced dark pixels in a single row of your images, then the collection could be turned into a pattern that looks like this. I think that's what I'd go looking for.

One way to diagnose a problem like this is to watch the output as it accumulates while PMax is running. If the dashes start short and get longer all in synchrony, that would again suggest dark pixels as described above.

One other possible clue... I notice in the first image the long row of dashes is horizontal and the short row at top tips slightly downward to the right, while in the second image the row of dashes tips upward to the right, apparently tracking the envelope of the streaks at the top of frame. Have you rotated the images using any tool outside ZS?

--Rik

elf
Posts: 1416
Joined: Sun Nov 18, 2007 12:10 pm

Post by elf »

rjlittlefield wrote:Small horizontal shifts wouldn't do this. [...] Have you rotated the images using any tool outside ZS?

--Rik
I've posted a reply here as this issue isn't caused by the JML lens: http://www.photomacrography.net/forum/v ... 8201#58201

Bob^3
Posts: 287
Joined: Sun Jan 17, 2010 1:12 pm
Location: Orange County, California

Post by Bob^3 »

Hi Rik, I’d like to revisit some of the questions from my last post, just one last time (I promise), for clarification. Hopefully, others will find this discussion as useful as I have.
rjlittlefield wrote:
Bob^3 wrote:
Regarding the difference in NA between the two lenses used for comparison to the JML lens, f/3.5 vs. f/2, how significant is this in terms of diffraction? At 7.5x mag on the D700, EF = (1+7.5)*3.5 = f/29.75, vs. EF = f/16.7 on the Canon at 7.34x. Does it fully explain the perceived difference in apparent sharpness on the Canon 4.7µm pitch sensor?
No, it doesn't.

Before showing the images, I should explain some theory. The calculation is not as simple as (m+1)*f. There is an additional issue called "pupil factor", which can cause the working aperture to be larger or smaller than predicted by (m+1)*f. For the OM 20/2.0, the pupil factor is about 0.75 (rear pupil smaller). At 7.34X, this means that the working aperture is actually about f/21.5, not the f/16.68 that would be predicted by ignoring it. On a quick measurement (calipers by eye at arm's length), the JML lens seems to have a pupil factor of around 1. Assuming that's correct, then on my system the comparison with both lenses wide open would be about f/29.25 versus f/21.5, both effective.
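
The working-aperture arithmetic described here is easy to sketch in code. With pupil factor P, the effective aperture is f_eff = f * (1 + m/P), which reduces to the familiar f * (m + 1) when P = 1. The pupil factors below (0.75 and 1) are the estimates quoted in this post:

```python
def effective_aperture(f_number, magnification, pupil_factor=1.0):
    """Working (effective) f-number, accounting for pupil factor P.

    f_eff = f * (1 + m/P); with P = 1 this reduces to f * (m + 1).
    """
    return f_number * (1.0 + magnification / pupil_factor)

# OM 20/2.0 at 7.34X with pupil factor ~0.75 (estimated by eye):
om = effective_aperture(2.0, 7.34, 0.75)    # ~f/21.6, not f/16.7
# JML 21/3.5 at 7.34X with pupil factor ~1:
jml = effective_aperture(3.5, 7.34, 1.0)    # ~f/29.2
```

Ignoring the pupil factor on the OM 20/2 understates the working aperture by roughly a full stop, which is why the naive (m+1)*f comparison is misleading here.
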
I am aware of the pupillary magnification factor and also ‘eyeballed’ the entrance/exit pupil sizes on the JML lens, which I also estimated at ‘1’. Since my Olympus 20mm f/3.5 appears to have a pupil factor close to ‘1’, I (mistakenly) assumed symmetry for the Oly 20/2, as well. BTW, if anyone knows the true pupil factor of the Oly 20mm f/3.5, I would appreciate the actual value---can’t seem to find it referenced anywhere.
Of course the OM can be stopped down. The setting necessary to achieve same working aperture turns out to be almost exactly f/2.8 on the OM 20. Making that change and comparing resolution produces this comparison.
--------------
To my eye, it seems clear that the OM 20 has lost some resolution at its f/2.8 setting versus wide open, but it doesn't get quite down to the JML.
Now I find that a very revealing test. It does suggest that the OM has an inherently somewhat better optical design than the JML lens.
Assuming a 20 mm lens and making the simplifying assumption that it acts thin, then the angle of view for the large sensor will be about 12.09 degrees, while the angle of view for the small sensor will be 11.28. So the two different sensors are not making dramatically different demands on the covering power of the lens. It's only a difference of about 1.07X, versus the 1.61X that you might think from just considering the sensor size. You have to take all the other scale factors into account also.
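
The angle-of-view numbers above can be reproduced with a short thin-lens sketch. The assumption (taken from the numbers later in this thread) is that both sensors image the same 4.8 mm subject field, so the smaller sensor runs at proportionally lower magnification:

```python
import math

def angle_of_view_deg(sensor_width, focal_length, subject_width):
    """Full angle of view (degrees) for a thin lens imaging a fixed
    subject field width onto a sensor of the given width."""
    m = sensor_width / subject_width              # required magnification
    d = focal_length * (1.0 + 1.0 / m)            # lens-to-subject distance
    return 2.0 * math.degrees(math.atan((subject_width / 2.0) / d))

full_frame = angle_of_view_deg(36.0, 20.0, 4.8)   # ~12.09 degrees
small      = angle_of_view_deg(22.3, 20.0, 4.8)   # ~11.28 degrees
# The ratio is only ~1.07X, versus the 1.61X ratio of the sensor widths.
```
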
Thank you for curing me of what I believe is a very common misconception. A quick sketch of ray traces for a thin lens verifies the validity of your statement. The effect is there, but is much lower than intuition might suggest, at least for the range of sensor sizes used in DSLRs. I presume the effect would be more pronounced when comparing DSLR sensors with medium/large format film?
Now that's a hard question. The scaled image that I showed earlier in this post suggests that the 20 mm f/2 might be overkill for a D700 at this magnification. So in that sense, it would not be justified.

On the other hand, since we're now talking price, I see that the D700 with 12 MP body is now being listed at B&H photo for $2470. I don't know what the Nikon equivalents are, but the Canon T1i with 15 MP is currently selling for $670 at B&H, and the 5D Mark II with 21 MP for $2500.

So it seems, just looking at the prices, that you have already decided to select equipment based on something other than resolution.

That decision -- to use a 12 MP 36x24 sensor, as opposed to a 15 MP 22x13 or a 21 MP 36x24 sensor -- is now affecting your choice of lenses that cost a fraction of the price of your camera body. Someone with different equipment or goals will optimize to a different point.
Yes, that was a very measured decision on my part, based on the inherent trade-offs of each type of camera and sensor format. I do a lot of other types of photography including low-light landscapes and close-ups (flying insects) where the D700’s low noise and high dynamic range at high ISO are often very useful. In fact, recently I have been photographing live arthropods (insects and spiders) in my studio at up to 2x and have ‘pre-visualized’ making such images at higher magnifications (as if making focus stacked, high-mag images of inanimate subjects wasn't hard enough---but, I DO enjoy challenges! :) ).

My preferred camera, with fewest trade-offs, would have been the 24MP Nikon D3x---but I simply could not justify the $8k price, for a hobby. I have absolutely no doubt that the 15MP, high-pixel-pitch sensor in the Canon T1i will do a better job of squeezing every last drop of resolving power out of those wonderful, high-NA Nikon objectives and high-end macro lenses. Canon also has a much better LiveView implementation. As far as I know, Nikon currently has no similar offering, at any price. I typically upgrade camera bodies every 2nd or 3rd generation. If/when Nikon releases their rumored higher pixel count update to the D700, I’ll consider it---but only if they fix the poor LiveView functionality. In the meantime, my older D200 will likely become my preferred camera for use with my newly acquired Nikon objectives, 10x CF N Plan NA 0.30 and 10x M Plan NA 0.25. About the only things I miss over the D700 for stacking are LiveView and the larger high-res LCD. Still, I’d rather ‘wear out’ the shutter on the D200 than the shutter on the D700.

And, as you say, at a given field coverage, these cameras will allow me to achieve similar resolutions compared to the finer-pitch sensors while using lower-cost lenses (number of pixels on subject of interest), as long as the lens is capable of providing the minimum required resolution over a 43mm image circle.
Quote:
I quickly learned the axiom, simplify, simplify and then simplify, again.
There are some similarities in our backgrounds, but I think some differences as well. My training is in numerical analysis and computer science. I recently retired from 38 years with an R&D lab, where much of my job was developing and validating computational models.

What I quickly learned was a little different: "Everything should be made as simple as possible, but no simpler". I have seen far too many cases where a straightforward end-to-end validation test under realistic conditions of interest showed that something important had been missed by more restricted unit testing.

In the case of lenses, what this means is that I will generally test first using subjects and setups that are typical of my intended use.
In retrospect, I guess I should have removed one or two of the “simplify”s, huh? Still, perhaps our experiences are not so different. The situation I was thinking of as I wrote that concerned the use of a computer modeling method known as ‘design of experiments’ (DoE). In process control, this can be a very powerful technique (I realize I’m likely preaching to the choir here) for determining not only what variables affect a given process and how strong each effect might be, but also the interactions between variables---interactions that can amplify effects and that are easily missed by the classic experimental method of simply changing one variable at a time.

When I spoke of simplifying an experiment, I was thinking of the imperative first step (needed to reduce the number of variables) before employing the DoE software algorithms. This involves employing the classic approach of changing only one (controllable) variable at a time, while holding all other variables constant, to determine if the changed variable has any effect on the process. What I have seen far too often is failure to properly complete this step, resulting in far too many extraneous variables being fed into the DoE---leading to mush for results (e.g. it’s highly unlikely that sun spots and tides affect the final thickness of an organic dye-polymer coating being applied to a plastic substrate with a spin coater!). Of course, it is also possible to oversimplify by failing to include all pertinent variables in the initial trials.

When applied to the testing of macro lenses, my use of a glass resolution slide is intended to test the possibility of quickly comparing lenses on one single factor, high-contrast resolution, to determine if the results have any correlation with the resolution found while imaging the final intended subjects. I’ll be the first to admit, I have yet to prove that correlation---the results will take time. My other goal for using a subject such as the resolution slide is to have a common, repeatable and slow-aging target that will remove some of the variables of using natural subjects, such as moth wings or bug bodies, where the features may not be consistent across the frame, from sample to sample, or stable over time. Also, initial testing and lens comparisons could be accomplished much faster than if multiple images and stacking are involved. Perhaps these are also some of your considerations for making and testing your universal ‘smoked-glass’ test subject and your use of LiveView?
It happens, for example, that because I almost always stack and almost never image flat subjects, I don't care very much about flatness of field. I would much rather have a lens that is crisp corner to corner over a curved field, than a different lens that is admirably flat but not quite as sharp. Other people will have different priorities.
What I meant to say here is not “field flatness” but rather lens sharpness across the field (although, for me, for some subjects field flatness is of some, secondary, importance to reduce the number of frames required to capture the subject). In the case of a possible flat universal test subject, a flat-field lens might be simultaneously tested for center, middle and corner resolution with a single frame or by scanning the field with LiveView, without need to re-focus.
I try to model everything, and when the simple models don't work, I try to figure out why not. That's usually a frustrating exercise, though satisfying when it's all over. I've accumulated a modest collection of links to useful math, which I'll be happy to share.

If the links you speak of have been previously posted here at PMN, I would not impose upon you to do this. I’ve gotten pretty good at searching this site with Google Site Search and have a good number of links from you and Charles---just need to organize them and build a spreadsheet I can easily reference. If you have links that may not have been previously posted, that you think other members might also benefit from, please do offer them.

Finally, going back through my posts, I may have sounded a bit argumentative or challenging at times. Please realize this was not my intention. I do not believe that argument for argument’s sake is ever productive. However, I will freely admit to being a life-long practicing skeptic, but hopefully not to a fault.

Thanks, again, for your help.
Bob in Orange County, CA

rjlittlefield
Site Admin
Posts: 23561
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

Bob,
I will freely admit to being a life-long practicing skeptic
We're gonna get along just fine -- I describe myself as being a "natural born and highly trained skeptic".
I’d like to revisit some of the questions form my last post, just one last time (I promise)
I'm sorry to hear that. :wink: I had hoped to keep working this stuff over until we either figure it out or decide it's not worth the trouble. :D
I may have sounded a bit argumentative or challenging at times
Vigorous discussion is welcomed as long as the material is subject to experimental verification. That seems to be your style, so I don't see any problem.
When I spoke of simplifying an experiment, I was thinking of the imperative first step (needed to reduce the number of variables)...What I have seen far too often is failure to properly complete this step, resulting in far too many extraneous variables being fed into the DoE---leading to mush for results
Oh, that kind of simplify! The problem you're describing is what I know as "overfitting". Given enough parameters, you can fit any observed data arbitrarily well --- while completely losing predictive value. (I once watched someone fit an order-17 polynomial to his 18 data points, and then wonder why it wouldn't interpolate reasonably.)
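
The failure mode Rik describes is easy to demonstrate. Here is a small sketch (pure Python, my own illustrative numbers): exactly interpolating ten points of a noisy straight line with a degree-9 polynomial fits every data point perfectly, yet swings far off the line between nodes near the ends.

```python
def lagrange_interp(xs, ys, x):
    """Evaluate the unique degree-(n-1) polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# The "data": y = x plus a tiny +/-0.1 alternating "measurement error".
xs = list(range(10))
ys = [x + 0.1 * (-1) ** x for x in xs]

exact_fit = lagrange_interp(xs, ys, 3)      # reproduces the data point exactly
between   = lagrange_interp(xs, ys, 8.5)    # far off the underlying line y = x
```

The fit is perfect at every node, yet at x = 8.5 the polynomial misses the underlying line by more than ten times the noise amplitude: all predictive value lost.
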

My concern in this discussion is failing to include pertinent variables.

I've been doing some more experiments with the OM 20/2 and the JML 21/3.5. The results are not yet ready to post, but in brief:

1. The OM has significant color fringes, but apparently only for out-of-focus points.
2. Stacking the OM results selects the in-focus points, which means that the color fringes tend to disappear.
3. Although the OM obviously has more resolution, as seen in the moth scale pictures, I cannot see that it gives sharper edges using a backlit chrome-on-glass target.

As I said earlier, I generally test first using subjects and setups that are typical of my intended use. Since my intended use is with stacked front-lit bugs, that's what I test with.

I think this fits both of our concepts of "simplify", since in a head-to-head test it eliminates all variables except one: which lens.

In contrast, looking at edge sharpness of a backlit bar target in an unstacked image adds at least three differences from actual use, and in a very real sense those variables are uncontrolled because there's no evidence that relates test behavior with behavior under conditions of interest.
My other goal for using a subject such as the resolution slide is to have a common, repeatable and slow-aging target that will remove some of the variables of using natural subjects, such as moth wings or bug bodies, where the features may not be consistent across the frame, from sample to sample, or stable over time.
Those are admirable goals, but let's talk more.

The biggest problem I encounter in comparisons over short spans is variation in illumination. Over long spans the problem is differences in cameras, unless I go to the trouble to add enough optical magnification that the camera is no longer an issue.

It is easy enough to store a moth wing so that it does not age appreciably, and also to move the wing so as to image the same scale at various places in the frame. Certainly my moth wing will not be exactly the same as anybody else's moth wing. But I would probably not trust anything except a head-to-head comparison in any case.

Aside from accurately representing the subjects of interest, the moth wing has the advantage that it actually has features small enough to stress the lens. The Edmund catalog says that your NT38-257 resolution target maxes out at 228 line pairs/mm. The longitudinal striations on the moth scales measure about 430 line pairs/mm, and the wing has fine detail much below that. Edmund does sell USAF targets with enough resolution to stress the OM 20, down to 645 lppm. But those targets cost $750 each, and then they don't have enough resolution to stress a microscope objective while the moth wing does -- see http://www.photomacrography.net/forum/v ... 6596#56596 for example.
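
One way to see why a 228 lp/mm target runs out of detail here: the subject-side spatial frequency a sensor can record scales with magnification. A rough sketch, where the ~8.45 µm D700 pixel pitch is my assumed figure, not from the thread:

```python
def subject_side_nyquist(pixel_pitch_mm, magnification):
    """Finest subject-side detail (line pairs/mm) the sensor can sample.

    Sensor-side Nyquist is 1/(2*pitch); on the subject it is m times finer.
    """
    return magnification / (2.0 * pixel_pitch_mm)

# D700 at ~8.45 um pitch (assumed), working at 7.5X:
lp_per_mm = subject_side_nyquist(0.00845, 7.5)   # ~444 lp/mm on the subject
# A target topping out at 228 lp/mm runs out of detail well before the
# sensor does, while the moth-scale striations (~430 lp/mm) do not.
```
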

I realize that a chrome-on-glass USAF resolution target sounds more scientific, but I respectfully submit that an ordinary moth wing wins this particular competition hands-down. It's perhaps worth noting that for many years the standard test target for high-NA microscope objectives has been the dots in a particular species of diatom, Amphipleura pellucida.
Also, initial testing and lens comparisons could be accomplished much faster than if multiple images and stacking are involved.
True, but as mentioned this introduces uncontrolled variables between testing and actual use.
Perhaps these are also some of your considerations for making and testing your universal ‘smoked-glass’ test subject and your use of LiveView?
Actually what I like best about the "smoked-glass" test subject is that it has a wide range of feature sizes, with a fairly smooth distribution of sizes down to small enough to stress any lens.

The distribution of feature sizes in a moth wing is pretty "lumpy". It has big features of overall scale shape, small features at one characteristic spacing of the longitudinal striations, and very small features that are far below the resolution limits of these lenses. But there's not much in between. I can see that the OM 20 resolves the striations while the JML does not, and I can see that the smallest highlights are a lot smaller with the OM. What that translates into in terms of ultimate resolution, I can't tell. If the target had some features that were somewhat larger and smaller than the striations, it would be easier to tell how big the differences in lenses are.

The reasons I like LiveView vary depending on what I'm doing. For framing and coarse focusing, it's much more comfortable to watch a monitor than to keep my eye glued to an optical viewfinder. For precise focusing, it's many times more accurate. For actual exposure, it pretty much kills vibration as an issue. It's also useful for quick look gut-feel evaluations, but for actual comparisons I consider it useless because a) the results cannot be shown and b) it relies on visual memory, which is really pretty awful.
if anyone knows the true pupil factor of the Oly 20mm f/3.5, I would appreciate the actual value---can’t seem to find it referenced anywhere.
Some weeks ago, I asked Alan Wood of http://www.alanwood.net/photography/olympus/ whether he recalled ever seeing this number published. He said no.

But if you're sufficiently curious, it's not hard to measure. Just stop down the lens until the iris is clearly the limiting aperture. Set up a shallow-DOF macro setup and focus exactly on the iris. Take one picture. Without changing the macro setup, reverse the lens and repeat the process, focusing by changing distance to the lens under test. You now have images of the entrance and exit pupils to the same scale. Identify corresponding points and measure pixel distances. The ratio of those is the pupil factor.
The effect is there [angle of view versus sensor size], but is much lower than intuition might suggest, at least for the range of sensor sizes used in DSLR’s. I presume the effect would be more pronounced when comparing DSLR sensors with medium/large format film?
Yes, but again, not nearly as much as it seems at first blush. Remember that the subject field width is fixed. The only reason that the angle increases at all with a larger sensor is that the lens has to move a little closer to the subject in order to place the focused image farther back. At 4.8 mm width on subject, the 22.3 mm sensor is operating at 4.64X, so the 20 mm lens is located at 24.3 mm from subject. In the limit of infinite sensor size, the 20 mm lens only approaches 20 mm. So the maximum difference in angle of view is only about 20%, despite the infinite difference in sensor sizes.
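
The arithmetic in this paragraph checks out with the same thin-lens model. A quick sketch:

```python
import math

S = 4.8    # subject field width, mm (from the example above)
f = 20.0   # focal length, mm

def aov_deg(m):
    """Full subject-side angle of view at magnification m (thin lens)."""
    d = f * (1.0 + 1.0 / m)   # lens-to-subject distance
    return 2.0 * math.degrees(math.atan((S / 2.0) / d))

m = 22.3 / S                # 22.3 mm sensor at 4.8 mm field: m ~ 4.65
d = f * (1.0 + 1.0 / m)     # ~24.3 mm from the subject
limit = 2.0 * math.degrees(math.atan((S / 2.0) / f))  # infinite sensor: ~13.7 deg
# limit / aov_deg(m) ~ 1.21: only ~20% more, despite the infinite size ratio.
```
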

Even this difference occurs only because we forced the lens to have the same focal length in all cases. If we allow the focal length to vary a little, so the lens-to-subject distance remains at some fixed value, then the angle of view remains fixed also.

You might find it illuminating to play with this problem: See what happens to the required focal length as a function of sensor size, depending on magnification, at fixed lens-to-subject distance. I believe what you'll find is that at the limit of zero magnification, the focal length varies directly with the sensor size, but at the limit of infinite magnification, the focal length doesn't change at all.
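
The suggested exercise works out neatly from the thin-lens relation d = f(1 + 1/m): at fixed lens-to-subject distance d, the required focal length is f = d*m/(m+1). A sketch, with an arbitrary example distance; note that at a fixed subject field, doubling the sensor doubles m:

```python
def required_focal_length(d, m):
    """Focal length so a thin lens at distance d from the subject gives
    magnification m (solved from d = f * (1 + 1/m))."""
    return d * m / (m + 1.0)

d = 24.0  # fixed lens-to-subject distance, mm (arbitrary example value)

# Near zero magnification, f scales directly with sensor size:
low = required_focal_length(d, 0.01) / required_focal_length(d, 0.005)   # ~2.0
# Near infinite magnification, f barely changes (it approaches d):
high = required_focal_length(d, 200.0) / required_focal_length(d, 100.0) # ~1.0
```
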
I have absolutely no doubt that the 15MP, high-pixel-pitch sensor in the Canon T1i will do a better job of squeezing every last drop of resolving power out of those wonderful, high-NA Nikon objectives and high-end macro lenses.
Again, this effect is not as extreme as it might seem at first blush. On each axis, a 15 MP camera has only 12% more pixels than a 12 MP camera. Yes, the pixels are smaller, but you can compensate for that by making the image bigger. Add a 1.4X or 2X teleconverter in front of the larger sensor, and the two start to look remarkably similar.

The effects of sensor size have been much discussed in this forum. It turns out that sensor size makes no difference to DOF and diffraction effects as long as properly corresponding apertures can be set on the lenses. "Properly corresponding" means that the subject-side numerical aperture is the same, which implies that the effective f-number on the camera side simply scales in proportion to the sensor size. Under reasonable assumptions, the camera with the larger sensor can do everything the one with the smaller sensor can. The f-number and ISO settings needed to achieve this are counterintuitive, an aspect that has led to some arguments. The one real advantage of the larger sensor is quieter pixels, in situations where added light or longer exposures let it collect more photons. When you are shooting for maximum DOF at some specified resolution, typical of single-shot macro work, that is the only advantage it retains.
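
The "properly corresponding apertures" idea can be sketched numerically: match the subject-side NA, and the camera-side effective f-number scales with sensor size. The subject-side NA value below is my own example number, and a fixed 4.8 mm subject field is assumed:

```python
def camera_side_f_eff(sensor_width, subject_width, na_subject):
    """Effective (image-side) f-number needed to match a given
    subject-side NA while imaging a fixed subject field.

    NA_image = NA_subject / m, and f_eff = 1 / (2 * NA_image).
    """
    m = sensor_width / subject_width
    return m / (2.0 * na_subject)

na = 0.05   # example subject-side NA (assumed)
ff    = camera_side_f_eff(36.0, 4.8, na)   # full frame: f/75 effective
small = camera_side_f_eff(22.3, 4.8, na)   # smaller sensor: ~f/46.5
# ff / small == 36.0 / 22.3: the f-number scales with sensor size, so
# DOF and diffraction on the subject come out identical.
```
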
If the links you speak of have been previously posted here at PMN, I would not impose upon you to do this.
All the links have been posted at PMN, but even I would have trouble finding some of them by searching. Let me dig out a couple that I think you may find interesting, and I will post them as a follow-up in this topic.

--Rik

pierre
Posts: 288
Joined: Mon Jan 04, 2010 12:37 pm
Location: France, Var, Toulon

Post by pierre »

Hello Gentlemen,


JML not received yet
So far, so long :roll:

I wonder if you have already thought about a "basic standard" test protocol, in order to put some "universal" basis on tests that are simple and easy for everyone to reproduce (yes, we can!), and to avoid the problem of "sorry, I don't have an OM 20 to perform such a test" :oops:

Ex:

1) Flat test: a 10 cm square of millimetric graph paper.
2) DOF: a ping-pong ball with millimetric paper glued on, marked with two perpendicular, meridian-like bands about 5 mm wide (joke) :twisted:
3) Color : .....

Hum, maybe this is not the subject of this post. :?


Pierre

ChrisR
Site Admin
Posts: 8668
Joined: Sat Mar 14, 2009 3:58 am
Location: Near London, UK

Post by ChrisR »


rjlittlefield
Site Admin
Posts: 23561
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

The pencil shavings are a good target for head-to-head comparison. But at least the way I use such pictures is to see how the same random features look with each lens. It would not be so good to have one lens shooting one target, while another lens is shooting a different target with different random features.

I think Pierre is looking for some protocol that allows lens #1 to be measured by person Smith, while lens #2 is measured by person Jones, each using their own cameras etc, and still the measurement will say which lens is better for resolution, CA, contrast, etc.

I am not aware of any convenient protocol that is powerful enough to do that.

--Rik

pierre
Posts: 288
Joined: Mon Jan 04, 2010 12:37 pm
Location: France, Var, Toulon

Post by pierre »

OK.

Thanks for your answer Rik.


Cheers,
Pierre

rjlittlefield
Site Admin
Posts: 23561
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

Bob, you might be interested in http://www.photomacrography.net/forum/v ... 2691#42691 and the links therein.

That discussion is about MTF and how it relates to image sharpness.

One big point is that with perfect lenses (no aberrations), increased resolution goes along with increased contrast at all resolutions -- the MTF curve is higher. Even better, the percentage difference is highest for fine detail. If you have one lens that gives 5% contrast at the finest resolution the sensor can capture, then a lens that has only 10% higher ultimate resolution will give over 2X the contrast at the sensor's resolution. There is significant advantage in having a lens that over-resolves the sensor.
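
The "10% more resolution gives over 2X the contrast" claim follows from the diffraction-limited MTF, MTF = (2/π)(arccos(s) − s√(1−s²)) with s the frequency normalized to the cutoff. A quick check in Python (the 5% starting contrast is the figure from the text):

```python
import math

def mtf(s):
    """Diffraction-limited MTF at normalized spatial frequency s = nu/nu_c."""
    if s >= 1.0:
        return 0.0
    return (2.0 / math.pi) * (math.acos(s) - s * math.sqrt(1.0 - s * s))

# Find the normalized frequency where lens A delivers 5% contrast
# (bisection works because mtf() decreases monotonically in s):
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2.0
    lo, hi = (mid, hi) if mtf(mid) > 0.05 else (lo, mid)
s_a = lo                              # ~0.88 of lens A's cutoff

# Lens B has a 10% higher cutoff, so the same absolute frequency sits
# at s_a / 1.1 of B's cutoff:
contrast_b = mtf(s_a / 1.1)           # ~0.105: roughly double the contrast
```
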

A second big point, discussed at http://www.bobatkins.com/photography/te ... ution.html, is that behavior for fine detail does not necessarily tell you about problems that occur for coarser detail. With aberrations, it is possible that high frequencies may be resolved while somewhat lower frequencies are not.

The really big point, I think, is that any specific test is not likely to yield all the information that is potentially of interest. I guess if one knew the 3D PSF across the entire field, then most everything of interest could be computed from that. On the other hand, I suspect the process would be so difficult that hardly anyone would bother. :?

So in my mind the question becomes, what test will yield the best applicable information? That's the place where I opt for subject, setup, and workflow of interest.

--Rik

ChrisR
Site Admin
Posts: 8668
Joined: Sat Mar 14, 2009 3:58 am
Location: Near London, UK

Post by ChrisR »

I am not aware of any convenient protocol that is powerful enough to do that.
Agreed, it would always be a good thing to include a commonly used lens, as a control.
It might be interesting to try to develop a target, with as standard an international approach as we can dream up.
For low mags like this, we can all probably manage printed half millimeter grids, or of course print a Forum Standard target. It would come out differently but at least have the scale built in.
Papers vary (or we could all go to the bank and get a US$5 bill, demanding a new one as it's internationally important!) and I daresay "HB" pencils may vary too. I think we'd want reflected, diffuse, flash lighting -- the chips of lead do reveal something about that.
Perhaps white isn't the most appropriate background?

I haven't seen a moth or a fly for months, so $5 would be much easier. AndrewC has kindly sent me a couple of beautiful butterfly wings, which I'm scared of wrecking!

pierre
Posts: 288
Joined: Mon Jan 04, 2010 12:37 pm
Location: France, Var, Toulon

Post by pierre »

JML received :)

I will start the test soon.

I guess "universal" test can't work as Rik suggest it: there are too much parameters.

Even with the same test / subject / lens / setup / camera / camera parameters... the lighting (or vibrations, or...) may change the result.

Well... any suggestions?


Pierre

Bob^3
Posts: 287
Joined: Sun Jan 17, 2010 1:12 pm
Location: Orange County, California

Post by Bob^3 »

Hi Pierre,

Sorry for the late reply. Glad you finally got the JML lens, must have been delayed by a certain unpronounceable (Eyjafjallajökull---geez, what a name) volcano in Iceland! As far as the concept of a “universal” test subject, I do agree with Rik that the only completely valid lens test subject is one that is typical of the subjects you care about for creating your final images (actually, for me, this idea was never in doubt).

That said, I still believe there is some value in continuing the discussion regarding selecting some common, widely available test subjects that will test most of the important characteristics of macro lenses (whether it’s Rik’s smoke particles on glass or ChrisR’s inkjet grid with added graphite particles or some other combination of materials). It might also be good to establish general lens testing protocols based on the many previous tests done by forum members, and maybe a new sticky or a FAQ entry. Clearly, there is no way to eliminate all the variables (sensor, lighting, software, technique, etc.) that each member might use to test their lenses. But perhaps we could at least minimize them? I’ll start a separate thread on this when I get a chance.

For your current tests, if you happen to have other lenses available for comparison---perhaps one with more test history (by forum members) than the JML, you could photograph the same subject with the JML and one or more other lenses, under the same conditions of lighting, magnification, etc. This would give a good basis for comparison…just search the forum for ‘lens AND tests’ with Charles or Rik in the Author field for plenty of good examples. Please do post your results either here or in a new thread.

Good luck!
Bob in Orange County, CA
