Couple of thoughts:
1) You tested the RN85 at its best aperture (f/2.8) against the PN105 at f/2.8. At 1x, my PN105 specimen improves notably when stopped down a half stop, to between f/2.8 and f/4. Your test showed the PN105 as better than, but close to, the RN85; if your PN105 is like mine, might not comparing the two lenses at each one's best aperture show the PN105 to be better by a larger margin? (No disrespect--lens testing is bloody hard, while nitpicking tests is easy.)
2) For your 100-percent comparisons, it appears you used screen captures of the images displayed in Photoshop. This probably works fine when done as you did--shots from both lenses treated exactly the same way at the same time--so any vagaries from your computer's graphics processing should affect both images the same way. No problem.
But I've been looking for a chance to urge against this becoming a widespread standard, so here goes: Screen grabs are impacted by a particular computer's graphics processing, and if less meticulous practitioners than you use this method with less care, it may lead to false conclusions.
I had a private discussion on this topic with a forum member who does wonderful lens tests. He and I each took his original images and did screen grabs on our separate computers, displayed in Photoshop at agreed-on magnifications; our results were dramatically different. I attribute this to differences in how our computers' graphics systems rendered the images.
In contrast, we obtained much more comparable results when we each used Photoshop to crop his images to 100 percent segments, saved these crops as images, and compared the resultant image files. Going this route, we felt, permitted more useful comparisons between photographers.
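The distinction can be sketched in a few lines of Python. This is only an illustration, not anyone's actual workflow: a nested list stands in for an image's pixel buffer, and a crude nearest-neighbor resample stands in for whatever a particular display pipeline does when it rescales an image for the screen. The point is that a 1:1 crop carries the original pixel values unchanged, so it is reproducible across computers, while a resampled view depends on the rescaling method.

```python
# Hypothetical sketch: pixel-level crops vs. screen-rescaled views.
# A nested list stands in for an image's pixel buffer; a real workflow
# would use Photoshop or an imaging library instead.

def crop_100(pixels, left, top, width, height):
    """Return an exact 1:1 (100%) crop: original pixel values, untouched."""
    return [row[left:left + width] for row in pixels[top:top + height]]

def screen_resample(pixels, scale):
    """Crude nearest-neighbor stand-in for a display pipeline rescaling
    an image; a different graphics system could interpolate differently."""
    h, w = len(pixels), len(pixels[0])
    return [
        [pixels[int(y / scale)][int(x / scale)] for x in range(int(w * scale))]
        for y in range(int(h * scale))
    ]

# A tiny 4x4 "image" with distinct pixel values.
image = [[10 * r + c for c in range(4)] for r in range(4)]

# The crop is bit-identical to the source region on any computer.
crop = crop_100(image, left=1, top=1, width=2, height=2)
assert crop == [[11, 12], [21, 22]]

# The resampled view has discarded pixels; its values depend on the
# resampling method, which varies between graphics systems.
resampled = screen_resample(image, scale=0.5)
assert resampled != image
```

Saving the crop as an image file, rather than grabbing it off the screen, is what keeps the comparison independent of any one computer's graphics processing.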
No criticism intended of your wonderfully conducted test. You made certain that the variable created by computer graphics applied equally to both images. But for comparisons between photographers, or between images processed on different computers, pixel-level crops saved as images make for a more extensible comparison. I'm jumping on this chance to speak in favor of that becoming the standard.
Occasionally, when we kiss a frog, it turns into a lovely princess. But most of the time, it remains a frog. To my mind, testing unknown lenses for macro is similar--odds are, one will purchase and test a lot of junk lenses to find an occasional outstanding lens. For an inveterate lens tester, the money and time spent purchasing and testing frogs will exceed the money/time cost of purchasing a known princess.

Lou Jost wrote: I am sure there are amazing lens discoveries floating around out there. I have a bunch of strange things I bought over the last year that I have not tested carefully; some seem extremely good at first sight.
So agreed--hats off to the hard-working, cash-spending frog-kissers among us! What a service they do us. They probably find frogs most of the time. Then, when an odd frog turns into a lovely Kate or Meghan, they tell us about it. And then we few, we happy few, we band of brothers, can purchase such a lens!
Cheers,
--Chris S.