Telecentric DiMage 5400

Have questions about the equipment used for macro- or micro- photography? Post those questions in this forum.

Moderators: rjlittlefield, ChrisR, Chris S., Pau

ray_parkhurst
Posts: 3417
Joined: Sat Nov 20, 2010 10:40 am
Location: Santa Clara, CA, USA
Contact:

Post by ray_parkhurst »

mawyatt wrote:Ramon,

I wouldn't say it was a foolish effort on your part creating a robot S&S system, but more of a fun adventure :D

Agreed on using Zerene and PTGui as the software of choice for image processing. I didn't have much success trying to stitch with PS, but PTGui was much better IMO.

I'm still in the midst of creating my robot. Mechanically I'm almost there with the modified horizontal setup (which I've used for many years now) based upon precision Thorlabs stuff....very good but expensive (worth every penny though!!), and I'm modifying it for a vertical capability also. Now I'm in the process of writing the code for the stepper motor control of the 3 axes and selecting the proper control algorithms and drivers. I've learned quite a bit about stepper motors, control, and the drivers lately, especially from a precision use standpoint (and what not to use/do!!). Now I'm dealing with the software side (haven't programmed anything in over 40 years!!), but learning new things is part of the adventure :D

Please show & describe your robot system, I'm certainly interested and sure others are also.

Best,
Yes, please both Ramon and Mike describe what you are doing and the results you're getting. I recently built my own XY Stage for Stack&Stitch application using THK KR1502A linear actuators and MachMotion NEMA-11 steppers. Here is what I ended up with:

Image

I'm finding the most difficult thing about any XY stage, and the photo system, is keeping things well-aligned in X and Y. I'm not willing to put in a lot of time with alignment points, so ensuring good alignment is critical for me. I'm using the mjkzz SnS controller to move things around, and it works very well on my Win10 laptop.

I would love to see your plan/actual XY Robots, goals, performance, etc.

Edited to replace image due to typos

Lou Jost
Posts: 5948
Joined: Fri Sep 04, 2015 7:03 am
Location: Ecuador
Contact:

Post by Lou Jost »

I think that instead of working with the DiMage lens, we should explore the use of longer lenses for this.

ray_parkhurst
Posts: 3417
Joined: Sat Nov 20, 2010 10:40 am
Location: Santa Clara, CA, USA
Contact:

Post by ray_parkhurst »

Maybe a separate thread on Stack and Stitch robots, lenses, etc?

ray_parkhurst
Posts: 3417
Joined: Sat Nov 20, 2010 10:40 am
Location: Santa Clara, CA, USA
Contact:

Post by ray_parkhurst »

Lou Jost wrote:I think that instead of working with the DiMage lens, we should explore the use of longer lenses for this.
Lou...you mentioned adding an extra lens to the front to create a telecentric combo, and that this might only be possible with a longer lens. Makes sense, but won't the added lens degrade the image quality of the (optimized) objective? I've never heard particularly good things about adding achromats or other elements to increase mag ratio, even the very best ones. What is your (and others') experience with adding achromats in terms of IQ degradation?

Lou Jost
Posts: 5948
Joined: Fri Sep 04, 2015 7:03 am
Location: Ecuador
Contact:

Post by Lou Jost »

It depends very much on the lenses, Ray. It is easier at low magnifications. Raynox 150 and 250 lenses work very well on small sensors. I would think the big five-element Raynox 5320 (72mm diameter) would be an even better candidate. But there are other options; instead of a simple close-up lens, you can do the same thing with well-corrected regular lenses mounted in reverse on the prime lens. I've had some good experiences using MFT sensors with coupled lenses at low m (< 3x), stopped down at the point between them which makes the combo approximately telecentric.
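For anyone trying the coupled-lens route Lou describes, the usual rule of thumb (my addition, not Lou's words) is that with the rear prime focused at infinity, magnification is the prime's focal length divided by the reversed front lens's focal length, and a close-up lens's diopter rating converts to focal length as f = 1000/diopters (mm). A minimal sketch:

```python
# Rule-of-thumb coupled-lens magnification (rear prime focused at infinity):
# m = f_prime / f_front, with a close-up lens's f = 1000 / diopters (mm).

def diopter_to_focal_mm(diopters):
    """Focal length in mm of a close-up lens with the given diopter rating."""
    return 1000.0 / diopters

def coupled_magnification(prime_f_mm, front_f_mm):
    """Magnification of a reversed front lens coupled to a prime."""
    return prime_f_mm / front_f_mm

# e.g. a Raynox DCR-250 (+8 diopter, f = 125 mm) on a 200 mm prime -> 1.6x
m = coupled_magnification(200.0, diopter_to_focal_mm(8.0))
```

This also shows why low magnifications (< 3x) are easier to reach this way: they need only moderately long primes or weak front lenses.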

I have been meaning to do more thorough tests on this, but I am short of time right now. I hope I can get to it soon.

RDolz
Posts: 96
Joined: Mon Aug 28, 2017 9:32 am
Location: Valencia (Spain)

Post by RDolz »

Well, ... it seems that we have opened a new thread

Automated Stack & Stitch System Robots
http://www.photomacrography.net/forum/v ... hp?t=38091

Anyway, I would like to emphasize that it would be good to find a method to correctly measure the telecentricity of a lens.
There are very knowledgeable people in this forum; between all of us we could standardize the procedure.

Does anyone embark on this task?
Ramón Dolz

rjlittlefield
Site Admin
Posts: 23564
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

RDolz, thank you for the careful and detailed work.

I think I can explain some of the more confusing aspects of your observations.

You wrote:
Because the maximum field a telecentric lens can cover is approximately the diameter of its input aperture.
More precisely, the maximum telecentric field is approximately the diameter of the lens input aperture, minus the diameter of the light cone for each part of the subject, at the location of the lens aperture.

If the field gets any larger, then in the outer portion of the field, each cone of light gets clipped by the lens aperture on its outside edge. This means that the effective aperture for those points in the field is no longer round. More importantly, the central rays of these modified apertures are no longer parallel to the optical axis.

In that outer part of the field, the system cannot be exactly telecentric, although it will be a lot closer to telecentric than it would be if the added stop were not present.

Notice that in the non-telecentric part of the field, the light cones are smaller than they are in the telecentric area. These areas of the image will get less light, and if they get enough less light, you will perceive them as darker, that is, "vignetted".

There is a simple guideline here: whenever you see vignetting in a system that is supposed to be telecentric, it's a good bet the system is not really telecentric.

To make this more definite, I have used WinLens3D to make a simplified picture of your lens setup. The MD5400 is modeled as a "thin lens" with 36 mm focal length and 13 mm diameter. At the perfect telecentric position 36 mm behind the thin lens I have placed a stop, and I have set the object and image distances to give 2.95X magnification (your HFOV 8 mm).

Then in one case I have set the added stop quite small, only 2 mm diameter, and in another case I have set the added stop to 6.25 mm diameter, as in your test.

Here are the pictures:

Image

In the first case, with the telecentric stop set to 2 mm, the telecentric field diameter works out to be about 10.32 mm, a little more than enough to fully cover the 9.6 mm diagonal that goes with your 8 mm HFOV.

In the second case, with the telecentric stop set to 6.25 mm (your "f4"), the telecentric field diameter is only about 4.63 mm! Outside that small central area, the system will be progressively less telecentric. In the second picture, notice that for the extreme point in the object field, the ray fan is clipped by the edge of the lens so that it does not fill the aperture of the added stop.

A third case, which I have not illustrated, would correspond to your "f8" setting, which according to your table really means an aperture diameter of 3.13 mm. In this case the diameter of the light cone at the lens would be 3.13 * (142.2/(142.2-36)) = 4.19 mm, giving a maximum telecentric field of 13 - 4.19 = 8.81 mm, still not quite big enough to cover your whole 9.6 mm diameter.
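All three cases follow from the same thin-lens arithmetic. Here is a sketch of that arithmetic (my paraphrase, not the WinLens3D model itself), using the numbers from the post: 36 mm focal length, 13 mm lens diameter, and a 142.2 mm image distance for 2.95X:

```python
# Thin-lens estimate of the telecentric field: a stop of diameter d_stop sits
# at the focal point, f behind a thin lens of diameter d_lens; the image plane
# is v behind the lens.  The light cone from one object point has diameter
# d_stop * v / (v - f) where it crosses the lens, and the telecentric field is
# whatever lens diameter is left over after subtracting that cone.

def cone_diameter_at_lens(d_stop, f, v):
    """Diameter of one image-forming light cone where it crosses the lens."""
    return d_stop * v / (v - f)

def telecentric_field(d_lens, d_stop, f, v):
    """Maximum object-field diameter over which chief rays stay parallel."""
    return d_lens - cone_diameter_at_lens(d_stop, f, v)

# Numbers from the post: f = 36 mm, 13 mm lens diameter, v = 142.2 mm (2.95X)
F, D_LENS, V = 36.0, 13.0, 142.2
for d_stop in (2.0, 3.13, 6.25):   # the "2 mm", "f8", and "f4" cases
    field = telecentric_field(D_LENS, d_stop, F, V)
    print(f"stop {d_stop:5.2f} mm -> telecentric field {field:5.2f} mm")
```

Running it reproduces the 10.32 mm, 8.81 mm, and 4.63 mm fields quoted above.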

It's an interesting exercise to think about what this means in terms of image appearance as you move through focus. In the center of the field, where the system is truly telecentric, nothing changes except focus. But in the periphery of the field, where the system is not quite telecentric, the subject changes size along with focus. If your subject consists of a series of small round dots, and you keep the shutter open as you sweep through focus, then in the center of the field the dots will stay round, while in the edges of the field they'll become slightly elongated on the radial axis. It's like a very weak "zoomburst" effect, but restricted to the periphery.

If you then feed a bunch of these images to Zerene Stacker, and ask it to align them, including Scale adjustment, then you can probably imagine that some compromises will be needed. In the center of the field, scale change zero would be perfect, on the edges some other value would be perfect, and in between, other values would be perfect. The final value determined by Zerene Stacker will be some sort of weighted average of all the different scale values.

So, in your table of measured scale changes, there is not even a single case where the system should be telecentric across the entire field. The one at 8 mm HFOV and "f8" is pretty close, with (in the model) only the corners not being telecentric. I'm thinking that's probably the reason that you got a pretty small value of 0.00434%.


You mentioned that:
There is a point that has puzzled me. Out of curiosity, I calculated the telecentricity in several of the stacks of this image by activating the Scale option in Zerene, ... and it gives me an average of 0.066%! With the same setup, the first calculations gave me a scale change between each stacking image of 0.041%.
The scale factor computed by Zerene Stacker is only its "best guess", based on comparing pixel values in the two images. The number is quite accurate when two images are identical except for focus, for example when comparing a planar target shot with a flat field lens at two distances that are equally far on both sides of perfect focus. But the number becomes less accurate when the images are not so similar, typical of real subjects that are not even symmetric. With your complicated subject, I am not surprised that the numbers are as different as 0.066% and 0.041%. If anything, I am surprised that the numbers are so similar!
I think far from 0.02% ... although the stacking + stitching has apparently worked perfectly.
Ah, but remember...

When you turn off scale adjustment in the software, the resulting stacked output will end up having exactly the same overall geometry that a perfect telecentric lens would give naturally.

In this case the effect of a lens that is not perfectly telecentric is to cause some radial displacement artifacts in the corners of the images.

But the displacement can only be as bad as the shift from front to rear of each focus step. It does not accumulate, one frame after another.

As a result, when you turn off scale correction in the software, the main difference between a telecentric lens and a non-telecentric lens is how well quality is maintained in the corners of the stacked images.
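The "does not accumulate" point is worth a one-line sanity check. With scale correction off, a corner detail can only be misplaced by the scale change across a single focus step, regardless of how many frames the stack has. The numbers below are made up for illustration, not taken from RDolz's measurements:

```python
# Worst-case radial displacement artifact for ONE focus step, with scale
# correction off.  It is bounded per step and does not grow with stack depth.

def corner_shift_px(scale_change_per_step, corner_radius_px):
    """Radial displacement (pixels) between the front and back of a single
    focus step, for a detail corner_radius_px from the image center."""
    return scale_change_per_step * corner_radius_px

# Hypothetical stack: 0.04% scale change per step, corner 2900 px off-axis
per_step = corner_shift_px(0.0004, 2900)
# ~1.2 px per step; even after 200 steps the artifact stays ~1.2 px,
# it does not become 200 times that.
```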

Lou Jost wrote:
I think that instead of working with the DiMage lens, we should explore the use of longer lenses for this.
Yes, I agree completely. If nothing else, longer lenses have larger diameter apertures, at the same f-number, so they can have proportionally larger telecentric fields.

Finally, RDolz wrote:
it would be good to find a method to correctly measure the telecentricity of a lens.
If the degree of telecentricity is the same everywhere in the field, then I think you will get an accurate number from the method that I described earlier, using a detailed planar subject that is focused equally in front and in back of perfect focus. In this case the aperture can be made smaller without affecting the telecentricity, so stopping down to get more DOF can improve the accuracy of the number.
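One way to turn that measurement into a single number (this conversion is my own assumption, not a definition Rik gives) is to express the chief-ray tilt implied by the measured scale ratio: at field radius r, the target appears to grow by r*(s-1) in object space while the focus plane moves by dz, so the tilt is atan(r*(s-1)/dz):

```python
import math

# Convert a measured scale ratio between two equally-defocused shots of a
# planar target into an implied chief-ray tilt at the field edge.
# (A perfectly telecentric system gives s = 1 and a tilt of 0 degrees.)

def chief_ray_angle_deg(scale_ratio, field_radius_mm, focus_travel_mm):
    """Chief-ray tilt (degrees) at field_radius_mm implied by scale_ratio
    measured between two focus planes focus_travel_mm apart."""
    drift = field_radius_mm * (scale_ratio - 1.0)  # edge drift in object space
    return math.degrees(math.atan2(drift, focus_travel_mm))

# Made-up example: 0.04% scale change over a 0.5 mm focus span, 4 mm off-axis
angle = chief_ray_angle_deg(1.0004, 4.0, 0.5)   # ~0.18 degrees of tilt
```

As Rik notes, this single number is only meaningful when the degree of telecentricity is uniform across the field.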

However, if the degree of telecentricity is not the same everywhere in the field, as in all of these MD5400 tests, then no single number will suffice to describe the system.

--Rik

RDolz
Posts: 96
Joined: Mon Aug 28, 2017 9:32 am
Location: Valencia (Spain)

Post by RDolz »

Hello Rik:

Thank you very much for the effort you make so that we can understand the concepts that concern us. I'm always surprised by the accuracy of your answers.

In this case I was already missing your input, and I thank you deeply for your contribution.

Well, thanks to your extensive explanation I think that I am finally on the way to understanding some key aspects of telecentricity, both the optical aspects and the behavior of the software.

First of all, I understand that, depending on the diaphragm used, the "good" telecentricity can cover the entire field of the sensor or only its central part (as in the case of my test). In addition, if it does not cover the whole field, it is very important to understand that there is a gradation of the telecentricity, much better in the center than at the edges.

That may be obvious to those with extensive knowledge of optics, but that is not my case. I believed that when we mounted a diaphragm at the right point, we generated better or worse telecentricity depending on its diameter, but of THE SAME QUALITY throughout the field. Therefore, I focused on the degradation that occurs at the edges due to diffraction and vignetting. I did not know that the vignetting itself was an indicator of the lack of telecentricity.

Your simulations and calculations with WinLens3D are very illuminating ... and I think they provide a way to know whether we are covering our whole sensor with "good telecentricity", or just a percentage of it.

This can be decisive when planning a Stacking + Stitching, given that with this method we can determine which part of the image is usable.

I have made the calculations of the "bad" image percentage corresponding to my telecentric field diameter of 4.63 mm. I know it's a circle, but since we use Cartesian robots, allow me to approximate it with a rectangle.

A diameter of 4.63 mm corresponds to 2833 pixels, which gives a usable horizontal field of 2357 pixels. That is, of the 4896 horizontal pixels, only the central 2357 are "good", and on each side we have 1269 "bad" pixels. Each side therefore has about 26% of the sensor field lacking telecentricity (this percentage is valid both horizontally and vertically).

I think this explains why my test went well despite the bad values obtained by my measurements. It worked because, to make a good montage, I was using an overlap of 37.5%.
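Ramón's pixel bookkeeping can be reproduced if the "good" rectangle is taken to be a 3:2 rectangle inscribed in the telecentric circle; the 3:2 aspect is my assumption, since the post does not state which aspect ratio was used, but it matches the 2357-pixel figure:

```python
import math

# Width of an aspect-ratio rectangle inscribed in a circle: for a w:h
# rectangle with diagonal equal to the circle diameter D,
# width = D * w / sqrt(w^2 + h^2).  Everything outside the rectangle is
# the "bad" border that the stitch overlap must cover.

def inscribed_width_px(circle_diam_px, aspect=(3, 2)):
    """Width (pixels) of an aspect-ratio rectangle inscribed in a circle."""
    w, h = aspect
    return circle_diam_px * w / math.hypot(w, h)

circle_px = 2833                          # the 4.63 mm telecentric circle
good = inscribed_width_px(circle_px)      # ~2357 px of usable width
bad_each_side = (4896 - good) / 2         # ~1269 px, ~26% of the 4896 px frame
```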

This opens a (not very advisable) path: even if the system does not have excellent telecentricity, that can be compensated by brute force, that is, by capturing thousands of images and using only their small central telecentric area.

This gave me an idea: calculate the telecentricity at different points of the same stack. How? Very simply: from the raw camera files I cropped a rectangle of the same dimensions at different points of the image: center, upper right corner, upper left corner, lower left corner, and lower right corner. Then I stacked each of these portions.

First I tried it on several stacks of the Bryozoa sample, and the results were very erratic; depending on the image it can give very similar values in the center and on the edges, or even worse ones. Apparently the complexity of the structure strongly influences the scale variation calculations.

But then I remembered that I have stacks of a silicon wafer, captured with the same setup as the Bryozoa captures ... (the red and green lines are only indicative)

Image


Ah! The results have been spectacular. "Sup Center" is a smaller portion within the central portion.

Image

0.005% (0.5 parts in 10,000!!!) in the center, a correct 0.025% in the central zone, and an average measurement of 0.077% on the edges, 32.3% worse. I think these data help clarify this aspect.

Another aspect I have to emphasize is your explanation of the defects generated by the lack of telecentricity in stacking, and of how Zerene calculates the Scale factor. My lack of understanding here has led me to make mistakes and cost me huge trial-and-error sessions: IT DOES NOT ACCUMULATE across the stack!, it only degrades the edges of the image.

I thought that the lack of telecentricity generated an error that accumulated as each of the images making up the stack was added. I did not understand it at all. For example, in one stack the scale factor gave me a value of 0.981735, which meant 89.4 total pixels of scale variation, and I expected to find that error in the stacked result! But now, obviously, I see that it is not like that.
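The 89.4-pixel figure follows directly from the scale factor and the frame width, as a quick check shows:

```python
# Converting a whole-stack scale factor into pixels of size change:
# a factor of 0.981735 on a 4896-pixel-wide frame shrinks the image by
# (1 - 0.981735) * 4896 pixels across the full width.

def scale_to_pixels(scale_factor, width_px):
    """Total pixel change in image width implied by a scale factor."""
    return (1.0 - scale_factor) * width_px

total = scale_to_pixels(0.981735, 4896)   # ~89.4 px over the whole stack
```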

Now it is evident that obtaining a measure of the telecentricity is not straightforward in cases where the "good" telecentricity does not cover the entire sensor. But I am left with the possibility of calculating the diameter of the image where the telecentricity is good, and with the possibility, although difficult, of finding a way to standardize that measurement (in order to compare different setups).

Rik, I dare to make a suggestion: for those of us who work with telecentric systems, it would be very useful if Zerene incorporated as a calculation option, in addition to the scale factor, a measurement of the telecentricity between each pair of images and the overall average for a stack. (The ultimate would be this measure at different distances from the center.)

Rik, thank you very much again. Your explanations about telecentricity, which you began at least as far back as 2006, have allowed me to get this far.
Ramón Dolz

rjlittlefield
Site Admin
Posts: 23564
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

Ramón, I'm glad I could help. I have spent a lot of time removing my own confusion about most of these issues, so I am always happy when I can help somebody else get there faster.

I will think about your suggestions for Zerene Stacker.

At least some of the information that you ask about can be obtained by using the new ZS feature Save Other > Save Registration Parameters, plus a little spreadsheet processing. The feature generates a tab-separated file of alignment values that can be read directly into a spreadsheet program such as Excel.
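A quick sketch of the spreadsheet-free route, with a caveat: I have not verified the exact column layout that Save Registration Parameters writes, only that it is a tab-separated file of per-frame alignment values. The column names below ("frame", "scale") are placeholders; adjust them to match the real header:

```python
import csv
import io

# Parse a tab-separated alignment file and compute the step-to-step scale
# change between consecutive frames.  Column names are ASSUMED, not taken
# from Zerene Stacker documentation.

def scale_changes(tsv_text):
    """Return (per-frame scale values, step-to-step differences)."""
    rows = list(csv.DictReader(io.StringIO(tsv_text), delimiter="\t"))
    scales = [float(r["scale"]) for r in rows]
    steps = [b - a for a, b in zip(scales, scales[1:])]
    return scales, steps

# Tiny made-up example file
sample = "frame\tscale\n1\t1.0000\n2\t1.0004\n3\t1.0008\n"
scales, steps = scale_changes(sample)   # steps ~ [0.0004, 0.0004]
```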

--Rik

RDolz
Posts: 96
Joined: Mon Aug 28, 2017 9:32 am
Location: Valencia (Spain)

Post by RDolz »

Hello, I was finally able to identify the species. It is not a bryozoan, it is a coral:

the Organ pipe coral (Tubipora musica) is an alcyonarian coral native to the waters of the Indian Ocean and the central and western regions of the Pacific Ocean.

Here are some links ...

https://es.wikipedia.org/wiki/Tubipora_musica
https://www.livingoceansfoundation.org/ ... ica-coral/

As you can read: "The image of the red skeleton is better known than that of the animal itself; this is because the skeleton is prized in jewelry, which, together with its use in oriental medicine to treat dysentery, is placing this species in a near-threatened situation."

The Tubipora genus is on the IUCN Red List for Endangered Species as Near Threatened (NT).

All the best,
Ramón Dolz
