Zhongyi (Mitakon) Super Macro Lens (1 - 5x)

Have questions about the equipment used for macro- or micro- photography? Post those questions in this forum.

Moderators: rjlittlefield, ChrisR, Chris S., Pau

mjkzz
Posts: 1693
Joined: Wed Jul 01, 2015 3:38 pm
Location: California/Shenzhen
Contact:

Post by mjkzz »

CORRECTION: in ICE, ROLL is rotation about the axis pointing into (or out of) the screen, YAW is about the vertical axis of the screen, and PITCH is about the horizontal axis of the screen.

But I think there is some randomness in ICE; this time it stitched perfectly :shock: :shock: :shock:

I guess they start from some random initial condition, an approach that is popular in vision algorithms.
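The axis convention in the correction above can be sketched with rotation matrices; a minimal numpy sketch, assuming a right-handed screen frame with x horizontal, y vertical, and z pointing into the screen (the names follow this post, not any documented ICE interface):

```python
import numpy as np

def rot(axis, deg):
    """Right-handed rotation matrix about 'x', 'y', or 'z' by deg degrees."""
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    if axis == 'x':   # PITCH: about the horizontal axis of the screen
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    if axis == 'y':   # YAW: about the vertical axis of the screen
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    # 'z' -- ROLL: about the axis pointing into (or out of) the screen
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# A 90-degree roll turns the screen-horizontal direction into screen-vertical:
v = rot('z', 90) @ np.array([1.0, 0.0, 0.0])
```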

Image

Lou Jost
Posts: 5990
Joined: Fri Sep 04, 2015 7:03 am
Location: Ecuador
Contact:

Post by Lou Jost »

It still has all the same problems I mentioned before, while the QV image has none of those problems.

Correction made after seeing your new corrected comment: Yes! That fixes it; now all the problems are gone.

mjkzz
Posts: 1693
Joined: Wed Jul 01, 2015 3:38 pm
Location: California/Shenzhen
Contact:

Post by mjkzz »

Lou Jost wrote:It still has all the same problems I mentioned before, while the QV image has none of those problems.

Correction made after seeing your new corrected comment: Yes! That fixes it; now all the problems are gone.
Thanks Lou.

I am used to treating the screen as the "ground" and "flying" north, but in the ICE world you are flying into (or out of) the screen, with the top of the screen as the sky.

Now that I think about it, everything makes sense. Imagine this: the beetle is sitting (standing?) on a flat surface that is tuned fairly well to be parallel to the sensor plane (well, just off a little, as my 10x test showed). Using the body of the beetle as reference, here is an analysis:

its left and right sides are fairly symmetrical, so we really should not expect much rotation (left or right, in or out of the screen); this is the pitch rotation. ICE reported 0 rotation.

because it is sitting still, there is really no rotation around its center (imagine a pin going through its body), so the roll rotation should be small, just as ICE reported.

now, the front part of the beetle is "taller" than its tail, so on screen this represents the yaw rotation in the ICE world, and this is the amazing part: ICE picked it up by reporting some yaw rotation!!! This also answers this:
I'm not sure why there is a global perspective adjustment, but I guess ICE makes some assumptions in order to complete the stitching.
WHY? This lens is NOT telecentric, though it is close. The parallax error, however small, is actually being picked up by ICE. Isn't that amazing!

This is why I am using stitching as a test of this lens. If the lens had some degree of distortion, it would be hard to stitch a flat subject (like a coin) with relatively straight "edges" (intentionally misaligning XY and Z). If the lens is not telecentric, ICE will report some rotation!!!
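The parallax argument above can be put in numbers; a minimal sketch comparing a pinhole (perspective) projection, where image position depends on depth, against an orthographic projection standing in for an ideal telecentric lens. All numbers are illustrative, not measurements of this lens:

```python
def perspective_x(x, z, f):
    """Pinhole projection: the image position depends on the depth z."""
    return f * x / z

def telecentric_x(x, z, m):
    """Ideal telecentric (orthographic) projection at magnification m: no z dependence."""
    return m * x

# The same off-axis point, imaged at two depths 1 mm apart (made-up numbers):
x, f, m = 5.0, 100.0, 5.0
shift_perspective = perspective_x(x, 99.0, f) - perspective_x(x, 100.0, f)
shift_telecentric = telecentric_x(x, 99.0, m) - telecentric_x(x, 100.0, m)
# The perspective image of the point shifts with depth (parallax that a stitcher
# can detect); the telecentric image of the same point does not move at all.
```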

rjlittlefield
Site Admin
Posts: 23621
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

Regarding the distortion, I notice that at the bottom of the earlier distorted one, the label says "Camera motion: rotating motion", and at the bottom of this current better one, the label says "Camera motion: planar motion".

That's a really important difference, and I'm wondering what caused it.

Did you set that, or did ICE pick it for itself? In either case, why the change?

--Rik

mjkzz
Posts: 1693
Joined: Wed Jul 01, 2015 3:38 pm
Location: California/Shenzhen
Contact:

Post by mjkzz »

rjlittlefield wrote:Regarding the distortion, I notice that at the bottom of the earlier distorted one, the label says "Camera motion: rotating motion", and at the bottom of this current better one, the label says "Camera motion: planar motion".

That's a really important difference, and I'm wondering what caused it.

Did you set that, or did ICE pick it for itself? In either case, why the change?

--Rik
Hi Rik, now that I have the pitch, yaw, and roll of the ICE world straight, this is my take on it:

With overlap, ICE is able to compute a homography between each pair of tiles.

If telecentric optics are used, then because of the insensitivity to parallax, AFTER stacking, all tiles appear to come from a planar subject (as if each tile of the beetle were printed on paper), so if perfectly stitched, ICE reports planar motion -- motion parallel to the sensor plane.

If non-telecentric optics are used, with enough overlap to compute a homography, the tiles appear to be 3D objects, and stitching them may require some rotation.

When near-telecentric optics are used, if ICE starts from random initial values and relies on iteration to converge, the result might flip between a perfect stitch (planar motion) and one that needs some transformation.

In this case, the yaw rotation is so small that I can see it flip.

If the images are stitched perfectly, i.e., planar motion, the only parameter you can change is roll, which simply rotates the image to your liking.

In the "distorted" one with pitch at -8.82: I have to admit that I am an OCD person. I wanted to show people that page, so I dragged it to the middle and, in doing so, changed the perspective :D
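The planar-motion versus rotation distinction above can be illustrated with the textbook plane-induced homography H = K (R + t n^T / d) K^-1: with no camera rotation and fronto-parallel travel, the inter-tile homography collapses to a pure pixel translation, while any rotation puts perspective terms in the bottom row. This is standard two-view geometry, not ICE's actual internals, and the numbers are made up:

```python
import numpy as np

def plane_homography(K, R, t, n, d):
    """Homography induced by the plane n.X = d between two views:
    H = K (R + t n^T / d) K^-1 (one common sign convention)."""
    return K @ (R + np.outer(t, n) / d) @ np.linalg.inv(K)

K = np.diag([1000.0, 1000.0, 1.0])     # toy intrinsics, principal point at origin
n = np.array([0.0, 0.0, 1.0])          # fronto-parallel subject plane
d = 50.0                               # plane distance

# Pure XY camera travel: H is an exact pixel translation ("planar motion").
H_planar = plane_homography(K, np.eye(3), np.array([2.0, 0.0, 0.0]), n, d)

# Add a small yaw (rotation about the vertical axis): row 3 picks up
# perspective terms, so the stitcher must warp, not just shift.
a = np.radians(1.0)
R_yaw = np.array([[np.cos(a), 0, np.sin(a)],
                  [0, 1, 0],
                  [-np.sin(a), 0, np.cos(a)]])
H_rot = plane_homography(K, R_yaw, np.array([2.0, 0.0, 0.0]), n, d)
```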

mjkzz
Posts: 1693
Joined: Wed Jul 01, 2015 3:38 pm
Location: California/Shenzhen
Contact:

Post by mjkzz »

So, as a warning to those doing stitching: if the stitch is not planar motion, do not change the pitch, yaw, or roll values, or else change them back to what they were before clicking NEXT (or moving on to CROP); those changed values will alter the final result.

Here is another test: the tallest part is about 12mm, yet with this lens I am still able to get a planar-motion stitch. So I am really happy with it in terms of stitching.

Image

Image

Image

rjlittlefield
Site Admin
Posts: 23621
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

mjkzz wrote:Here is another test: the tallest part is about 12mm, yet with this lens I am still able to get a planar-motion stitch. So I am really happy with it in terms of stitching.
Hhmm...

As I measure your image, the right side of the black area is almost 10% larger than the left side (499 pixels high on the left, versus 547 on the right).

I don't see any sharp discontinuities along the top or bottom edges, so I assume the difference in dimensions represents stretching.

Have I interpreted this correctly?

--Rik

mjkzz
Posts: 1693
Joined: Wed Jul 01, 2015 3:38 pm
Location: California/Shenzhen
Contact:

Post by mjkzz »

rjlittlefield wrote:
mjkzz wrote:Here is another test: the tallest part is about 12mm, yet with this lens I am still able to get a planar-motion stitch. So I am really happy with it in terms of stitching.
Hhmm...

As I measure your image, the right side of the black area is almost 10% larger than the left side (499 pixels high on the left, versus 547 on the right).

I don't see any sharp discontinuities along the top or bottom edges, so I assume the difference in dimensions represents stretching.

Have I interpreted this correctly?

--Rik
yes, that is exactly how I would interpret it.

ICE can compute a camera position from the overlap area. With so many "features" in these images, the estimated camera positions (distance to subject, plus pose: pitch, yaw, roll) probably differ only in distance (edit: and are accurate enough), so "stretching" can probably fix the issue (of different camera positions).

Or, to put it another way, the image planes seen by the camera (as estimated by ICE) are parallel to each other, but at different locations. edit: this of course requires the optics to be rather good, which is why I am happy with it

I did not work on ICE and do not know exactly how it works, so this is my guess (or the approach I would take).
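The idea that a pure difference in camera distance reduces to a uniform "stretch" can be sketched as a least-squares scale fit; synthetic correspondences stand in for real ICE feature matches here, and this is only an illustration of the guessed-at approach, not ICE's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
pts_a = rng.uniform(0.0, 100.0, size=(50, 2))   # features in tile A (pixels)
true_s = 1.03                                   # a 3% "stretch" between tiles
true_t = np.array([12.0, -4.0])                 # plus a shift
pts_b = true_s * pts_a + true_t                 # the same features seen in tile B

# Solve y = s*x + (tx, ty) with one shared scale s by linear least squares.
A = np.zeros((100, 3))
A[0::2, 0] = pts_a[:, 0]; A[0::2, 1] = 1.0      # x-coordinate equations
A[1::2, 0] = pts_a[:, 1]; A[1::2, 2] = 1.0      # y-coordinate equations
b = pts_b.reshape(-1)
s, tx, ty = np.linalg.lstsq(A, b, rcond=None)[0]   # recovers the 3% stretch
```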

mjkzz
Posts: 1693
Joined: Wed Jul 01, 2015 3:38 pm
Location: California/Shenzhen
Contact:

Post by mjkzz »

Furthermore, with telecentric optics all tiles would appear flat, as if printed on paper. So the camera positions computed from the overlap area probably show no change in distance and no rotations; the cameras just appear to sweep across the subject. edit: thus planar motion, which is why it is so easy to stitch.

ray_parkhurst
Posts: 3438
Joined: Sat Nov 20, 2010 10:40 am
Location: Santa Clara, CA, USA
Contact:

Post by ray_parkhurst »

I use ICE for my panoramas and I don't normally see any of this sort of stretching, but with ICE you need to be careful to specify the overlap % fairly accurately. I do remember seeing such stretching when I had the overlaps set incorrectly and the program needed to search for alignment points. By "fairly accurately" I mean within 0.5% or less. The overlap slider has 1% increments, but I have found that is insufficient for best results.
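The overlap setting Ray describes is just the fraction of the frame shared between neighboring tiles; a tiny sketch of the relation, with illustrative field and step sizes rather than a calibrated setup:

```python
def overlap_pct(step_mm, frame_mm):
    """Percent of each frame shared with its neighbor along the travel axis."""
    return 100.0 * (1.0 - step_mm / frame_mm)

frame = 7.2   # field width in mm (illustrative, e.g. a 36mm-wide sensor at 5x)
step = 4.32   # stage step between neighboring tiles, in mm
overlap = overlap_pct(step, frame)   # ~40% with these numbers

# Ray's 0.5% tolerance means the step must be known to about
# 0.005 * frame, i.e. a few hundredths of a millimeter here.
```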

rjlittlefield
Site Admin
Posts: 23621
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

I've been struggling with how to think about this thread.

My own background does include developing panorama stitching software. I've never been inside ICE. But at one time I was intimate with the geometry optimization and output projection modules of the Panorama Tools package, and I wrote mods to improve them that are still in the code.

As a result of that background, I have seen a lot of oddly transformed panoramas and I understand a lot of ways that can come about.

When working with a spherical world, some stretching is unavoidable because you simply cannot get a flat result otherwise.

But no stretching is required when working with a flat world, except for small local adjustments to correct lens geometry issues like barrel or pincushion distortion.

So, when I see globally stretched results that were shot in a rectilinear XYZ setup, my interpretation is that something has gone wrong and the appropriate course of action is to figure out how to fix it.

In contrast, mjkzz in this thread seems to be using software behavior as some sort of tool for making inferences about lens performance. I don't have any problem with that basic idea. But the connections in this thread seem to be pretty tenuous and I can't really identify a "takeaway message". Hence my struggles...

--Rik

mjkzz
Posts: 1693
Joined: Wed Jul 01, 2015 3:38 pm
Location: California/Shenzhen
Contact:

Post by mjkzz »

ray_parkhurst wrote:I use ICE for my panoramas and I don't normally see any of this sort of stretching, but with ICE you need to be careful to specify the overlap % fairly accurately. I do remember seeing such stretching when I had the overlaps set incorrectly and the program needed to search for alignment points. By "fairly accurately" I mean within 0.5% or less. The overlap slider has 1% increments, but I have found that is insufficient for best results.
if your subjects are mostly flat, you will not see any "stretching"

The images done with the QV have 20% overlap; the ones shot with the Zhongyi lens and finally stitched have 40% overlap.

I see Rik has commented, and I think his reply will answer yours, so I will avoid duplicating thoughts :D

mjkzz
Posts: 1693
Joined: Wed Jul 01, 2015 3:38 pm
Location: California/Shenzhen
Contact:

Post by mjkzz »

rjlittlefield wrote:I've been struggling with how to think about this thread.

My own background does include developing panorama stitching software. I've never been inside ICE. But at one time I was intimate with the geometry optimization and output projection modules of the Panorama Tools package, and I wrote mods to improve them that are still in the code.

As a result of that background, I have seen a lot of oddly transformed panoramas and I understand a lot of ways that can come about.

When working with a spherical world, some stretching is unavoidable because you simply cannot get a flat result otherwise.

But no stretching is required when working with a flat world, except for small local adjustments to correct lens geometry issues like barrel or pincushion distortion.

So, when I see globally stretched results that were shot in a rectilinear XYZ setup, my interpretation is that something has gone wrong and the appropriate course of action is to figure out how to fix it.

In contrast, mjkzz in this thread seems to be using software behavior as some sort of tool for making inferences about lens performance. I don't have any problem with that basic idea. But the connections in this thread seem to be pretty tenuous and I can't really identify a "takeaway message". Hence my struggles...

--Rik
I do not know how your stitching algorithm works, but here are my thoughts.

From the overlap area between tiles (you can think of them as stereo images), you can compute camera position and pose; this kind of algorithm is very robust as long as there is enough texture in the overlap area.

Once camera positions are estimated, there are algorithms to estimate the pose of a well-textured surface. Back in 2008 such algorithms were already fairly robust, and I am pretty sure they were improved further up to 2015 (when ICE development stopped). So, regarding the "rectilinear XYZ setup": this does not mean everything is rectilinear; computer vision systems can be very powerful at estimating poses from what the images present

With all of this, it is quite possible to extract 3D data and compute a 3D model. The "stretching" is how the images are fitted onto this model. So the "globally stretched" part, in my opinion, is just how the images are fitted. edit: we can think of the final stitch as such a 3D model crushed/flattened/projected onto a 2D surface, i.e., our screen

Of course, there is one big problem: these images are obtained from a single direction, so the 3D model might not be exactly what it is in the real world (as compared to the ICE world). So there can be errors in estimating the pose of a camera.

In the example of the latest beetle, there is a slight yaw rotation. My interpretation is basically that we do not have images shot from different angles to construct the 3D model fully (or accurately). The beetle is 63mm long and the tiling is 8x8, so without images from different angles, estimating 64 (or 60, as the corners were left out) camera positions accurately can be tricky. A slight difference in the height of the beetle can cause a wrong estimate. This is where I was amazed: the yaw is the rotation caused by the difference in height between the front and tail of the beetle.

In terms of inferences about the lens: well, in the ICE world, without lens calibration (or maybe ICE has it, but I did not use it), all calculations are done assuming a perfect camera system (i.e., no distortions, etc). I am sure ICE has some tolerance, but I do not think it is very forgiving, as many computer vision systems require lens calibration info to work well.
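The distortion point above can be sketched with the usual one-parameter radial model, x_d = x (1 + k1 r^2); a minimal numpy illustration of why even mild barrel distortion bends the straight edges that make a flat coin hard to stitch. The value of k1 is made up:

```python
import numpy as np

def distort(pts, k1):
    """One-parameter radial model: x_d = x * (1 + k1 * r^2), r from image center."""
    r2 = np.sum(pts**2, axis=1, keepdims=True)
    return pts * (1.0 + k1 * r2)

# Points along a perfectly straight vertical edge at x = 0.4 (normalized coords):
edge = np.column_stack([np.full(5, 0.4), np.linspace(-0.5, 0.5, 5)])
bowed = distort(edge, k1=-0.1)   # mild barrel distortion

# The edge is no longer straight: the ends are pulled inward more than the
# middle, so neighboring tiles of a flat subject disagree in their overlap.
```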

ray_parkhurst
Posts: 3438
Joined: Sat Nov 20, 2010 10:40 am
Location: Santa Clara, CA, USA
Contact:

Post by ray_parkhurst »

mjkzz wrote:I see Rik has comment and I think it will answer yours to avoid duplicate thoughts :D
Not really. Actually, his comments reinforce mine. Each tile is shot at the same magnification, so if the software is making some tiles larger than others, the output is being distorted. This should not happen, regardless of how flat the subject is.

mjkzz
Posts: 1693
Joined: Wed Jul 01, 2015 3:38 pm
Location: California/Shenzhen
Contact:

Post by mjkzz »

The very fact that, in ICE, changing the pitch, yaw, or roll values changes the final stitch, and that you can view the stitched result by dragging it around on the fly, suggests there is a 3D model inside ICE. The only problem is that this 3D model is constructed from images shot from a single direction.

Somehow I anticipated questions about this :D, so I ran an experiment, and it is now finished. The experiment was to shoot a fairly 3D object, the shell, using a "rectilinear XYZ setup" with 50% overlap between tiles, done with the Zhongyi lens at 5x: 3652 images in total with 6x4 tiling. After stacking, I fed the resulting 24 images into a photogrammetry app, Agisoft Metashape, and here is the result.

Note that even though all images were acquired with a "rectilinear XYZ setup", a 3D model was constructed. Those blue thingies are cameras; also notice that the positions and poses of the cameras (well, probably not the poses, as the image is too small) resemble the real-world configuration.

The "incompleteness" of the 3D model is probably because all images were shot from one angle; it is hard to construct a true 3D model that way.

This shows how powerful modern computer vision is!!!
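The single-direction limitation has a simple geometric reading: depth from two overlapping views goes as z = f*b/d, so a fixed feature-matching error blows up as the baseline b between views shrinks. A back-of-envelope sketch with made-up numbers:

```python
def depth_error_mm(z_mm, f_px, baseline_mm, disparity_err_px):
    """First-order depth error from z = f*b/d: |dz| = z^2 / (f*b) * |dd|."""
    return (z_mm**2 / (f_px * baseline_mm)) * disparity_err_px

# The same half-pixel matching error, subject 100 mm away, f = 5000 px:
narrow = depth_error_mm(100.0, 5000.0, 2.0, 0.5)    # tiny XY-step baseline
wide = depth_error_mm(100.0, 5000.0, 40.0, 0.5)     # a proper multi-angle baseline
# The narrow-baseline depth estimate is 20x noisier, which is one way to read
# why the single-direction 3D model comes out incomplete.
```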


Image
