Comparison of synthetic stereo versus true stereo

rjlittlefield
Site Admin
Posts: 23588
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

I think it's well known that Zerene Stacker can generate synthetic off-axis views, suitable for making synthetic stereo pairs or rocking animations.

But there may be nagging doubts about how "accurate" those synthetic views are. I mean, I wrote the software, and even to me it seems like some sort of magic[*] to get really good off-axis views from a single stack.

The purpose of this thread is to help lay those doubts to rest.

Let's start with some imagery.

The following are stereo pairs, in crossed-eye format, showing a small highly textured sphere (about 4 mm diameter). The first pair is synthetic stereo, constructed from a single stack. The second pair is true stereo, shot as two stacks from two different camera positions.

[Image: synthetic stereo pair, crossed-eye format, constructed from a single stack]

[Image: true stereo pair, crossed-eye format, shot as two stacks from two camera positions]

If you're having trouble seeing the difference between those images, that's because there isn't much to see.

Theory predicts that the true and synthetic images should have the same overall geometry, except that the synthetic images should be a hair wider because the synthetic camera is not "toed in". That difference is about 0.3% in this case. (The stretching is 1-cos(viewAngle), with viewAngle about 4.4 degrees in this case.)
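If you want to check that number, here is the arithmetic as a minimal Python sketch (the variable name is mine):

```python
import math

# Stretch of the synthetic view relative to a toed-in true view.
view_angle_deg = 4.4
stretch = 1 - math.cos(math.radians(view_angle_deg))
print(f"{stretch:.2%}")  # -> 0.29%, i.e. roughly 0.3%
```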

Here is a flash-to-compare rendering of the true versus synthetic left-eye images. Pretty close to theory, I think. On the right side, the synthetic image focuses a bit farther back because of the different tilt of the focus plane. On the left side, it's the reverse. Out-of-focus (OOF) blobs around the edges shift left and right because in the synthetic stereo they get mapped to the plane of the last frame, halfway through the sphere, whereas the true off-axis view shows them in their proper positions.

[Animation: flash-to-compare of the true versus synthetic left-eye images]

Now, here is some more information about how the above images were made.

The sphere itself is made from a steel ball, lightly coated in sticky grease, rolled in some finely ground salt and pepper, stuck to a pair of small magnets for handling, and blown off with a CO2 duster. It was a lot of prep work, but it gave me a finely textured small object with known geometry.

[Image: the prepared sphere, coated with ground salt and pepper, on its handling magnets]

The sphere and its associated magnets were mounted in my horizontal stacking setup. (Lights and diffuser removed here so you can see the subject.)

[Image: the sphere and magnets mounted in the horizontal stacking setup]

Then I shot three stacks: one straight on the long axis of the baseboard, plus two others angled off axis by 1 hole in 13.

Here's the setup for shooting the right-eye view.

[Image: the setup angled off axis for shooting the right-eye view]

It's simple enough to compute the off-axis angle in this case. That's just atan(1/13) = 4.3987 degrees.

The question then is: what is the corresponding X shift?

To get that number, we need to know the stack width and depth.

We could calculate the stack width by observing that this is a Mitutoyo 5X objective on a 100 mm tube lens, giving a nominal 2.5X. The camera's sensor is listed as 22.3 mm wide, so the field width should be 22.3/2.5 = 8.92 mm.

Or we could just measure it by shooting a stage micrometer and counting pixels. When I did that, I got a value of 8.706 mm. That's pretty close to nominal, but I'd rather avoid that 2.5% error, so I used the measured value.

Now for stack depth. I shot 101 frames (100 steps) with a spacing of 0.020 mm, so nominally 2.0 mm. But that's according to the tick marks on the fine focus knob of that Olympus CHT focus block. Turns out that thing is not quite right either. Earlier I measured the focus block and found that it's off by a ratio of 25.9/25.0. That turns my nominal 2.0 mm into an actual 2.072 mm.

OK, so now we know actual frame width and depth: 8.706 mm and 2.072 mm.

From there, it's just a matter of plugging into the formula given at http://www.zerenesystems.com/cms/stacke ... wing_angle :

maximumShiftInX = tan(viewAngle) * (subjectStackDepth/subjectStackWidth) * 100%

Excel can quickly compute this as =TAN(RADIANS(4.3987)) * (2.072/8.706) * 100 = 1.8307.
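For anyone who prefers a script, here is the whole calculation as a short Python sketch (the variable names are mine, not anything from Zerene Stacker):

```python
import math

# Reproducing the shift calculation described above.
hole_ratio  = 1 / 13                 # camera offset: 1 hole in 13
view_angle  = math.atan(hole_ratio)  # radians; about 4.3987 degrees
stack_width = 8.706                  # mm, measured with a stage micrometer
stack_depth = 2.0 * (25.9 / 25.0)    # mm, 100 steps of nominal 0.020 mm,
                                     # corrected for the focus block error

shift_pct = math.tan(view_angle) * (stack_depth / stack_width) * 100
print(f"view angle:  {math.degrees(view_angle):.4f} degrees")  # 4.3987
print(f"stack depth: {stack_depth:.3f} mm")                    # 2.072
print(f"max shift in X: {shift_pct:.4f} %")                    # 1.8307
```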

I rounded that to 1.831% for purposes of stacking:

[Image: Zerene Stacker stereo settings, with the shift set to 1.831%]

And that's where the images above came from.

By the way, I know I'm a perfectionist, but I do care about getting things right. If I plug in the nominal width and depth instead of the measured values, then the calculation ends up saying to shift by 1.725%. That gives a comparison that is close, but not quite close enough to make me confident that I understand correctly.
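That nominal-value version, in the same form as the sketch above:

```python
import math

# Same formula with the nominal (uncorrected) width and depth.
nominal_pct = math.tan(math.atan(1 / 13)) * (2.0 / 8.92) * 100
print(f"{nominal_pct:.3f} %")  # -> 1.725
```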

[Image: the comparison using the nominal-value shift of 1.725%]

I will be happy to answer questions, of course.

--Rik

[*] "Any sufficiently advanced technology is indistinguishable from magic." (Arthur C. Clarke, 1973)

ChrisR
Site Admin
Posts: 8671
Joined: Sat Mar 14, 2009 3:58 am
Location: Near London, UK

Post by ChrisR »

I find the dome of the central half of the circle more convincing in the True image.
The tests in the "what's a good separation" thread showed that part to be more convincing with more separation, but that would be expected because it's fairly flat.
My initial impression with the synthetic pic above was of a flat or slightly concave middle.

The shape, from the pictures with the macro lens (other thread), is far more convincing.
Is this because the entrance pupil of the objective is too far away, and could it be changed by forcing the objective to focus nearer, on the image side? Perhaps by shortening the distance from the tube lens to the sensor?
Chris R

JH
Posts: 1307
Joined: Sat Mar 09, 2013 9:46 am
Location: Vallentuna, Stockholm, Sweden
Contact:

Post by JH »

Thanks!
Best regards Jörgen
Jörgen Hellberg, my website www.hellberg.photo

rjlittlefield
Site Admin
Posts: 23588
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

ChrisR wrote:I find the dome of the central half of the circle more convincing in the True image.
Interesting!

I have tried but failed to reproduce that impression in my own head, so I can only guess about what might be causing it in yours.

In case you want to investigate this yourself, I have uploaded full-resolution versions that you can feed into StereoPhoto Maker. They are here:
http://janrik.net/MiscSubj/2017/StereoS ... ullRes.jpg
http://janrik.net/MiscSubj/2017/StereoS ... ullRes.jpg

Now, about my guesses...

One possibility is that you're reacting to some subtle difference in the illumination. These were shot with highly diffused continuous illumination (three Jansjö lamps, one each at 12, 4, and 8 o'clock, diffused through a white foam cup, all of this fixed with respect to the subject), manual exposure, processed using DMap with no brightness correction. That makes the images as constant as I can get them. Nonetheless, I see in flash-to-compare of the two stereo views shown above that there are systematic differences in brightness between the True and Synthetic pairs. I assume that has something to do with light bouncing off crystalline surfaces at different angles, oblique versus head-on.

Another difference that I see in flash-to-compare of the stereo views is that the slight narrowing of the True view turns into a corresponding slight increase in perceived curvature. It's a small amount, someplace around 0.3%, certainly detectable in flash-to-compare though I can't see it if I have to move my eyes.

A third possibility is that the effect has something to do with depth quantization.

The argument in favor of depth quantization being an issue is that I shot only 100 frames, so at first blush that's how many depth planes there are in the Synthetic view. The central region of the sphere uses only a fraction of those. The fraction is 1-cos(angleOffCenter), which means that the center third of the sphere face uses only the first 5 or 6 depth planes. So, at least with some handwaving, the apparent depth of the center part of the sphere might end up varying by 10% or so depending on how the physical face lined up with the focus planes.
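Here is that back-of-envelope argument as a Python sketch (my own handwaving in code form, not anything from Zerene Stacker):

```python
import math

# The center third of the visible face corresponds to sin(theta) = 1/3
# off the optical axis; the sag (depth used) at that angle is
# r * (1 - cos(theta)).
r     = 2.0               # sphere radius, mm (4 mm diameter)
theta = math.asin(1 / 3)  # about 19.5 degrees
sag   = r * (1 - math.cos(theta))
step  = 0.02072           # actual step size, mm
print(f"sag: {sag:.3f} mm, about {sag / step:.1f} depth planes")  # ~5.5
```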

It's not that simple, though, because with a highly textured subject like this, DMap ends up constructing a depth map that moves pretty smoothly between levels, and then pixel values get computed using a weighted average of images with different shifts. The final result ends up sort of "simulating" intermediate depths also.
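A toy illustration of that "simulating" effect (my own sketch, not Zerene Stacker's actual code): a 50/50 weighted average of two integer-pixel shifts reproduces a linearly interpolated half-pixel shift exactly.

```python
import numpy as np

# Blend of a 2 px shift and a 3 px shift versus a true 2.5 px
# fractional shift done by linear interpolation.
x = np.arange(200.0)
signal = np.sin(x / 8.0)  # stand-in for one image row
blend = 0.5 * np.roll(signal, 2) + 0.5 * np.roll(signal, 3)
ideal = np.interp(x - 2.5, x, signal)
err = np.max(np.abs(blend[10:-10] - ideal[10:-10]))  # skip wrap-around edges
print(f"max difference away from the edges: {err:.1e}")  # ~1e-16
```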

As an experimental test, it's simple enough to zoom in, using StereoPhoto Maker, to see that region in more detail.

Here are screen captures of the True and Synthetic versions, zoomed in to about 66% of actual pixels.

[Image: True version, zoomed to about 66% of actual pixels]
[Image: Synthetic version, zoomed to about 66% of actual pixels]

The region shown in these images is just above and to the right of sphere center.

This area is far in front of the screen in the stereo files, so to get a tolerable depth position when zoomed in, I ended up having to realign the images in SPM by pressing the left-arrow key 40 times. I notice in flash-to-compare of these two screen captures that they don't quite match in overall depth position. That could be a slight difference in alignment in the image files, or maybe I got off by one in the keypresses. In any case I don't think the difference in overall depth position matters much.

What strikes me in these two zoomed-in stereo views is how similar they are. Looking at just the lower left quarter of the region, and checking against the original source images, I can see that only 7 source images contain sharp detail in that quarter. Nonetheless, to my eyes the fused stereo pairs are almost identical, except that the True version is slightly more crisp. (I expect the increased crispness is because in each True view, all the images were naturally aligned for stacking, rather than being shifted and averaged as in the Synthetic.)

Having looked at the images this way, I'm doubtful that depth quantization is a significant factor in your perception.

You mentioned:
The shape, from the pictures with the macro lens (other thread), is far more convincing.
Is this because the entrance pupil of the objective is too far away, and could it be changed by forcing the objective to focus nearer, on the image side? Perhaps by shortening the distance from the tube lens to the sensor?
I assume you mean "convincing as a sphere". I suspect that has something to do with surface texture. In the other thread, the dark particles that provide texture were flat flakes that look quite different when they're seen face on versus side view. In the smaller sphere shown in this thread, the texture particles are more symmetric, like sand, so there's a lot less difference in appearance from center to edge.

I'm inclined to think that perspective foreshortening is not a big factor in your perception. In this thread, the rendering is telecentric. The objective by itself is very close to telecentric and I turned off Scale in processing. In the other thread, there is perspective foreshortening, but it's small. I checked a stack that I shot at the same time, and there's only around 2.5% change in scale from center to edge of the visible hemisphere. That's consistent with what I know of the sphere radius (about 11 mm) and the position of the lens's entrance pupil (about 0.5 m away from the subject). Angle of view is small, definitely "telephoto perspective".
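That figure is easy to sanity-check, treating scale as proportional to 1/distance from the entrance pupil (my own back-of-envelope, using the rough numbers quoted above):

```python
# Scale change from the front of the sphere to the edge of its
# visible hemisphere, for a distant entrance pupil.
r = 11.0   # mm, sphere radius (the larger sphere in the other thread)
d = 500.0  # mm, entrance pupil to sphere center (approximate)
scale_change = d / (d - r) - 1
print(f"center-to-edge scale change: {scale_change:.1%}")  # about 2.2%
```

That lands at about 2.2%, in the same ballpark as the roughly 2.5% measured from the stack.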

As for changing the perspective by refocusing the tube lens, I don't think that's going to work. Perspective is established by the apparent position of the limiting aperture, which is physically inside the objective. That position won't be altered by refocusing the tube lens. The focus distance will be altered, but in this case the entrance pupil is so far away that the relative change caused by altering focus distance would be minuscule, far less than 0.1%, I think.

--Rik

David Sykes
Posts: 51
Joined: Mon May 02, 2016 2:04 pm
Location: North Wales,U.K.
Contact:

Post by David Sykes »

Interesting work Rik.

One hundred depth layers is good as far as avoiding 'cardboarding' is concerned.

('cardboarding': the image looks like a pop-up book with a limited number of layers)

For larger subjects I guess the number of depth layers reduces rapidly, and at some subject size you would have to resort to subject rotation instead.

At that lower magnification, even though rotation will move the subject horizontally (unless using a goniometer setup), at least there is sufficient depth of field to recognise the previous starting-point detail.

At higher magnification that is almost impossible, so that is yet another advantage of synthetic stereo.


David

MacroLab3D
Posts: 96
Joined: Tue Jan 31, 2017 11:40 am
Location: Ukraine

Post by MacroLab3D »

I did this comparison before and was totally amazed and satisfied with the result, and now you have done it in a more precise way. Thanks. And by the way, Rik, you should point out somewhere, in your avatar or signature, that you are the creator of Zerene Stacker. I only recently noticed that fact, and I still don't understand: why the conspiracy?

rjlittlefield
Site Admin
Posts: 23588
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

MacroLab3D wrote:Rik, you should point out somewhere, in your avatar or signature, that you are the creator of Zerene Stacker. I only recently noticed that fact, and I still don't understand: why the conspiracy?
No conspiracy intended. It's to avoid even the appearance of advertising. The photomacrography.net website is staunchly non-commercial. We don't allow any advertisements or things that look like them, especially repetitive references to commercial products in avatars or signatures.

In situations where I recommend Zerene Stacker over alternative packages, I usually include a disclaimer to the effect that I'm biased because I wrote that one. In other situations where it doesn't seem to matter, I don't mention it to avoid repetitive reference. This thread was strictly a technical analysis, so I wouldn't have mentioned it at all except that I wanted to reinforce the idea that the method works surprisingly well.

--Rik

Smokedaddy
Posts: 1959
Joined: Sat Oct 07, 2006 10:16 am
Location: Bigfork, Montana
Contact:

Post by Smokedaddy »

Personally, I thought the Zerene Stacker version looked slightly better, with even a touch more contrast (not that it's relevant).

-JW:
