Macro to infinity spherical|orthogonic|lenticular pano head


rjlittlefield
Site Admin
Posts: 23597
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Re: Macro to infinity spherical|orthogonic|lenticular pano head

Post by rjlittlefield »

Joseph S. Wisniewski wrote:It's also effective if you work in very narrow slices, so the "echos" as Rik calls them from the horizontal displacement aren't as severe. I do architectural orthographics, and telecentric lenses are not effective at architectural scales.

Then again, to do this with focus stacking would require small slices of depth and small slices horizontally, so you're looking at a very complex procedure. Square the number of slices you normally take...
Good points.

For focus stacking you'd still need to rescale in the cross-scan direction to avoid echos.

I gather that your architectural images are orthographic in width, but ordinary perspective in height. Or do you scan on both axes?

--Rik

Joseph S. Wisniewski
Posts: 128
Joined: Fri Aug 15, 2008 1:53 pm
Location: Detroit, Michigan

Re: Macro to infinity spherical|orthogonic|lenticular pano head

Post by Joseph S. Wisniewski »

rjlittlefield wrote: For focus stacking you'd still need to rescale in the cross-scan direction to avoid echos.
Unfortunately. The way around that is a volumetric stacking and panning algorithm, but right now we're a good distance away from seeing those. I've played with small ones, but building a 3D "volume" of pixels requires incredible amounts of storage. On the order of 100-1000 times what it takes to do a stack the way we do now.
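[A rough order-of-magnitude check of that storage claim. This sketch assumes, hypothetically, that a conventional stacker needs to hold only about two full-frame buffers (the current slice plus the running composite), while a volumetric approach must hold the entire W x H x D voxel block; the resolution and depth figures are purely illustrative.]

```python
# Back-of-envelope storage comparison: conventional stacking vs. a
# volumetric "3D volume of pixels". All numbers are illustrative.

W, H = 4000, 3000      # sensor resolution, pixels
bytes_px = 6           # 16-bit RGB, bytes per pixel

# Assumed working set of a conventional stacker: two full frames.
stack_now = 2 * W * H * bytes_px

# A voxel volume must hold every depth layer at once.
for depth_layers in (200, 2000):
    volume = W * H * depth_layers * bytes_px
    ratio = volume // stack_now   # 100x at 200 layers, 1000x at 2000
```

With a few hundred to a few thousand depth layers, the ratio lands in the 100-1000x range Joseph cites.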
rjlittlefield wrote: I gather that your architectural images are orthographic in width, but ordinary perspective in height. Or do you scan on both axes?
Just the horizontal. Put up a tripod with a laser and follow the beam. Roofs generally don't show in the ortho, protrusions from building fronts are minimal, and the vertical errors are pretty much limited to "incorrect" perspectives in views through windows, and few notice those.

elf
Posts: 1416
Joined: Sun Nov 18, 2007 12:10 pm

Re: Macro to infinity spherical|orthogonic|lenticular pano head

Post by elf »

Joseph S. Wisniewski wrote: OK, say we're using the 63mm Luminar, on a bellows, with your four thirds camera. And we're stalking a small flower, approximately 40mm high and wide, so we need about 0.25x magnification, and start out focused at the front of the flower, 78.75mm from the subject to the front node of the lens, 315mm from the rear node to the sensor. Our flower is roughly 20mm deep. Now, to move the subject focal plane that first mm (from 78.75mm to 79.75mm) we move the rear standard forward 15.04mm, to get a rear node to sensor distance of 299.96mm.

The next slice, 80.75mm, comes a little cheaper, we only move the rear standard 13.3mm to get a 1mm front focal plane shift. By the time we reach the last slice, 97.75mm from the focal plane, we're only moving the rear standard 3.4mm to get each 1mm of front shift. So if you want uniform slices through the subject, you have to move the rear standard in a highly non-linear fashion.

We also note that the magnification has increased from 0.25x to 0.55x, so we're making an ugly crop. Which means that we should have started this sequence of computations from the back, and we have to use a different lens, because the luminar won't let us move the front focus plane 20mm forward when we start at 78.75mm. There's only 15.75mm of motion before the subject is one focal length from the front node and magnification goes infinite.
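[Joseph's rear-standard shifts can be reproduced from the thin-lens conjugate equation, 1/f = 1/d_obj + 1/d_img, with f = 63 mm. A short sketch using only the distances quoted above:]

```python
def image_distance(f, d_obj):
    """Thin lens: 1/f = 1/d_obj + 1/d_img, solved for the bellows draw d_img."""
    return 1.0 / (1.0 / f - 1.0 / d_obj)

f = 63.0  # Luminar focal length, mm

# Rear-standard shift needed for each 1 mm step of the subject plane:
first = image_distance(f, 78.75) - image_distance(f, 79.75)   # ~15.04 mm
second = image_distance(f, 79.75) - image_distance(f, 80.75)  # ~13.3 mm
last = image_distance(f, 96.75) - image_distance(f, 97.75)    # ~3.4 mm

# Magnification m = d_img / d_obj at the start of the sweep:
m_start = image_distance(f, 78.75) / 78.75                    # 4.0
```

Note that with these conjugates the magnification comes out to 4.0 rather than 0.25, which is the swap Rik identifies later in the thread.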
I'm still not sure what you mean by "ugly crop". I can see where the total depth of field available on lenses that don't focus to infinity may be smaller than the subject, but this should just be an artistic challenge :)

I'm still experimenting with the workflow and here's what I'm currently doing:
1. Decide on the maximum magnification.
2. Set the entrance pupil to the rotation point.
3. Focus on the closest object and note the bellows draw position.
4. Focus on the furthest object.
5. Determine the number of images in the stack. Part of this is to decide whether to shoot the entire stack at the increment of the maximum magnification or vary it to keep the number of images to a minimum.
6. Shoot the first frame from back to front so the closest focus is the last shot.
7. Rotate the head to the next position allowing a 20% overlap. Note the amount of rotation.
8. Shoot the stack.
9. Reset bellows to position noted in step 3.
10. Rotate the head using the same amount of rotation as step 7.
11. Repeat 8 through 10 until done.
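[The capture loop in steps 6-11 might be sketched like this. Note that set_bellows(), rotate_head(), and shoot() are hypothetical placeholders standing in for manual operations, not a real camera API, and the evenly spaced bellows draws are a simplification of step 5.]

```python
def set_bellows(draw_mm):      # placeholder: move the rear standard
    pass

def rotate_head(deg):          # placeholder: rotate the pano head
    pass

def shoot(position, draw_mm):  # placeholder: trip the shutter
    return (position, draw_mm)

def shoot_pano_stack(near_draw, far_draw, n_slices, n_positions, step_deg):
    frames = []
    # Bellows draws from far focus to near focus, so each stack is shot
    # back to front and the closest focus is the last shot (step 6).
    draws = [far_draw + (near_draw - far_draw) * i / (n_slices - 1)
             for i in range(n_slices)]
    for pos in range(n_positions):
        for d in draws:
            set_bellows(d)
            frames.append(shoot(pos, d))
        set_bellows(near_draw)   # step 9: reset to the draw noted in step 3
        rotate_head(step_deg)    # step 10: same rotation as step 7
    return frames
```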

Charles Krebs
Posts: 5865
Joined: Tue Aug 01, 2006 8:02 pm
Location: Issaquah, WA USA
Contact:

Post by Charles Krebs »

elf,

... you have become the polar opposite of "point-and-shoot"... :wink:

rjlittlefield
Site Admin
Posts: 23597
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

Let me see if I can talk through this so it makes sense. There are some errors and hidden assumptions in what's been written above. I'll try to ferret those out and reconcile the various arguments.

First, let's take as given that the goal is to do a perfect stack-and-stitch. By "perfect", I mean:
1. no parallax between tiles in the stitch, and
2. everything sharply focused between the front and rear focus limits of each stack.

To accomplish the first goal, what's needed is to keep the entrance pupil at the same place in every stack. elf's strategy does this by rotating the camera+lens assembly around the entrance pupil. With his optics, that's probably inside the lens, a bit in front of the front bellows mount.

To accomplish the second goal, with an arbitrary subject, what's needed is to limit the angle of view of the stacked result to be that of the narrowest frame.

Let's pause a moment to consider those words "narrowest frame".

In elf's setup, the angle of view changes with focus. It is narrower when the sensor is far away from the lens, and wider when the sensor is close to the lens.

This variation in angle of view presents a dilemma.

To be sure of getting everything in focus, we have to limit our attention to just what's contained in the narrowest angle of view. But that means throwing away pixels in the other frames.

It would be nice at this point to have a specific example. My first attempt was to start with Joseph's numbers. That made a compelling case, but I gradually realized that those numbers are wrong. Joseph starts from needing 0.25 magnification to shoot a particular flower. That part is OK, but from there his example has its conjugate distances swapped. Placing the lens 78.75 mm from the subject and 315 mm from the sensor results in a magnification of 4.0, not 0.25. The proper relationship for magnification 0.25 would be to place the lens 315 mm from the subject, and 78.75 mm from the sensor.

Starting from the corrected relationship, the analysis goes like this. At close focus, 315 mm to the subject and 78.75 mm to the sensor, the sensor will see an angle of view that is about 13.04 degrees wide. At far focus, 335 mm to the subject and 77.6 mm to the sensor, the angle of view would indeed be larger, but not much -- only about 13.23 degrees. Even if we push far focus clear out to infinity, with the lens 63 mm from the sensor, the angle of view increases only to 16.26 degrees. Going from 16.26 down to 13.04 is not what I would consider to be an "ugly crop".

The situation gets worse at higher magnifications, though. Working from Joseph's original numbers at 4X, the angle of view would vary from 3.3 degrees at close focus to about 5.9 degrees at 20 mm farther back. That's quite a bit of crop.
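[Rik's angle-of-view figures follow from the thin-lens equation together with the rectilinear relation AoV = 2 * atan(w / (2 * d_img)), assuming a Four Thirds sensor 18 mm wide:]

```python
import math

def image_distance(f, d_obj):
    # Thin lens: 1/f = 1/d_obj + 1/d_img.
    return 1.0 / (1.0 / f - 1.0 / d_obj)

def aov_deg(sensor_w, d_img):
    # Horizontal angle of view with the sensor d_img behind the rear node.
    return math.degrees(2 * math.atan(sensor_w / (2 * d_img)))

f, w = 63.0, 18.0   # Luminar focal length; Four Thirds sensor width, mm

aov_close = aov_deg(w, 78.75)                     # ~13.04 deg (0.25x, close focus)
aov_far = aov_deg(w, image_distance(f, 335.0))    # ~13.23 deg (far focus)
aov_inf = aov_deg(w, f)                           # ~16.26 deg (focus at infinity)
aov_4x = aov_deg(w, 315.0)                        # ~3.3 deg (4x, close focus)
aov_4x_far = aov_deg(w, image_distance(f, 98.75)) # ~5.9 deg (20 mm farther back)
```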

On the other hand, at 4X onto a four-thirds sensor, the foreground field would be only 4.5 mm wide. Stacking a subject 4.5 mm wide and 20 mm deep would be pretty aggressive in anybody's book. I wouldn't be surprised if other issues proved to be more troublesome than variations in the angle of view.

elf makes a point of shooting "from back to front so the closest focus is the last shot". I think there's a hidden assumption in there that stacking software will use the first frame as a reference for alignment. That's actually adjustable in most packages. Starting with the narrowest frame (closest focus) makes every frame completely cover the stacked result. This avoids artifacts but throws away pixels on the periphery of background frames. Starting with the widest frame (farthest focus) saves all the pixels, but is likely to give artifacts due to frame edges and can only focus foreground parts of the subject in the central portion of the stack.

Bottom line is that I think elf's strategy is not only feasible, but may be a preferred approach at low magnifications where telecentric optics are challenging.

--Rik
