Easily transportable self-contained stacking setup

Have questions about the equipment used for macro- or micro- photography? Post those questions in this forum.

Moderators: Chris S., Pau, Beatsy, rjlittlefield, ChrisR

Bob^3
Posts: 287
Joined: Sun Jan 17, 2010 1:12 pm
Location: Orange County, California

Post by Bob^3 »

OK Rik, I’m going to be a bit obnoxious here, so please don’t hold it against me.

Back in May I made some suggestions and asked a (somewhat OT) question to which you never responded. I’ve found this either means you missed it, “don’t know” (rare case!), it’s a nonsense idea, or it’s already in the works and you don’t wish to discuss it yet. This thread seems the appropriate place to bring up the topic again. Here’s the thread and my comment:

http://www.photomacrography.net/forum/v ... n&start=15
Bob^3 wrote:
rjlittlefield wrote: Yes, this is the classic "transparent foreground" effect. What happens is that when the lens is focused on background, it sees around small foreground objects. As a result both the background and foreground are seen clearly, in different frames. Stacking software does not understand about connected opaque structures, so it just shows whatever is the strongest detail at each pixel position. With large bristles, often there is not much detail except at the edge of the bristle, so the background "shows through".
While reading this thread I had some interesting ideas I’d like to throw out to see if they stick. What if ZS knew the actual physical focus plane on which the lens was focused at time of capture? This data could be provided during the stacking capture process by a number of methods---a linear or rotational sensor on the focus rail, counting the number of steps of a stepper motor drive, etc. If ZS had this data, could it be used to enhance the stacking algorithm resulting in a better final image?

Actually, I’ll admit this is also intended as a gentle nudge to ask for a feature I would love to see added to ZS---the ability to control an automated rail! If valid, the above concept would work much better if ZS controlled the rail position and camera shutter during the capture sequence. It could then collect the true depth data for each frame and incorporate it in the process of assembling the image stack?!


I’m quite pleased to see that you’re finally following my advice and implementing the ability (through StackShot) to capture automated stacking sequences in ZS. :wink:

But how about the other concept of using true depth data from the StackShot controller to enhance the stacking results? Does this concept simply peel off my proverbial “idea wall” and go splat on the ground?

I’ll interpret a “no response” to indicate that we can expect this revolutionary new feature to be announced in about six months! :D (Can I expect part of the profits?)
Bob in Orange County, CA

rjlittlefield
Site Admin
Posts: 24434
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

Bob, thanks for the link to that earlier thread. I apologize for not responding then. That was a very active thread, and I think those particular questions/suggestions got lost in the crowd of others.

The answer is that I don't know how to exploit physical position of the focus plane to improve focus stacking. That information has been available for almost every stack I've ever shot, stretching back now over 10 years, so if I did know how to use it, then most likely it would already be used.

The only thing that changes by adding the StackShot is that the information would be entered to control acquisition and could then be automatically transferred to the stacking algorithm. With a hand-driven rail it would just be entered into the stacking algorithm after acquisition -- certainly no great challenge for the typical case of evenly spaced frames. Either way, I don't know how to use it in the stacking algorithm.

--Rik

Bob^3
Posts: 287
Joined: Sun Jan 17, 2010 1:12 pm
Location: Orange County, California

Post by Bob^3 »

Rik, absolutely no apologies required. I never cease to be amazed at the number and detail of replies you provide nearly every day. I think you’re allowed to miss one every once in a while.
That information has been available for almost every stack I've ever shot, stretching back now over 10 years, so if I did know how to use it, then most likely it would already be used.
That’s true. Knowing the stack depth and step size would give this data. It would just need to be entered manually.
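For the evenly spaced case, this manual entry really is trivial: each frame's focus position follows directly from the start position and the step size. A minimal sketch (function and parameter names are my own, purely illustrative):

```python
# Hypothetical sketch: recovering the focus-plane position of each frame
# in an evenly spaced stack from the start position and step size.
# All names and values here are illustrative, not from any real ZS API.

def frame_depths(start_mm, step_mm, n_frames):
    """Return the rail position (mm) at which each frame was captured."""
    return [start_mm + i * step_mm for i in range(n_frames)]

# Example: a 5 mm deep stack shot in 0.05 mm steps (101 frames).
depths = frame_depths(start_mm=0.0, step_mm=0.05, n_frames=101)
```

This is exactly the data a StackShot controller would already have during automated capture, so transferring it automatically versus typing in two numbers afterward makes little practical difference.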

I realize that the possible validity of this concept and the reality of its implementation are directly dependent on the specific algorithms used to fuse the image stack, which I reviewed some time ago and admit I don’t fully understand. However, based solely on intuition, it just seems to me that in cases like the overlapping structures in the linked thread, the depth data could be used to help the algorithm differentiate between foreground and background elements.

Of course, I also fully realize that mathematical functions don’t always yield to intuition. :?
Bob in Orange County, CA

ChrisLilley
Posts: 674
Joined: Sat May 01, 2010 6:12 am
Location: Nice, France (I'm British)

Post by ChrisLilley »

rjlittlefield wrote: The answer is that I don't know how to exploit physical position of the focus plane to improve focus stacking. That information has been available for almost every stack I've ever shot, stretching back now over 10 years, so if I did know how to use it, then most likely it would already be used.
Also, the software does not know that the zone of sharp focus is centered on a plane.

Bob^3
Posts: 287
Joined: Sun Jan 17, 2010 1:12 pm
Location: Orange County, California

Post by Bob^3 »

ChrisLilley wrote:Also, the software does not know that the zone of sharp focus is centered on a plane.
No, but if the software could incorporate depth data it might be better able to differentiate foreground and background elements. As I understand it, the algorithms I've seen use only the image data in the stack and attempt to determine the areas of best sharpness in each frame (based on contrast), which are then fused into the final image. I do not know how the original algorithms might have been modified or exploited in ZS to lead to its improved results compared to other stacking software, but my guess is that the approach is similar.

So my question might have been better phrased as: Would it be possible for the base algorithms in ZS to be modified or adapted to accept the known relative depth data (in a given region of the image) for each focus slice? And could that data then be used to give added weight to sharp foreground features in each frame to improve the results for overlapping structures? Obviously, in the case of the overlapping structures in the link, this can be accomplished by the operator using the retouching tools in ZS. The operator knows that the bristles are contiguous, solid, foreground structures and should not be rendered transparent. As I understand it, the software just looks for the highest contrast in the stack, regardless of depth, and uses that area for the final image.
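To make the point concrete, here is a minimal, hypothetical sketch of the winner-take-all, contrast-based fusion described above. This is not ZS's actual algorithm; the absolute-Laplacian sharpness measure is just one common choice, and the edge handling (wraparound) is deliberately crude:

```python
# Illustrative sketch of contrast-based focus fusion, NOT ZS's algorithm.
# For each pixel, keep the value from the frame with the highest local
# contrast, estimated by an absolute discrete Laplacian.
import numpy as np

def laplacian_contrast(img):
    """Crude local-contrast map: absolute discrete Laplacian.

    np.roll wraps at the image edges; a real implementation would pad.
    """
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return np.abs(lap)

def fuse_stack(frames):
    """Per-pixel winner-take-all fusion over a list of 2-D grayscale frames."""
    stack = np.stack(frames)                           # shape (n, h, w)
    sharpness = np.stack([laplacian_contrast(f) for f in frames])
    best = np.argmax(sharpness, axis=0)                # sharpest frame per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

The depth-aware variant being suggested would, in effect, bias `sharpness` toward nearer frames before the argmax, so that a sharp foreground bristle wins even where the background also shows strong detail.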

If the entire concept on which the existing algorithms are based fundamentally precludes this possibility, perhaps new algorithms that can use such data will someday be invented?

I don't think Rik is saying it is not possible.
Bob in Orange County, CA

yardman
Posts: 26
Joined: Sun Apr 03, 2011 1:25 am
Location: New Zealand

Easily Transportable Self-contained Stacking Set-up.

Post by yardman »

Rik, I appreciate the opportunity of viewing your set-up. I am particularly keen to get a closer look at the specimen positioning gear that you have mounted on the Adorama 2-axis rail. It would be really helpful if I could see a closer shot of this part of your set-up.

rjlittlefield
Site Admin
Posts: 24434
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Re: Easily Transportable Self-contained Stacking Set-up.

Post by rjlittlefield »

yardman wrote:I am particularly keen to get a closer look at the specimen positioning gear that you have mounted on the Adorama 2-axis rail. It would be really helpful if I could see a closer shot of this part of your set-up.
There's not much to see, really. A side view is shown at http://www.photomacrography.net/forum/v ... php?t=4502. It's basically just a hunk of sheet metal that's been tapped 1/4x20 to fit a small ballhead (Giottos MH-1004) that's screwed onto the top of the Adorama rail, with a chunk of balsa then glued onto the sheet metal. It provides a pinnable surface that can be tipped over a pretty wide angle, courtesy of the ballhead. Compared to more elegant setups, the disadvantage is that when I tip the subject it moves off center, where a goniometer would keep it centered, or at least much more so.

--Rik

John (lostincoins)
Posts: 9
Joined: Sun Mar 15, 2015 10:18 am

Post by John (lostincoins) »

First off I would like to say hello as I am new to the forum. My name is John and I am an amateur.

I have a question about this stacking setup: will it be usable or adaptable to the new wireless systems coming out from Sony, Canon, and Nikon? I ask because of my interest in the new Canon T6s (24 MP), which will be WiFi capable and also NFC enabled. I can see a wireless system being more usable in the field and while doing demonstrations. Thank you for your time.
Enjoying life one day at a time, sometimes just for a minute.
An image captured by the eye may be fleeting but can be held by the heart forever.

rjlittlefield
Site Admin
Posts: 24434
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

John (lostincoins) wrote:First off I would like to say hello as I am new to the forum. My name is John and I am an amateur.
Welcome aboard!
I have a question about this stacking setup: will it be usable or adaptable to the new wireless systems coming out from Sony, Canon, and Nikon? I ask because of my interest in the new Canon T6s (24 MP), which will be WiFi capable and also NFC enabled.
Adaptable, sure, but it's important to think through the whole process to understand what the improvements might be.

The setup shown in this thread relies on the StackShot rail for focus stepping. It uses wired communication for the following major functions:
  • stepper motor control from StackShot controller to rail
  • USB command transfer from laptop to StackShot controller
  • shutter activation from StackShot controller to camera
  • USB image transfer from camera to computer.
In addition there are three power cables:
  • laptop
  • StackShot controller
  • camera
To get a big advantage from wireless, it's probably important to get rid of all the wires. Swapping a wireless camera into this particular setup would only get rid of one: the USB cable from camera to computer.

To get rid of all the wires, what you'd do is switch strategies entirely and go to a system that focuses by AF motor control inside the camera lens. An example is something similar to the CamRanger-based setup shown HERE, which is part of what I used last summer for demonstration at a similar conference. That approach introduces limitations on stack depth, maximum magnification, and the types of optics that can be used, so it's a game of tradeoffs.

--Rik
