Back in May I made some suggestions and asked a (somewhat OT) question to which you never responded. I’ve found this usually means you missed it, you “don’t know” (a rare case!), it’s a nonsense idea, or it’s already in the works and you don’t wish to discuss it yet. This thread seems the appropriate place to bring up the topic again. Here’s the thread and my comment:
http://www.photomacrography.net/forum/v ... n&start=15
Bob^3 wrote:
While reading this thread I had some interesting ideas I’d like to throw out to see if they stick. What if ZS knew the actual physical focus plane on which the lens was focused at the time of capture? This data could be provided during the stacking capture process by a number of methods: a linear or rotational sensor on the focus rail, counting the steps of a stepper motor drive, etc. If ZS had this data, could it be used to enhance the stacking algorithm, resulting in a better final image?

rjlittlefield wrote:
Yes, this is the classic "transparent foreground" effect. What happens is that when the lens is focused on the background, it sees around small foreground objects. As a result, both the background and the foreground are seen clearly, in different frames. Stacking software does not understand connected opaque structures, so it just shows whatever is the strongest detail at each pixel position. With large bristles, often there is not much detail except at the edge of the bristle, so the background "shows through".
Actually, I’ll admit this is also intended as a gentle nudge to ask for a feature I would love to see added to ZS: the ability to control an automated rail! If the above concept is valid, it would work much better if ZS controlled the rail position and camera shutter during the capture sequence. It could then collect the true depth data for each frame and incorporate it into the process of assembling the image stack?!
I’m quite pleased to see that you’re finally following my advice and implementing the ability (through StackShot) to capture automated stacking sequences in ZS.

But what about the other concept, using true depth data from the StackShot controller to enhance the stacking results? Does that idea simply peel off my proverbial “idea wall” and go splat on the ground?
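To make the idea concrete, here is a minimal sketch of how per-frame depth data might bias pixel selection. This is purely illustrative, not Zerene Stacker's actual algorithm: the function name, the sharpness maps, and the threshold are all assumptions on my part. The idea is that a plain "strongest detail wins" rule lets background detail show through thin foreground structures, whereas giving the nearer frame first claim on any pixel where it has real detail keeps opaque foreground opaque.

```python
# Hypothetical depth-aware fusion sketch (NOT Zerene Stacker's algorithm).
# Each frame carries the rail position (depth) recorded at capture time;
# smaller depth = nearer the camera. Sharpness maps are assumed to be
# precomputed, normalized to [0, 1].
import numpy as np

def fuse_with_depth(frames, depths, detail_threshold=0.5):
    """frames: list of (image, sharpness_map) pairs, all the same HxW shape.
    depths: one rail position per frame. Nearer frames claim pixels first."""
    order = np.argsort(depths)                 # near-to-far capture order
    h, w = frames[0][0].shape
    out = np.zeros((h, w))
    claimed = np.zeros((h, w), dtype=bool)
    for i in order:                            # nearest frame gets first claim
        img, sharp = frames[i]
        take = (~claimed) & (sharp > detail_threshold)
        out[take] = img[take]
        claimed |= take
    # Pixels no frame claimed: fall back to the globally sharpest frame,
    # i.e. the usual "strongest detail wins" rule.
    best = np.argmax(np.stack([s for _, s in frames]), axis=0)
    for i, (img, _) in enumerate(frames):
        fill = (~claimed) & (best == i)
        out[fill] = img[fill]
    return out
```

With this rule, a sharp bristle edge in a near frame would win over equally sharp background detail behind it, which is exactly the case where the background currently "shows through".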
I’ll interpret a “no response” to indicate that we can expect this revolutionary new feature to be announced in about six months!
