Speed Test for Zerene Stacker.

A forum to ask questions, post setups, and generally discuss anything having to do with photomacrography and photomicroscopy.

Moderators: rjlittlefield, ChrisR, Chris S., Pau

NikonUser
Posts: 2693
Joined: Thu Sep 04, 2008 2:03 am
Location: southern New Brunswick, Canada

Speed Test for Zerene Stacker.

Post by NikonUser »

In a previous post HERE I stacked 136 fine-quality JPGs, each 4288x2848 px, with image caching on; it took my computer 46 minutes for ZS PMax to process.
Image quality was excellent but I considered processing time too long for my work flow.
That image size was unnecessarily large for images that would only be used on the web and in PDF files.

Today I caught my first moth of the year (still patches of snow in my garden!) and photographed its front end: reversed 50/2.8 El Nikkor, stopped down one-half stop below f/5.6.

Stack of 83 images, fine-quality JPGs, each 2144x1424 px; image caching off.
Processing time for ZS PMax = 7 minutes.
Top: whole frame
Bottom: 800 pixel selection.
Image quality good/excellent; processing time GREAT.
But note weird effect on left margin.

EDIT: The idea of reducing my pre-stack image size to 2144x1424 px, from my usual 4288x2848 px, for a final stacked image that will be used only as an 800x800 px image on a monitor was based on a recommendation from Rik. It certainly decreased processing time, both in-camera and in the stacking software, without any visible loss of image quality on a monitor.
Thanks Rik.

Image
Image
Last edited by NikonUser on Sat Apr 18, 2009 11:45 am, edited 1 time in total.
NU.
student of entomology
Quote – Holmes on ‘Entomology’
"I suppose you are an entomologist?"
"Not quite so ambitious as that, sir. I should like to put my eyes on the individual entitled to that name. No man can be truly called an entomologist, sir; the subject is too vast for any single human intelligence to grasp."
Oliver Wendell Holmes, Sr., The Poet at the Breakfast Table

Nikon camera, lenses and objectives
Olympus microscope and objectives

rjlittlefield
Site Admin
Posts: 23625
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Re: Speed Test for Zerene Stacker.

Post by rjlittlefield »

NikonUser wrote:But note weird effect on left margin.
I presume you're talking about the horizontal smearing.

Smearing on the edge is a normal result of processing a stack in order from wide field to narrow field.

When you start from the widest field, subsequent images do not completely fill the frame after the subject is properly aligned. Missing edge pixels are filled in by duplicating the nearest available data, resulting in horizontal streaks.

To avoid this effect, simply reverse the stack so it gets processed in order from narrow field to wide. Alignment will then cause subsequent images to slightly more than fill the frame, and the unused pixels will just be thrown away.

Every stacking package I know does essentially the same thing in this regard. Helicon Focus has the nice feature that by default it automatically figures out which frame is the narrowest, and starts at that end. Zerene Stacker does not yet have that feature.

As a curiosity, I converted your two speed tests to a common unit of "pixels per minute", just to see how the computation was scaling. The calculation is 136*4288*2848 pixels in 46 minutes = 36.1 million pixels per minute, versus 83*2144*1424 pixels in 7 minutes = 36.2 million pixels per minute. Apparently the execution time scales quite linearly. This is no surprise, but it's nice to see confirmed.
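The scaling check is easy to reproduce with a few lines of Python (the frame counts, pixel dimensions, and timings are the ones quoted above):

```python
def mpix_per_minute(frames, width, height, minutes):
    """Total pixels processed per minute, in millions."""
    return frames * width * height / minutes / 1e6

slow = mpix_per_minute(136, 4288, 2848, 46)  # first test: ~36.1 Mpix/min
fast = mpix_per_minute(83, 2144, 1424, 7)    # second test: ~36.2 Mpix/min
print(round(slow, 1), round(fast, 1))
```

The two throughput figures agree to within a fraction of a percent, which is what "scales linearly with total pixel count" looks like in practice.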

In contrast, my midge stack consisted of 54 frames at 3072x2048, processed in 197 seconds with image caching turned off, on a Dell Vostro 400, 2.4 GHz Core 2 Quad. That works out to be 103 million pixels per minute, about 2.8 times as fast as NikonUser's machine.

--Rik

ChrisR
Site Admin
Posts: 8671
Joined: Sat Mar 14, 2009 3:58 am
Location: Near London, UK

Post by ChrisR »

My old 2.67 GHz P4 with 1 GB RAM took well over an hour for 5 frames at 4200 x 2800, which is about 0.5 Mppm.
I only got through the 115 images in the stack by reducing each one to 1000 px wide. ZS was slightly slower than CombineZP, but not enough to matter.
Using more JPEG compression didn't make much difference, though the files were much smaller.
I see there's a ZS download to help with slow PCs, so I'll try that.

rjlittlefield
Site Admin
Posts: 23625
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

ChrisR, I think you'll be a lot happier with the new version. What happened with the old one was that it blindly used 1 GB of address space for application memory. With space for Windows, on a 1 GB box, that resulted in a huge amount of paging. The new version automatically checks to see how much physical memory is available and uses 300 MB less than that, up to a maximum of 1600 MB which is all that the 32-bit JVM can handle. 1 GB of physical memory (resulting in 700 MB for the application) is enough to handle at least up to 10 Mpixel images.
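In rough sketch form (a simplified illustration of the heuristic just described, not the actual ZS implementation; `physical_mb` is assumed to come from an OS query), the sizing rule works out to:

```python
def heap_size_mb(physical_mb, reserve_mb=300, jvm_max_mb=1600):
    """Leave `reserve_mb` of physical memory for the OS, and cap the
    application heap at what a 32-bit JVM can address."""
    return min(physical_mb - reserve_mb, jvm_max_mb)

print(heap_size_mb(1000))  # 1 GB box -> 700 MB for the application
print(heap_size_mb(4000))  # large box -> capped at the 1600 MB JVM limit
```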

Using JPEG compression won't have any significant effect on processing time. The first thing ZS does for each image is to uncompress it, then everything is done on the uncompressed form.

The last test I ran, processing speed was 14.5 Mpixels per minute on my 1.3 GHz Pentium M in 1 GB (54 images at 3072 x 2048).

--Rik

ChrisR
Site Admin
Posts: 8671
Joined: Sat Mar 14, 2009 3:58 am
Location: Near London, UK

Post by ChrisR »

Running PMax with the scale setting at 15% instead of 5%, because I moved the lens to focus at about 3-4x magnification with a subject about 6 mm deep, I got multiple images when the last 15 or so frames were processed (with image size reduced in PS to 100 wide).

Letting it go with 4.2k x 288k images,
"Thread 6, out of memory error.."
Tried again with caching off, same result.

Now trying DMap, which previously produced nasty halos, and it's running (it has got to the 3rd image as I type this!)

rjlittlefield
Site Admin
Posts: 23625
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

ChrisR, this is an interesting report!

First off, please send email to support@zerenesystems.com so this gets properly logged as a trouble report.

I'm sure we'll be wanting to get a copy of the stack as well.

Now, about your difficulties...

In all of our tests, a 1 GB machine has been large enough to run all functions on 10 Mpixel images. Images larger than that often produce an out-of-memory error, but in some cases the program just hangs.

So I am eager to understand about your image sizes.

On the one hand I'm hearing "reduced in PS to 100 wide". If that means 100 pixels wide, then that's definitely outside the envelope we've ever tested. We've made thumbnails from stacked results, but not stacked results from thumbnails. I don't know offhand what interesting behavior the alignment procedure might have when presented with very small images.

On the other hand, I'm hearing "4.2k x 288k images". That's a puzzler. After several rather bizarre visions, I finally figured out that most likely you mean images of size 4.2k x 2.8k pixels (a typo: 288 instead of 2.8). That would be about 12 Mpixels, which is almost certainly too large to process with PMax on a 1 GB machine. DMap requires a little less memory, so it's consistent that DMap might work while PMax would not.

My best suggestion at this point is to rescale your images to a size appropriate to your machine. For starters, try downsizing by 2X, so that you're working with images of size 2.1k x 1.5k pixels. Those should run without stressing the memory system. If they don't, then I'll be curious to know about your machine configuration.

It doesn't seem to me that your focus method should be stressing the alignment procedure, so I'm thinking the problem there has to do with unusually small images. One of our test cases, for example, was focused by moving the camera, with the subject and lens locked in position. It has a scale ratio of 1:1.48 across 58 images. These process without any difficulty using the default parameter settings. The default limit of 5% scale correction applies between successive images, not aggregate.

Hope this helps. I'll be interested to hear further results.

--Rik

ChrisR
Site Admin
Posts: 8671
Joined: Sat Mar 14, 2009 3:58 am
Location: Near London, UK

Post by ChrisR »

Oh dear, I do apologise Rik, being dyslexic I avidly check spelling but the numbers slip by.
"reduced in PS to 100 wide"
Should have been 1000 wide, ie 1000 by about 666.
And you guessed right about the 4.2 x 2.8 = 12Mpixels.

The good news is that using today's update and PMax I got a very good result, though I can't say how long it took because I was in bed! This was 115 frames at 12 Mpixels, unequally spaced because I was nudging the knob on the bellows.

A remaining issue is that on clicking "retouch" I got another OOM error, Thread 16, I think.

Completely unretouched, then, here it is, a few heather flowers, pleasing enough from a very rudimentary set-up. (Lighting by sheet of paper held over the pop-up flash :o ) I'm aware now that I'd inadvertently left the camera on "Vivid" which has not helped.

Image

rjlittlefield
Site Admin
Posts: 23625
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

OK, this is looking better.

Thanks for sending me the full-resolution version offline. Let me walk you through a few issues that I see here and that you asked about separately.

First, the alignment issues that you saw earlier were almost certainly due to reduced size images (1000 pixels wide). The earlier version of the code essentially assumed camera-resolution images and was sometimes inaccurate with smaller ones. That problem should be fixed in the current version.

The ghostly halos are typical of the so-called "pyramid" methods found in ZS, CZP, and TuFuse. They are essentially negative after-images of flower parts from out-of-focus frames. There is a corresponding brightening of areas around the dark parts, but that is usually harder to see.

With PMax (and corresponding methods in all the other tools), there is also an issue with contrast buildup. When a feature is seen completely focused, and slightly OOF, and moderately OOF, and extremely OOF, that feature in the final image ends up receiving brightness contributions from multiple levels of focus. Those contributions add up so that in the final image, the feature has more contrast than it did in any of the original images. When detail is low contrast to begin with, the buildup is often a good thing. But when the detail is high contrast, the buildup can cause clipping at black or white. It can also cause a sort of "glowing" effect.

DMap does not have these issues, but it has different ones. DMap is prone to a different kind of halo, and rather than enhancing contrast it tends to overlook low contrast detail altogether. With a deep stack and low contrast detail, DMap (and corresponding methods in all the other tools) is vulnerable to what I call "stacking mush", in which certain areas lose all detail even though the human can easily find frames with detail in them.

Despite its simple appearance, this test subject you have chosen is actually quite challenging. It has intense contrast against the background, which makes halos very likely, yet it has low contrast over much of the petals.

In my experience, this sort of subject cannot be handled really well by either DMap or PMax, or any other single method or fully automated combination that I know of in other tools.

However, it should be fairly straightforward to handle by manual retouching, using a combination of DMap and PMax as outlined in the ZS documentation. This is exactly the strategy that I used with the mayfly face that I recently posted. I've used it on many other stacks also, but not talked much about it.

In your specific case, retouching is problematic because the retouching tool takes even more memory than PMax, and it seems your 1 GB is not quite enough. To get some practice with it, you will need to reduce your image sizes. That will also help with speed issues that might crop up in using large images and correspondingly large brushes on a slow machine.

You asked in email about the apparent transparency of some flower parts, especially on the left. That problem commonly occurs when shooting a deep stack at wide aperture. It is caused by the lens "looking around" OOF foreground structures and seeing detail in the background. The software has no good idea what is foreground or background, and no idea at all of what is opaque versus transparent, so it just renders out everything it sees. The only fix for this problem is manual retouching. With a deep stack, it can be helpful to render the whole stack, and one or a few substacks that cover just the foreground structures, then retouch from the substack output images into the full stack output.

Hope this helps. Glad to hear we're making progress on making this thing work.

--Rik

ChrisR
Site Admin
Posts: 8671
Joined: Sat Mar 14, 2009 3:58 am
Location: Near London, UK

Post by ChrisR »

try downsizing by 2X, so that you're working with images of size 2.1k x 1.5k pixels.
You meant :D 4x, OK I'll try that.
It appears I can go up to 3GB for about $50 though, so that has to be the way to go!
Thanks for the advice.

rjlittlefield
Site Admin
Posts: 23625
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

ChrisR wrote:
try downsizing by 2X, so that you're working with images of size 2.1k x 1.5k pixels.
You meant :D 4x, OK I'll try that.
I meant by linear dimension, not by area (total pixel count). But yes, it's ambiguous. If you're talking to the Photoshop resizing dialog, it would be 50%. That's why I specified the pixel dimensions as well, to remove the ambiguity.
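To make the arithmetic concrete (a quick illustrative sketch, using the 4.2k x 2.8k frame size from earlier in the thread):

```python
def downsize(width, height, linear_factor):
    """Downsize by a factor applied to each linear dimension, and report
    how much the total pixel count shrinks."""
    new_w, new_h = width // linear_factor, height // linear_factor
    area_factor = (width * height) / (new_w * new_h)
    return new_w, new_h, area_factor

print(downsize(4200, 2800, 2))  # 2x linear -> (2100, 1400, 4.0): 4x fewer pixels
```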

Agree completely about the memory issue. The stuff is so cheap now, it only makes sense to max out most machines.

--Rik

Wayne Baker
Posts: 47
Joined: Wed Nov 26, 2008 10:39 pm
Location: Melbourne, Australia
Contact:

Post by Wayne Baker »

Interesting... I performed my first test using Zerene Stacker the other day and was disappointed by the result. Not sure if this is to be expected, but it took around 12 hours (per method) to process one stack, as follows:

- Intel iMac, 2.0 GHz, 2 GB RAM
- Mac OS X 10.5.8
- Latest version of Zerene Stacker from website
- 8-bit JPG files
- 3744 x 5616 px
- Approx 9.3 MB each
- 300 ppi
- 178 images in stack
- DMap & PMax methods with default settings

When running, my Mac completely locked up; I couldn't run any other applications, etc. I had to let it run its course. I am getting similar results from Helicon Focus. I started a stack in HF at 0940 this morning. It is now 1620 and it's still running... But with HF the Mac does not lock up: I can still use other apps, including Bridge, with no problems... Maybe I just need to wait until my new dual quad-core Mac Pro turns up next week? The old iMac may just be on its last legs.

Cheers,

Wayne :-)

ChrisR
Site Admin
Posts: 8671
Joined: Sat Mar 14, 2009 3:58 am
Location: Near London, UK

Post by ChrisR »

Poor old ( :wink: ) Rik spends a lot of time patiently repeating stuff, he's done it for me enough times!

For your first tries at stacking with new software:
Use a smaller number of frames, say 50.
You can always stack the outputs from substacks.
Use the pre-sizing feature in ZS. Set it early, as it gets greyed out later in the process. You could, as I'm sure you can work out, set it to 20% linear and therefore deal with files only 4% the size, and still get a good screen image of about 750 x 1020.
You can batch-reduce files in Photoshop too, which is handy: if you rerun the stacks for any reason, you can pick which original set to use.
If you habitually find you really don't need the full pixel count, you can also set your camera to produce smaller files. You knew that :lol: .

JPEG compression makes no difference; the files are decompressed before stacking anyway.

Use PMax alone for starters. DMap can show some things, or parts of things, well, but not very often, in my limited experience.

I believe Macs are slower than PCs now, even if you spend a lot more money on them. There are other advantages of course, but they're getting marginal I think.

Add memory if you can. Windows XP only uses up to about 3 GB, so 2 GB would be plenty; I don't know about your Mac.
And it's definitely worth setting the priority of the Java process a bit lower if you can. It made a huge difference for me, in that I can cheerfully use Photoshop etc. at the same time as ZS is running in the background. Previously, PS lurched and crawled. If ZS is running with nothing else, the lower priority makes no apparent difference to its speed.

Doing those things, instead of watching images get processed at 30-60 seconds each, they go sub-second.

Don't be disheartened, you'll soon be in the chortling group!

Oh and dooo keep using HF, if you're the kinda guy who's into typewriters, gramophone records, telephone boxes... :wink:

Wayne Baker
Posts: 47
Joined: Wed Nov 26, 2008 10:39 pm
Location: Melbourne, Australia
Contact:

Post by Wayne Baker »

Thanks for the tips.. :D

I'm looking to make A4 prints or larger from the resulting files, so I'm aiming for larger file sizes. Part of my research is going to be finding the optimum settings that give the fastest processing of stacks while rendering the best quality output for printing at the sizes I've mentioned.

The new Mac I've ordered is coming with 12GB RAM so hopefully that will help.

Thanks again. I'll keep at it... :D

Wayne

rjlittlefield
Site Admin
Posts: 23625
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

Regarding the pixel and frame counts, I have two reactions: "Oh dear..." and :shock: :shock: :!: .

Stacking 178 images at 21 Mpixels per image will keep any system busy for quite a while.
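As a back-of-the-envelope sketch, using the throughput figures measured earlier in this thread:

```python
def est_minutes(frames, mpix_per_frame, throughput_mpix_per_min):
    """Rough processing-time estimate from total pixel count and throughput."""
    return frames * mpix_per_frame / throughput_mpix_per_min

# Wayne's 178-frame, 21 Mpix stack at NikonUser's measured ~36 Mpix/min:
print(est_minutes(178, 21, 36))  # roughly 104 minutes
```

The observed ~12 hours (about 720 minutes) for the same stack works out to roughly 5 Mpix/min, which suggests Wayne's machine was paging heavily rather than compute-bound.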

Processing at that size and stack length is a great way to make final images, but learning the tools and settings is best done with shorter stacks and smaller images. Otherwise you spend all your time processing one or two stacks and don't get to explore the parameter space. (Wayne is gearing up to do a research project using these tools, so I thought I'd throw in a little research jargon. :wink: )

It's turning out to be more common than I expected for users to dive in with deep stacks and a gazillion pixels, and then be disappointed that the processing is slow. As a result, I'm seriously considering adding something like a "new user wizard" that will detect, warn, and suggest some alternatives.

ChrisR's suggestions are very good -- many thanks.

Regarding the problem with ZS keeping the system so busy that it's non-responsive, I expect to post out an update later today that has that problem fixed internally. The update will also have a faster retouching brush and a couple of bug fixes.

--Rik

Eric F
Posts: 246
Joined: Tue Nov 11, 2008 1:38 pm
Location: Sacramento, Calif.

Speed Test for Zerene Stacker

Post by Eric F »

Wayne,

I just ran a 46 image (ea. 1.8 MB) stack on my 1 year old iMac (2.8 GHz Intel Core 2 Duo, 4 GB Ram, OS X 10.5.8) in 7 min, using PMax in the newest ZS version (Thanks Rik!). The resulting 58 MB, 16 bit image is posted in the Technical and Studio Photography -- Macro and Close-up forum.

I'm sure you will love your new 12 GB Mac & ZS to pieces!

Cheers,

Eric
