Blue light for high resolution - post processing questions

A forum to ask questions, post setups, and generally discuss anything having to do with photomacrography and photomicroscopy.

Moderators: ChrisR, Chris S., Pau, rjlittlefield

Beatsy
Posts: 1646
Joined: Fri Jul 05, 2013 3:10 am
Location: Malvern, UK

Blue light for high resolution - post processing questions

Post by Beatsy »

I'm trying to make very high resolution stacks (diatoms) and have kind of 'thunk myself into a corner' regarding post processing of RAW images taken in blue light (455nm). Two main questions...

1. Is there any way to extract just the blue pixels from a batch of RAW files without involving any debayering algorithm? I would expect to end up with tiffs that are half the width and height of the original RAW images. I'd prefer a tool that works on batches of files - but willing to wind through them in turn if needed.

Note: In the absence of said tool in Q1, I used Photoshop CS6 to batch extract blue channels from tiffs, but I'm obviously using debayered input files and believe 'made up' values (derived from the red and green pixels in the bayer matrix) are reducing the quality/accuracy of the blue channel.
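[Editor's sketch, hedged: if the undemosaiced Bayer mosaic can be obtained as a 2-D array (e.g. via dcraw's document mode or a library such as rawpy — both assumptions, not something Beatsy names here), the blue photosites can be pulled out with a plain strided slice, no interpolation involved. The correct 2x2 offset depends on the camera's CFA layout; this demo uses a synthetic RGGB mosaic.]

```python
import numpy as np

def extract_blue(mosaic, blue_offset=(1, 1)):
    """Pick out only the blue photosites from an undemosaiced Bayer
    mosaic by taking every second pixel, starting at the blue site.
    For an RGGB layout the blue site is at row 1, column 1 of each
    2x2 cell; other layouts need a different offset."""
    r, c = blue_offset
    return mosaic[r::2, c::2]

# Synthetic 4x4 RGGB mosaic: R=1, G=2, B=3 so we can see what we picked.
mosaic = np.array([[1, 2, 1, 2],
                   [2, 3, 2, 3],
                   [1, 2, 1, 2],
                   [2, 3, 2, 3]], dtype=np.uint16)

blue = extract_blue(mosaic)
print(blue.shape)   # half the width and height of the mosaic
print(blue)         # only blue sites were kept
```

The result is exactly the half-width, half-height monochrome frame the question asks for, with no made-up values from neighbouring red or green sites.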

2. Related to the above. When determining sharpness, how does Zerene Stacker interpret the different colours? Given that all but Plan Apo lenses will bring different wavelengths to focus at slightly different depths, does ZS use luminance - or does it work with the colour channels separately?

Currently, although I can *see* more detail through the eyepiece, my photographic (stacked) results are only marginally better than I achieve using white light.

Pau
Site Admin
Posts: 5107
Joined: Wed Jan 20, 2010 8:57 am
Location: Valencia, Spain

Post by Pau »

Not an actual response to your questions, but a few points:
- With blue light you only use 1/4 of the Bayer sensor's pixels, so you need a sensor with very high pixel density. Resolution on the sensor side will be lower than with white light. The raw image converted with no demosaicing will show separated pixels in place of a continuous image.
- The best alternative would be a monochrome sensor, although these are found only in some dedicated telescope or microscope cameras, apart from the super expensive and maybe not so convenient Leica M-Monochrom.
Pau

Beatsy
Posts: 1646
Joined: Fri Jul 05, 2013 3:10 am
Location: Malvern, UK

Post by Beatsy »

Thanks Pau. I'm using a 5d mkii which has FAR too many pixels really. At high NA the sensor is oversampling by a factor of 8 (thanks to the built-in magnification on my Zeiss's camera port). So the reduced resolution is welcome in this case.

rjlittlefield
Site Admin
Posts: 20877
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

1. You should be able to do this using dcraw with the -d or -h options to convert raw to TIFF without interpolation. See http://www.photomacrography.net/forum/v ... 4987#84987 for example. Then in Photoshop you could certainly do some combination of cropping and resizing with nearest-neighbor resampling to extract just the blue pixel positions. I expect the same thing could be done in imagemagick. There might even be options to do the whole functionality in dcraw. See http://www.cloudynights.com/topic/51234 ... rpolation/ for other options.

2. Zerene Stacker works just with luminance in almost all cases. In PMax, there is an option to "Use All Color Channels In Decisions", but that still results in a single decision that then gets applied to all color channels at that pixel position for that source image. There is no facility to process the RGB channels completely separately, as needed to handle different colors focusing at different depths.

I recall exchanging email with two or three people who implemented the separate colors approach by preprocessing their RGB stacks into three monochrome stacks, running each of those through Zerene Stacker separately, then merging the results back to RGB. However, each of them told me that the result was not as good as they had hoped/expected. I have no personal experience with separated colors, and I don't know why it failed to give the expected benefits.
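[Editor's sketch, hedged: for anyone wanting to try the separated-colours preprocessing Rik describes, the split and remerge steps themselves are trivial; the focus stacking in the middle is whatever tool you use. A minimal NumPy sketch — the frame shapes and the pass-through "stacking" are illustrative assumptions only.]

```python
import numpy as np

def split_channels(stack):
    """Split a list of H x W x 3 RGB frames into three monochrome
    stacks (R, G, B), ready for separate focus stacking."""
    return [[frame[:, :, ch] for frame in stack] for ch in range(3)]

def merge_channels(r_img, g_img, b_img):
    """Recombine three separately stacked monochrome results into RGB."""
    return np.dstack([r_img, g_img, b_img])

# Tiny demo with two random "frames" standing in for a stack.
rng = np.random.default_rng(0)
stack = [rng.integers(0, 255, (4, 4, 3), dtype=np.uint8) for _ in range(2)]
r_stack, g_stack, b_stack = split_channels(stack)
# ...run each monochrome stack through the stacker here, then:
merged = merge_channels(r_stack[0], g_stack[0], b_stack[0])
```

One design caveat, consistent with Rik's report: because the three channels are aligned and scaled independently, the remerged image can show colour fringing where the per-channel transformations disagree, which may be part of why the results disappointed.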

--Rik

g4lab
Posts: 1434
Joined: Fri May 23, 2008 11:07 am

Post by g4lab »

I think Pau is on the right track. What is the point of increasing resolution with blue light (you could also use long-wave UV, which has been done; Zeiss had some UV monochromats once upon a time) and then throwing it away on the Bayer pattern?

To see the improvement afforded by blue light you either need the Leica monochrome sensor Pau mentioned, or you could shoot black-and-white film, which is what they used to do with the above-mentioned objectives and blue light "back in the day". Shooting film would be the most economical way. You could also round up a monochrome video camera, or a three-shot camera like the Diagnostic Instruments SPOT cameras, from which you can probably isolate the channels conveniently without debayering.

I have a vague recollection of someone posting such an experiment here in the past. I will see if I can find it. Maybe I saw it elsewhere.
http://www.photomacrography.net/forum/v ... e&start=15

http://www.photomacrography.net/forum/v ... resolution
Last edited by g4lab on Tue May 17, 2016 11:27 am, edited 1 time in total.

Beatsy
Posts: 1646
Joined: Fri Jul 05, 2013 3:10 am
Location: Malvern, UK

Post by Beatsy »

I'm not throwing away resolution on the bayer filter.

At the highest numerical apertures, my system is over-sampling by a factor of 10 (using a 100/1.3 objective with oiled condenser and white light). That is, the smallest feature the objective can resolve covers a 10-pixel diameter circle on the sensor. By using blue light, I reduce the size of the smallest resolved feature so that it "only" covers an 8-pixel circle instead (still oversampling). I am achieving higher resolution than I would with white light and still capturing all the information.
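[Editor's sketch of the arithmetic behind this, hedged: using the Abbe limit d = λ/(2·NA) projected onto the sensor. The 2.5x camera-port factor and the ~6.4 µm pixel pitch for a 5D Mk II are illustrative assumptions, not Beatsy's stated figures, so the numbers come out slightly below his 10 and 8 but show the same ratio.]

```python
def feature_diameter_px(wavelength_nm, na, total_mag, pixel_um):
    """Diameter, in sensor pixels, of the smallest resolvable feature:
    Abbe limit d = lambda / (2 * NA), projected at total magnification."""
    d_nm = wavelength_nm / (2 * na)
    return d_nm * 1e-3 * total_mag / pixel_um   # nm -> um, then to pixels

# Illustrative assumptions: 100/1.3 objective, 2.5x camera-port factor
# (250x total), ~6.4 um pixel pitch (approx. for a 5D Mk II).
white = feature_diameter_px(550, 1.3, 250, 6.4)
blue  = feature_diameter_px(455, 1.3, 250, 6.4)
print(round(white, 1), round(blue, 1))  # blue feature spans fewer pixels
```

Either way the smallest resolvable feature still spans several pixels, so no information is lost to the Bayer subsampling — which is the point being made.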

PS. Sorry - forgot to thank you for the link. Also, I have tried using a near-UV interference filter in the past. But all my objectives focus this UV light at a radically different position to 'normal' wavelengths so it's a bit tough to use. Also requires incredibly long exposures on my 5d ii to get a decent signal.

g4lab
Posts: 1434
Joined: Fri May 23, 2008 11:07 am

Post by g4lab »

I was also going to ask: are you using a blue interference filter? This would give you much sharper cutoffs than even a pretty good blue glass or blue gelatin filter, and would probably maximize whatever gains you could get.

Maybe you are not throwing away resolution but aren't you throwing away like three quarters of the light??

Beatsy
Posts: 1646
Joined: Fri Jul 05, 2013 3:10 am
Location: Malvern, UK

Post by Beatsy »

g4lab wrote:
Maybe you are not throwing away resolution but aren't you throwing away like three quarters of the light??
Yes. Well, sort of. But only because I'm discarding (or want to discard) 1 red and 2 green pixels for each blue. It's not a problem - the 100w halogen lamp provides plenty of light. I can blow out the blue histogram with a 1/30th sec exposure and the lamp running at about 80% power.

My main problem is finding a method to extract the blue pixels (without debayering) from each RAW image in the stack and save them as half-size monochrome tiffs instead (prior to stacking). I've looked at dcraw, RawTherapee and even DeepSkyStacker (super pixels) with no luck so far. The search continues.

rjlittlefield
Site Admin
Posts: 20877
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

Beatsy wrote: I've looked at dcraw, RawTherapee and even DeepSkyStacker (super pixels) with no luck so far. The search continues.
I'm a little confused here. Do you mean that you haven't found a solution that is as tightly integrated as you'd like, versus the dcraw -d plus Photoshop method that I described above?

--Rik

soldevilla
Posts: 571
Joined: Thu Dec 16, 2010 2:49 pm
Location: Barcelona, more or less

Post by soldevilla »

Perhaps this is too complicated to explain with my bad English... In astronomy it is very common to use black-and-white cameras and take a series of images through RGB filters, then combine them to get a colour image.

The cheapest way to get a black-and-white camera is to take a DSLR (I did it with a Canon 450D), remove the first low-pass filter, and scratch off the Bayer matrix and the second low-pass filter. Since the focus point is completely lost, I want to repeat the process with a Canon 1200D so I can focus in Live View mode. The increase in resolution is very significant: now every pixel gives information and we no longer have two defocusing filters in the camera. In return, perhaps for non-astronomical applications an IR-cut filter should be added.


Beatsy
Posts: 1646
Joined: Fri Jul 05, 2013 3:10 am
Location: Malvern, UK

Post by Beatsy »

Rik: it was not working as expected. The output tiff just looked like a normal image (no bayer matrix apparent). Had a few goes with different parameters and then set it aside while looking for other options. Just realised I was using the sRAW format - so that may be the issue. Will shoot some full-sized RAWs later and try again.

Beatsy
Posts: 1646
Joined: Fri Jul 05, 2013 3:10 am
Location: Malvern, UK

Post by Beatsy »

Update: dcraw doesn't handle sraw or sraw2 files (Canon reduced-size RAW). I got it to produce a 16-bit greyscale tiff from a normal RAW file (as Rik suggested) but cannot find any way to reliably isolate only the blue pixels from that (scaling 50% with nearest-neighbour resampling in CS6 doesn't work). So still searching...
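[Editor's note, hedged: one likely reason the 50% nearest-neighbour resize fails is that it samples a fixed phase of the mosaic — often the top-left pixel of each 2x2 cell — which may land on a green site depending on the camera's CFA layout. If a library such as rawpy is available (an assumption, not something used in the thread), its raw_colors mask says where blue actually sits, so the offset needn't be guessed. A sketch, demonstrated on a synthetic GBRG mask:]

```python
import numpy as np

def blue_offset(cfa_mask, blue_index=2):
    """Find the (row, col) phase of the blue sites within a 2x2 Bayer
    cell. cfa_mask holds the colour index of each photosite, in the
    layout that e.g. rawpy's raw_colors attribute provides
    (0=R, 1=G, 2=B, 3=G2)."""
    rows, cols = np.nonzero(cfa_mask[:2, :2] == blue_index)
    return int(rows[0]), int(cols[0])

# Synthetic GBRG mask -- a layout where naively keeping the top-left
# pixel of every 2x2 cell would return green, not blue.
mask = np.array([[1, 2, 1, 2],
                 [0, 3, 0, 3],
                 [1, 2, 1, 2],
                 [0, 3, 0, 3]])
r, c = blue_offset(mask)

mosaic = np.arange(16).reshape(4, 4)  # stand-in for the raw mosaic
blue = mosaic[r::2, c::2]             # strided crop at the blue phase
print((r, c), blue.shape)
```

The strided crop at the detected offset is the "reliable isolation" being searched for; a resampling filter can never guarantee it hits the right phase.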

soldevilla: Thanks. But that really made my toes curl :)

Pau
Site Admin
Posts: 5107
Joined: Wed Jan 20, 2010 8:57 am
Location: Valencia, Spain

Post by Pau »

Beatsy, you're right, your sensor is largely outresolving the microscope image, so resolution at the sensor side will not be an issue.
This leads me to another point: because the use of blue light is only advantageous at the sample/objective side (to diminish diffraction and avoid chromatic aberrations), demosaicing the sensor image will not induce important losses.
Somewhere I read that the green pixels are to some extent sensitive to blue (the channels are not sharply separated at source); if that's right, it could give some advantage, if it applies to the wavelength you're using.

In any case your idea is most interesting and worth trying. Please keep us informed.
Pau

MaxRockbin
Posts: 185
Joined: Sat Nov 08, 2014 11:12 pm
Location: Portland, OR

Post by MaxRockbin »

I haven't tried this, but, to use DCRAW with your sraw/sraw2 files, you might try converting to DNG (Adobe's Digital Negative universal raw format) using their free DNG converter and then using DCRAW on the DNG.

https://helpx.adobe.com/photoshop/digital-negative.html
If your pictures aren't good enough, you're not close enough. - Robert Capa
