Super-resolution from small sensors?


Lou Jost
Posts: 5943
Joined: Fri Sep 04, 2015 7:03 am
Location: Ecuador
Contact:

Super-resolution from small sensors?

Post by Lou Jost »

This is a continuation of an earlier post of mine about the possibility of getting higher-than-native resolution from a sensor by making multiple pictures of exactly the same scene and then combining them.

Here is my first test. I used a Nikon D90 (APS-C sensor, 4288 x 2848 pixels) focused on a power line on a distant mountain. I used a Nikon 16-85mm lens at ISO 200, f/11, 1/500 s in full sun. I took eleven pictures of the same scene and converted them to uncompressed, unsharpened TIFFs using a Photoshop action. (By the way, Chris, Photoshop CC did not open them all at once but processed them sequentially.)

[Image: Whole frame, shrunk for forum.]

It was a windy day, and after taking the images I realized that even the distant trees were moving too much to make useful targets. But the power tower on the right wasn't going anywhere, so I concentrated on it by cropping a vertical strip from the original image. (The tower on the left had moving clouds behind it, which might have screwed up the alignment.) Even so, I realize now that the alignment was made from the whole vertical strip, which included a lot of vegetation, so the alignment might be slightly off.

I upsampled all the images by a factor of three using the bilinear option in Photoshop. Then I put them all into a stack and had Photoshop align them. I set each layer's opacity to 100/n percent, where n is the layer's position in the stack, so that flattening gives an equal-weight average of all eleven frames. Then I flattened the image and cropped to just the power tower. Here is a comparison of the original tower image (from one of the eleven source shots) and the combined image. The original image is at left; the combined image is at right and outlined. The original image is resized to match the combined image. No sharpening or other adjustments were applied.

[Image: One of the original images at left; the combined image at right.]
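
For anyone who wants to try the same averaging procedure outside Photoshop, here is a minimal Python sketch. The folder name, the grayscale conversion, and the use of phase correlation for alignment are illustrative assumptions; the experiment above used Photoshop's own auto-align and the 100/n opacity trick.

Code:

import glob
import numpy as np
from PIL import Image
from scipy.ndimage import shift
from skimage.registration import phase_cross_correlation

UPSCALE = 3  # same 3x bilinear upsampling as in Photoshop

def load_upsampled(path):
    img = Image.open(path).convert("L")  # grayscale keeps the sketch simple
    w, h = img.size
    big = img.resize((w * UPSCALE, h * UPSCALE), Image.BILINEAR)
    return np.asarray(big, dtype=np.float64)

frames = [load_upsampled(p) for p in sorted(glob.glob("frames/*.tif"))]
reference = frames[0]
accum = reference.copy()
for frame in frames[1:]:
    # Estimate the translation relative to the reference frame, then
    # shift this frame into register before adding it to the running sum.
    offset, _, _ = phase_cross_correlation(reference, frame)
    accum += shift(frame, offset)

average = accum / len(frames)  # equal-weight average of all frames
Image.fromarray(np.clip(average, 0, 255).astype(np.uint8)).save("combined.tif")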

The difference is interesting, but it may be mostly noise reduction. The noise reduction is striking: the original sky is very noisy in spite of the low ISO and bright sun, while the new sky is smooth. The clearest improvement is in the diagonal white power lines to the right of the tower; these are almost invisible in any individual shot but are visible in the combined shot.

However, I do not see any detail smaller than the original pixel size. I suspect a more sophisticated algorithm using pixel math may be needed for that. Theoretically I do think it is possible. Will do some more experiments and report back.

Edit: If this were film, I guess this would be roughly equivalent to using a lower-ISO film, approximately ISO 25.

Admin edit [rjl]: add link to earlier post

Saul
Posts: 1781
Joined: Mon Jan 31, 2011 11:59 am
Location: Naperville, IL USA
Contact:

Post by Saul »


Lou Jost
Posts: 5943
Joined: Fri Sep 04, 2015 7:03 am
Location: Ecuador
Contact:

Post by Lou Jost »

Yes, that's the article I cited in my other post on the subject. I think my tests are more honest, since I use no sharpening at any stage.

What do you think of the results?

Chris S.
Site Admin
Posts: 4042
Joined: Sun Apr 05, 2009 9:55 pm
Location: Ohio, USA

Re: Super-resolution from small sensors?

Post by Chris S. »

Lou,
Lou Jost wrote:I upsampled all the images by a factor of three using the bilinear option in Photoshop. . . .

The original image is resized to match the combined image. No sharpening or other adjustments were applied.
When you resized the original to match, did you do this by upsampling it by a factor of three, or by enlarging it in the Photoshop display? If the latter, it would be interesting to see the same comparison with the original upsampled in the same manner as the test inputs. In fact, it would be interesting to see crops of all the input images. Do they differ visibly?

I get the impression that you photographed the distant mountaintop through a fair degree of atmospheric haze. Was this the case? If so, it might add a variable, as air is not uniform or still. Perhaps it would be useful to repeat the experiment on a close subject.

--Chris

Lou Jost
Posts: 5943
Joined: Fri Sep 04, 2015 7:03 am
Location: Ecuador
Contact:

Post by Lou Jost »

Chris, I upscaled it in the display, because I did not want to add any processing. This way you see exactly the original pixels. But yes, it would be interesting to do it the way you suggested as well. Will do later today.

And yes, you are right; yesterday was a bit hazy. We are in a severe drought, and that's when people like to burn forests, so there is some light haze from smoke from distant Amazonian fires.

I am going to redo this using a precisely made resolution target that I am drawing on my computer screen. I can make crystal-clear detail with line thickness that is a known fraction of my camera's pixel dimensions.
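
A minimal sketch of how such a target could be drawn, assuming a plain vertical bar pattern; the bar width in screen pixels and the image size are placeholders to be chosen from the known screen-to-camera geometry.

Code:

import numpy as np
from PIL import Image

def bar_target(width=1920, height=1080, bar_px=2):
    """Alternating black and white vertical bars, each bar_px pixels wide."""
    cols = np.arange(width)
    bars = (((cols // bar_px) % 2) * 255).astype(np.uint8)
    return Image.fromarray(np.tile(bars, (height, 1)), mode="L")

bar_target(bar_px=2).save("target_2px.png")  # display full screen, then photograph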

But as I think about this, I am pretty sure that one needs to do a bit of pixel math to extract better sub-pixel information. That will require Visual Basic programming. Still, the easy Photoshop method is an interesting improvement, even if it just comes from noise reduction, and there are still some more parameters to play with.
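
To make the pixel-math idea a little more concrete, here is one very simple scheme, a nearest-neighbor shift-and-add onto a finer grid, sketched in Python rather than Visual Basic. It assumes the sub-pixel shifts are already known; the 3x grid and the function name are illustrative only.

Code:

import numpy as np

def shift_and_add(frames, shifts, scale=3):
    """frames: list of 2D arrays; shifts: list of (dy, dx) in low-res pixels."""
    h, w = frames[0].shape
    accum = np.zeros((h * scale, w * scale))
    weight = np.zeros_like(accum)
    ys, xs = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, shifts):
        # Drop each low-res sample onto the nearest cell of the fine grid.
        fy = np.clip(np.round((ys + dy) * scale).astype(int), 0, h * scale - 1)
        fx = np.clip(np.round((xs + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(accum, (fy, fx), frame)
        np.add.at(weight, (fy, fx), 1.0)
    return accum / np.maximum(weight, 1.0)  # cells never hit stay at zero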

benjamind2014
Posts: 221
Joined: Sat Oct 18, 2014 2:07 am

Post by benjamind2014 »

One thing I have noticed is that the larger the pixel pitch of a sensor, the less noise in general.

However, I have compared images (from various sources) to gauge the low-light performance of a few small-format sensors, and found that the Panasonic MN34120 produced great images in low light.

Larger-format sensors are generally assumed to produce better images, but the MN34120 really held its own in this regard.

Lou Jost
Posts: 5943
Joined: Fri Sep 04, 2015 7:03 am
Location: Ecuador
Contact:

Post by Lou Jost »

I've been researching this more, and finding lots of interesting results. Some people were skeptical that super-resolution could be obtained from multiple randomly-shifted low-res images. It turns out that this is an area of active research with well-documented results. A good search phrase for Google is {super resolution photo algorithm}.

The Olympus technique of increasing resolution by moving the sensor a known amount and taking multiple pictures can indeed be replaced (in theory) by moving any camera by a random amount and applying sophisticated algorithms to reconstruct a sub-pixel image from the many randomly displaced images. It is not necessary to know the displacement of each image: there are accurate algorithms to figure that out when needed. The method is not limited to a doubling of resolution, either; resolution can be increased by factors of 8x or more, though more images are required and the difficulty rises very quickly as the extrapolated resolution increases. And apparently most of this work is done in black and white, so it may not be ready for color photography yet.
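
As a sketch of the registration half of this (not a full reconstruction), phase correlation can estimate the unknown shift between two frames to sub-pixel precision; the upsample_factor below controls the precision of that estimate and is unrelated to the output resolution, and the frame names are placeholders.

Code:

import numpy as np
from skimage.registration import phase_cross_correlation

def estimate_shift(reference, frame):
    """Return the (dy, dx) shift that registers frame to reference."""
    offset, error, _ = phase_cross_correlation(reference, frame, upsample_factor=100)
    return offset, error

# Example: shifts = [estimate_shift(frames[0], f)[0] for f in frames[1:]]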

I have yet to find a clear implementation of any of these algorithms in software that one can download and try. Simple averaging of pixels is not enough to extract the full amount of information present in multiple images. There are some commercial programs that claim to use sophisticated algorithms, but it is hard to tell whether they really do.

Chris S.
Site Admin
Posts: 4042
Joined: Sun Apr 05, 2009 9:55 pm
Location: Ohio, USA

Post by Chris S. »

Lou,
Lou Jost wrote:And apparently most of this stuff is done in black-and-white, so it may not be ready for color photography yet.
Each color channel is essentially monochrome, right? So perhaps simply run the process separately on the color channels?

Or perhaps convert from RGB to Lab color and run the process on the luminosity channel? I often make this conversion for sharpening, then convert back to RGB. My sense is that most of what our eyes see as resolution resides in the luminosity channel, and I suspect that the luminosity channel could be treated as black and white.
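
A sketch of that round trip, assuming scikit-image for the color conversion and a placeholder super_resolve() standing in for whatever multi-frame process gets run on the luminosity channel; the color channels here are simply borrowed from one frame and upsampled to match.

Code:

import numpy as np
from skimage import color, transform

def process_luminosity(rgb_frames, super_resolve):
    labs = [color.rgb2lab(f) for f in rgb_frames]           # RGB -> Lab
    lum_hi = super_resolve([lab[..., 0] for lab in labs])   # work on L only
    # Borrow the a and b channels from one frame, resized to match.
    ab_hi = transform.resize(labs[0][..., 1:], lum_hi.shape + (2,), order=1)
    lab_hi = np.dstack([lum_hi, ab_hi[..., 0], ab_hi[..., 1]])
    return color.lab2rgb(lab_hi)                            # back to RGB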

Your initial experiment was interesting, despite the simple method for combining the images. Any further results?

--Chris

rjlittlefield
Site Admin
Posts: 23561
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

Lou Jost wrote:Some people were skeptical that super-resolution could be obtained from multiple randomly-shifted low-res images.
...
It turns out that this is an area of active research with well-documented results.
...
I have yet to find a clear implementation of any of these algorithms in software that one could download and try.
So, I guess you can count me among the skeptics.

But I'm not really skeptical about "could be obtained". Theory and experiment are pretty clear that it can be.

The skepticism I have relates to how practical and reliable the approach is, particularly for focus stacking, which was the original context.

I will be very interested to hear what your further investigations find.

--Rik

Lou Jost
Posts: 5943
Joined: Fri Sep 04, 2015 7:03 am
Location: Ecuador
Contact:

Post by Lou Jost »

Chris, that was my thought too. But maybe there are issues in getting the absolute levels of the channels to match; maybe relative lightness and darkness patterns are easier to reconstruct. I don't know.

Rik, I suppose the dearth of good consumer-ready programs suggests that you are right, but if it CAN be done, it WILL be done sooner or later.

I'll get back to my experiments as soon as I get sufficiently ahead on my "real" work....

soldevilla
Posts: 684
Joined: Thu Dec 16, 2010 2:49 pm
Location: Barcelona, more or less

Post by soldevilla »

This sounds like astronomical photography! Using several images is our normal way of working; sometimes, for planetary images, thousands of them.

Try RegiStax, AutoStakkert, Deep Sky Stacker, and many other programs. They are free and work better than Photoshop.

Lou Jost
Posts: 5943
Joined: Fri Sep 04, 2015 7:03 am
Location: Ecuador
Contact:

Post by Lou Jost »

Soldevilla, it looks like some of these are mainly noise-reduction programs rather than resolution-increasing programs. Do you have any recommendations for resolution-increasing programs?

Edit: Note also that some of these programs, like Deep Sky Stacker, are specifically designed to use stars for alignment and won't work without them, and that RegiStax 6 has been crippled with odd artificial limitations compared to earlier versions, making it useless for super-resolution applications.

Stevie
Posts: 95
Joined: Sat Dec 27, 2008 9:17 am
Location: Belgium
Contact:

Post by Stevie »

Lou Jost wrote: it looks like some of these are mainly noise-reduction programs rather than resolution-increasing programs.
Noise reduction in the case of deep sky, yes, since you use long exposures.
Another reason to do this, when shooting planets, is battling local seeing.
Most planet shooters also use a mono camera with a filter wheel in front of it, so they shoot in LRGB to gain the full resolution of the sensor for each color channel.
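
For completeness, here is a sketch of one common way the registered L, R, G, and B frames are combined, by dropping the luminance frame into the L channel of Lab. Real planetary workflows add many more steps (stacking, wavelet sharpening, and so on); the assumption here is that all four frames are already aligned, equally sized, and scaled to the range 0 to 1.

Code:

import numpy as np
from skimage import color

def combine_lrgb(L, R, G, B):
    rgb = np.dstack([R, G, B])      # color information from the filtered frames
    lab = color.rgb2lab(rgb)
    lab[..., 0] = L * 100.0         # replace lightness (Lab L runs 0..100)
    return color.lab2rgb(lab)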

Bushman.K
Posts: 162
Joined: Wed Aug 26, 2015 5:49 pm
Location: OR, USA
Contact:

Post by Bushman.K »

How about trying AutoStakkert! 2?

rjlittlefield
Site Admin
Posts: 23561
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

AutoStakkert looks like a good idea.

Here is a more specific URL for the super-resolution function: http://www.autostakkert.com/wp/enhance/

--Rik
