3D scan of fly head

A forum to ask questions, post setups, and generally discuss anything having to do with photomacrography and photomicroscopy.

Moderators: rjlittlefield, ChrisR, Chris S., Pau

BugEZ
Posts: 850
Joined: Sat Mar 26, 2011 7:15 pm
Location: Loves Park Illinois

3D scan of fly head

Post by BugEZ »

Several months ago I saw some literature on a Keyence digital microscope that included a description of turning a stacked image into a 3D surface. The literature

http://www.digitalmicroscope.com/dwn/vhx_2000_ka.pdf

shows solid surfaces of relatively simple geometry created from photos of mechanical parts, such as the threads on a screw. I wondered whether the information in one of my stacks would allow a similar 3D surface to be constructed for insect features, specifically a fly's eye, but I had many other things to pursue and put this investigation on the back burner.

A few weeks ago I was imaging the eyes of a colorful doli and decided to use the stack as a trial, writing some code to scan the fly and build a solid of the eyes. I budgeted two months to get the project done, mostly because I did not want to be messing with software when fresh bugs emerge after winter.

I started out by teaching myself enough Java (the programming language) to be able to decode the RGB data at each pixel of an image. Happily, it turns out Java has most of the tricky bits built in, and the online help is excellent, so that was less of an obstacle than I had expected. My first baby step was to read a horizontal line of pixel values from one image and write the RGB values out to a text file. I imported the text file into Excel and looked over the changes in RGB values that occur in the focused areas, comparing them with the blurry areas. That allowed me to develop an algorithm to detect the center of a focus zone. The figures below are extracts from a small presentation I made for my work colleagues on this topic. (My colleagues think I am crazy, but they enjoy the stories.)
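In case anyone wants to try the same first step, the pixel-reading part really is only a few lines. This is a simplified sketch rather than the actual bugScanner code, and the file name and row number are placeholders:

Code:
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.PrintWriter;
import javax.imageio.ImageIO;

public class LineDump {
    public static void main(String[] args) throws Exception {
        BufferedImage img = ImageIO.read(new File("frame_0001.jpg")); // placeholder file name
        int row = 1000;                                               // placeholder scan line
        try (PrintWriter out = new PrintWriter("line_1000.txt")) {
            out.println("col,red,green,blue");
            for (int col = 0; col < img.getWidth(); col++) {
                int rgb = img.getRGB(col, row);    // packed 0xAARRGGBB
                int r = (rgb >> 16) & 0xFF;
                int g = (rgb >> 8) & 0xFF;
                int b = rgb & 0xFF;
                out.println(col + "," + r + "," + g + "," + b);
            }
        }
    }
}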

[Eight images: slides extracted from the presentation]

The method I chose to pick the center of the focus band is described here, though only roughly. Essentially I use a weighted-average location, with the threshold for focus detection set at 40. It worked well enough; I am sure that other methods could be chosen that would work better.
Image
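In rough Java terms the idea looks like the sketch below. Only the threshold value of 40 is carried over directly from my code; the adjacent-pixel difference used here as the contrast measure is just a stand-in for illustration:

Code:
public class FocusCenter {
    static final int THRESHOLD = 40;   // focus-detection threshold mentioned above

    // gray = one scan line converted to 0-255 luminance values
    static double focusCenter(int[] gray) {
        double weightSum = 0.0, weightedPos = 0.0;
        for (int x = 1; x < gray.length; x++) {
            int contrast = Math.abs(gray[x] - gray[x - 1]);   // crude local contrast
            if (contrast >= THRESHOLD) {                      // pixel counts as "in focus"
                weightSum += contrast;
                weightedPos += contrast * x;
            }
        }
        return weightSum > 0 ? weightedPos / weightSum : -1;  // -1 = no focus band on this line
    }
}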


This shows the pixel locations that the algorithm has selected for the center of the focus band. I scan every fourth line horizontally (white spots) and vertically (black spots). I started out scanning every line, but the "point cloud" was excessive, so I chose to use fewer scans.
Image

The program scales the pixel X-Y location to create X-Y coordinates in mm. In this case I have a 10X lens with my 3009x2009-pixel sensor, and I reckon the width of a pixel (on the subject) to be 0.000788 mm. I calculated the step size between photos in the stack to be 0.005 mm and used that to calculate the Z coordinate.
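The scaling itself is just two constants; the numbers below are the ones for this particular stack, and a different lens or step size would of course change them:

Code:
public class PixelToMm {
    static final double PIXEL_MM = 0.000788; // width of one pixel on the subject, 10X lens
    static final double STEP_MM  = 0.005;    // focus step between frames in the stack

    // Convert a detected point (column, row, frame index) into X, Y, Z coordinates in mm.
    static double[] toMm(int col, int row, int frame) {
        return new double[] { col * PIXEL_MM, row * PIXEL_MM, frame * STEP_MM };
    }
}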

The Java program crunched through ~100 stack images on my old laptop in about 20 minutes. I consider that an acceptable stack time. The data file it produced is 7 MB.

I used my workstation at the office to crunch the 3D points and turn them into surfaces. I am a mechanical engineer and work with a variety of mechanical design modeling programs. For this task I used Unigraphics NX. I am hoping to discover a simpler, less costly program that could be used for this hobby, as I can't allow my hobby to encroach on my work.

Here are some screen snips of the point cloud and the surfaces the computer created. Note that I trimmed away points from the doli's whiskers and antennae.

Image

This slice of surface and point cloud shows the surface is well placed.
Image

Both eyes... Each eye surface is separately modeled
Image

In the drafting mode of the UG software, I can construct slices through the eye surface in arbitrary planes and project them. This tells me the male doli in the photograph has eyes with good range-resolving capability when the prey or mate is relatively close by... perhaps 1-2 body lengths. It will be interesting to compare with the shape of the female fly's eyes.
Image

I am still working on an editor to let me trim out unwanted point-cloud areas near the whiskers and antennae. For this analysis I had to do that manually in UG, which was laborious.

Overall I am quite pleased with what I accomplished on the software side in a few weeks. Not a bad first effort.

What do you think?

Keith

canonian
Posts: 891
Joined: Tue Aug 31, 2010 4:00 am
Location: Rotterdam, Netherlands
Contact:

Re: 3D scan of fly head

Post by canonian »

BugEZ wrote: What do you think?
I think you have started a very challenging project, and I have much respect for your skill in translating an idea and your theories into software. (In the same way, I find Rik's excellent software mind-blowing, built just by looking at what light and glass do together and translating that into code.) This is far beyond my grasp.

I have seen this at work in a demonstration of a Hirox HK-8700, which, in a similar way to the Keyence, builds a 3D wireframe mesh by depth-mapping and focus-measuring an object, and also textures this wireframe with an image of the subject, letting you freely rotate the 3D object.
You have to see this to believe it. It is also explained on pages 12 and 13 of the manual at http://www.hirox-europe.com/catalog/cat ... 700_en.pdf and shown in this video. I have some examples of a 3D bug at hand but not at this location; I will add them later tonight.

Please keep us posted how this evolves.

EDIT: added a 3D example here: http://www.photomacrography.net/forum/v ... 841#143841

rjlittlefield
Site Admin
Posts: 23561
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Re: 3D scan of fly head

Post by rjlittlefield »

BugEZ wrote: What do you think?
Nice work!

A couple of thoughts come to mind:

1. There is a function built into Zerene Stacker that some people use to collect much the same data. It is at Options > Preferences > DMap Settings > "Save depth map". The output is a 16-bit TIFF file with estimated depth at each pixel position recorded as a gray level = frame number scaled to the full 0-65535 range. This may not be of interest to you since you already have your own code to do the image analysis, but I thought I should mention it for other readers who might find it a helpful starting point.

2. I see in your image at http://www.photomacrography.net/forum/u ... ject_1.jpg that your source data has some strong JPEG compression artifacts (8x8 checkerboards) that seem to be giving you noisy data. Probably this washes out in the surface fitting, but if you plan to collect new data specifically for this purpose, it would be good to crank up the JPEG image quality as high as possible.

Things seem to be going very nicely with this project. Do you have some specific goal in mind, or is the work motivated by curiosity?

--Rik

BugEZ
Posts: 850
Joined: Sat Mar 26, 2011 7:15 pm
Location: Loves Park Illinois

Post by BugEZ »

Fred (Canonian)

Many thanks for your encouragement. The photos you posted and added were quite interesting. The Hirox machine you saw demonstrated appears to be quite similar to the Keyence machine I had previously read about. Very neat. The 3D model is quite impressive, particularly the modeling of individual hairs. I can only imagine the difficulty of writing software that works back from the point cloud and decides what is hair and what is stripes...

Rik mentioned:
a function built into Zerene Stacker that some people use to collect much the same data. It is at Options > Preferences > DMap Settings > "Save depth map". The output is a 16-bit TIFF file with estimated depth at each pixel position recorded as a gray level = frame number scaled to the full 0-65535 range.
Wow! That definitely sounds like pressing the easy button. I will re-run the stack through Zerene and generate the depth map. I can then use it to generate a point cloud and overlay the Zerene points and the "bugScanner" points.
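If it works the way Rik describes, the conversion should be short. Here is a sketch of what I have in mind, assuming ImageIO on a current Java runtime can read the 16-bit TIFF (older versions need a TIFF plugin) and that the gray level maps linearly over the frame range; the file name and frame count are placeholders:

Code:
import java.awt.image.BufferedImage;
import java.awt.image.Raster;
import java.io.File;
import java.io.PrintWriter;
import javax.imageio.ImageIO;

public class DepthMapToCloud {
    public static void main(String[] args) throws Exception {
        BufferedImage map = ImageIO.read(new File("depthmap.tif")); // placeholder file name
        Raster raster = map.getRaster();

        double pixelMm = 0.000788;   // pixel width on the subject, as before
        double stepMm  = 0.005;      // focus step between frames
        int    frames  = 185;        // frames in the stack (placeholder; use the real count)

        try (PrintWriter out = new PrintWriter("zerene_cloud.txt")) {
            for (int row = 0; row < map.getHeight(); row += 4) {      // every 4th line, as before
                for (int col = 0; col < map.getWidth(); col += 4) {
                    int gray = raster.getSample(col, row, 0);         // 0..65535
                    double frame = gray / 65535.0 * (frames - 1);     // back to a frame number
                    out.println(col * pixelMm + "," + row * pixelMm + "," + frame * stepMm);
                }
            }
        }
    }
}

(No attempt is made here to mask out background pixels; that trimming still has to happen somewhere.)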

Rik mentioned :
your source data has some strong JPEG compression artifacts

The input frames to the "bugScanner" were saved with the highest "quality" my RAW-to-JPG converter will produce, but the output frames from bugScanner, with the point-cloud pixels superimposed, are saved with the default compression of the Java function used. It is not a problem; these images are only for debugging.

Rik wondered:
Do you have some specific goal in mind, or is the work motivated by curiosity?
Well, both really. I have been taking photos of various dolis and examining their eyes. Some have eye stripes; some have random (stochastic) patterning. On a recent vacation I observed two similar species on the same bush catching the same bugs. One had the classic eye stripes; the other (the subject of my first eye scan) had stochastic patterning. I want to study the patterning, and I think I can automate the process of mapping the eyes.

I am also interested in what the fly sees. I plan to place the ommatidia on the surface of the eye (generated with bugScanner) and then project the surface normal at each spot onto the inner surface of a sphere. That, in theory, is the spot where that ommatidium is pointed. The overlap areas of the fly's visual field (spots it sees with both left and right eyes) are where it can see in stereo. The resolution of the fly's vision in a particular direction is probably proportional to the density of ommatidia pointed toward that area. While I don't know that I will discover any surprises from this mapping, it is quite interesting to look.
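The bookkeeping I have in mind is roughly the following sketch: treat each ommatidium's unit surface normal as its viewing direction, convert it to azimuth/elevation, and bin the directions on a coarse grid. Bin counts then approximate resolution, and bins hit by both eyes approximate the stereo overlap. (Just an illustration of the idea, not working analysis code; the bin size is arbitrary.)

Code:
public class ViewSphere {
    static final int BINS = 36;                       // 10-degree bins in azimuth and elevation
    int[][] leftCount  = new int[BINS][BINS];
    int[][] rightCount = new int[BINS][BINS];

    // nx, ny, nz = unit surface normal of one ommatidium; leftEye = which eye it came from
    void addOmmatidium(double nx, double ny, double nz, boolean leftEye) {
        double azimuth   = Math.atan2(ny, nx);                        // -pi .. pi
        double elevation = Math.asin(Math.max(-1, Math.min(1, nz)));  // -pi/2 .. pi/2
        int a = (int) ((azimuth + Math.PI) / (2 * Math.PI) * (BINS - 1));
        int e = (int) ((elevation + Math.PI / 2) / Math.PI * (BINS - 1));
        if (leftEye) leftCount[a][e]++; else rightCount[a][e]++;
    }

    // Directions covered by both eyes = candidate stereo (overlap) zone.
    boolean stereo(int a, int e) {
        return leftCount[a][e] > 0 && rightCount[a][e] > 0;
    }
}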

I made some movies of fly behavior while I was on vacation. Here is an interesting one...
https://www.youtube.com/watch?v=Z2vrHCs ... 2GGzPSuWyg

I watched a female of this "random eye" species land on a busy leaf and chase off all the similar flies. The movie suggests a typical detection distance, and it shows the attitude at which the fly holds its head to target and track a critter of interest. I want to see if the area of the eye used for "targeting" is in the normal stripe zone... Anyway, there are many things to examine, and I am bringing a different set of tools to bear on the critters. I expect to learn something.

A rather long-winded answer. I probably should have stopped with "both"...

Many thanks for pointing out the depth map!

Keith

rjlittlefield
Site Admin
Posts: 23561
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

BugEZ wrote: Fred (Canonian)

Many thanks for your encouragement. The photos you posted and added were quite interesting. The Hirox machine you saw demonstrated appears to be quite similar to the Keyence machine I had previously read about. Very neat. The 3D model is quite impressive, particularly the modeling of individual hairs. I can only imagine the difficulty of writing software that works back from the point cloud and decides what is hair and what is stripes...
I suspect that what you're imagining is not what's in the model. To my eye, what's shown at http://www.photomacrography.net/forum/v ... 841#143841 looks like a low-resolution smooth surface with some nice sharp bristles painted onto it. Take a look, for example, at the big white peaks to the left and right of subject center. The white part was originally bare cuticle lying almost flat on the critter's back. In the 3D view the narrow ring of sparse skinny bristles has grown up into a robust mountain with details from both structures painted onto it. This is one standard way of generating a sort of hybrid model that looks good as long as you don't get too far away from the original axis of acquisition.

--Rik

BugEZ
Posts: 850
Joined: Sat Mar 26, 2011 7:15 pm
Location: Loves Park Illinois

Post by BugEZ »

I ran DMap on the original stack and created a depth map. I like the look of the surface in the TIFF output. I'll spend a bit of time this weekend sorting out the pixel values and creating a point cloud. Neat!


Here is a compressed JPG from the depth map...


K

Image

TheLostVertex
Posts: 317
Joined: Thu Sep 22, 2011 9:55 am
Location: Florida

Post by TheLostVertex »

What file format are you saving your point cloud to? You should be able to automatically convert a point cloud into a mesh with MeshLab (http://meshlab.sourceforge.net). From there you have lots of possibilities for editing and viewing your mesh.

As Rik said, most of the data you are collecting (position and depth) is in the depth map Zerene can generate. I recently tested it to see how accurate it was compared to a depth map generated from a 3D rendering, and it gathers the depth information quite accurately (see HERE at the bottom of the page).
rjlittlefield wrote: I suspect that what you're imagining is not what's in the model. To my eye, what's shown at http://www.photomacrography.net/forum/v ... 841#143841 looks like a low-resolution smooth surface with some nice sharp bristles painted onto it.
I agree. As with Helicon's 3D feature, it applies the color image over a plane, with the plane displaced by a depth map.

I think your method will currently be limited by the same issue of only being able to gather data along one axis, with no tangential information. You could expand your solution, however, to include data from the same object rotated and photographed again; then you will begin to build up a more complete dataset. Or you could take a series of completed stacks rotated through 360° and create your model from those with something like Autodesk 123D (see http://en.wikipedia.org/wiki/Photogrammetry).

Since you posted your depth map, I quickly displaced a plane with it and cut off much of the area surrounding the eyes, since that is what you are interested in (plus it was a bit messy). I do not know your stack depth, so I just used an arbitrary amount. If you want the model, just let me know and I'll post it.

Image

BugEZ
Posts: 850
Joined: Sat Mar 26, 2011 7:15 pm
Location: Loves Park Illinois

Post by BugEZ »

Steven,

I am blown away! So cool. I will look up your reference and take a look.

If you PM me and propose a method, I will send the higher-resolution grayscale TIFF of the depth map. It is 36 MB, which exceeds my normal e-mail limits. That would probably make a smoother model for comparison. Or perhaps you have one from one of your own images?

I am not sure that I can extract face normals from the model you show. I shall have to ponder that a bit. The method I used produces smooth faces.

I am quite impressed at the great ideas that come from the members of this forum. It appears I have crowdsourced my cloud computing. I am rather old to go this high tech...

On another note, I have managed to decode the pixel values from the grayscale Tiff generated by Zerene, and should have a comparison of the point cloud produced by my bugScanner and Rik's depth map tomorrow.

Keith

BugEZ
Posts: 850
Joined: Sat Mar 26, 2011 7:15 pm
Location: Loves Park Illinois

Post by BugEZ »

TheLostVertex asked:
What format do you save your point cloud to?
I save it to a text file with x, y, z coordinates separated by commas.

PM me and I'll e-mail it along. The most recent one is 8 MB. It also includes the x,y pixel values (integers)
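The writer is trivial either way. If I understand MeshLab's import options correctly, a plain space-separated x y z per line (.xyz / .asc style) should load directly, so the sketch below can write either flavor (names and structure are just placeholders):

Code:
import java.io.PrintWriter;
import java.util.List;

public class CloudWriter {
    // Each point is {x, y, z} in mm. Comma-separated is what bugScanner writes now;
    // the space-separated variant is intended for direct import into MeshLab (.xyz style).
    static void write(List<double[]> points, String fileName, boolean spaceSeparated) throws Exception {
        String sep = spaceSeparated ? " " : ",";
        try (PrintWriter out = new PrintWriter(fileName)) {
            for (double[] p : points) {
                out.println(p[0] + sep + p[1] + sep + p[2]);
            }
        }
    }
}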

K
Aloha

TheLostVertex
Posts: 317
Joined: Thu Sep 22, 2011 9:55 am
Location: Florida

Post by TheLostVertex »

BugEZ wrote: I am not sure that I can extract face normals from the model you show. I shall have to ponder that a bit. The method I used produces smooth faces.
The image posted is a polygon model (as opposed to a parametric surface like you would see in most CAD software), with all normals facing outwards. From where the model is now, it could either be smoothed as-is or retopologized to produce a simpler mesh.
BugEZ wrote: If you PM me and propose a method, I will send the higher-resolution grayscale TIFF of the depth map. It is 36 MB, which exceeds my normal e-mail limits. That would probably make a smoother model for comparison.
I am not sure how much difference it would make. If you look carefully at the image I posted, and then at the depth map file you have, you should notice that the bands in the eye are in your 16-bit depth map as well. I hope Rik corrects me if I am wrong, but these stepped areas are areas where Zerene is selecting mainly out of one image, with little to no blending. While using the 16-bit image is obviously preferred, I think either way you would want to smooth the geometry afterwards, using a little artistic license.

rjlittlefield
Site Admin
Posts: 23561
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

TheLostVertex wrote: I hope Rik corrects me if I am wrong, but these stepped areas are areas where Zerene is selecting mainly out of one image, with little to no blending.
That's already correct.

If the software were specifically targeting surface reconstruction, it would do something like compare three adjacent images to see how the focus was changing, so that it could infer a more accurate location for the surface at that pixel location. But Zerene Stacker doesn't do that. Instead it just selects the best-focused frame because that's where the best pixels come from. ZS does do a little bit of smoothing, but really that's only for the sake of blending edges to hide things like slight differences in exposure.
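For the record, the usual way to do that refinement is a three-point parabola fit: if m[i-1], m[i], m[i+1] are the focus measures at one pixel in three adjacent frames, with the peak at frame i, the sub-frame offset of the true peak is 0.5*(m[i-1]-m[i+1]) / (m[i-1] - 2*m[i] + m[i+1]). A sketch of that idea (again, this is not what Zerene Stacker does):

Code:
public class SubFrameDepth {
    // m = focus measure at one pixel for every frame in the stack; stepMm = focus step.
    // Returns a refined depth in mm by fitting a parabola through the peak and its neighbors.
    static double refinedDepth(double[] m, double stepMm) {
        int peak = 0;
        for (int i = 1; i < m.length; i++) {
            if (m[i] > m[peak]) peak = i;              // the best-focused frame
        }
        if (peak == 0 || peak == m.length - 1) {
            return peak * stepMm;                      // no neighbors to refine with
        }
        double denom  = m[peak - 1] - 2 * m[peak] + m[peak + 1];
        double offset = denom == 0 ? 0 : 0.5 * (m[peak - 1] - m[peak + 1]) / denom;
        return (peak + offset) * stepMm;               // parabola vertex lands between frames
    }
}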

It's a surprisingly tricky problem to reconstruct an appropriately smooth surface from coarsely stepped depth data. I still have vivid memories of an experience back in the 1970's when I tried to make an elevation grid of a mountain by interpolating from its contour lines. The first result was strongly terraced. The reason -- obvious in retrospect! -- was that in the vicinity of contour lines, the interpolation process was getting mostly or entirely the same elevation values, from nearby points on the same contour line. To eliminate the unreal terraces I had to alter the point selection process so as to guarantee that each interpolation used data from more than a single contour line.

There's a very similar problem with this fly head. Finding the best frame at each pixel position naturally produces terraces. One way to get rid of the terraces is to fit a low order parametric model. That approach also is very effective at getting rid of simple noise, which otherwise will contaminate the surface normals something awful. A second approach is to collect better data in the first place. In this context, "better" does not necessarily mean denser. A well chosen set of sparse data may represent the surface more accurately. BugEZ's original approach of looking for particularly strong contrasts is good in this regard, and it's especially good for this particular subject because that beautiful dark outline on the edge of each ommatidium gives a nice natural spacing to the data.
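By way of illustration, one simple version of "fit a low order parametric model" is a sliding-window quadratic least-squares fit along each scan line; the window just has to be wide enough to span more than one terrace, which is exactly the lesson from the contour lines. A sketch (the 1-D treatment and the window size are arbitrary choices made for clarity, not a recipe):

Code:
public class DeTerrace {
    // Fit z = a + b*u + c*u^2 over a window centered at each pixel (u = offset from the center)
    // and keep the fitted value at u = 0, which is just the coefficient a.  With a symmetric
    // window the odd-power sums vanish, so a has a closed form.  'half' = window half-width.
    static double[] smooth(double[] z, int half) {
        double[] out = z.clone();                      // edges are left unsmoothed
        for (int i = half; i < z.length - half; i++) {
            double s0 = 0, s2 = 0, s4 = 0, t0 = 0, t2 = 0;
            for (int u = -half; u <= half; u++) {
                double zu = z[i + u];
                s0 += 1;   s2 += u * u;   s4 += (double) u * u * u * u;
                t0 += zu;  t2 += u * u * zu;
            }
            double denom = s0 * s4 - s2 * s2;
            out[i] = denom == 0 ? z[i] : (t0 * s4 - t2 * s2) / denom;
        }
        return out;
    }
}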

--Rik

BugEZ
Posts: 850
Joined: Sat Mar 26, 2011 7:15 pm
Location: Loves Park Illinois

Post by BugEZ »

Here are two overlay plots of the depth coordinates generated by bugScanner and extracted from the depth map output from Zerene's Dmap method.

The first plot shows the z (depth) coordinates produced by the two methods. The values are along the 1000th row of pixels. The coordinate system is explained in my first post.

Image

The second plot shows the same coordinates, zoomed in on a "normal" area where the fly's eye surface is normal to the camera lens.

Image

The bugScanner routine produces a very narrow band of values that plot nicely with Zerene's depth map when the scanned surface is oblique (sloping away from the camera lens). The close match of data can be seen between pixel columns 500-700. When the surface becomes normal to the axis of motion the scatter in the bugScanner output increases. The Zerene output is less noisy in the "normal" orientation, and similar in the oblique orientation.

When I think about this, I believe it is because my method only tries to find the sharpest focus (highest contrast) in the X-Y directions of one frame, whereas Zerene tracks and compares focus in the Z direction. It looks like Zerene does a good job of placing the focus surface, assuming the true surface lies in the middle of the scatter of the bugScanner data points.

I suspect the Zerene method is more accurate.


Keith
Aloha

BugEZ
Posts: 850
Joined: Sat Mar 26, 2011 7:15 pm
Location: Loves Park Illinois

Post by BugEZ »

TheLostVertex (Stephen) observed:
I think your method will currently be limited by the same issue of only being able to gather data along one axis, with no tangential information. You could expand your solution, however, to include data from the same object rotated and photographed again.
I quite agree. I gathered several stacks of this insect with the goal of using the best surfaces from each to thoroughly map each eye in 3D. What I discovered is that the critter deflates rather quickly. My rig gathers one image every 6 seconds. This first stack contained ~185 images or so; that took ~30 minutes. The next several stacks, with the insect re-positioned, each required about that long. By the time I was imaging the back of the head, the eye had started to sag. While the secondary images are probably good enough for mapping the eye's patterning, they would not be very useful for working out where each ommatidium is pointed.

It is a little bit like trying to stack a balloon with a slow leak.

Perhaps others have higher throughput? Were I using flash rather than gated continuous lighting, perhaps I could halve the cycle time. I still fear I would run out of time.

Keith

TheLostVertex
Posts: 317
Joined: Thu Sep 22, 2011 9:55 am
Location: Florida

Post by TheLostVertex »

BugEZ wrote: Perhaps others have higher throughput? Were I using flash rather than gated continuous lighting, perhaps I could halve the cycle time. I still fear I would run out of time.

Keith
I would agree that even with flash your working time would still be too long. The high number of images you take per stack is advantageous for extracting depth in the manner you are currently using, as well as from Zerene's depth output. However, if you decide to try to construct the object from completed stacks shot at successive rotations, then you can use a smaller aperture and a larger step size to help trim your working time. Just a thought. Though I still think you will be limited on time, since you will need several views around the object to make an accurate model.

Or maybe you can figure out how to make a fly eye humidifier for it while you stack ;) I am only half joking, as I wonder if something like that might be feasible.

rjlittlefield
Site Admin
Posts: 23561
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Post by rjlittlefield »

TheLostVertex wrote:Or maybe you can figure out how to make a fly eye humidifier for it while you stack ;) I am only half joking, as I wonder if something like that might be feasible.
Passing a slow stream of saturated air over it might do the trick. I am imagining a small "aquarium" equipped with an ordinary bubbler, hooded, with the output going to a small tube positioned near the specimen.

For sheer throughput, however, nothing beats video. The image quality is not as good as still frames, but the tradeoffs might be favorable for this application. See HERE for a slightly crazy example.

--Rik
