rjlittlefield wrote: For sheer throughput, however, nothing beats video. The image quality is not as good as still frames, but the tradeoffs might be favorable for this application. See HERE for a slightly crazy example.
--Rik

That is a good idea. For 3D reconstruction purposes, I do not think the images will have to be quite as robust. Plus I am sure Keith would be able to figure out a good way to get decent lighting for the purpose with his LED rig.
3D scan of fly head
Moderators: rjlittlefield, ChrisR, Chris S., Pau
-
- Posts: 317
- Joined: Thu Sep 22, 2011 9:55 am
- Location: Florida
My belt driven rig (rubber bands really) shown in link below
http://www.photomacrography.net/forum/v ... hp?t=13427
is quiet and smooth in operation. I don't hear the actuator at all and have to look for the belts turning or the LEDs flickering to know that the motor is being driven. I think it would be smooth enough for video. I need to look through the camera viewfinder while the actuator slews and see how it looks.
The bucket light I use is quite bright; link below. Since I first constructed the bucket light I have replaced all the LEDs and doubled their count, but I don't drive them quite as hard (lower current). The revised light is a bit brighter.
http://www.photomacrography.net/forum/v ... hp?t=15788
The Pentax K-100 I use on the rig does not make video, but my Pentax K-5 does and I have constructed an adapter for the remote shutter cable to allow it to be used in the rig.
A video mode of operation is definitely on the table.
For now, I am still working on software tools to streamline the processing of the bugScanner output.
Keith
I have an update on the "progress" with the eye scan software and analysis. I have gotten a great deal of help from TheLostVertex (Steven), who pointed me to MeshLab, an open source program described here: http://en.wikipedia.org/wiki/MeshLab. MeshLab has proven to be a great help. I have also used another open source program called ImageJ, described here: http://en.wikipedia.org/wiki/ImageJ. ImageJ was developed by the National Institutes of Health, probably to help read digital x-rays. It is a great utility for interrogating pixels, both for coordinates and hue.
In my first post I described my tool for turning a stack into a "point cloud" of points placed in 3D space. Since my primary interest was to model the eye surfaces, I needed to trim away points that are not part of the eyes. I loaded the output image from my initial stack (the DMAP output from Zerene Stacker) into ImageJ. I then used the multi-point feature in ImageJ to trace a polygon around the perimeter of the eye and exported the polygon point coordinates into a text file. I wrote a Java routine that loads the point cloud and the boundary point files. The routine then sorts through the point cloud, keeping the points that are within the polygon, and writes them to a file in a format (*.xyz) compatible with the MeshLab program.
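The trimming step described above amounts to a point-in-polygon filter in pixel coordinates. A minimal Java sketch of that idea (the class and method names are my own, not the actual bugScanner code, and it assumes the boundary is a simple polygon):

```java
import java.util.ArrayList;
import java.util.List;

public class PolygonTrim {
    // Ray-casting point-in-polygon test; px/py hold the polygon vertices.
    static boolean inPolygon(double[] px, double[] py, double x, double y) {
        boolean inside = false;
        int n = px.length;
        for (int i = 0, j = n - 1; i < n; j = i++) {
            // Count edge crossings of a horizontal ray from (x, y).
            if ((py[i] > y) != (py[j] > y)
                    && x < (px[j] - px[i]) * (y - py[i]) / (py[j] - py[i]) + px[i]) {
                inside = !inside;
            }
        }
        return inside;
    }

    // Keep only the cloud points whose (x, y) fall inside the boundary polygon;
    // each point is {x, y, z}.
    static List<double[]> trim(List<double[]> cloud, double[] px, double[] py) {
        List<double[]> kept = new ArrayList<>();
        for (double[] p : cloud) {
            if (inPolygon(px, py, p[0], p[1])) kept.add(p);
        }
        return kept;
    }
}
```

The z value of each point rides along untouched; only the x-y footprint is tested against the traced eye outline.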
I then open up MeshLab and load the trimmed point cloud. There are many tools within MeshLab for manipulating point cloud data. Happily, the proliferation of laser scanning tools and 3D printers has created a lot of interest in turning point cloud data into 3D meshes, and on YouTube you can find tutorials on how to do it. I tried a variety of processing techniques and found that the best was to generate about 6000 filtered mesh points from the 10,000 or so in the original trimmed file. This got rid of a lot of the noise (mostly in the z direction) in the input data. I then generated surface normals and a mesh. Various YouTube tutorials explain these processes much better than I can; I will only add that the filtering and mesh generation steps can be done very quickly. There are various techniques for smoothing the mesh, and I found this to be necessary. Again, tutorials can be found by searching on "smooth a mesh with MeshLab".
After I had generated the mesh I needed to output it in a form that I could read into a Java program. The simplest format for this was the *.xyz format, which creates a text file with the vertices of the mesh and the normal vector (a vector pointing directly away from the surface) at each vertex.
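For reference, the *.xyz layout used here is one line per vertex: position followed by normal, six whitespace-separated numbers. A tiny formatting sketch (the helper name is mine, not part of the actual tools):

```java
import java.util.Locale;

public class XyzLine {
    // Format one *.xyz record: "x y z nx ny nz" for a vertex v and its normal n.
    static String line(double[] v, double[] n) {
        return String.format(Locale.ROOT, "%f %f %f %f %f %f",
                v[0], v[1], v[2], n[0], n[1], n[2]);
    }
}
```

Locale.ROOT keeps the decimal separator a period regardless of system locale, which matters when another program has to parse the file.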
The last step for my needs was to map the centers of the individual ommatidia onto the surface. For this, I again turned to ImageJ to provide pixel coordinates. I again loaded the DMAP processed image and used the multi-point function to get ommatidium coordinates by placing a point at the center of each ommatidium. This was laborious; ImageJ allows you to zoom in on a section of the photo, and zooming helped a great deal. Again the x-y pixel coordinates were exported to a file. I used this file to create the x-y coordinates in the "bug" coordinate system. I then wrote another Java program that reads in the mesh data and the ommatidia coordinate data and solves for the z coordinate of each ommatidium as well as the normal vector on the eye surface at that location. This was the information I wanted in the first place. I used the ommatidium coordinate and normal vector information to project onto a 30mm radius sphere (30mm is the distance at which I observed this doli detect small flies on a leaf). This is enough information to show how many of the fly's ommatidia are pointing toward the "target" fly when the fly sees it and responds.
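The projection step isn't spelled out in code above; one plausible reading, sketched here under my own assumptions (each ommatidium looks along its surface normal, so the projected point is found by stepping the sphere radius along that normal from the ommatidium's position):

```java
public class SphereProject {
    // Project an ommatidium onto a sphere of radius r (e.g. 30 mm) by
    // stepping along its surface normal from its position on the eye.
    // p = ommatidium center {x, y, z}; n = surface normal at that point
    // (normalized here, so it need not arrive as a unit vector).
    static double[] project(double[] p, double[] n, double r) {
        double len = Math.sqrt(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
        return new double[] {
            p[0] + r * n[0] / len,
            p[1] + r * n[1] / len,
            p[2] + r * n[2] / len
        };
    }
}
```

Since the eye is only a few millimeters across, the choice of exact sphere center matters little at a 30mm radius; the normal direction dominates where each projected point lands.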
That was a lot of pretty boring stuff. Here are some figures that explain the same thing more clearly...
As you can see in the last image, the projected "coverage" of ommatidia that I computed was not very regular. I don't think this is due to any wrinkles in the eye. I think it is due to the relatively poor measurement accuracy of my "bugScanner" routine (first post in this thread) in the z axis when viewing the bug's eye. I think the optical problem is largely due to the fact that the eye's cornea is relatively thick, so there are things in focus at many axial levels.
Anyway, I will be looking to clean this up a bit and to refine the techniques I used. I have been working steadily on this for several months, and now have tools that allow the condensation of several months of effort into an afternoon.
Comments welcome...
Keith
Here is the result of the same analysis for the right eye (as seen in the photo). This shows where the ommatidia are pointed, projected onto a 30mm sphere, for both the left and right eyes. I colored the projected points from the left eye purple and those from the right eye yellow. It is interesting to note the overlapping band of stereo vision in the upper central portion of the field of view.
Keith
In a post about an actuator for stacking, ray_parkhurst showed a photo made with Helicon Focus that used a routine within that stacking software to generate a square 3D mesh. Link to the post below; the photo is on page 3.
http://www.photomacrography.net/forum/v ... a26d65cc26
This intrigued me.
I read the description of the program's rendering features in the Helicon Focus literature. While I have no need for a 3D rendering projected into 2D, the square mesh surface creation they describe might be interesting to program. I may dust off the fly eye scan program I wrote a few years ago (described in this post) and write code to create the square mesh they describe. It might allow a more self-contained package; the process I used and described above required many handoffs to other programs.
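A square (grid) mesh of the kind Helicon's rendering appears to use can be built directly from a depth map: one vertex per pixel at (x, y, depth) and one quad per 2x2 pixel neighborhood. A minimal sketch of that construction (my own guess at the layout, not Helicon's actual algorithm):

```java
public class GridMesh {
    // One vertex per depth-map pixel, row-major: (x, y, depth[y][x]).
    static double[][] vertices(double[][] depth) {
        int h = depth.length, w = depth[0].length;
        double[][] v = new double[h * w][];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                v[y * w + x] = new double[] { x, y, depth[y][x] };
        return v;
    }

    // One quad per cell of four neighboring pixels, as indices into the
    // row-major vertex array, wound counterclockwise.
    static int[][] quads(int w, int h) {
        int[][] q = new int[(h - 1) * (w - 1)][];
        int k = 0;
        for (int y = 0; y < h - 1; y++)
            for (int x = 0; x < w - 1; x++)
                q[k++] = new int[] {
                    y * w + x, y * w + x + 1,
                    (y + 1) * w + x + 1, (y + 1) * w + x
                };
        return q;
    }
}
```

This would make the package self-contained: the stacker's depth map goes straight to a mesh with no MeshLab handoff, though some smoothing of the noisy z values would presumably still be needed.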
Keith
-
- Posts: 1954
- Joined: Sat Oct 07, 2006 10:16 am
- Location: Bigfork, Montana
- Contact:
TheLostVertex wrote: Or you can take a series of completed stacks that are being rotated 360º and create your model from those with something like Autodesk 123d.

... or maybe VisualSFM. http://ccwu.me/vsfm/
-JW: