A 3D model of a Spondylus regius.
For this I used 112 pictures from different angles and Agisoft Photoscan.
http://3d.hphoto.se/
https://sketchfab.com/models/2e7710f66e ... 51e7e6fc39
Regards Jörgen Hellberg
A 3D model of a Spondylus regius
Jörgen Hellberg, my website www.hellberg.photo
royalwinchester wrote: That's cool! Was each of the 112 photos a stack or a single image?
Thanks! No stacking - this time...
I have started to test some stacking, but my results so far are far from what I want to achieve.
Regards Jörgen
Jörgen Hellberg, my website www.hellberg.photo
Very nice. Was that generated in the less expensive stand-alone program? Will the software allow you to save as an .STL file for 3D printing? Do you actually process the 3D image, or do you upload your images to their cloud server and they generate it? Do they host the 3D image and you link to it? If you don't mind, will you briefly explain the workflow process? I did look at the software a while back but thought it was too expensive 'for me' - but then again, I must have been looking at the Pro version pricing. What sort of 'stage' did you use to capture all of the images?
I ended up using VisualSFM for testing and never gave Agisoft Photoscan a shot.
Regards,
-JW:
Thanks for the comments!
Smokedaddy wrote: [1] Was that generated in the less expensive stand-alone program?
[2] Will the software allow you to save as an .STL file for 3D printing?
[3] Do you actually process the 3D image, or do you upload your images to their cloud server and they generate it?
[4] Do they host the 3D image and you link to it?
[5] If you don't mind, will you briefly explain the workflow process?
[6] What sort of 'stage' did you use to capture all of the images?
1) I used the "Standard Edition - Stand-Alone license".
2) I have not tried it, but there seems to be a possibility to save the model in .stl format.
3) I processed it on my computer.
4) I have uploaded a smaller version of the model to Sketchfab, because it seemed to be an easy way to share my model with you.
5-6) A 50 mm lens at f/16 on a full-frame camera to get enough depth of field, a white background, and two softboxes; I turned the clam by hand. I masked every picture in Photoscan (first automatically by background, then by hand). After that I pretty much followed the "Workflow".
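A rough sketch of the depth-of-field trade-off behind choosing f/16 on full frame. The subject distance of half a metre is my assumption (not stated in the post); the thin-lens formulas and the 0.03 mm circle of confusion are standard full-frame conventions:

```python
# Rough thin-lens depth-of-field estimate for a 50 mm lens at f/16
# on a full-frame camera. Subject distance and circle of confusion
# are assumed values, not from the post.

def depth_of_field(focal_mm, f_number, subject_mm, coc_mm=0.03):
    """Return (near limit, far limit, total DOF) in mm."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    far = subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
    return near, far, far - near

near, far, total = depth_of_field(50, 16, 500)  # shell ~0.5 m away (assumed)
print(f"DOF: {near:.0f} mm to {far:.0f} mm (~{total:.0f} mm total)")
```

At that distance this gives several centimetres of sharp depth, which is why a single frame per angle can work for a shell-sized subject without stacking.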
Jörgen Hellberg, my website www.hellberg.photo
It is very cool to see these results.
I have a lot of experience with Agisoft, and the software is excellent, though I have only used it for much larger subjects (mostly human heads - thousands of them).
While I don't mean my thoughts about it to be any substitute for actually trying it out, here are some things I would expect:
The software works by finding fairly dense correspondences between photographs taken from different perspectives - by moving the camera around the subject, or rotating the subject. This means out-of-focus areas don't help, so DOF is a huge issue, as is consistent shot-to-shot lighting. The best results, I would *guess*, would come from rotating the subject on a stage with very, very flat diffuse lighting. Retouching is almost impossible, and masking can be very helpful to give the software an extra "hint".
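A toy one-dimensional sketch of what that correspondence search amounts to (an illustration only, not Agisoft's actual matcher): slide a small patch from one "image" along another and pick the offset with the lowest sum of squared differences. A sharp, high-contrast feature matches unambiguously; a blurred one produces many near-identical scores, which is the root of the DOF problem.

```python
# Toy 1D patch matching by sum-of-squared-differences (SSD).
# Not Agisoft's algorithm - just the underlying idea.

def best_offset(patch, signal):
    """Return the offset in `signal` where `patch` matches best (min SSD)."""
    scores = []
    for off in range(len(signal) - len(patch) + 1):
        ssd = sum((p - s) ** 2 for p, s in zip(patch, signal[off:off + len(patch)]))
        scores.append((ssd, off))
    return min(scores)[1]

left = [0, 0, 9, 1, 7, 0, 0, 0]        # a distinctive feature at index 2
patch = left[2:5]
right = [0, 0, 0, 0, 9, 1, 7, 0]       # same scene, shifted by 2 pixels

print(best_offset(patch, right))       # prints 4: the 2-pixel disparity is recovered
```

That per-feature disparity, repeated densely across many image pairs, is what the depth reconstruction is built from.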
Because the software tries to deduce the camera location of each shot, both for 3D reconstruction and for photo projection to texture the model, telecentricity is probably detrimental.
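A minimal sketch of why perspective parallax matters for recovering camera pose (toy numbers; an orthographic projection standing in for an idealized telecentric lens): under a pinhole model, a nearer point shifts more in the image than a farther one when the camera translates, which encodes depth; under orthographic projection every point shifts equally, so that cue vanishes.

```python
# Toy comparison: pinhole (perspective) vs orthographic projection of
# two points at different depths, as the camera translates sideways.

def pinhole_x(point, cam_x, focal=1.0):
    x, z = point
    return focal * (x - cam_x) / z   # perspective divide by depth

def ortho_x(point, cam_x):
    x, z = point
    return x - cam_x                 # depth plays no role

near_pt, far_pt = (0.0, 2.0), (0.0, 4.0)

persp_shift_near = pinhole_x(near_pt, 0.5) - pinhole_x(near_pt, 0.0)
persp_shift_far = pinhole_x(far_pt, 0.5) - pinhole_x(far_pt, 0.0)
ortho_shift_near = ortho_x(near_pt, 0.5) - ortho_x(near_pt, 0.0)
ortho_shift_far = ortho_x(far_pt, 0.5) - ortho_x(far_pt, 0.0)

print(persp_shift_near, persp_shift_far)   # near point shifts more: parallax
print(ortho_shift_near, ortho_shift_far)   # identical shifts: no depth cue
```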
I would expect a lot of small, thin structures, such as bristles, to be challenging.
You really want to avoid any specularity on the subject so that you capture the diffuse colors. Then, when you render out the 3D model, you can control materials to get lighting, specularity, and shadows, if desired. Not trivial, perhaps, but absolutely possible - all depending on the sophistication of your rendering software. I think I would approach this with a 3D painting application, such as ZBrush, and paint a color mask that I could then use to drive material types in the model's UVs (which are automatically generated by the meshing algorithm).
This is really an exciting experiment. I'm interested in seeing the results of a stacked and rotated subject, with materials applied, rendered in a high end rendering application like 3DStudio Max, or Maya, or Keyshot or the like!
-Royal
Thanks Lou Jost!
Lou Jost wrote: This would have enormous value in my scientific work (describing new orchid species).
For scientific use I guess it is useful to scale the model - for that you will need the more expensive Professional edition. The Professional edition also lets you select "tie points" for alignment.
The subject, the lens settings, and the light must be more or less unchanged between pictures. The subject cannot be transparent or reflective. You will need a lot of sharp, overlapping pictures.
But - it is interesting and quite fun!
I still have some questions about the scientific correctness of the model. Model to the left, one of the pictures to the right:
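A sketch of what scaling the model amounts to, in essence: pick two reference points in model coordinates and multiply everything by the ratio of the real-world distance to the model distance. All names and numbers below are made up for illustration; the Professional edition automates this with markers and scale bars.

```python
# Toy model scaling: fit model units to a real-world caliper measurement.
import math

def scale_model(vertices, ref_a, ref_b, real_distance_mm):
    """Scale vertices so the distance ref_a -> ref_b equals real_distance_mm."""
    model_distance = math.dist(ref_a, ref_b)
    s = real_distance_mm / model_distance
    return [(x * s, y * s, z * s) for x, y, z in vertices]

# Two reference points 2.0 model units apart, measured at 100 mm on
# the real shell (hypothetical values):
scaled = scale_model([(1.0, 0.0, 0.0), (0.0, 2.0, 0.0)],
                     (0.0, 0.0, 0.0), (2.0, 0.0, 0.0), 100.0)
print(scaled)  # prints [(50.0, 0.0, 0.0), (0.0, 100.0, 0.0)]
```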
Jörgen Hellberg, my website www.hellberg.photo
Thanks Royal!
royalwinchester wrote: I have a lot of experience with Agisoft, and the software is excellent, though I have only used it for much larger subjects (mostly human heads - thousands of them).
It would be interesting to see some of your work - and to know more about your workflow.
royalwinchester wrote: I'm interested in seeing the results of a stacked and rotated subject, with materials applied, rendered in a high end rendering application like 3DStudio Max, or Maya, or Keyshot or the like!
Me too! Besides documenting some shells, I am interested in using stacked 3D objects in video.
Regards Jörgen
Jörgen Hellberg, my website www.hellberg.photo
Most of my work has been in a professional capacity, so I can't show it directly. In order to restrict subject movement, the setup had dozens of cameras at different locations, all triggered simultaneously, with very diffuse lighting and strobes. It is kind of like a very crude (and much less expensive) version of the Light Stage at ICT:
https://www.youtube.com/watch?v=c6QJT5CXl3o
The largest gathering efforts were to train the models used to capture users for a video game called Kinect Sports Rivals. I worked with the vision science team to create the software for the live user capture.
We would capture a pretty good likeness (mostly limited by the Kinect camera and the uncontrolled lighting and environment in the wild), then map that to an art-directable model - rigged and ready for real-time gameplay - that captured some aspects of the likeness.
Super fun and very technically challenging project!