Here at 3Dflow we are, first and foremost, Computer Vision researchers. Computer Vision is, generally speaking, the science of extracting useful information from images. This is a rough definition: many subproblems can be delineated, and "information" can take many different meanings. Extracting three-dimensional models from a set of images (that is, recovering 3D information from 2D input) is a well-studied problem in Computer Vision, and 3DF Zephyr is a highly refined, well-engineered solution to it.


There are three main phases in 3DF Zephyr:


When starting a new project, the software extracts the cameras' external parameters (positions and orientations) and internal parameters (i.e. the focal length and other camera properties). This phase is handled by Samantha, our Structure from Motion technology. If you would rather input the internal camera parameters manually instead of letting Samantha estimate them, you can.
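To give a rough idea of what these parameters mean, the sketch below shows the standard pinhole camera model, in which the external parameters (a rotation R and a translation t) and the internal parameters (a focal length f and a principal point cx, cy) relate a 3D point to its pixel coordinates. This is an illustrative simplification, not Zephyr's implementation:

```python
# Illustrative pinhole camera model (not 3DF Zephyr's code): external
# parameters (R, t) place the camera in the world, internal parameters
# (f, cx, cy) map camera-space points to pixels.

def project(point, R, t, f, cx, cy):
    """Project a 3D world point to 2D pixel coordinates."""
    # External parameters: transform the point into the camera frame.
    X, Y, Z = (
        sum(R[i][j] * point[j] for j in range(3)) + t[i] for i in range(3)
    )
    # Internal parameters: perspective division, then focal length and
    # principal point convert to pixel coordinates.
    return (f * X / Z + cx, f * Y / Z + cy)

I = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]  # camera looking down +Z
print(project([0.1, 0.0, 2.0], I, [0.0, 0.0, 0.0], 1000.0, 320.0, 240.0))
# → (370.0, 240.0)
```

Structure from Motion solves the inverse problem: given many pixel observations across photos, it recovers R, t, and the internal parameters for every camera.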


After the cameras have been oriented, you can proceed to generate the 3D model. This step goes through two phases:


- Extract a dense reconstruction. This process requires the input cameras to be correctly oriented: if you believe a camera is not, you can remove it from the workspace. Before proceeding you can also restrict or expand the volume of reconstruction. The densification stage is powered by Stasia, our Multi-View Stereo technology. You can create different dense reconstructions (with varying parameters) and save them into the workspace.


- Create a triangular mesh, starting from a dense reconstruction. Depending on what you are reconstructing, you can generate either a smoothed surface (for instance when dealing with human bodies or terrain) or a sharp surface, which better suits scenes such as buildings or mechanical parts. The latter kind of surface is handled by Sasha, the meshing algorithm developed by 3Dflow.
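The option to restrict the volume of reconstruction mentioned above can be pictured as a simple bounding-box filter over the 3D points: only points falling inside the chosen box take part in the reconstruction. The function below is a hypothetical sketch of that idea, not Zephyr's actual code:

```python
# Hypothetical sketch: restricting the reconstruction volume amounts to
# keeping only the points inside an axis-aligned bounding box.
# Names and structure are illustrative, not 3DF Zephyr's implementation.

def restrict_volume(points, box_min, box_max):
    """Keep only the 3D points inside the box [box_min, box_max]."""
    return [
        p for p in points
        if all(box_min[i] <= p[i] <= box_max[i] for i in range(3))
    ]

cloud = [(0.5, 0.5, 0.5), (2.0, 0.0, 0.0), (-1.0, 0.2, 0.3)]
inside = restrict_volume(cloud, (0.0, 0.0, 0.0), (1.0, 1.0, 1.0))
print(inside)  # → [(0.5, 0.5, 0.5)]
```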


Every mesh polygon is textured using the most appropriate photos, chosen according to their visibility and quality. The texture generation phase also includes a color balancing algorithm that automatically selects the best color for each pixel by accounting for lighting changes across the photos.
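The idea of picking the most appropriate photo per polygon can be sketched as a simple scoring rule: prefer photos that see the polygon frontally, weighted by an image-quality term. This is a hypothetical illustration of the concept, not 3DF Zephyr's actual texturing algorithm:

```python
# Hypothetical per-polygon view selection: score each candidate photo by
# how frontally it sees the polygon (cosine between the polygon normal and
# the view direction) times an image-quality weight. Illustrative only.

def best_view(normal, views):
    """views: list of (name, view_direction, quality); view directions
    point from the surface toward the camera and have unit length."""
    def score(view):
        _, direction, quality = view
        facing = sum(n * d for n, d in zip(normal, direction))  # cosine
        return max(facing, 0.0) * quality  # back-facing views score 0
    return max(views, key=score)[0]

views = [
    ("photo_a", (0.0, 0.0, 1.0), 0.9),        # head-on, good quality
    ("photo_b", (0.7071, 0.0, 0.7071), 1.0),  # 45 degrees, best quality
]
print(best_view((0.0, 0.0, 1.0), views))  # → photo_a
```

Here the head-on photo wins (score 0.9) over the higher-quality but oblique one (score ≈ 0.71), which matches the intuition that grazing views produce stretched, low-detail texture.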


In the next section we will give some hints and tips on how to take the right photos.