The first phase of 3D reconstruction is called Structure from Motion, and it is responsible for computing the position and orientation of each photo that has been loaded in Zephyr. This step yields a sparse point cloud as its initial 3D product.

Depending on the selected category, you can choose from four quality presets (fast - default - deep - exhaustive) that gradually increase the capabilities of the Samantha algorithm that manages this first phase.

You can switch to advanced mode to define each parameter of the Structure from Motion phase in detail. To do so, simply select "Advanced" from the "settings" dropdown menu.

Keypoints Density: keypoints are the distinctive pixels that are matched across pictures. This parameter sets the number of keypoints that 3DF Zephyr will extract from each photo. Note that a higher number of keypoints does not necessarily lead to a better result: the more elements involved, the higher the ambiguity when detecting correspondences between the keypoints themselves.

  - Very low: extracts up to 2,500 keypoints per image

  - Low: extracts up to 5,000 keypoints per image

  - Medium: extracts up to 7,500 keypoints per image

  - High: extracts up to 10,000 keypoints per image

  - Very High: extracts up to 15,000 keypoints per image
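The density presets above can be read as a simple cap on the number of keypoints extracted per image. As a minimal sketch (the preset names and the function below are hypothetical, not Zephyr's API; the cap values come from the list above):

```python
# Hypothetical mapping of the keypoint-density presets to the per-image
# extraction caps listed in the manual above.
KEYPOINT_CAPS = {
    "very_low": 2_500,
    "low": 5_000,
    "medium": 7_500,
    "high": 10_000,
    "very_high": 15_000,
}

def keypoint_cap(density: str) -> int:
    """Return the maximum number of keypoints extracted per image."""
    try:
        return KEYPOINT_CAPS[density]
    except KeyError:
        raise ValueError(f"unknown density preset: {density!r}")

print(keypoint_cap("medium"))  # 7500
```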

It is recommended to decrease keypoints density when:

  - a computed result is required quickly;

  - the distance between the cameras and the subject/scene does not change during the survey;

  - pictures have a high overlap percentage (70% - 80%).

It is recommended to increase keypoints density when:

  - cameras are not correctly oriented;

  - some pictures are taken too close to or too far from the subject/scene;

  - the dataset includes a few photos with a low percentage of overlap;

  - the subject/scene has a uniform, textureless surface (a white wall, for instance).

Matching Type: allows you to select different kinds of matching between the keypoints in the images. The accurate option requires more time to complete.

Matching Stage Depth: sets how many matching attempts are performed between the keypoints of the images. Setting this parameter to a higher value increases the capability to align multiple images at the same time, as well as the computation time.
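One way to picture the trade-off: a larger depth widens the neighbourhood of images each photo is matched against, so more image pairs are tested. This is an illustrative sketch only, not Zephyr's actual matching algorithm:

```python
# Illustrative sketch (not Zephyr's internal algorithm): a larger
# matching-stage depth pairs each image with more of its neighbours in the
# sequence, improving alignment robustness at the cost of computation time.
def candidate_pairs(n_images: int, depth: int) -> list[tuple[int, int]]:
    """Pair each image with its `depth` nearest successors in the sequence."""
    pairs = []
    for i in range(n_images):
        for j in range(i + 1, min(i + 1 + depth, n_images)):
            pairs.append((i, j))
    return pairs

# With 6 images, depth 1 tests 5 consecutive pairs; depth 2 tests 9 pairs.
print(len(candidate_pairs(6, 1)), len(candidate_pairs(6, 2)))  # 5 9
```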

It is recommended to decrease the depth value of the matching stage when:

  - a computed result is required quickly;

  - the distance between the camera and the subject is small;

  - the overlap percentage between different shots is high (70% - 80%).

It is recommended to increase the depth value of the matching stage when:

  - cameras are not correctly oriented;

  - some pictures are taken too close to or too far from the subject/scene;

  - the dataset includes a few photos with a low percentage of overlap;

  - the subject/scene has a uniform, textureless surface (a white wall, for instance).

Reconstruction Engine: 3DF Samantha supports both incremental and global reconstruction pipelines. The global pipeline tries to solve the orientation problem globally, which means it is usually faster than the incremental one and potentially less prone to drifting error. The incremental engine, on the other hand, albeit slower, tends to include more cameras in the final solution. In a nutshell, you should use the global pipeline if you are processing a huge dataset (more than 1,000 pictures).
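The rule of thumb above can be condensed into a one-line decision (a hypothetical helper; the 1,000-picture threshold is the figure quoted in the paragraph):

```python
# Hypothetical rule-of-thumb helper based on the guideline above: prefer the
# global pipeline on very large datasets, the incremental one otherwise.
def pick_engine(n_pictures: int, large_dataset_threshold: int = 1000) -> str:
    return "global" if n_pictures > large_dataset_threshold else "incremental"

print(pick_engine(250))   # incremental
print(pick_engine(5000))  # global
```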

Photo Ordering: this parameter tells the software which methodology was used during image capture in the field, and therefore the sequence in which the pictures were taken. This can speed up the detection of nearby images. The sequential option means the photos were taken in numerical order (photo 1 near photo 2, photo 2 near photo 3, and so on), while the circular option indicates a loop sequence. If you know that the photos were taken in a specific order, you can streamline the process by choosing one of these two options; otherwise you can consider the sparse option. The last option, approximate grid, was designed for data acquired from drones or UAV systems: it applies when the capture is sequential and has a high percentage of overlap between the pictures (70% - 80%).
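To make the difference between the sequential and circular hints concrete, here is an illustrative sketch of the neighbour pairs each ordering implies (the function name is hypothetical, not part of Zephyr):

```python
# Illustrative sketch of the neighbour pairs implied by the photo-ordering
# hint (hypothetical helper, not Zephyr's API). "sequential" links each photo
# to the next one; "circular" additionally closes the loop.
def sequence_neighbours(n: int, ordering: str) -> list[tuple[int, int]]:
    """Consecutive image pairs implied by the capture order."""
    pairs = [(i, i + 1) for i in range(n - 1)]
    if ordering == "circular" and n > 2:
        pairs.append((n - 1, 0))  # the loop closes: last photo meets the first
    return pairs

print(sequence_neighbours(4, "sequential"))  # [(0, 1), (1, 2), (2, 3)]
print(sequence_neighbours(4, "circular"))    # adds the closing pair (3, 0)
```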

A further description of these parameters is available in the 3DF Zephyr online documentation.

You may also switch to "Custom" mode by selecting "Custom" from the "settings" dropdown menu. This mode allows you to fine-tune every single reconstruction parameter in detail. Remember that you can load an XML file or reset to the defaults by right-clicking and selecting the desired option. Please also note that a hint about the selected parameter is shown at the bottom of the window.

At the end of this first phase of camera orientation you will see both the sparse point cloud and the sequence of pictures arranged around it.

Photographs in the render window are shown as stylized blue cones, whose graphics can be edited in the Zephyr Options.


After the first step of 3D reconstruction is completed, it is advisable to check its accuracy in order to figure out which pictures should be discarded so as to decrease the error in the subsequent phases as well.

By clicking on Tools > Workspace > Camera Stats the following chart will appear:

The middle column shows the mean reprojection error for each picture, while the "Point Seen" column indicates the number of keypoints detected in each photo (namely, the points whose position the software was able to retrieve). By clicking on the Mean Reproj. Error and Point Seen headers, users can sort the values in either ascending or descending order.
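The sorting described above is a plain column sort. A minimal sketch with made-up sample values (the real table is produced by Zephyr itself):

```python
# A minimal sketch of sorting the Camera Stats table, using made-up sample
# values; the real figures come from Zephyr's own reconstruction.
cameras = [
    {"photo": "IMG_001", "mean_reproj_error": 0.42, "points_seen": 1320},
    {"photo": "IMG_002", "mean_reproj_error": 1.15, "points_seen": 240},
    {"photo": "IMG_003", "mean_reproj_error": 0.55, "points_seen": 980},
]

# Sort by mean reprojection error, ascending: the worst-oriented photo,
# a candidate for discarding, ends up last.
by_error = sorted(cameras, key=lambda c: c["mean_reproj_error"])
print([c["photo"] for c in by_error])  # ['IMG_001', 'IMG_003', 'IMG_002']
```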


Generally, a well-oriented picture with a good degree of accuracy will have a low reprojection error as well as many points seen. Points seen represent the number of equations exploited during the reconstruction phase, so a high value is related to a higher reliability of the final result. It should also be noted that the reprojection error alone does not provide an absolute measure of accuracy, since it always has to be correlated with the sensor size of the camera rig.
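For reference, the mean reprojection error of a camera is simply the average pixel distance between where each reconstructed 3D point projects into the photo and where it was actually observed. A self-contained sketch with invented coordinates:

```python
# Reprojection error in a nutshell: the pixel distance between where a
# reconstructed 3D point projects into a photo and where it was actually
# observed. This standalone sketch averages those distances for one camera,
# using invented sample coordinates.
from math import hypot

def mean_reprojection_error(projected, observed) -> float:
    """Mean 2D distance (in pixels) between projected and observed points."""
    errors = [hypot(px - ox, py - oy)
              for (px, py), (ox, oy) in zip(projected, observed)]
    return sum(errors) / len(errors)

proj = [(100.0, 200.0), (150.0, 80.0)]   # where the 3D points reproject
obs = [(100.5, 200.0), (150.0, 81.0)]    # where the keypoints were detected
print(mean_reprojection_error(proj, obs))  # 0.75
```

Because the result is expressed in pixels, comparing it across cameras with different sensors requires relating it to each sensor's pixel size, as noted above.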