Tutorial #01 : How to convert photos into 3D models with 3DF Zephyr

Getting started with 3DF Zephyr

 

Welcome to the 3DF Zephyr tutorial series.
In this recipe, you will learn the basics and you will see how easy it is to turn your pictures into accurate 3D models with 3DF Zephyr.
3DF Zephyr is a powerful tool that requires a lot of computing power. Though not mandatory, a CUDA-capable GPU is recommended, as well as a generous amount of RAM.

 

  • Step 1 – Getting ready

Creating 3D models from pictures requires a good dataset. You can follow this tutorial with your own pictures or try it with our sample dataset. If you want to take your own pictures, please follow these basic guidelines, which will teach you the best practices for acquiring a dataset.
If you want to use our dataset, please download this zip file and extract it so that you can use it in Zephyr! This dataset is composed of 65 photos – if you’re using 3DF Zephyr Free, which is limited to 50 photos, don’t worry: simply select the first 50 photos. You’ll get a less detailed model, but a nice model nonetheless!

Download Dataset – Cherub (531MB)
Download – Cherub .ZEP file (340MB)

After preparing your dataset, install 3DF Zephyr on your computer – that’s all you need to start using 3DF Zephyr!
Note: due to high demand, the “Dismal Souvenir” dataset is available for download in Tutorial #3 (Masquerade dataset)
 

  • Step 2 – Creating a new project

To create a new project, click “Workflow” (1) and then “New Project” (2). The “Project Wizard” (3) screen will appear, which will guide you through the process of importing your pictures.
This phase is critical for the scene reconstruction, so please feed 3DF Zephyr a good dataset: blurred images and datasets with no overlap between pictures are examples of bad data for 3DF Zephyr. You can learn more about the most common guidelines in this quick guide titled “how to acquire pictures for 3DF Zephyr“.
To continue, in the “Project Wizard” screen click “Next” in the lower right corner.
Protip: there are two options in the lower left corner of the Project Wizard (3) that allow you to automatically compute the dense point cloud and the surface. This is useful for bigger datasets; leave them unchecked, as in this tutorial we’ll walk you through both of those steps. Another option (checked by default) allows Zephyr to download camera calibrations where available: we suggest leaving this option on – although Zephyr is completely self-calibrating, the online camera calibration can speed up the first phase and help with some fisheye lenses.
The next window is the “Photo selection page“, in which we need to add the photos that we want 3DF Zephyr to process. Click on the “plus sign” (4) and browse to the directory where your dataset is located. Select all the images you previously extracted and then click Open (or drag and drop the images directly from Windows Explorer).
The “Next” button (5) will now be clickable. Click it and you’ll get to the next window, which will allow you to assign camera calibration parameters to the loaded pictures.
Protip:  advanced users can add a previously generated manual camera calibration (if available) by checking the option in the lower left corner of this window. 
You will now be taken to the “Camera Calibration Page” window. This topic won’t be covered in this tutorial, so just click “Next“.
You are now ready for the first computation phase. Here’s a brief explanation of what will happen: Zephyr will analyze each image, find its features (points of interest that the computer can recognize), and compare each image with (usually a subset of) the other pictures: this is done to place the cameras in the correct positions. Before doing that, though, it’s necessary to tell Zephyr which settings to use. For this tutorial, we will use the preset mode with Close Range/Default settings.
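If you are curious about what “finding features” and comparing images actually means, here is a minimal, purely conceptual sketch in Python using the open-source OpenCV library. It is not Zephyr’s (proprietary) pipeline – it only shows the kind of operation a structure-from-motion engine performs on each pair of overlapping photos, and the filenames are hypothetical placeholders for two overlapping photos of the dataset.

```python
# Conceptual sketch only (not 3DF Zephyr's actual pipeline): detect features
# in two overlapping photos and match them - the first step any
# structure-from-motion engine performs before estimating camera positions.
# Requires the opencv-python package.
import cv2

# Hypothetical filenames - replace with two overlapping photos from the dataset
img1 = cv2.imread("cherub_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("cherub_02.jpg", cv2.IMREAD_GRAYSCALE)

# Detect keypoints ("points of interest") and compute their descriptors
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors between the two images and keep only unambiguous matches
# (Lowe's ratio test)
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [pair[0] for pair in matches
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]

print(f"{len(kp1)} and {len(kp2)} keypoints, {len(good)} reliable matches")
# From matches like these, an SfM engine estimates the relative camera poses
# and triangulates a sparse point cloud.
```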
 

  • Step 3a – Camera orientation and sparse point cloud generation, preset mode

The preset mode window will appear. The preset mode allows you to pick optimal settings for most cases, depending on the application scenario (dropdown menu “Category“) as well as the accuracy and computation speed required (dropdown menu “Presets“). For this specific case, for example, you can pick Close Range as the category and Default as the preset, and later experiment with different settings.
We will take a quick look at the reconstruction settings anyway: you can switch to the advanced mode by selecting “Advanced” in the dropdown menu in the top right corner and proceed to step 3b. Otherwise, you can click the “Next” button (6) and then “Run”, which will start the computation and take you directly to step 3c.
 

  • Step 3b – Camera orientation and sparse point cloud generation, advanced settings

The advanced settings window allows you to tweak and control every aspect of our reconstruction engine. We will discuss these advanced parameters in another tutorial; you can also find an in-depth explanation in the manual.
Use the dropdown menu in the top right corner again to go back to the presets mode, select Close Range and Default, and then click the “Next” button (6).
When you click the “Next” button (6) you will be ready to start the computation, and you will be presented with a “Run” button. This phase will orient the cameras and output a sparse point cloud. Click “Run” to start the computation.
Protip: understanding these settings is very important to obtain the best result possible with 3DF Zephyr. This is not a topic discussed in this tutorial, but it’s highly recommended that you read our parameters fine tuning guide.
 

  • Step 3c – Reconstruction outcome

After a while, you should see a “Reconstruction Successful!” dialog. This window will tell you how many (and which) images were correctly oriented. Double-clicking a filename will open the corresponding picture in your default image viewer. This is especially useful when dealing with large datasets, to quickly understand which cameras weren’t reconstructed successfully.
Click “Finish” in the lower right corner of the screen. Congratulations!
You have generated your first sparse point cloud in 3DF Zephyr.
 

  • Step 4 – Moving around

Before moving to the next step, let’s learn the basics of navigating the scene, which is rendered at the centre of the screen (7).
By default, you can navigate the scene in orbit view mode: while hovering the mouse cursor over the scene (7), hold the left mouse button and then move your mouse to look around. To zoom in/out, simply use the mouse wheel. You can also pan the view by keeping left Control pressed and moving the mouse cursor.
Zephyr offers three navigation systems that can be selected with their respective icon (8) or from the Scene > Camera submenu.
The orbit view mode with pivot behaves exactly like the orbit view mode; however, the pivot is not the centre of the reconstruction but is picked each time on the model at the cursor position.
The free look mode uses the classic first-person-shooter WASD keys; holding the left mouse button and moving the mouse rotates the camera, and the ‘q’ and ‘e’ keys move you up and down respectively.
You can also move quickly to a camera position by right clicking on a camera and then left clicking on “Move here” (9) or by using the camera navigator at the bottom of the screen.
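As a side note, the difference between these navigation modes comes down to some simple camera math. The sketch below is plain Python and has nothing to do with Zephyr’s internals; it only illustrates the idea behind an orbit camera, where the viewpoint sits on a sphere around a pivot point and dragging the mouse changes just the two angles.

```python
# Generic illustration of how an "orbit" camera works (not Zephyr's code):
# the camera position is derived from a pivot point, a distance and two
# angles. Dragging the mouse changes yaw/pitch; the wheel changes distance.
import math

def orbit_camera_position(pivot, distance, yaw_deg, pitch_deg):
    """Return the (x, y, z) position of a camera orbiting around `pivot`."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    x = pivot[0] + distance * math.cos(pitch) * math.sin(yaw)
    y = pivot[1] + distance * math.sin(pitch)
    z = pivot[2] + distance * math.cos(pitch) * math.cos(yaw)
    return (x, y, z)

# Orbit view: the pivot is the centre of the reconstruction.
# Orbit view with pivot: the pivot is the point picked on the model instead.
print(orbit_camera_position(pivot=(0.0, 0.0, 0.0), distance=5.0,
                            yaw_deg=30.0, pitch_deg=15.0))
```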
 

  • Step 5 – Dense point cloud generation

Now that the cameras are positioned, we can extract the dense point cloud of our 3D model. This time, from the “Workflow” menu (10) choose “Dense Point Cloud Generation” (11). The “Dense point cloud Generation Wizard” (12) will appear; click “Next” in the lower right corner of the screen.
At this stage, 3DF Zephyr will compute depthmaps and extract the dense point cloud.
These settings can significantly change the quality of the output as well as the computation time needed, and every aspect can be tuned in the advanced window, but for now just leave the Close Range/Default preset again and then click “Next” in the lower right corner of the screen.
Click “Run” to start the dense point cloud extraction (the second phase of the computation towards the 3D mesh).
Protip: understanding these settings is very important to obtain the best result possible with 3DF Zephyr. This is not a topic discussed in this tutorial, but it’s highly recommended that you read our parameters fine tuning guide.
After the computation, a “Dense point cloud generation successful” dialog should appear: click “Finish” in the lower right corner of the window. You can navigate the scene to inspect your new dense point cloud or proceed to the final phase of the computation in step 6, where we’ll extract the mesh.
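For reference, “computing depth maps” means estimating a depth value for (almost) every pixel of every photo; the dense point cloud is obtained by back-projecting those pixels into 3D space. The NumPy sketch below illustrates that back-projection on a toy depth map with assumed camera intrinsics – it is only a conceptual illustration, not Zephyr’s actual multi-view stereo algorithm.

```python
# Conceptual sketch of "depth map -> dense point cloud" (not Zephyr's actual
# multi-view stereo): every pixel of a per-image depth map is back-projected
# into 3D camera space using the camera intrinsics (fx, fy, cx, cy).
import numpy as np

def backproject_depth(depth, fx, fy, cx, cy):
    """Turn an HxW depth map into an (N, 3) array of 3D points in camera space."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no valid depth

# Toy example: a flat 4x4 depth map, one unit away from the camera
toy_depth = np.ones((4, 4))
cloud = backproject_depth(toy_depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
print(cloud.shape)  # (16, 3) - one 3D point per pixel
```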
 

  • Step 6a – Mesh extraction

To start the mesh generation process, simply click “Workflow” (13) and then on Mesh Extraction (14).
The “Mesh Generation Wizard” (15) window will appear.
Since the workspace is capable of holding multiple point clouds, we have to choose which one will be used for the mesh extraction. Since we have just one, we can simply click “Next” to get to the “surface reconstruction page“.
Once again, you can snoop around the advanced settings if you wish, then select the “Close Range/Default” preset again and click “Next” in the lower right corner of the window.
To start the mesh creation, just click on the “Run” button: when the “Mesh Creation Successful” dialog appears, just click “Finish” in the lower right corner of the window.
Protip: understanding these settings is very important to obtain the best result possible with 3DF Zephyr. This is not a topic discussed in this tutorial, but it’s highly recommended that you read our parameters fine tuning guide.
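If you want a feel for what a surface reconstruction step does with a point cloud, here is a small example using the open-source Open3D library and its Poisson reconstruction. Zephyr uses its own surface reconstruction algorithm, so this is only an analogous operation on a hypothetical PLY file (for example, a dense cloud exported from Zephyr), not what happens under the hood.

```python
# Analogous (open-source) example of turning a dense point cloud into a mesh
# with Open3D's Poisson reconstruction. 3DF Zephyr uses its own algorithm;
# the input filename is a hypothetical exported PLY file.
import open3d as o3d

pcd = o3d.io.read_point_cloud("cherub_dense_cloud.ply")

# Poisson reconstruction needs oriented normals on the point cloud
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30)
)

# Higher depth = finer detail, but more memory and longer computation
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9
)

o3d.io.write_triangle_mesh("cherub_poisson_mesh.ply", mesh)
print(f"{len(mesh.vertices)} vertices, {len(mesh.triangles)} triangles")
```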
 

  • Step 6b – Textured mesh generation

Once the mesh has been generated, the color information is stored per vertex. This might be fine for some applications, though usually a texture is required as well. In Zephyr, this is a separate step that takes as input the mesh generated in step 6a.
To start the texture generation process, simply click “Workflow” (16) and then on “Textured mesh Generation” (17). The “Textured Mesh” (18) generation window will appear.
Select the previously generated mesh and, if you wish, tweak the settings according to your desired output. Note that the color contribution will vary depending on the maximum number of cameras per triangle, and that the multiband setting can be used for cleaner and sharper borders.
It’s recommended that you leave 1 camera per triangle and keep Use Color Balance enabled. This feature, available since Zephyr 2.0, will automagically fix lighting issues and pick the best possible color for each point. You can still use more than one camera per triangle and the multiband option if you wish (even paired with color balancing); however, for most cases the best results are achieved with “max number of cameras per triangle” set to 1, multiband disabled, and Use Color Balance enabled.
Click “Next” to start the textured mesh generation.
Protip: understanding these settings is very important to obtain the best result possible with 3DF Zephyr. This is not a topic discussed in this tutorial, but it’s highly recommended that you read our parameters fine tuning guide.
 

  • Step 7 – Exporting the final mesh

To export the generated textured mesh, simply click on the “Export Menu” (19) and then on “Export Textured Mesh” (20): the “Mesh Export Window” (21) will appear.
3DF Zephyr allows you to export to the most common and widely used file formats. As development goes on, new file formats may be supported, so it’s important to keep your copy of 3DF Zephyr up to date.
Different options are available depending on the chosen file format. As a standalone 3D model viewer, you can use your program of choice (for example, MeshLab).
When you are ready to export, simply click on the Export button in the lower right corner of the mesh export window. The texture generation topic will be covered thoroughly in another tutorial.
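If you prefer a quick scriptable check over a standalone viewer, the snippet below loads the exported mesh with the open-source Open3D library and opens a simple interactive viewer. The filename is a hypothetical placeholder – use whatever name and format you chose in the export window.

```python
# Quick sanity check of the mesh exported from Zephyr, using Open3D as an
# example viewer. The filename is hypothetical - adapt it to your export.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("cherub_textured.obj",
                                 enable_post_processing=True)  # also load materials/texture
print(f"{len(mesh.vertices)} vertices, {len(mesh.triangles)} triangles")
print("has texture:", mesh.has_textures())

o3d.visualization.draw_geometries([mesh])  # opens an interactive viewer window
```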
 

  • Final notes

Opening your exported mesh in your model viewer of choice, you should now see something similar to this output. Your reconstruction will probably also include some parts of the white table below the cherub (which the example shown here does not have).

[Embedded Sketchfab model: “Cherub Example” by 3dflow]

 
Continuing in this tutorial series, you will learn more than one way to polish and clean your reconstructions.
Learning the basics is easy, but understanding every aspect and the effects of every option takes a bit of time and experience: remember that we have our online documentation as well as a forum where you can ask specific questions.
 

The next tutorial will teach you how to set control points and control distances. Click here to proceed to the next tutorial.