Tutorial #10 – understanding mesh/cloud filters
Welcome to the 3DF Zephyr tutorial series.
In this recipe, you will learn how to correctly use the point cloud and mesh filters in Zephyr.
Although not a tutorial in the strict sense, this guide can help you improve the final results you get in Zephyr.
· Point Cloud Filters
Decimation: select the desired point cloud with the dropdown menu and the desired number of points. The point cloud will be regenerated, decimating the point count down to the specified maximum value.
Densification: select the desired point cloud with the dropdown menu and the desired number of points. The point cloud will be regenerated, densifying the point count up to the specified maximum value. Only structured point clouds can be densified.
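Zephyr applies these filters through its interface, not through code, but the contract of decimation is easy to illustrate. The following is a minimal Python sketch, assuming uniform random subsampling — Zephyr's actual decimation strategy is smarter than this, but the guarantee is the same: the output never exceeds the requested maximum.

```python
import random

def decimate_point_cloud(points, max_points, seed=0):
    """Reduce a point cloud to at most max_points points.

    Illustrative only: uniform random subsampling stands in for
    Zephyr's internal decimation. The output is always a subset
    of the input and never exceeds the requested maximum.
    """
    if len(points) <= max_points:
        return list(points)
    rng = random.Random(seed)
    return rng.sample(points, max_points)

# A synthetic 1000-point cloud, decimated to at most 250 points.
cloud = [(x * 0.1, x * 0.2, 0.0) for x in range(1000)]
decimated = decimate_point_cloud(cloud, 250)
print(len(decimated))  # 250
```

Note that if the cloud already has fewer points than the maximum, the filter is a no-op — the same behavior you see in Zephyr when the target count exceeds the current one.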
· Mesh Filters
Decimation: select the desired mesh with the dropdown menu and the desired number of vertices. The mesh will be regenerated, decimating the vertex count down to the specified maximum value.
Densification: select the desired mesh with the dropdown menu and the desired number of vertices. The mesh will be regenerated, densifying the vertex count up to the specified maximum value.
Laplacian smoother: smooths the surface of the selected mesh. It reduces noise but does not preserve sharp edges. More iterations mean a smoother final result.
Bilateral smoother: smooths the surface of the selected mesh. It reduces noise while trying to preserve sharp edges where possible. More iterations mean a smoother final result.
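To see why more iterations mean a smoother (and less sharp) result, here is a minimal sketch of the classic Laplacian smoothing step: each iteration moves every vertex toward the centroid of its neighbors. This is the textbook formulation, not Zephyr's implementation, and the `strength` blending factor is an assumption for illustration.

```python
def laplacian_smooth(vertices, neighbors, iterations=10, strength=0.5):
    """Classic Laplacian smoothing sketch (not Zephyr's code).

    vertices:  list of (x, y, z) positions
    neighbors: list of neighbor-index lists, one per vertex
    Each iteration blends every vertex toward the centroid of its
    neighbors; repeated iterations flatten noise and sharp features.
    """
    verts = [list(v) for v in vertices]
    for _ in range(iterations):
        new_verts = []
        for i, v in enumerate(verts):
            if not neighbors[i]:
                new_verts.append(v[:])
                continue
            centroid = [sum(verts[j][k] for j in neighbors[i]) / len(neighbors[i])
                        for k in range(3)]
            new_verts.append([v[k] + strength * (centroid[k] - v[k])
                              for k in range(3)])
        verts = new_verts
    return verts

# Three vertices in a row with a spike in the middle: smoothing
# pulls the spike down (and, as is typical of Laplacian smoothing,
# also shrinks the shape slightly).
verts = [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0), (2.0, 0.0, 0.0)]
adj = [[1], [0, 2], [1]]
smoothed = laplacian_smooth(verts, adj, iterations=5)
print(smoothed[1])
```

The shrinkage visible on the endpoints is exactly why Zephyr offers the bilateral smoother as an edge-preserving alternative.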
Fill holes: fills the mesh holes. From the selection window it is possible to select a specific hole (its border will be highlighted in the workspace scene), all holes up to a certain size, or all holes. The color of the triangles that close a hole will most likely be wrong when the visibility of the missing surface from the cameras is degenerate (and, obviously, it is impossible to determine the color of something that is not seen in any picture). Two algorithms are available, watertight and selective. While the selective one can close the selected hole quickly, it might not work for complex cases; in those cases we suggest using the watertight algorithm.
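The hole borders that Zephyr highlights correspond to boundary edges of the triangle mesh — edges used by exactly one triangle (interior edges are shared by two). A short sketch of that detection step, assuming a simple indexed-triangle representation:

```python
from collections import defaultdict

def boundary_edges(triangles):
    """Find hole-border edges of a triangle mesh.

    An edge shared by two triangles is interior; an edge used by
    exactly one triangle lies on a hole (or the mesh's outer border).
    triangles: list of (i, j, k) vertex-index triples.
    """
    count = defaultdict(int)
    for a, b, c in triangles:
        for u, v in ((a, b), (b, c), (c, a)):
            count[tuple(sorted((u, v)))] += 1
    return [edge for edge, n in count.items() if n == 1]

# A square split into two triangles: the diagonal (0, 2) is shared,
# so only the four outer edges are reported as boundary.
tris = [(0, 1, 2), (0, 2, 3)]
print(sorted(boundary_edges(tris)))  # [(0, 1), (0, 3), (1, 2), (2, 3)]
```

Chaining these edges into closed loops is what lets a hole-filling algorithm treat each loop as one hole to triangulate.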
Retopology: you may use this filter to let Zephyr execute its mesh retopology algorithm – this filter will try to simplify the mesh and generate a better organized one, with bigger triangles where possible. The higher the factor, the bigger the triangles. Please note that running this filter may result in a loss of mesh detail.
Photoconsistent Mesh Optimization: the photoconsistency based mesh optimization will adjust the triangle positions to maximize the photoconsistency of their reprojection. You may run this filter directly when reconstructing a mesh (in some presets this feature is disabled, but you may always enable it from the advanced or custom settings) or, from this menu, after a mesh has been reconstructed. When the mesh has good geometry to begin with, it is advised to keep a high resolution in the photoconsistency mesh optimization (50% – 75%), while if you have a low quality mesh as input, you should keep a lower resolution (25%). Following, some examples and more details about photoconsistency.
· Photoconsistent mesh optimization:
Following, a detailed explanation of the photoconsistency based mesh optimization as well as a practical guide with examples.
Max. vertices (5.000.000): maximum vertex count of the final resulting mesh.
Target reprojection area (20): controls the size that every triangle will try to reach at the end of the photoconsistency process. Given a triangle, its final reprojection area (in pixels) on the nearest camera will tend toward the specified value. Lowering this value will produce a denser mesh: the final mesh will then have a variable level of detail, with a higher vertex density in the areas that are viewed by nearer cameras.
In most cases the default value will work well; in case you are dealing with a mesh that is already very good (for example, one that has already finished a photoconsistency step) this parameter can be decreased to try and extract even more details, while it can be increased in case of noisy meshes and low quality images.
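The variable level of detail follows from simple pinhole geometry: a surface patch's reprojected area shrinks with the square of its distance from the camera, so holding the reprojected area fixed forces smaller (denser) triangles near the cameras. The sketch below uses a simplified model (patch parallel to the image plane, hypothetical focal length of 2000 px) — Zephyr projects each triangle exactly, but the scaling law is the same.

```python
def reprojected_area(world_area, focal_px, depth):
    """Approximate on-image area (pixels^2) of a small surface patch
    seen by a pinhole camera: area scales with (focal / depth) ** 2.
    Simplified model for illustration, not Zephyr's exact projection.
    """
    return world_area * (focal_px / depth) ** 2

# For a target reprojection area of 20 px^2 and f = 2000 px, solve
# for the world-space triangle area at two depths: the triangle
# 4x farther away must be 16x larger to cover the same 20 pixels.
for depth in (1.0, 4.0):
    world_area = 20.0 * (depth / 2000.0) ** 2
    print(depth, world_area)
```

This is why lowering the target value densifies the mesh everywhere, and why regions seen only by distant cameras end up with coarser triangles.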
Number of neighbor cameras (4): for each camera, a number N of neighboring cameras are chosen that will compose the pairs that the photoconsistency algorithm will use.
Increasing this parameter will increase the final accuracy as well as increase the computation time. In most cases, the results you can achieve with the default value of 4 and with higher values are very similar, since the information that can be extracted with 4 cameras is often enough to reconstruct most details of the subject.
This parameter should be decreased only in special cases, for example for preliminary tests or if you don’t need high accuracy.
Use Symmetric pairs (no): when this option is enabled, the algorithm will analyze each pair of cameras (A, B) in a symmetric way (A towards B, B towards A) during each iteration. If this option is not selected, the analysis will be interleaved between iterations (A towards B on odd iterations, B towards A on even ones).
Usually, the final level of detail is very similar regardless of this option. We suggest leaving it unchecked, as this cuts the computation time in half, or using it only for very small datasets.
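The halved computation time follows directly from the pair scheduling described above. A hypothetical sketch of that behavior (not Zephyr's internal code — camera names and the function are invented for illustration):

```python
def photoconsistency_pairs(pairs, iteration, symmetric):
    """Which directed camera pairs are analyzed at a given iteration.

    pairs: list of (A, B) neighbor pairs. With symmetric=True, both
    directions are used every iteration; otherwise the direction is
    interleaved (A towards B on odd iterations, B towards A on even
    ones), halving the work per iteration.
    """
    if symmetric:
        return [(a, b) for a, b in pairs] + [(b, a) for a, b in pairs]
    if iteration % 2 == 1:  # odd iteration: A towards B
        return [(a, b) for a, b in pairs]
    return [(b, a) for a, b in pairs]  # even iteration: B towards A

cams = [("cam0", "cam1"), ("cam1", "cam2")]
print(len(photoconsistency_pairs(cams, 1, symmetric=True)))  # 4
print(photoconsistency_pairs(cams, 1, symmetric=False))      # A towards B
print(photoconsistency_pairs(cams, 2, symmetric=False))      # B towards A
```

Over many iterations the interleaved schedule still visits both directions of every pair, which is why the final detail is usually so similar.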
Max iterations (40): controls the maximum iterations of the algorithm.
We suggest leaving this value at its default, as the algorithm will usually converge to the optimal solution. You may however decrease this value if the starting mesh already has very good detail, for example if the input mesh is the output of a previous photoconsistency step.
Image resolution (50): controls the resolution scale of the images that will be used internally by the photoconsistency algorithm.
Hierarchical subdivision (1): if this value is greater than zero, the photoconsistency algorithm will be applied more times in a sequential way, automatically adjusting the image resolution and the iteration number. This means that the same results can be obtained by running the same algorithm multiple times with different and appropriate settings.
- Following, two examples: a run with hierarchical subdivisions set to 1 (image resolution 60, iterations 40) and a run with hierarchical subdivisions set to 2 (image resolution 60, iterations 40). In both cases, the result is exactly the same as running the corresponding sequence of single passes with automatically adjusted image resolution and iteration counts.
- Following, some typical scenarios and suggested settings:
Mesh with noise, high quality images
- Hierarchical subdivisions 2, image resolution 50, iterations 40
If additional quality is desired, you may also execute the following two steps:
- Hierarchical subdivisions 0, image resolution 75, iterations 30
- Hierarchical subdivisions 0, image resolution 100, iterations 20
Mesh with noise, low quality images
- Hierarchical subdivisions 2, image resolution 50, iterations 40
If the images are low quality, it is usually useless to run additional steps at this point.
Good mesh, high quality images
- Hierarchical subdivisions 0, image resolution 75, iterations 40
The image resolution may be increased to 100 in case of very good meshes, or you may do a second pass at 100.
Usually, you should not run the algorithm in hierarchical mode in this scenario: it is advisable to run one single process at high resolution, so that the details that have already been extracted are preserved.
Following, a photoconsistency case study example:
Consider the following mesh, which clearly shows a low level of detail as well as noise distributed throughout the surface:
Since the pictures taken were of good quality, we can apply the settings described earlier for “mesh with noise, high quality images”. Following the tutorial (including the optional steps with images at high resolution), we are able to obtain the following output:
An error to avoid, however, is applying the photoconsistency directly to high resolution images, as in the “Good mesh, high quality images” case: here the initial mesh has too many imperfections, which require a low-resolution analysis in order to be corrected. If this happens, the final result will be far from the optimal one shown earlier: