Orthophoto from Textured Mesh?

  • Craig
    3Dfollower
    • Oct 2022
    • 11

    #1

    Orthophoto from Textured Mesh?

    I have been using 3DFZ on nadir drone photos to document roadways. I use 80% overlap between images and the results are really good. I then export a mesh and, separately, an orthophoto. My question relates to the orthophoto: is it possible to generate a clean orthophoto based on the textured mesh rather than on the standard mesh?

    I do not have the ability to stop traffic, so my photos are "contaminated" with random cars. Pix4D has a great feature that figures out which items are moving through the scene and deletes them. 3DFZ doesn't (to my knowledge) state that it has that feature, but the textured mesh generation seems to have it, or something similar, built in, because the textured meshes are nicely cleaned of vehicles. Some of the normal meshes have the random cars plastered onto them. I'd like to create an orthophoto from the clean textured mesh, but I can only choose to generate it from the normal mesh or the dense point cloud. Is there a way around that?

    Here are a few examples:

    Screenshot of mesh contaminated with a vehicle

    Screenshot of textured mesh that was somehow automatically cleaned

    Screenshot of final orthophoto showing the same contamination

    Thanks for any help.

    Craig
  • Andrea Alessi
    3Dflow Staff
    • Oct 2013
    • 1373

    #2
    It's normal that the mesh shows "contaminated" colors, as the color is simply computed on the vertices - there's no texture yet. It's when composing the texture that the various photos are analyzed and weighted accordingly in order to understand how much each photo should contribute to each pixel, which will ultimately fill every triangle in the textured mesh.
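
    To illustrate the weighting idea, here is a minimal Python sketch (my own illustration of the general concept, not the actual Zephyr implementation): every photo that sees a surface point contributes a color sample, weighted by how well it views that point. A vertex color is essentially a single sample, so a passing car stays; a blended texel dilutes it across many views.

```python
import numpy as np

def texel_color(colors, view_dirs, surface_normal):
    """Blend the color samples from all photos that see one texel.

    colors: (n, 3) RGB samples, one per contributing photo.
    view_dirs: (n, 3) unit vectors from the surface point to each camera.
    surface_normal: (3,) unit normal of the triangle being textured.
    """
    # Head-on views get the largest weight; grazing views barely count.
    weights = np.clip(view_dirs @ surface_normal, 0.0, None)
    weights /= weights.sum()
    # A car visible in only one of n photos is strongly diluted here;
    # a photoconsistency check (see the sketch further down) can reject
    # it entirely.
    return weights @ colors

# Example: 3 photos see the point; the third (grazing) one caught a car.
normal = np.array([0.0, 0.0, 1.0])
dirs = np.array([[0.0, 0.0, 1.0], [0.5, 0.0, 0.866], [0.94, 0.0, 0.342]])
cols = np.array([[120, 120, 120], [122, 118, 121], [200, 30, 30]], float)
print(texel_color(cols, dirs, normal))  # close to the road grey
```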

    The orthophoto requires a mesh by nature, not a textured mesh. It has to recompute the value of each pixel, as it is a different projection, but the approach is similar. The color information does not come from the mesh, but from the photos. Taking the color information from the texture would be a different (and wrong) approach to texture composition.
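
    In rough pseudo-code, the orthophoto loop looks something like this (a sketch with made-up helper names such as mesh_height and visible, not the actual implementation) - the mesh only provides geometry, and every pixel color is re-sampled from the photos:

```python
import numpy as np

def ortho_pixel_color(x, y, mesh_height, cameras, images):
    """Color of one orthophoto pixel at ground coordinates (x, y).

    mesh_height(x, y) -> z of the mesh surface (hypothetical helper).
    cameras: list of (P, visible) pairs, where P is a 3x4 projection
             matrix and visible(point) is a hypothetical occlusion test.
    """
    point = np.array([x, y, mesh_height(x, y), 1.0])  # geometry from the mesh
    colors = []
    for (P, visible), img in zip(cameras, images):
        if not visible(point):
            continue
        u, v, w = P @ point                    # pinhole projection
        px, py = int(u / w), int(v / w)
        if 0 <= py < img.shape[0] and 0 <= px < img.shape[1]:
            colors.append(img[py, px])         # color always from the photo
    return np.mean(colors, axis=0) if colors else None
```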

    The orthophoto generation and texturing are very similar and can be controlled in a similar fashion.

    At step 1:

    - tag your photos accordingly / remove the photos containing whatever is "ruining" the orthophoto. This is a manual approach, but sometimes it's just easier to remove those 10 images from the dataset, or mask them, than to re-compute a big orthophoto after realizing it's not what you wanted, or to keep playing with the generation settings.

    At step 2 of the orthophoto wizard, play with different settings, depending on how you want to drive the solution. Consider:

    - Enable photoconsistency (this is likely what you're looking for: please note this is NOT the photoconsistency responsible for the geometry improvements in the mesh generation phase - it's a different type of photoconsistency, and it should help with removing moving elements, such as a car, from the photos). A rough sketch of this idea follows the list below.

    - Increase the color balance strength.

    Finally, if the scene is *very* cluttered/banded, you can consider increasing the maximum cameras per pixel and switching from multi-band to mean blending. However, this will make the orthophoto a lot less sharp, so only do it if you're fine with that trade-off.
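
    Here is the sketch promised above for the photoconsistency option (my plain-Python reading of the concept, not the actual code): samples that disagree with the consensus of the other photos are dropped before blending, which is what removes a car that only appears in a couple of views.

```python
import numpy as np

def photoconsistent_blend(samples, threshold=40.0):
    """samples: (n, 3) RGB candidates for one orthophoto pixel, one per
    camera that sees it. threshold is a made-up tolerance value."""
    consensus = np.median(samples, axis=0)             # robust reference color
    dist = np.linalg.norm(samples - consensus, axis=1)
    inliers = samples[dist < threshold]                # reject moving objects
    return inliers.mean(axis=0) if len(inliers) else consensus

# 8 photos see clean road, 2 caught a red car on the same spot:
road = np.tile([118.0, 121.0, 119.0], (8, 1))
car = np.tile([205.0, 35.0, 30.0], (2, 1))
print(photoconsistent_blend(np.vstack([road, car])))  # ~[118 121 119]
```

    Switching from multi-band to mean corresponds to a simple average like the last line above: robust, but it softens fine detail.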

    Personally, I don't like the mean approach; I'd rather have the sharpest possible orthophoto and manually fix the incorrect patches with the editor tool - see the tutorial here: https://www.3dflow.net/technology/do...o-editor-tool/ - but of course this is up to you!
