
Capture Object on a table, including the bottom


  • Capture Object on a table, including the bottom

    Hi all,

    I am trying to capture an object on a table from all sides.
    I put the object on the table, take photos, feed them into 3DF Zephyr, and get the desired result (a dense point cloud).
    Then I turn the object upside down, take photos, feed them into 3DF Zephyr (in a new workspace), and again get the desired result (a dense point cloud).

    But now, how can I merge the two scans into one? I assume something like this:
    1) cut away all data (e.g. the table) in the dense point clouds of the respective captures
    2) import one of the sparse point clouds into the other workspace. How is this done? Is it also possible to import all photos and keep their relations to the points of the dense point cloud?
    3) register the point clouds

    Finally, I'd like to export a textured mesh in the highest quality, baking the texture atlas from all photos (from both imports).

    Can somebody point me to tutorials about how exactly this should be done? - Many thanks!

    J

    PS: I am using version 5.0 Dragonfruit beta

  • #2
    Hi Josua,

    when you move the subject, you make things very difficult for Zephyr, because the subject and the background are no longer coherent. While you could do the steps you mentioned (using control points to merge the projects), it would take you longer and you'd get a worse result. The correct procedure is to mask out the table in your datasets and process everything in one shot with Zephyr.

    You can see a quick tutorial on masking here: https://www.youtube.com/watch?v=dGRw8LbXknU

    Using the tools in Masquerade for semi-automatic processing (or color masking, if you have an appropriate turntable setup) reduces masking to a process that takes a few minutes at most, and it will greatly improve both speed (fewer pixels to compute = faster) and quality (no artifacts due to false matching)

    As for exporting: once your whole dataset is oriented correctly, simply generate a textured mesh and export it as you normally would
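    To illustrate the general idea behind color masking (this is purely a conceptual sketch with made-up function names, not 3DF Zephyr's actual implementation): with a uniform background, a pixel can be classified by its color distance from the background color, and a masked-out pixel is excluded absolutely, i.e. it never reaches the matching stage at all.

```python
# Hypothetical sketch of color masking against a uniform background.
# This is NOT Zephyr's API; names and thresholds are illustrative only.

def color_mask(pixels, background, threshold):
    """Return a boolean mask: True = keep (subject), False = discard.

    A pixel counts as background when its RGB distance to the uniform
    background color is at most `threshold`.
    """
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [dist2(p, background) > threshold ** 2 for p in pixels]

def apply_mask(pixels, mask):
    """Masking is absolute: discarded pixels never reach matching."""
    return [p for p, keep in zip(pixels, mask) if keep]

pixels = [(250, 250, 250), (10, 20, 30), (245, 255, 248)]
mask = color_mask(pixels, background=(250, 250, 250), threshold=30)
subject = apply_mask(pixels, mask)  # only the dark subject pixel survives
```

    Fewer surviving pixels directly means less matching work, which is where the speed gain Andrea mentions comes from.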



    • #3
      Hi Andrea Alessi, thank you very much for your answer!

      I will check out the masking tutorials with great interest!

      However: in my use case, the critical (= expensive) part is only the time of human interaction (not the processing, which can run without human attention). As I understand it, cutting away everything that does not belong to the object (e.g. with a plane operator) can be done by a human in seconds. In comparison, wouldn't masking 100 photos take several minutes? Maybe I am wrong, but if I am not, it makes a significant difference when we are talking about e.g. 1000 objects a month.

      So I'd still appreciate it if you could quickly guide me on how to:
      - merge 2 workspaces: each workspace with the original photos (kept for a possible HQ export of a mesh or textured mesh) and a dense point cloud where everything that was not part of the object is already cut away
      - register the 2 point clouds

      -> I would then assume that exporting as a mesh or textured mesh will just work as usual, and that the texture atlas calculation will in any case just use fragments of the best-matching photos



      • #4
        Hi again Andrea Alessi

        I checked the masking tutorial, but I could not find the "painting tool" shown in the referenced video in Dragonfruit. The other tools (different kinds of selections, or coloring) are not suitable for my current setup.

        If I understand correctly, masking is useful for excluding data before processing, to save processing time. Otherwise there is no advantage compared to deleting data after processing (that is, after calculating a dense point cloud). Please correct me if I am wrong.

        So if you could guide me how to merge 2 workspaces as described in my previous post, I'd REALLY appreciate it!



        • #5
          Hi Josua,

          the shorter time is one of the advantages, but it's not the only one. Masked datasets will give you much cleaner results, and masking will take at most a few minutes if your acquisition is done correctly. Try, for example, using a uniform background.

          Moving the object is not a good idea: by manually removing the points and merging the datasets, your results will not be as good as they could be, so I do not recommend it. Regardless, if you want to do that, you can simply merge the datasets with control points https://www.3dflow.net/technology/do...space-merging/ - note that this requires Pro or Aerial
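          For intuition, the math behind a control-point merge can be sketched as follows. This is a minimal illustration with made-up helper names, assuming three exactly corresponding, non-collinear control points per workspace; it is not Zephyr's implementation (which handles noisy points and more than three correspondences). The idea: build an orthonormal frame from each point triple, derive the rotation between the frames, and recover the translation from one point pair.

```python
import math

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def unit(a):
    n = math.sqrt(dot(a, a))
    return tuple(x / n for x in a)

def frame(p1, p2, p3):
    # Orthonormal frame from three non-collinear points (Gram-Schmidt).
    e1 = unit(sub(p2, p1))
    v = sub(p3, p1)
    v = tuple(x - dot(v, e1) * e for x, e in zip(v, e1))
    e2 = unit(v)
    return (e1, e2, cross(e1, e2))

def rigid_transform(src, dst):
    """Rotation R (3x3, row-major) and translation t mapping src onto dst."""
    Fs, Fd = frame(*src), frame(*dst)
    # R maps the source frame onto the destination frame.
    R = [[sum(Fd[k][i] * Fs[k][j] for k in range(3)) for j in range(3)]
         for i in range(3)]
    t = sub(dst[0], tuple(sum(R[i][j] * src[0][j] for j in range(3))
                          for i in range(3)))
    return R, t

def apply_transform(R, t, p):
    return tuple(sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3))

# Example: DST is SRC rotated 90 degrees about z and shifted by (1, 2, 3).
SRC = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
DST = ((1.0, 2.0, 3.0), (1.0, 3.0, 3.0), (0.0, 2.0, 3.0))
R, t = rigid_transform(SRC, DST)
moved = apply_transform(R, t, (2.0, 3.0, 4.0))  # aligns any further point
```

          Once the transform is known, every point of one dense cloud can be mapped into the other workspace's coordinate system, which is what the control-point merge achieves in one step.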



          • #6
            Questions about masking:
            - is the mask "absolute"? -> is any pixel that is not masked strictly excluded from processing?
            - where are the nice "drawing tools" for roughly defining object vs. background, as shown in https://www.youtube.com/watch?v=dGRw8LbXknU ?
            I could not find these tools in Dragonfruit. I really liked this kind of "unsharp mask", as it could indeed be useful.



            • #7
              that is correct - you are explicitly keeping/removing pixels from processing.

              Which tools are you referring to? The UI has not changed since that video; could you perhaps post a screenshot of your interface?



              • #8
                Thanks to your reply, I found the problem: me :-)
                The width of my Masquerade window was too small, hiding the buttons I was looking for! I wouldn't call this a 'bug', but I strongly recommend doing something about it visually. I suggest simply making the dialog window wide enough (if the width of the screen allows it).
                Last edited by josua; 2020-06-07, 02:33 PM.



                • #9
                  Glad to hear everything's ok, thank you for the feedback!

                  Let me know if you need any additional help



                  • #10
                    Originally posted by Andrea Alessi:
                    note this requires Pro or Aerial
                    When I try "Import and merge another workspace", all 3 method options are grayed out in my Dragonfruit beta - is that intentional because of my license, or is it a bug?



                    • #11
                      Hi Josua,

                      the beta has no limitation. Make sure you are selecting a valid zep file that allows for a merge (e.g. one with control points, if you want to merge via control points). It is very odd that you have all options disabled; if you want to share the photos and the zep file, I will be happy to check them



                      • #12
                        I am really sorry Andrea... it was again just my own stupidity. I did not click on the "Import" button, as I thought I should FIRST be able to select an import method. Thinking about it, it makes sense to first click the button, let Zephyr analyse what's there, and then offer the different options. However, the text "Import" on the button is slightly misleading imho. Maybe "Inspect" would be more appropriate?



                        • #13
                          no worries Josua, all questions are always welcome, no need to be sorry

