Help to Optimize Large Dataset

  • Reuben
    3Dfollower
    • Jul 2017
    • 21

    Help to Optimize Large Dataset

    Hi! I am currently running a model of terrain with 6000 images from a Phantom 4. I wanted an accurate model, and for the first attempt I turned the settings up high, but it crashed. I am not sure how common it is to create a model with this many images? The terrain has 800 m of elevation change (some vertical cliffs), and I want to be able to measure the rock bedding off the model.

    I am now running the first phase with these settings:

    [Screenshot of first-phase settings: image_186.png (122.3 KB)]
  • Andrea Alessi
    3Dflow Staff
    • Oct 2013
    • 1304

    #2
    Hi Reuben,

    Sorry to hear about the crash! If you send us the log, I'll see what's wrong.

    How much RAM and free disk space do you have? I suspect the crash is due to low space on the machine: it's a big dataset, so be aware of that.
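
    If you want to double-check before launching a big job, a quick sketch like this reports both (assumes Python with the psutil package installed; shutil is standard library):

    ```python
    # Sketch: report available RAM and free disk space before a big job.
    import shutil
    import psutil  # pip install psutil

    ram = psutil.virtual_memory()
    disk = shutil.disk_usage("C:\\")  # the drive holding project/temp files

    print(f"RAM:  {ram.available / 2**30:.1f} GiB free of {ram.total / 2**30:.1f} GiB")
    print(f"Disk: {disk.free / 2**30:.1f} GiB free of {disk.total / 2**30:.1f} GiB")
    ```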

    As for the clustering phase, that is sadly a task that cannot be run in parallel, so it's going to take a while. It is also one of the few steps that cannot run on the GPU, so in this specific case a bigger card would not speed up clustering. Are you mixing different flights/camera settings together? I would probably run them as separate projects, as it seems like Zephyr found many different clusters.

    24 GB on the GPU would make a difference in the next phase, as dense cloud generation is very GPU hungry. If you increase the discretization/resolution a lot in that step, expect GPU usage (and GPU RAM usage) to go up. If the images are not super high resolution (e.g. 50 MPx) you should be more than okay with the P5000.
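
    If you want to see how close that step gets to the card's limit, a small polling sketch works (assumes a single NVIDIA GPU with nvidia-smi on the PATH; the interval is arbitrary):

    ```python
    # Sketch: poll GPU memory while Zephyr runs, to spot memory pressure.
    import subprocess
    import time

    def gpu_memory_mib():
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=memory.used,memory.total",
             "--format=csv,noheader,nounits"],
            text=True,
        )
        used, total = (int(v) for v in out.strip().split(", "))  # single GPU assumed
        return used, total

    while True:
        used, total = gpu_memory_mib()
        print(f"GPU memory: {used}/{total} MiB")
        time.sleep(5)
    ```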

    I personally prefer to split these kinds of big datasets into smaller ones for easier management (you can then always merge the projects via control points or direct merge).

    It's hard to tell which settings would work best without seeing the dataset. I personally like to start on the aerial/default presets and then see from there if I like the results. If you want to share the dataset with us (or part of it, since it seems pretty big) I'll be happy to have a look at it!


    • Reuben
      3Dfollower
      • Jul 2017
      • 21

      #3
      Hi Andrea, thanks for the quick reply. I have 30 GB of RAM and 200 GB of free hard drive space.

      So I have 9 days of flights, and the lighting is slightly different each day, with mostly vertical images at 80 m AGL and 80% overlap, at 12 MP.

      I will PM you the dataset link.

      Cheers,
      Reuben


      • Andrea Alessi
        3Dflow Staff
        • Oct 2013
        • 1304

        #4
        Hi Reuben,

        I had a look at the photos. There are some "jumpcuts" (i.e. DJI_0127.JPG --> DJI_0128.JPG), so it's normal that Zephyr will first create two different clusters. That is absolutely not a problem, as Zephyr will figure out the camera orientation, but with big datasets it may take a while to merge a lot of clusters together.
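
        If you want to spot those jumpcuts before importing, scanning the EXIF capture times for gaps works. A rough sketch (assumes Pillow; the folder path and 5-minute threshold are placeholders):

        ```python
        # Sketch: flag likely "jumpcuts" via gaps in EXIF capture time.
        from datetime import datetime
        from pathlib import Path
        from PIL import Image  # pip install Pillow

        def capture_time(path):
            exif = Image.open(path)._getexif() or {}
            stamp = exif.get(36867)  # EXIF tag DateTimeOriginal
            return datetime.strptime(stamp, "%Y:%m:%d %H:%M:%S") if stamp else None

        images = sorted(Path(r"D:\survey\images").glob("*.JPG"))  # placeholder path
        prev = None
        for img in images:
            t = capture_time(img)
            if prev and t and (t - prev[1]).total_seconds() > 300:  # > 5 min gap
                print(f"possible jumpcut: {prev[0]} --> {img.name}")
            if t:
                prev = (img.name, t)
        ```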

        I would probably pick one of these two workflows rather than throwing all the images into the software:

        1. Make small projects, one for each major cluster. Then you can merge two .zeps using the "nearby cameras" option. It's much faster, and as long as the orientation is correct in the first place, you should be set by selecting a fairly low number of images (even as few as 5 should do the trick).

        2. Make smaller chunks (e.g. 1000 images) and then manually merge them via control points (see the folder-splitting sketch below). It takes a little longer and some manual processing, but you have more control over the merge process.
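
        For workflow 2, splitting the folder into ordered chunks is easy to script; a minimal sketch (source path and chunk size are placeholders; DJI filenames sort in capture order):

        ```python
        # Sketch: copy a flat image folder into chunks of ~1000 images each.
        import shutil
        from pathlib import Path

        SRC = Path(r"D:\survey\images")  # placeholder source folder
        CHUNK = 1000

        images = sorted(SRC.glob("*.JPG"))
        for i in range(0, len(images), CHUNK):
            dest = SRC.parent / f"chunk_{i // CHUNK + 1:02d}"
            dest.mkdir(exist_ok=True)
            for img in images[i:i + CHUNK]:
                shutil.copy2(img, dest / img.name)
        ```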

        Let me know if I can be of further help,

        cheers!

        • Reuben commented
          Hi Andrea, OK, great, thank you. I will try option 2. I understand that it is better to merge the sparse point clouds and not the dense point clouds, is that correct?
      • Andrea Alessi
        3Dflow Staff
        • Oct 2013
        • 1304

        #5
        Hi Reuben,

        It depends on the dataset. I personally prefer to regenerate the dense cloud from the more complete, merged set of cameras.

        But in some cases you may want to manually merge multiple dense clouds together. It really depends on the coverage and the noise you may have on certain parts of the subject.
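
        If you do merge dense clouds yourself outside Zephyr, the operation is essentially concatenation plus a light downsample to thin the overlap. A sketch with Open3D (assumes both clouds were exported, e.g. as PLY, in the same reference system; file names and voxel size are placeholders):

        ```python
        # Sketch: merge two exported dense clouds and thin duplicated overlap.
        import open3d as o3d  # pip install open3d

        a = o3d.io.read_point_cloud("area1_dense.ply")
        b = o3d.io.read_point_cloud("area2_dense.ply")

        merged = a + b  # simple concatenation of points/colors
        merged = merged.voxel_down_sample(voxel_size=0.05)  # placeholder size
        o3d.io.write_point_cloud("merged_dense.ply", merged)
        ```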

        • Reuben commented
          Hello Andrea, I have created sparse clouds for the 4 areas and am now merging them; looking good. Is it better to add and scale this cloud with my ground control points now, or should that be done after the dense cloud is generated?
      • Andrea Alessi
        3Dflow Staff
        • Oct 2013
        • 1304

        #6
        Hi Reuben,

        great to hear that!

        If you had 3D coordinates before the merge, I would have run the bundle adjustment after each chunk was made, so that you could merge the projects in the same reference system, but I think this is not your case, correct?

        If not, and you're doing a rigid scaling, you can do it either before or after the dense cloud is generated.
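
        For reference, a rigid/similarity scaling just estimates one scale, rotation and translation from matched GCP pairs. A sketch of the underlying math (Umeyama's method) with toy data, not a Zephyr API:

        ```python
        # Sketch: similarity transform (scale, rotation, translation) from GCPs.
        import numpy as np

        def similarity_transform(src, dst):
            """src, dst: (N, 3) matched points. Returns s, R, t."""
            mu_s, mu_d = src.mean(0), dst.mean(0)
            A, B = src - mu_s, dst - mu_d
            U, S, Vt = np.linalg.svd(B.T @ A / len(src))
            D = np.eye(3)
            if np.linalg.det(U @ Vt) < 0:
                D[2, 2] = -1.0  # guard against reflections
            R = U @ D @ Vt
            s = np.trace(np.diag(S) @ D) / A.var(0).sum()
            return s, R, mu_d - s * R @ mu_s

        # Toy GCPs: world = 2 * model + (10, 10, 5)
        model = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1.0]])
        world = 2 * model + np.array([10, 10, 5.0])
        s, R, t = similarity_transform(model, world)
        print(s, t)  # -> 2.0 [10. 10.  5.]
        ```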

        • Reuben commented
          OK, thank you. Really appreciate the fast support!

          Another question: the help files mention that you can send and move .zep files, but "it will not be possible to start one of the four phases of 3D reconstruction or make changes involving photographs usage". As I am using a cloud computer, I will be moving the final .zep file to my work computer. I have the images used in the reconstruction on both the work and cloud computers. After I copy the .zep to my work computer, is there a way to link the photos from my work computer as well?
      • Andrea Alessi
        3Dflow Staff
        • Oct 2013
        • 1304

        #7
        You're welcome!

        It's not a problem at all! Basically, Zephyr saves the path of the images, so for example right now Zephyr is storing the path as

        C:\Users\YourRemoteUsers\Myfiles

        This means that if you open up a .zep file on a computer that does not have the photos, you will be able to see the point clouds, meshes, etc., but you won't be able to start a new mesh generation, for example.

        However, when Zephyr opens the .zep it will tell you that it cannot find the photos and will ask you for the path of the images on the current computer, so simply browse to

        C:\Users\YourLocalUser\MyFiles

        and you're done; you can then proceed normally.
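
        To illustrate the idea (this is just the concept, not the .zep format itself): the stored paths get rebased from the remote root onto the local one.

        ```python
        # Illustration only: rebasing stored image paths onto a new root.
        from pathlib import PureWindowsPath

        OLD_ROOT = PureWindowsPath(r"C:\Users\YourRemoteUsers\Myfiles")
        NEW_ROOT = PureWindowsPath(r"C:\Users\YourLocalUser\MyFiles")

        stored = [OLD_ROOT / f"DJI_{i:04d}.JPG" for i in (126, 127, 128)]  # made-up names
        for p in stored:
            print(NEW_ROOT / p.relative_to(OLD_ROOT))
        ```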


        • Reuben
          3Dfollower
          • Jul 2017
          • 21

          #8
          Hello, I have 4 areas with sparse clouds. I merged areas 1+2, but while merging areas 3+4 my machine froze (an error on screen mentioned low memory). This happened on step 4/4 of the merge by control points. I used 3 points on each .zep.

          I will send you the log once I get access to the cloud machine (have to wait for them to restart it for me).

          So how should I proceed? I will have even more memory problems when I merge areas (1+2) with (3+4). Also, what will the load be like when creating the dense point clouds and meshes for all 4 areas combined?

          Can I do something with the settings as a workaround, or do I need to get a machine with more memory?

          The last option, I think, is to never combine the clouds in Zephyr. I want to export the textured mesh into Move (Midland Valley) to create a geological model. So I could create 4 meshes, scale each mesh with the GCPs I have, and then load them into Move. However, I think this way the meshes would not match perfectly in Move; while that isn't ideal, it shouldn't be a major issue (my GPS accuracy is about +/- 5 m).

          Let me know how you think I should proceed. Thanks!
          Last edited by Reuben; 2017-12-01, 10:44 AM.


          • Andrea Alessi
            3Dflow Staff
            • Oct 2013
            • 1304

            #9
            Hi Reuben,

            Thank you; as soon as I get the logs I'll be able to give a more in-depth answer.

            Here are some of the things that may be happening (I need the log to know more):

            - The Tesla is awesome; however, what is Zephyr using for on-screen rendering? Teslas are just for computation, so maybe there is not enough video memory to render on screen, which would fail the full process (even making 6000 thumbnails can take a toll on an integrated device).

            - The memory is simply not enough. How much memory does your VM have?

            As a workaround, you could disable the bundle adjustment when merging and use a rigid transformation only. If you do that, pick more control points (I'd say at least ten, spread out across the reconstruction).

            Sorry for not being able to give you a straight answer; I'll need the log for that.

            • Reuben commented
              I have DMed you the log. The VM has 32 GB of RAM. Thanks.
          • Andrea Alessi
            3Dflow Staff
            • Oct 2013
            • 1304

            #10
            Hi Reuben,

            thank you for the log.

            Unfortunately yes, that's a memory issue.

            I would try to:

            - disable bundle adjustment in the merge phase

            or

            - increase the VM size

            or

            - decrease the memory requirements of the project, for example by re-running the chunks with a lower keypoint density (although this is probably not a good idea, time-wise)

            • Reuben commented
              Hi Andrea. Unfortunately, with Paperspace I can only get 32 GB of RAM, so I am trying to set up on Amazon AWS. But I am pulling my hair out here!

              Are you able to help with setting up a VM? I tried using the g3.4xlarge instance (it has a GPU with OpenGL 4.5 support and 122 GB of main memory). I use Windows Server 2016 and remote log in with Splashtop. I installed the NVIDIA CUDA drivers. I can see the GPU in Device Manager, and I disabled the generic display. However, when I check the generic PnP monitor, it is using the Microsoft Basic Display Driver and not the NVIDIA card.

              So when I run Zephyr I get the error that the system does not support OpenGL 2.0.
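
              A small probe can confirm which renderer Windows is actually handing out (a sketch, assuming Python with the glfw and PyOpenGL packages on the VM):

              ```python
              # Sketch: print the active OpenGL vendor/renderer/version.
              # "Microsoft Basic Render Driver" means the NVIDIA card isn't used.
              import glfw  # pip install glfw PyOpenGL
              from OpenGL.GL import glGetString, GL_RENDERER, GL_VENDOR, GL_VERSION

              if not glfw.init():
                  raise RuntimeError("GLFW init failed")
              glfw.window_hint(glfw.VISIBLE, glfw.FALSE)  # hidden probe window
              win = glfw.create_window(64, 64, "gl-probe", None, None)
              glfw.make_context_current(win)
              print("Vendor:  ", glGetString(GL_VENDOR).decode())
              print("Renderer:", glGetString(GL_RENDERER).decode())
              print("Version: ", glGetString(GL_VERSION).decode())
              glfw.terminate()
              ```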

              I basically followed this tutorial:


              I have tried different instances (g2.8xlarge, p2.2xlarge, and g3.4xlarge), all with the same result.
          • Andrea Alessi
            3Dflow Staff
            • Oct 2013
            • 1304

            #11
            Hey Reuben,

            For ease of installation, when setting up a new virtual machine, search for the "3Dflow - 3DF Zephyr" Amazon Machine Image (AMI).

            It's a public VM installation we made that you can use, since you have to install the NVIDIA GRID driver. We made that AMI starting from the Amazon g3 GPU instance, so either use ours or start from that as well to save time.
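
            If you prefer to find it programmatically, something like this should work (a sketch assuming boto3 with configured credentials; the name filter and region are assumptions):

            ```python
            # Sketch: list public AMIs whose name mentions 3DF Zephyr.
            import boto3  # pip install boto3

            ec2 = boto3.client("ec2", region_name="us-east-1")  # region is a guess
            resp = ec2.describe_images(
                Filters=[{"Name": "name", "Values": ["*3DF Zephyr*"]}],
                ExecutableUsers=["all"],  # publicly launchable images
            )
            for image in resp["Images"]:
                print(image["ImageId"], image["Name"])
            ```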

            • Reuben commented
              OK, that sounds good! I launched the Zephyr AMI on the g3.4xlarge. Does this have an admin password set up already? When I use "get windows password" it is not available... or maybe I just need to wait longer?
          • Andrea Alessi
            3Dflow Staff
            • Oct 2013
            • 1304

            #12
            It should be "3Dflow", which is the same password as TightVNC (and you should change both).

            I think Roberto forgot to add it in the description, sorry about that!


            • Reuben
              3Dfollower
              • Jul 2017
              • 21

              #13
              Great! Wow, that turned out to be very easy. I had spent all weekend trying to get this to work on Google Cloud Compute, Azure, and AWS. I am running the merge with control points again; it should be no problem with the g3.4xlarge VM, I think.

              Thanks again for your help.


              • Andrea Alessi
                3Dflow Staff
                • Oct 2013
                • 1304

                #14
                Happy to help and sorry about forgetting to write the password in the AMI description ^^'

                If you disable the bundle adjustment, it shouldn't take a toll on the memory, though. Out of curiosity, did you try that?
