  • More Questions

    Hi Guys,

I have a few more questions on Zephyr... I hope they make sense:

1) Suppose I have a project of 200 photos and I use "Default" in the alignment phase, and this results in all 200 photos being aligned - is there any benefit further down the road to having used "Deep" instead? Or would that just be wasting time / CPU cycles, given that the Default option aligned them all in the first place (i.e. would they be even better aligned)?

2) Suppose I walk around my subject several times and the photos contain enough information that Zephyr is able to reconstruct the background (walls, floor, ceiling) as well as my subject on the table. At what point would you use the tools so that only a 3D model of the subject is left? (i.e. straight after the sparse reconstruction, by shrinking the bounding box, or right at the very end, using the cutting tools?) Is there any benefit to leaving all those extra pixels in until the end, or, as in the question above, is this just wasting CPU time?

3) Suppose I again have a project with 200 photos, all 'aligned' according to Zephyr, but when I get to the final textured mesh some reconstructed text on my subject is blurred, or there are two copies of the "same" text slightly apart (so obviously one or a few of the cameras are not aligned well enough). What is the easiest way to identify which photos are causing the issue? I could just remove them one by one, but this is obviously time-consuming - is there a way to click on sections of the dense point cloud and see which images that point comes from?

4) How religiously should I be using the "image quality index" tool at the start of a new project? Is there a quality index number below which you recommend throwing photos away?

5) Do / should all the photos have the same ISO number? I.e. which is better - a range of photos from ISO 200 to ISO 800, or all of them at a fixed ISO 800 (with the ones that would have been at ISO 200 shot at a faster shutter speed or a higher f-stop)?

[Obviously all at ISO 200 would be better, but I am not able to achieve this - so far I have been fixing my shutter speed and aperture and letting the ISO vary for correct exposure. I appreciate I should be at ISO 100, using a tripod and letting the shutter speed vary, but I can't do this as I am shooting handheld.]

  • #2
    Hi beardless,

    no worries, all questions are always welcome

1. Higher parameters do not automatically translate into better results. In fact, Deep uses more keypoints, which in some cases (especially with bad photos) will lead to more ambiguity and could result in a failed orientation. You should consider the output of the first phase to be the calibration of the cameras (internals) and their position and rotation relative to the subject (externals). If Default yields a correct output, there is no need at all to use Deep.

2. I suggest you always set the bounding box correctly right after the sparse point cloud. Consider this: you are telling Zephyr to use a certain discretization (number of points in a certain volume) - if the bounding box is too big, you'll have much less detail; if it is too small, you'll get a lot of noise. Setting the correct bounding box is important - Zephyr should be able to do it automatically most of the time, but it's always better to double-check. Cleaning the dense cloud is also a good idea, if possible.
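The effect of the bounding box on detail can be sketched with some back-of-the-envelope arithmetic. Zephyr's actual discretization is internal to the software, so the point budget and box sizes below are purely hypothetical; the sketch only illustrates why spreading a roughly fixed number of points over a larger volume leaves less density for the subject itself:

```python
# Illustrative only: hypothetical point budget spread over the selected
# bounding-box volume. A tight box around the subject concentrates the
# reconstruction; a loose box that includes the whole room dilutes it.

def points_per_cubic_unit(point_budget, box_dims):
    """Average point density for a given bounding box (toy numbers)."""
    w, h, d = box_dims
    volume = w * h * d
    return point_budget / volume

budget = 1_000_000  # assumed point budget, not a real Zephyr setting

# Tight box around a 0.5 m subject vs. a box covering a 5 x 5 x 3 m room
tight = points_per_cubic_unit(budget, (0.5, 0.5, 0.5))
loose = points_per_cubic_unit(budget, (5.0, 5.0, 3.0))

print(f"tight box: {tight:,.0f} points/m^3")
print(f"loose box: {loose:,.0f} points/m^3")
```

The ratio between the two densities grows with the cube of the box size, which is why shrinking the box to the subject right after the sparse phase pays off.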

3. If you select points with the selection tools, you'll see them automatically in the bottom window (the camera navigator; switching from the "Log" tab to the "Camera navigator" tab will show only the cameras that see those points). From there it's easier to understand which cameras are problematic. Please also note that you can simply exclude cameras from texturing rather than removing them completely from your project.

4. The image quality index is a relative value, so you should compare it only between photos taken with the same settings in the same project. There is no magic number, to be honest, and it should not be taken religiously - it's a simple tool that lets you quickly see if there are major problems, but it always needs to be used cautiously.
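Zephyr's quality index is proprietary, but the "relative, not absolute" idea can be shown with a toy sharpness score (mean absolute difference between neighbouring pixels; the images below are made-up 2x4 grayscale grids, not real data). Scores only mean something compared with each other within one shoot, never against a fixed threshold:

```python
# Toy sharpness metric: mean absolute difference between horizontally
# adjacent pixels. Sharp edges give large differences; blur gives small ones.

def sharpness(image):
    """Mean absolute horizontal pixel difference for a 2D grayscale grid."""
    diffs = [abs(row[i + 1] - row[i])
             for row in image
             for i in range(len(row) - 1)]
    return sum(diffs) / len(diffs)

sharp = [[0, 200, 0, 200], [200, 0, 200, 0]]          # hard edges
blurry = [[90, 110, 100, 105], [100, 95, 105, 100]]   # gentle gradients

scores = {"sharp": sharpness(sharp), "blurry": sharpness(blurry)}

# Rank photos within the same shoot; there is no magic absolute cutoff
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)  # -> ['sharp', 'blurry']
```

In practice you would use a ranking like this (or Zephyr's own index) to spot the outliers at the bottom of the list, not to enforce a universal number.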

5. The lower the ISO, the better - personally I try to stay at ISO 200 or lower, but with modern devices you can get good images without much noise even at ISO 800. It boils down to your equipment and lighting. The rule of thumb is: the lower the better (but obviously, an underexposed image won't work). You can safely mix images with different ISOs. What I do is see whether I can obtain a good result with a very low ISO and maybe a long exposure time; if not, I'll increase the ISO and change settings along the way.
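The ISO / shutter trade-off from the question is just standard exposure arithmetic, nothing Zephyr-specific: total exposure scales with ISO x shutter time / aperture², so raising ISO 200 to 800 (+2 stops) can be offset by a 4x faster shutter, at the cost of more sensor noise. A quick sketch (the specific camera settings are arbitrary examples):

```python
import math

# Relative exposure in photographic stops: brightness scales with
# ISO * shutter_time / f_number^2, so log2 of that product gives stops.

def exposure_stops(iso, shutter_s, f_number):
    """Relative exposure in stops (higher = brighter)."""
    return math.log2(iso * shutter_s / f_number ** 2)

low_iso = exposure_stops(iso=200, shutter_s=1 / 50, f_number=8)
high_iso = exposure_stops(iso=800, shutter_s=1 / 200, f_number=8)

print(abs(low_iso - high_iso))  # -> 0.0 (same brightness, different noise)
```

This is why mixing ISO values across a shoot is safe exposure-wise: the images come out equally bright, and the only variable left is how much noise each frame carries.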


    • #3
      I couldn't get my iso below 400 with supernova lights lol

I thought my camera had a defect because nothing showed up on the screen, until I got way more lights to photograph something on a turntable indoors


      • #4
        Thanks Andrea - great feedback as ever.

Regarding point 3) - could you please elaborate with more detailed steps? I wasn't able to get it working. I.e., first of all, should I be doing it from the sparse or the dense point cloud?


        • #5
          Hi beardless,

it works for both sparse clouds and dense clouds. Make sure to tick "show viewed cameras on navigator" though (my bad for not mentioning this!)

[Attached screenshot: cameraselection.jpg]


          • #6
Hi Andrea - thanks - got that working. Once I have it in that subset mode, how do I get it back to normal? (i.e. un-select all the points so all cameras show again)


            • #7
              ah got it - double click!