Different cameras in one project - suboptimal photos - how to improve results

  • logan74k
    3Dflower
    • Oct 2018
    • 6

    Different cameras in one project - suboptimal photos - how to improve results

    Hi all, new to Zephyr but impressed so far. I have a project that uses two different cameras (a Nikon and a Canon) shooting the subject at different focal lengths. I'm finding the software drops many (more than half) of the frames when the calculations complete. I tend to set everything to High and Ultra etc. to try to get the best result from my suboptimal shots. Retaking them is not an option, so I'm just wondering if there are any tips for the best likelihood of success in getting a model from dissimilar photos, and whether there's any way to calibrate for two cameras in one project. I see the camera model named in the image preview, but Zephyr seems to auto-recognize EXIF from only one camera per project; the second batch of images from the other camera is always listed below as unknown, auto-calibrated (photo dimensions).
  • Andrea Alessi
    3Dflow Staff
    • Oct 2013
    • 1305

    #2
    Hi Logan,

    Thank you for your kind words!

    The fact that you are using different cameras is not an issue in itself. However, it leads me to think you took the dataset at two different points in time, which can make things very difficult. Most likely you are suffering from a lack of overlap between the two sets (or your subject changed a lot). This is speculation, though, as I don't have the dataset.

    Make sure you are feeding Zephyr the original images. It's unlikely that your camera did not save EXIF data, so either you post-processed the images in other software (if so, please never crop, undistort, or alter the images in any way before feeding them to Zephyr) or other software stripped the EXIF data without telling you (e.g. Skype's send-file function).

    Moreover, I suggest you almost never use Ultra unless you are 100% sure about what you are doing - we hid those presets because they should be used only when needed. Higher settings do not automatically mean better results, but they always lead to very long processing times.

    Unfortunately there isn't much more I can help you with without having a look at the dataset. If you want to share the photos with me, I'll be happy to have a look at them and get back to you.

    • logan74k
      3Dflower
      • Oct 2018
      • 6

      #3
      Thanks a lot for the detailed reply. The EXIF issue for me turned out indeed to be my rotating the images in Windows Photo Viewer before loading them into the software. Hadn't even considered that... apparently just rotating them stripped the data. Luckily I kept the originals, and I went back and rotated them in Photoshop instead, which preserved the data for Zephyr to read.
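
      For anyone else hitting this, here is a quick way to check whether a photo still carries readable EXIF before loading it - a minimal sketch using Python's Pillow library (the filename is just a placeholder):

      Code:
      from PIL import Image
      from PIL.ExifTags import TAGS

      def print_exif(path):
          """List whatever EXIF tags Pillow can read from an image file."""
          exif = Image.open(path).getexif()
          if not exif:
              print(f"{path}: no EXIF data found")
              return
          for tag_id, value in exif.items():
              print(f"{path}: {TAGS.get(tag_id, tag_id)} = {value}")

      print_exif("IMG_0001.JPG")  # hypothetical filename

      If the camera make and model don't show up in the output, Zephyr won't see them either.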

      They were taken at different times - not only that, but under different lighting conditions. Regardless, Zephyr seems to have been able to use a smattering of photos from both sets in one project, even without having EXIF-calibrated both cameras, which impressed me. Does Zephyr find and select points to match based on grey value, or does it consider color as well? Or is it using a technique I can't even fathom? The technology is very interesting.

      Also, I saw that the software prefers large pixels relative to the sensor size, and that larger pixels can improve results. Is this fundamentally different from taking very large (25 MP) photos and downsizing them a bit before loading them into Zephyr? Could that potentially improve anything?

      • Andrea Alessi
        3Dflow Staff
        • Oct 2013
        • 1305

        #4
        Hi Logan,

        you should never pre-process images - rotation does not affect results in Zephyr, so it's always a safer bet to leave them as they are. If you do rotate, make sure Photoshop also updated the EXIF data. It's better to have a stripped image than a wrong EXIF (e.g. a 4000x3000 image that you rotate, so that the pixels are now 3000x4000 while the EXIF still says 4000x3000).
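
        If you want to sanity-check a whole batch for that kind of mismatch, here is a minimal sketch using Python's Pillow library (0xA002/0xA003 are the standard Exif PixelXDimension/PixelYDimension tags; the filename is a placeholder):

        Code:
        from PIL import Image

        EXIF_IFD = 0x8769               # pointer to the Exif sub-IFD
        WIDTH, HEIGHT = 0xA002, 0xA003  # PixelXDimension / PixelYDimension

        def check_dimensions(path):
            """Warn when the EXIF-recorded size disagrees with the real pixel size."""
            img = Image.open(path)
            exif = img.getexif().get_ifd(EXIF_IFD)
            w, h = exif.get(WIDTH), exif.get(HEIGHT)
            if w and h and (w, h) != img.size:
                print(f"{path}: EXIF says {w}x{h}, actual pixels are {img.size[0]}x{img.size[1]}")

        check_dimensions("IMG_0002.JPG")  # hypothetical filename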

        Sorry if I repeat myself, but for better results never crop, undistort, rotate, shear, or otherwise alter your images. If you shot in RAW format you can use your camera's software to do minor editing.

        Our keypoint descriptors are very robust to lighting changes and outliers, so while it is absolutely possible to mix different sets, it has to be done knowing what it entails. For example, the object may have changed slightly, and some images - while being placed correctly thanks to background data, for instance - will bring in noise rather than actually helping the reconstruction.
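
        On your grey-vs-color question: I can't paste our actual implementation, but as a generic illustration, photogrammetry feature detectors typically work on grayscale intensity, which is part of why matching tolerates lighting changes. A sketch with OpenCV's SIFT (not Zephyr's pipeline; the filenames are placeholders):

        Code:
        import cv2

        # Detect keypoints on grayscale intensity, as SIFT-style detectors do
        img1 = cv2.imread("session_a.jpg", cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread("session_b.jpg", cv2.IMREAD_GRAYSCALE)

        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)

        # Lowe's ratio test keeps only distinctive correspondences
        matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]
        print(f"{len(good)} plausible matches across the two sessions")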

        Downsizing means you are using software to alter the image - so while in many cases you can get very similar results, it's always better to feed the image directly "as seen" by the camera (whether JPG or RAW) rather than downscaling it in other software.


        • logan74k
          3Dflower
          • Oct 2018
          • 6

          #5
          Good to know the software has no problem with rotation. I'm used to a laser scanner that shows you in real time what it's doing, and it could get hopelessly confused if the next scan was upside down relative to the previous one.

          That said, I still haven't managed to get quite the result I believe is possible. I'm fairly certain the model's position is 100% consistent between photo sessions - I can get a number of frames from both sessions to mesh in one project. One big remaining problem seems to be the software coming back with extreme 'fisheyed' views for some of the cameras in the camera navigator bar after computing the sparse point cloud, and it generates points way out in the middle of nowhere based on these distorted cameras. These photos came in as RAW files as suggested, with accurate EXIF data including focal lengths. Any ideas are appreciated!

          EDIT: I should clarify that I'm not aiming for a 'nice' or even complete model here; I understand my input photos weren't shot with an eye for photogrammetry and I don't have nearly enough coverage. I'm just trying to get a model as complete as possible, using as many of the 80 or so pictures I have as will be accepted. My application doesn't need it to be pretty or watertight as long as I can get 2/3 of a volume recognizable. I've been exclusively using full or nearly full compositions, i.e. including the whole hand. Should I bother trying to bring in closer photos that only include specific areas of the hand and have a shallower depth of field, or would that serve only to confuse the algorithm?
          Last edited by logan74k; 2018-10-04, 05:27 AM.

          • Andrea Alessi
            3Dflow Staff
            • Oct 2013
            • 1305

            #6
            It's really hard to tell without having a look at the actual photos - can you share them with me?

            • logan74k
              3Dflower
              • Oct 2018
              • 6

              #7
              Sure - is there a good, secure way to send them? They're about 12 MB each.

              • Andrea Alessi
                3Dflow Staff
                • Oct 2013
                • 1305

                #8
                I'll contact you privately at the email address you used to register on the forum.

                • logan74k
                  3Dflower
                  • Oct 2018
                  • 6

                  #9
                  Got them uploaded, thanks Andrea. The 11 with masks are the ones that meshed together successfully in a previous project, albeit with the camera warp issues. The remaining 12 give more coverage of the back of the hand - I'm hoping to ultimately achieve a model that wraps around underneath and reconciles some detail on the back of the hand as well.

                  • Andrea Alessi
                    3Dflow Staff
                    • Oct 2013
                    • 1305

                    #10
                    Hi,

                    I only see 23 photos uploaded.

                    Anyway, I can see why Zephyr is having a hard time. You should avoid flash photography, as it significantly changes the apparent surface properties, and a metallic subject is already difficult to shoot.

                    You could probably squeeze in more shots by choosing the best subsets of your full dataset, but in this case less is more.

                    You could probably also force in some of the others using control points (a Pro/Aerial feature), but I doubt you will get a satisfactory result with only 20 photos given the complexity of the object.

                    • logan74k
                      3Dflower
                      • Oct 2018
                      • 6

                      #11
                      The only cameras Zephyr didn't drop are the 11 that have masks, but I was going to try to get the remaining 12 in. The rest of the set are closeups/details, so I'm not sure those are worth trying. I might have to look into control points, thanks for the suggestion. Still no idea why the cameras came back so distorted, though. Do you think masks are useful in this case, and should I run the initialization using them or ignoring them?
                      Last edited by logan74k; 2018-10-04, 10:51 PM.

                      • Andrea Alessi
                        3Dflow Staff
                        • Oct 2013
                        • 1305

                        #12
                        Cameras get undistorted automatically by Zephyr. If you use masks, the thumbnail can get "warped" a bit; you shouldn't be concerned.
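
                        For context, the undistortion fits a radial lens model; here is a generic Brown-Conrady sketch in Python (illustrative only, not our exact parameterization):

                        Code:
                        def apply_radial_distortion(x, y, k1, k2):
                            """Map ideal normalized image coords to distorted ones using
                            the first two radial terms of the Brown-Conrady model."""
                            r2 = x * x + y * y
                            factor = 1.0 + k1 * r2 + k2 * r2 * r2
                            return x * factor, y * factor

                        # A strongly negative k1 pulls points near the edge inward,
                        # which is the kind of "fisheyed" look you described.
                        print(apply_radial_distortion(0.8, 0.6, k1=-0.3, k2=0.05))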

                        If the closeups are well shot and there is overlap, it's always worth trying to bring them in, especially when the dataset is less than optimal.
