help with images not being picked up by program

  • #1

    Hello,

    I'm trying to make a photogrammetry rig using Raspberry Pis, like I've seen other people do. I have 108 Raspberry Pis that each take a picture and then use SSH to send it back to my main computer. The images look good to me, but 3DF Zephyr only picks up 80 of the 108 cameras, and I can't get a full scan when that happens. Can anyone help me figure out what's going on?

    Also, I tried to upload the pictures with this post, but for some reason this website won't let me upload any of the pictures from my rig, although I can upload images from other sources. I'm wondering whether the files are being corrupted when I SSH them from the Pis back to my main computer, and whether that's causing issues with 3DF Zephyr even though I can open the files. Has anyone else had this problem? I can upload the images to OneDrive and post a link if that would help.
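    One quick way to test the corruption theory (not from the thread, just a suggestion): compare a checksum of each image on the Pi against the copy on the main computer, and check that each file still ends with the JPEG end-of-image marker, since a truncated transfer usually loses it. A minimal stdlib-only sketch, assuming the cameras write JPEGs (file names below are hypothetical):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file so the copy on the Pi and the copy on the main PC can be compared."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def looks_like_complete_jpeg(path: Path) -> bool:
    """A complete JPEG starts with FF D8 (SOI) and ends with FF D9 (EOI);
    a truncated transfer usually drops the trailing marker."""
    data = path.read_bytes()
    return data[:2] == b"\xff\xd8" and data.rstrip(b"\x00")[-2:] == b"\xff\xd9"
```

    If the hashes on both ends match and the end-of-image marker is present, the transfer is probably fine and the missing cameras are more likely a coverage/overlap issue.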

  • #2
    piman - Welcome to the forum! Please upload the dataset to OneDrive and share a link here for some targeted feedback.

    Have you had a chance to look at some of our video tutorials? Here is one on fixed multi-camera systems: https://www.youtube.com/watch?v=d0Q4ZBorKRU

    Typically multi-camera systems use a feature-rich reference subject to orient all cameras successfully, then use the XMP workflow for consistent results. This is a feature of 3DF Zephyr Pro/Aerial https://www.3dflow.net/technology/do...flow-tutorial/



    • #3
      @cam3d,

      Thank you for the reply. I just followed the steps in the link you listed, selected the "Clone calibrations" option, and added some markers to my rig, but it still only uses about 80 of the 108 cameras. I previously downloaded the dataset shown in the tutorial from Pi3dScan's website, and I can reconstruct it with no problem. Here is a link to my pictures. Please let me know what you think.

      https://1drv.ms/u/s!AhL-xC-YpLj7me0b...SFvig?e=g30omm



      • #4
        Hi again piman - Thanks for sharing your dataset. Some things to note:

        - Richard's cameras (Pi3dScan) are all portrait; yours are landscape. This means there is wasted space in your images that could otherwise be used to better frame your subject.

        - Some of your cameras are not in perfect focus; go through and tweak these for better results.

        - Colour balance is also inconsistent, but this shouldn't affect reconstruction until the texturing stage.

        - Your subject is wearing feature-poor clothing. Use a colourful, feature-rich t-shirt/hat to set your initial multi-camera orientation with confidence before using that camera orientation for any subsequent processing. Importantly, if your cameras move position, they will need to be 're-calculated' with a feature-rich subject before they can be reused with feature-poor subjects. Some people use a mannequin covered with fine-print newspaper as a tool for camera orientation.

        - With the Human Body Default orientation preset in 3DF Zephyr, 104/107 cameras are oriented (Deep only adds one extra camera - not worth the extra processing time in my opinion)

        Having built RPI multi-camera systems myself, I know how finicky they can be - it's a lot of trial and error, with many headaches. You are on the right track and I'm happy to help where I can with any insight I can provide. Let me know if you have any further specific questions - Happy to help!



        • #5
          @cam3d, were you able to get 104 cameras oriented with the data set I provided? I was only getting 80 with those presets.



          • #6
            We have a similar system of multiple Raspberry Pis, each camera taking a single image of a given target from a slightly different viewpoint, all at the same time. We too have trouble getting all of the images to 'calibrate'. For one of our data sets, we collected 24 images at once, at a variety of angular separations between cameras, and were able to 'calibrate' only 14 of the 24.

            We have been using separate code to find key points between images. In analyzing its results, we have been able to show that the images 3DF Zephyr fails to 'calibrate' are characterized by a very small number of matched key points linking them to the other images in the set. To fix this, we have been exploring methods to fix the cameras permanently in place and then determine the camera 'calibrations' absolutely, either by observing a set of fiducial targets or simply by using 'rulers' and 'protractors' to find the cameras' absolute positions and orientations. But we would then need help entering these calibrations into 3DF Zephyr.
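            The connectivity analysis described above can be sketched as follows: given a table of matched-key-point counts per image pair (from any matcher), flag the images that are strongly linked to too few neighbours, since those are the ones an SfM pipeline typically fails to orient. The thresholds below are illustrative assumptions, not values from the post:

```python
from collections import defaultdict

def weakly_connected_images(pair_matches: dict[tuple[str, str], int],
                            min_links: int = 2,
                            min_matches: int = 30) -> set[str]:
    """Return images that share at least `min_matches` key point matches
    with fewer than `min_links` other images in the set."""
    links = defaultdict(int)  # image name -> number of well-matched neighbours
    for (a, b), n in pair_matches.items():
        if n >= min_matches:
            links[a] += 1
            links[b] += 1
    images = {img for pair in pair_matches for img in pair}
    return {img for img in images if links[img] < min_links}
```

            Running this over all pairs before reconstruction would identify the under-connected cameras, so their angular separation or overlap can be adjusted before a full processing run.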

            However, we're not at all certain that this will suffice. The fact that the lack of 'calibration' is accompanied by a lack of matched key points means that, even when calibrated, these images will not contribute points to the 'sparse point cloud', and hence to the model. This will remain a major problem. We therefore think an alternative approach is in order: a means of finding matched key points that recognizes that the 'calibration' is perfectly known, and searches for matches based on that known position and rotation. We can see a way to do this if we use the known 'calibration' to compute a known 'fundamental matrix', and then search for key point matches along epipolar lines (stereo processing) rather than over the whole image. But this seems fundamentally different from what 3DF Zephyr currently does.
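            The epipolar idea can be made concrete: with known intrinsics K1, K2 and known relative pose (R, t) such that x2 = R x1 + t, the fundamental matrix is F = K2^-T [t]x R K1^-1, and a candidate match can be accepted or rejected by its pixel distance to the epipolar line. A NumPy sketch (the camera values in the test are made up for illustration):

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]x, so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def fundamental_from_calibration(K1, K2, R, t):
    """F maps a pixel in image 1 to its epipolar line in image 2,
    given the known relative pose x2 = R @ x1 + t."""
    return np.linalg.inv(K2).T @ skew(t) @ R @ np.linalg.inv(K1)

def epipolar_distance(F, p1, p2):
    """Distance (pixels) of point p2 in image 2 from the epipolar line of p1."""
    l = F @ np.array([p1[0], p1[1], 1.0])
    return abs(l @ np.array([p2[0], p2[1], 1.0])) / np.hypot(l[0], l[1])
```

            With a fixed rig, each candidate match could then be restricted to a narrow band around the epipolar line instead of searched over the whole image, which is the stereo-processing behaviour described above.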

            Can you help us here? As a start, would you share with us the key point matching method 3DF Zephyr uses? We could then begin to explore ways these methods could be enhanced when the actual 'calibration' is known a priori.

            As an aside, we have had no problems downloading Raspberry Pi images into 3DF Zephyr. Nor have we had problems with the varying colour balance between cameras. We'd be glad to share these experiences with others.

            George Lindquist



            • #7
              piman Yes, I got 104/107 aligned using your dataset and the human-default preset. However, I did some digging and found, repeatably, that even with identical XML data, the human-default preset orients ~80 images unless you specifically click into the custom dialogue and then run, in which case it orients almost all of them.

              I can't discern any difference in settings that would be contributing to this discrepancy, so I'm paging Roberto or Andrea Alessi to shed some light on what's happening here.



              • #8
                Originally posted by George Lindquist View Post
                Can you help us here? As a start, would you share with us the key point matching method 3DF Zephyr uses? We could then begin to explore ways these methods could be enhanced when the actual 'calibration' is known a priori.

                As an aside, we have had no problems downloading Raspberry Pi images into 3DF Zephyr. Nor have we had problems with the varying colour balance between cameras. We'd be glad to share these experiences with others.

                George Lindquist
                If your subject is not textured enough, you can do a calibration of your rig with something else and then re-use the same calibration over and over. See a tutorial here https://www.youtube.com/watch?v=d0Q4ZBorKRU

                3DF Zephyr uses 3DF Samantha, a proprietary algorithm. References and publications are available here https://www.3dflow.net/technology/



                Originally posted by cam3d View Post
                piman Yes, I got 104/107 aligned using your dataset and the human-default preset. However, I did some digging and found, repeatably, that even with identical XML data, the human-default preset orients ~80 images unless you specifically click into the custom dialogue and then run, in which case it orients almost all of them.

                I can't discern any difference in settings that would be contributing to this discrepancy, so I'm paging Roberto or Andrea Alessi to shed some light on what's happening here.
                Presets do not have a 1:1 relationship to the advanced settings. The advanced setting you see selected is the closest possible to the values contained in the preset; to actually see the difference, you'd have to switch to custom. Anyway, when a dataset succeeds or fails with such a small change, it's likely an issue with the photos themselves or the rig - usually there is simply not enough overlap between the photos, so I suggest, for example, tilting the cameras a bit or adding a new row of cameras.

