
Known Camera Positions Part Deux


  • Known Camera Positions Part Deux

    Hi Alessi - I'm back after being submerged in a consulting project.

    I'm still not clear on how, even in 3D Flow Pro, I would be able to predefine camera positions. I know that you said to "take a good reference image", but in the case I am trying to build, that will not be possible.

    I want to install the cameras outside in a manner where I have at least 3 cameras on every point, but in most cases ONLY 3, i.e. what I have are clusters of 3 overlapping cameras, and the clusters themselves overlap.

    So think of it this way:

    Cluster PS1 has 3 cameras facing "fwd" that overlap with cluster PM2, which has 3 cameras facing "back", of which 2 overlap with PM3 (and one overlaps with PS1).
    1 camera from PM3 overlaps with PM4, which is also facing "fwd"... These then overlap with BM5, which is a cluster of 3 facing "back".

    The distances between the PS station and the PM station are roughly 10 meters, the distances between the PM clusters are under 1 meter, and the distance between PM and BM is again about 12 meters.

    And all of these points are visible from a vertical camera facing downwards at a height of about 15 m.

    See drawing.

    Now this needs to be installed in a "real world" environment, so setting up a "good calibration" image is almost impossible. How can I (using a precise laser tape measure) pre-configure the known locations of the cameras?

    (and yes I know, I need to buy 3DF Pro... still working on the cost justification)

  • #2
    Hi Karlsc,

    I assume those cameras will never move, so you can take a reference dataset. This means: place markers around, and shoot with those cameras as well as taking many other images that will help you in the orientation phase. After that, you will be able to export the cameras (as they are a fixed rig) and use them to do a reconstruction using known parameters. You can do that with the XMP workflow explained here:

    In that example, all cameras are fixed, but you can have a certain number fixed, discard the extra images you took only to help the orientation, and then use only the fixed ones when reconstructing a new dataset. If the subject is difficult, place a lot of random noise in the scene, which will help with the orientation.

    I'm not sure what your final goal is - if it is to make a dense cloud or mesh, please remember that with this camera setup you will likely not get good results.


    • #3
      Ok - (setting aside that your control point documentation doesn't really make sense to me) any suggestions on how I can improve the results? Remember, in the above drawing each "camera" is actually a cluster of 3 cameras with 50% overlapping fields of view, i.e. the middle camera is 100% covered by the 2 edge cameras. So with a 140-degree angle of view per camera, the total field of view is 180 degrees (the edge cameras overlap each other's field of view by 100 degrees).
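      To sanity-check my own numbers (assuming the three headings within a cluster are spaced 20 degrees apart, which is the spacing that makes them consistent):

```python
# Sanity check of the 3-camera cluster geometry described above.
# Assumed: 140 deg field of view per camera, headings 20 deg apart.

FOV = 140
headings = [-20, 0, 20]  # left, middle, right camera headings (degrees)

# Each camera covers the angular span [heading - FOV/2, heading + FOV/2]
spans = [(h - FOV / 2, h + FOV / 2) for h in headings]

total_span = spans[-1][1] - spans[0][0]    # union of all three cameras
edge_overlap = spans[0][1] - spans[-1][0]  # left camera ∩ right camera

print(total_span)    # 180.0 degrees total field of view
print(edge_overlap)  # 100.0 degrees shared by the two edge cameras
```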

      The hard part is the "calibration" step. The eventual goal is to have this done by untrained users, who could easily provide distances and angles between the pods as hints to the SfM. Wouldn't such spatial hinting help the SfM step in Zephyr?


      • #4
        Hi Karlsc,

        please let me know your questions on control points and what is not clear. Specifically, which documentation are you referring to? The manual, the tutorial or the videotutorials (or all of them)? Please let us know so we can improve.

        If you need to do a full reconstruction, you probably would need more cameras and try to "fill" those gaps. From your drawing, it seems they are pretty far apart.

        The calibration is not that hard. Unfortunately, some training is required; taking some random photos is not going to cut it. You do not need to provide angles or distances. You just need enough features in the scene and enough photos, then use known distances to scale. Did you have the chance to watch the videotutorial I linked? If there is something unclear, please let me know and I'll be happy to further explain the process to you.


        • #5
          I watched the tutorial. Not very helpful. And I've read all I can and looked at all the control point videos, and they don't really help on what control points are and what use they are intended for.

          Frankly, I'm old. I prefer reading to YouTube. YouTube takes too long and cannot be used as a reference. (For example, it would be nice to have documentation of what custom settings like "ungroupOnFinal" actually do. The old UX adage is that if you expose a setting IN ANY WAY to a user, you need to document what it does.)

          "The final orientation matters - consider adjusting the final orientation with control points" - a completely useless bit of information if there is no documentation that explains:
          ==>what you mean by orientation
          ==> what the connection between orientation and control points is

          You then talk about "scaling your model with control points", which now adds the idea of scaling as well as orientation to control points. Again, not at all useful if one does not know the full functionality of control points.

          Then you talk about exporting the XMP file format. OK... since XMP is an extensible metadata format, if you documented that format and how the camera rig is stored in it, it strikes me that I could edit that file myself WITH the known/measured camera spacings I have in the above model. Depending on the precision you rely on, this might well solve the problem of using sparse camera reconstruction for the initial model, rather than having to guess at what sort of pattern "will work well".

          It would be even better if I could specify a configuration manually in XMP, import the XMP, and then use a "good pattern" to have the cameras get better precision in location.

          As for "random patterns around your booth" - it would be nice if you offered more guidance on this. You have a screen that is full of a pattern for camera calibration. I've tried to capture that screen in Photoshop and print it at scale to provide a registration image, but that does not seem to work.

          If you could publish something that could be set up as a "cone" in the center of a circular pattern, and a separate one that can be projected onto a surrounding background (with documentation of how far away, focal-length-wise, is optimal for that background), it would save immense numbers of hours.

          So pretty much everything about control points is unclear to me despite having read all the documentation multiple times and watching the videos. I am fairly certain that it comes from my having a mental model around what they do that is different than what you have in writing your documentation. But therein lies the rub. Nowhere do you explain the general concept of control points.

          Are they anchors used to integrate various model components? Are they simply identified points in the point cloud? I really don't get them.

          Finally, I would still urge you to provide a subscription model for your PRO version. Even at 150 Euro per month, I think you would find more sales and more revenue that way. The 2400 Euro price tag is hard to justify in a startup when I currently only need it intermittently. In fact, if your "one time" subscription was 1 month for 150 Euro and your contracted (2-year commitment) rate was 100 Euro, I'd probably have put 450 Euro into your pockets by now, and would be considering the 2-year subscription moving forward. For small startups, OpEx is much preferred to CapEx.


          • #6
            Hi Karlsc,

            thank you for your feedback.

            Please note that we also have text tutorials, in case you don't like YouTube, as well as PDF documentation.

            Please follow this tutorial: - as stated there:

            Control points allow you to define a specific coordinate. That coordinate can then be used to calculate distances to other points and cameras, as well as act as a constraint for alignment.
            That's all control points are. If that's unclear, please help me understand what you're having trouble with and I'll be happy to further explain the concept. Control points are coordinates you tell Zephyr.

            ==>what you mean by orientation
            ==> what the connection between orientation and control points is
            Orientation is the output of the Structure from Motion phase. Cameras need to be oriented in 3D space (often the wrong term "aligned" is used - cameras are oriented, not aligned; you align workspaces, not cameras).

            Control points are just coordinates. They have many uses. You can use those coordinates to constrain Zephyr to reach a solution you want (e.g. "that X on the photo is this GPS coordinate - I know because I took it with a GPS", or similarly "the distance between that point and that other point is this value" - a control distance).
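            To make the control-distance idea concrete, here is a toy sketch (the coordinates and the 10 m measurement are invented numbers):

```python
import math

# Two control points picked in a reconstruction (arbitrary model units).
p1 = (0.12, 0.40, 1.05)
p2 = (0.92, 0.38, 1.11)

model_dist = math.dist(p1, p2)  # distance in arbitrary model units
measured_dist = 10.0            # laser-measured distance, in meters

# Scale factor that brings the whole reconstruction into meters.
scale = measured_dist / model_dist
print(scale)
```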

            How these steps are performed and their usage (rescaling, workspace merging, bundle adjustment, etc.) are shown in all the tutorials, the manual and the videotutorials, so I won't elaborate further unless you have specific questions.

            You seem a bit confused about what cameras are and how the reconstruction works - maybe you're missing a very important concept, so for clarification I'll write it regardless: all reconstructions have an arbitrary scale, rotation and translation. You can use XMP to save and re-load camera orientations once you have done this once, and reuse them indefinitely. That is explained in our documentation and tutorials.
            If you feel there is something not explained about XMP files, please feel free to ask. Have you tried doing the steps shown in the videotutorials? If so, please share your datasets and your questions and I'll be happy to fill in the gaps.
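            The "arbitrary scale, rotation and translation" point can be made concrete with a small sketch: registering points from model space into a metric reference system is just a similarity transform (the specific numbers below are invented):

```python
import math

def similarity_transform(points, scale, yaw_deg, translation):
    """Rotate about the z axis, scale, then translate a list of 3D points.
    This is the kind of transform that maps a reconstruction with
    arbitrary scale/rotation/translation into a known reference system."""
    c, s = math.cos(math.radians(yaw_deg)), math.sin(math.radians(yaw_deg))
    tx, ty, tz = translation
    out = []
    for x, y, z in points:
        rx, ry, rz = c * x - s * y, s * x + c * y, z  # rotate about z
        out.append((scale * rx + tx, scale * ry + ty, scale * rz + tz))
    return out

# A point in arbitrary model units, mapped into a metric frame:
print(similarity_transform([(1.0, 0.0, 0.0)], 10.0, 90.0, (5.0, 0.0, 0.0)))
```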

            Note that "camera" in photogrammetry does not mean the physical device. You can have one camera (device) taking many photos. The photos reconstructed in 3D are all called cameras, regardless of whether you used 1 moving camera (device) or a synchronized cluster of cameras (devices), like you do.

            Nowhere do you explain the general concept of control points.
            I think you're overthinking it. Control points are specific user-defined coordinates, as explained above and in all the documentation.

            then you talk about exporting the XMP file format. ok.. since XMP is an extensible metadata format. if you documented that format and how the camera rig is stored in that, it strikes me that I could edit that file myself WITH the known/measured camera spacings I have in the above model.
            You won't need to edit the XMP files. You will simply create a reconstruction, transform the workspace into a known reference system using control points (and control distances), then re-use the same XMP files for other captures. The SfM phase will be skipped and everything will be ready at the correct scale. Please see the above-mentioned videotutorial. If there is something unclear, I'll be happy to elaborate more. There is no distance concept in XMP files, and it is something you will never have to edit manually yourself. I'm highlighting this because you've expressed twice that you want to do this, and it's important to stress that it is not the correct workflow.

            random patterns around your booth
            Please keep in mind the base concept of structure from motion. You need features in your images, and non-repeating patterns. Literally anything random will work, e.g. draw some squiggly lines, tape up some posters, etc. If you want to print some paper (which, again, is not required - any random pattern works), you can generate noise with many programs, e.g. Gimp.
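            For example, a few lines of Python (standard library only; the file name and resolution are arbitrary) are enough to generate a printable noise pattern:

```python
import random

# Write random grayscale noise as a plain-text PGM image: a dense,
# non-repeating pattern that gives SfM plenty of features to match.
W, H = 512, 512
random.seed(0)  # reproducible pattern

with open("noise.pgm", "w") as f:
    f.write(f"P2\n{W} {H}\n255\n")
    for _ in range(H):
        f.write(" ".join(str(random.randrange(256)) for _ in range(W)) + "\n")
```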


            • #7
              Thanks Andrea for your response. Yes it seems I WAS overthinking the Control Point concept.

              And I understand the base idea of SfM and how images are created from non-repeating patterns. However, I have struggled with patterns that I thought would work well, and had others that did not seem promising work for me. So some concrete examples might be helpful for others who, like me, suffer from blondness :-P

              I fully understand that "a camera" is not necessarily one physical camera but really is a mapping of a 2D image backplane into a 3D space, from which you can then project that 2D image forward to synthesize 3D information via parallax calculations with overlapping images that have different parallax and/or orientation.
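              e.g. in its simplest textbook form, the rectified-stereo depth relation (invented numbers, and I know real SfM solves a far more general problem):

```python
# Toy illustration of the parallax idea for a rectified stereo pair:
# depth = focal_length * baseline / disparity.

focal_px = 1400.0    # focal length, in pixels
baseline_m = 0.5     # distance between the two cameras, in meters
disparity_px = 70.0  # horizontal shift of a feature between the images

depth_m = focal_px * baseline_m / disparity_px
print(depth_m)  # 10.0 meters
```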

              What I meant WRT the XMP file format is that if I know the fairly precise spatial locations and orientations of the "image positions" (if you don't want to use "camera") with respect to some reference point, then I should be able to generate that data in the XMP file without going through the full SfM algorithm.

              Furthermore, it strikes me that if the XMP file gives the relative coordinates of the camera as X, Y, Z, theta and phi, and SfM is then run, it becomes a matter of calculating error deltas rather than having to interpolate fully from scratch.
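              e.g., assuming a simple yaw-then-pitch convention (I'm guessing at the parameterization here; Zephyr's actual storage may differ):

```python
import math

def rotation_from_angles(theta_deg, phi_deg):
    """Build a rotation matrix R = Ry(phi) * Rz(theta): yaw about z
    (theta), then pitch about y (phi). A hypothetical two-angle pose
    parameterization, purely for illustration."""
    ct, st = math.cos(math.radians(theta_deg)), math.sin(math.radians(theta_deg))
    cp, sp = math.cos(math.radians(phi_deg)), math.sin(math.radians(phi_deg))
    Rz = [[ct, -st, 0.0], [st, ct, 0.0], [0.0, 0.0, 1.0]]
    Ry = [[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]]
    # Plain matrix multiply Ry @ Rz
    return [[sum(Ry[i][k] * Rz[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Such an initial guess would let SfM refine error deltas around R
# instead of solving the pose from scratch.
print(rotation_from_angles(90.0, 0.0))
```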

              Part of what I'm getting at is that motion in SfM, as in all things, is relativistic, i.e. it matters not whether the camera is moving positions or the things before the camera are moving. But if we know the camera orientation a priori, then it is easier to calculate complex and hard-to-resolve objects as they are "flown over".