Advice for the best results for residential interiors from video input?

  • Adam J
    3Dflower
    • Mar 2023
    • 5

    Advice for the best results for residential interiors from video input?

    Hi Zephyr-ites,

    I'm an architect and I'm interested in using Zephyr Lite to produce 3D models of clients' existing homes, which I can then modify to show them my concept proposals. My hope is that I can make this a very simple process for the client - all they would have to do is walk around their home recording a video on their phone, which they could then send to me, and I would do the rest. This would mean I wouldn't have to visit to undertake a survey (as I would traditionally), and could provide a fully remote service - one that could be much quicker and cheaper than the standard architectural service.

    The workflow I had in mind was:
    1. Input client video into photogrammetry software
    2. Export point cloud
    3. Import point cloud into Sketchup
    4. Produce a simple model of the existing building in Sketchup, using the point cloud as a reference for dimensions
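    In code terms, step 1 boils down to picking a frame stride. This is a hypothetical helper (not part of any of the programs below), just to illustrate that a target extraction rate maps to keeping every Nth frame:

```python
def frames_to_extract(source_fps: float, target_fps: float, total_frames: int):
    """Indices of the frames kept when downsampling a video to target_fps.

    A 30 fps phone video extracted at 5 fps keeps roughly every 6th frame.
    """
    step = source_fps / target_fps      # stride between kept frames
    count = int(total_frames / step)    # how many frames survive
    return [round(i * step) for i in range(count)]
```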

    I've tried three photogrammetry programs so far, using my own flat as a test case: Reality Capture, Metashape and Zephyr Lite. Of those I think Zephyr was the best, because the other two 'misunderstood' the geometry of some rooms - rooms that should be rectangular in plan came out trapezoidal, whereas Zephyr seemed to get the basics right.

    For my first tests I walked around my home once quickly, but I found that I got much better results by walking around twice slowly. I tried walking around the whole flat twice, and then walking around each room twice. I also tried extracting 1, 2 and 5 FPS. The best results came from walking around twice, and extracting 5 FPS from the video. I also turned all the lights on, though I was shooting during the day, so I don't know if that made a difference. Zephyr managed to produce a coherent model of the entire flat, though there is still room for improvement - what I would most like is a way to avoid having plain flat walls 'bubble' outwards, because this makes it hard to find an accurate point to dimension from/model to.

    When it comes to scale, I didn't think Zephyr would have any idea of what the scale of the model was, but in some cases when I imported the point cloud into Sketchup I found that the scale was right, or off by a factor of 10 exactly. Once, however, it was just wrong. Is this a coincidence?

    Do you have any advice on how I could improve the process, given the restriction that the video will be taken by the client on a phone, so the instructions to them have to be simple, and the quality of the camera will vary? Are any of the things I'm doing unnecessary? I also want to avoid a lot of time consuming work 'by hand' within Zephyr, because I'm trying to keep the cost of the service down.

    The settings I used:
    • For the camera orientation, I chose category Urban, and the preset Deep
    • For the dense point cloud and surface reconstruction, I chose category Urban again, and preset High Details
    I've attached a couple of screenshots of what I think is the most successful test so far, to give you an idea of what my results are:
  • cam3d
    3Dflover
    • Sep 2017
    • 656

    #2
    Hey Adam J, here are my thoughts:

    - It's going to be very difficult to get good results from datasets captured by non-professionals, as input image quality is the main consideration when it comes to improving surface reconstructions.

    - All 3D reconstructions made by 3DF Zephyr are subject to an arbitrary scale factor, translation and rotation. Sometimes Zephyr will guess pretty close, other times it's totally off - there are a lot of variables - so to nail down the scale it's best to use control points (available in the full version of Zephyr). Here's a workaround for Lite/Free.

    - Is it possible to send a camera to your clients? That way you maintain a bit of control over the quality of capture and might save yourself from major headaches from potentially terrible data acquisition.

    - Ideal FPS extraction will depend on how fast the operator walks around so this will vary from capture to capture.
    - Turning on as many lights as possible is generally a good idea as more light = less sensor noise, faster shutter, and higher f/stop.

    - Homogeneous (flat, featureless) surfaces are very difficult to reconstruct using photogrammetry - putting up posters with lots of features will help with reconstruction of flat walls like this.

    - If you can orient cameras successfully with General Defaults, I recommend keeping to those settings rather than pushing them up. Higher settings can increase processing time dramatically and occasionally produce worse results due to false positive matches.

    - Same thing with meshing - No point in processing at higher settings if it just increases the resolution of a blob :-)

    I hope this helped a bit!


    • Adam J
      3Dflower
      • Mar 2023
      • 5

      #3
      Hi Cam,

      Really appreciate you taking the time to read my post and answer. Everything you've written makes sense, and I guessed that the best way to improve results would be to improve the input image quality - and of course that is the one thing I won't be in control of if I go down this route!

      One thing I wondered is whether it's possible (or a good idea) to mix still photography with frames extracted from video. Still photography, even with a phone camera, provides much better image quality than phone video. I wouldn't expect clients to be able to take enough still photos to create a good reconstruction, but could a few higher-quality still images enhance the 'base' video frames?

      Thanks for clarifying about the scaling. Your workaround is essentially what I did to check and correct the scale - I knew the kitchen counter height, so I used that to determine the scale factor when the scale was wrong.
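      In code terms, that check is just a uniform scale (my own sketch, with a worktop height of 900mm assumed as the known dimension):

```python
def scale_factor(known_real_m: float, measured_model_units: float) -> float:
    """Uniform factor that maps point-cloud units to metres."""
    return known_real_m / measured_model_units

def apply_scale(points, factor):
    """Scale (x, y, z) points about the origin."""
    return [(x * factor, y * factor, z * factor) for (x, y, z) in points]

# Counter is 0.9 m in reality but measures 9.0 units in the cloud:
f = scale_factor(0.9, 9.0)   # ~0.1, i.e. the 'off by a factor of 10' case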

      Interesting point about the ideal FPS, I hadn't thought of that but it makes sense.
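      Back-of-envelope (assumed numbers, not a rule): the extraction rate that keeps a fixed spacing between consecutive frames scales with walking speed:

```python
def extraction_fps(walk_speed_mps: float, frame_spacing_m: float) -> float:
    """Extraction rate so consecutive frames are ~frame_spacing_m apart."""
    return walk_speed_mps / frame_spacing_m

# A slow indoor walk (~0.5 m/s) with a frame every 10 cm:
rate = extraction_fps(0.5, 0.1)   # ~5 fps, matching what worked for me
```

So a faster walker would need a proportionally higher extraction rate for the same coverage.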

      I'll do some more tests with the general defaults to see where the optimal balance between time and quality is. Thanks for the advice!


      • cam3d
        3Dflover
        • Sep 2017
        • 656

        #4
        Originally posted by Adam J
        Hi Cam,
        One thing I wondered is whether it's possible (or a good idea) to mix still photography with frames extracted from video. Still photography, even with a phone camera, provides much better image quality than phone video. I wouldn't expect clients to be able to take enough still photos to create a good reconstruction, but could a few higher-quality still images enhance the 'base' video frames?
        It's possible, but there's probably not much point, and it adds another potentially problematic variable. Usually I'd tell people to take photos instead of video, but you're right - you can't expect people to take enough stills for a successful capture, so video is likely the best option here.


        • Adam J
          3Dflower
          • Mar 2023
          • 5

          #5
          Thanks Cam
