I'm just learning Skydio 3D Scan and drone photogrammetry, so I used my X2 to scan a house with 3D Scan for practice. I set the floor, the ceiling, and four pillars at the corners of the house. It was a newer house with no trees nearby, so setting 3D Scan up for it was easy. I used the default 3D Scan settings and later processed the photos in Pix4D Mapper.

My goal was to build a 3D model of the house and maybe port the model to a CAD package. I've come to understand that a 3D model is a mesh or textured-mesh object, and that the photogrammetry software first generates and matches keypoints across photos, then does camera "calibration," then builds a sparse point cloud, then a dense point cloud, then a mesh model, and finally a textured mesh model.

The mesh model didn't look CAD quality to me, so I went looking for culprits. Maybe that's as good as it gets and I'll have to rework the model in a CAD package to make it better, but I did notice that about 10–15% of my cameras/images came out uncalibrated in processing. The photos looked sharp and the lighting was OK; it was an overcast day. I really didn't understand why I had so many uncalibrated images.

Then it occurred to me that the side walls of the house didn't have much in the way of distinguishing features. So I'm thinking that maybe the camera shots were too close, and that those shots didn't contain enough distinguishing features for the processing software to "lock onto" or match against features in other photos.

So I'm wondering if up-close scanning for inspection purposes may, at times, be at odds with scanning to build models. Being farther back from the target object puts more distinctive features in each photo for the photogrammetry software to match across photos, but being up close gives you the detail you want for inspection. (I've put two rough Python sketches at the bottom of this post to illustrate what I mean.) Does anyone have any background or knowledge in this? Is what I'm supposing here true? Thanks.
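To illustrate the "nothing to lock onto" theory, here's a toy sketch using OpenCV's ORB detector. Pix4D's actual keypoint extractor and matcher are proprietary, so this is just a stand-in for the general idea, not what Pix4D really does. The random-noise image is a crude stand-in for a visually busy facade, and the constant image for a featureless painted side wall:

```python
import cv2
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: noise approximates a visually "busy" surface,
# a constant image approximates a featureless painted side wall.
textured = rng.integers(0, 256, (600, 900)).astype(np.uint8)
blank = np.full((600, 900), 128, dtype=np.uint8)

orb = cv2.ORB_create(nfeatures=2000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)

def good_matches(surface):
    """Match two overlapping 'photos' cropped from the same surface."""
    img1, img2 = surface[:, :640], surface[:, 260:]  # ~60% overlap
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    if des1 is None or des2 is None:
        return 0  # the detector found nothing worth describing
    # Lowe ratio test: keep only matches clearly better than the
    # runner-up, i.e. genuinely distinctive features.
    pairs = matcher.knnMatch(des1, des2, k=2)
    return sum(1 for p in pairs
               if len(p) == 2 and p[0].distance < 0.75 * p[1].distance)

print("busy surface   :", good_matches(textured), "good matches")
print("blank side wall:", good_matches(blank), "good matches")
```

The expected outcome is thousands of good matches on the busy surface and essentially none on the blank one. No matches means no tie points, and with no tie points there's nothing to calibrate/orient that camera against, which is my guess at where the uncalibrated images came from.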
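And here's the back-of-the-envelope math behind the close-vs-far trade-off, using the standard ground sampling distance (GSD) formula. Note the sensor and focal numbers below are placeholders, not the actual X2 camera specs, so swap in real values from the spec sheet:

```python
# GSD = (distance * sensor_width) / (focal_length * image_width).
# Placeholder optics, NOT the real Skydio X2 specs.
def gsd_cm_per_px(distance_m, focal_mm=4.0,
                  sensor_width_mm=6.2, image_width_px=4000):
    """Surface sampling distance in cm/pixel at a given standoff."""
    return (distance_m * 100 * sensor_width_mm) / (focal_mm * image_width_px)

for d in (3, 5, 10, 20):
    print(f"{d:>2} m standoff -> {gsd_cm_per_px(d):.2f} cm/px")
```

Doubling the standoff distance doubles how much wall each pixel covers, which is bad for inspection detail but means each frame captures more of the building's surroundings and distinguishing features for the matcher to work with. That's the tension I'm asking about.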