Camera positions were determined as in my RTI accuracy test, namely by using an icosphere's vertices as camera position anchors, while having the camera track the center of the scene.
I wrote a script to position the camera, record its location and rotation, and write this information to a text file.
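The anchor-and-track step can be sketched in plain Python. The original script ran inside the 3D environment itself; the vertex layout below is the standard icosahedron construction, and the yaw/pitch look-at convention and output format are illustrative assumptions, not the actual script:

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # golden ratio

def icosahedron_vertices(radius=1.0):
    """The 12 vertices of an icosahedron, scaled to the given radius.

    These serve as the camera position anchors around the scene center."""
    verts = []
    for a, b in [(1, PHI), (-1, PHI), (1, -PHI), (-1, -PHI)]:
        verts.append((0, a, b))
        verts.append((a, b, 0))
        verts.append((b, 0, a))
    norm = math.sqrt(1 + PHI * PHI)  # length of each raw vertex
    return [tuple(radius * c / norm for c in v) for v in verts]

def look_at_rotation(cam, target=(0.0, 0.0, 0.0)):
    """Yaw (around Z) and pitch angles that aim the camera at the target."""
    dx, dy, dz = (t - c for t, c in zip(target, cam))
    yaw = math.atan2(dy, dx)
    pitch = math.atan2(dz, math.hypot(dx, dy))
    return yaw, pitch

def write_positions(path, radius=5.0):
    """Record each anchor's location and rotation, one camera per line."""
    with open(path, "w") as f:
        for i, v in enumerate(icosahedron_vertices(radius)):
            yaw, pitch = look_at_rotation(v)
            f.write(f"cam{i:02d} {v[0]:.4f} {v[1]:.4f} {v[2]:.4f} "
                    f"{yaw:.4f} {pitch:.4f}\n")
```
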
The camera parameters were chosen to reflect those of a Canon 1100D at 50 mm focal length and f-stop 22. This camera was calibrated using the Agisoft Lens procedure inside said virtual environment prior to the actual sequence capture.
A cylinder, an icosphere, and a monkey head were used as test objects.
To aid surface point recognition by Photoscan's SIFT algorithm, the test objects, aside from exhibiting Lambertian surface qualities, were textured using a monochrome Gaussian noise image.
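Such a noise texture can be generated with a few lines of standard-library Python. The ASCII PGM output format and the distribution parameters below are illustrative choices, not those used in the actual test:

```python
import random

def gaussian_noise_pgm(path, width=256, height=256, mean=128, sigma=40, seed=0):
    """Write a monochrome Gaussian-noise image as an ASCII PGM file.

    Each pixel is drawn from N(mean, sigma) and clamped to 0..255; the
    non-repeating texture gives the feature detector plenty of local
    gradients to key on."""
    rng = random.Random(seed)
    pixels = [min(255, max(0, round(rng.gauss(mean, sigma))))
              for _ in range(width * height)]
    with open(path, "w") as f:
        f.write(f"P2\n{width} {height}\n255\n")  # PGM header: magic, size, maxval
        for y in range(height):
            row = pixels[y * width:(y + 1) * width]
            f.write(" ".join(map(str, row)) + "\n")
    return pixels
```
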
Inside Photoscan, the rendered frames, the camera calibration file, as well as the text file with the exact camera positions and rotations were imported and processed.
The resulting dense point clouds were imported into CloudCompare, where they were registered with the original meshes. Finally, the Hausdorff distance from any given point to the nearest mesh surface was stored on each point as a scalar value.
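The per-point distance can be illustrated with a brute-force sketch. `closest_point_on_triangle` follows the standard barycentric-region method from Ericson's Real-Time Collision Detection; the O(points x triangles) loop is a stand-in for CloudCompare's accelerated cloud-to-mesh search, not its actual implementation:

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def closest_point_on_triangle(p, a, b, c):
    """Closest point to p on triangle (a, b, c), by barycentric regions."""
    ab, ac, ap = sub(b, a), sub(c, a), sub(p, a)
    d1, d2 = dot(ab, ap), dot(ac, ap)
    if d1 <= 0 and d2 <= 0: return a               # vertex region a
    bp = sub(p, b)
    d3, d4 = dot(ab, bp), dot(ac, bp)
    if d3 >= 0 and d4 <= d3: return b              # vertex region b
    vc = d1 * d4 - d3 * d2
    if vc <= 0 and d1 >= 0 and d3 <= 0:            # edge region ab
        v = d1 / (d1 - d3)
        return tuple(a[i] + v * ab[i] for i in range(3))
    cp = sub(p, c)
    d5, d6 = dot(ab, cp), dot(ac, cp)
    if d6 >= 0 and d5 <= d6: return c              # vertex region c
    vb = d5 * d2 - d1 * d6
    if vb <= 0 and d2 >= 0 and d6 <= 0:            # edge region ac
        w = d2 / (d2 - d6)
        return tuple(a[i] + w * ac[i] for i in range(3))
    va = d3 * d6 - d5 * d4
    if va <= 0 and (d4 - d3) >= 0 and (d5 - d6) >= 0:  # edge region bc
        w = (d4 - d3) / ((d4 - d3) + (d5 - d6))
        return tuple(b[i] + w * (c[i] - b[i]) for i in range(3))
    denom = 1.0 / (va + vb + vc)                   # face interior
    v, w = vb * denom, vc * denom
    return tuple(a[i] + v * ab[i] + w * ac[i] for i in range(3))

def cloud_to_mesh_distances(points, triangles):
    """Unsigned distance from each point to the nearest mesh surface."""
    return [min(math.dist(p, closest_point_on_triangle(p, *t))
                for t in triangles)
            for p in points]
```

The scalar returned per point is what gets mapped to the color spectrum in the visualisation; a signed variant (using the triangle normal) distinguishes points protruding outside the mesh from points retracted into it.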
A visualisation of those distances is given below:
Overall, the calculated point clouds are sufficiently accurate. Most errors occur in areas with high ambient occlusion (i.e. creases), probably because fewer cameras cover these areas. In these cases, the erroneous points tend to protrude outside of the object (here into the red color spectrum). Sharp edges seem to cause some errors as well, most notably on the icosphere and cylinder examples. In those cases, the highest error distances are negative, meaning the points are retracted into the object (here the blue color spectrum).
Further tests are planned with the introduction of external error sources such as non-Lambertian surface qualities, homogeneous surface texture, and non-uniform camera distribution.