The idea is to get the best of both worlds: reliable low frequency depth with high absolute accuracy from Photogrammetry, and high frequency surface normal data with high relative accuracy from RTI.
When a mesh is constructed from a point cloud (via e.g. Poisson Reconstruction), each point's coordinates in space as well as its normal are considered. The goal of this test was to replace the existing low-accuracy normal values of the photogrammetry point cloud with the high-accuracy normals from the RTI data before the actual surface reconstruction.
A miniature lamassu statue from Persepolis served as the test subject.
To improve camera position estimation in Photoscan, a copy stand was equipped with coded targets in known positions. Each target, along with its relative XYZ coordinates, was stored in a CSV file for later import into Photoscan.
Upon completion of the RTI-capture sequence, the camera was unscrewed from the copy stand, the object was lit by diffuse ambient light and one additional SfM-sequence was captured using the same focal length.
Using GIMP, a per-pixel maximum across all RTI-sequence images was extracted (import all images as layers and set each layer's mode to "Lighten only"), and the result was saved as "albedo.jpg".
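The layer-stacking step above can also be scripted. The sketch below (an assumption, not part of the original workflow) computes the same per-pixel maximum with NumPy; the Pillow-based file loading in the comment and the file pattern are hypothetical.

```python
import numpy as np

def max_blend(images):
    """Per-pixel maximum of equal-sized uint8 image arrays,
    equivalent to stacking layers in GIMP's "Lighten only" mode."""
    result = images[0]
    for img in images[1:]:
        result = np.maximum(result, img)
    return result

# Hypothetical usage with Pillow:
# from PIL import Image
# import glob
# imgs = [np.asarray(Image.open(p)) for p in sorted(glob.glob("rti_*.jpg"))]
# Image.fromarray(max_blend(imgs)).save("albedo.jpg")
```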
The RTI sequence was processed (uncropped) and a color representation of the RTI's normals was saved using the RTIViewer's "Normal map" filter. This image ("normalmap.jpg") was then imported into GIMP and scaled to match the dimensions of "albedo.jpg".
With the help of ExifToolGUI, all metadata from "albedo.jpg" was copied to "normalmap.jpg".
Finally, "albedo.jpg" was placed into the SfM-sequence folder, and "normalmap.jpg" was renamed to "albedo.jpg" (from here on referred to as trojan-normal).
After that, the SfM sequence was processed in Photoscan up to and including mesh generation (~2 million faces). In Photoscan's photos pane, "albedo.jpg"'s path was changed to that of the trojan-normal. A texture was then generated using "single image" and "albedo.jpg" (which was now the trojan-normal), and the mesh, including its texture, was exported as an OBJ file.
This mesh, along with its texture, was imported into CloudCompare, where an RGB point cloud was sampled from it and exported as a TXT point cloud file (PC_oldNormals.txt). This point cloud served as the reference for comparisons.
This TXT point cloud file contains the following tab-separated fields for each point:
X Y Z R G B Nx Ny Nz
Next, a Python script was applied that maps the values in the R, G, B fields (0-255) to normalized vector components (-1.000000 to +1.000000), writes them into the corresponding normal-vector fields (Nx, Ny, Nz), and saves the result as a TXT file (PC_newNormals.txt).
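A minimal sketch of that mapping, assuming the usual 8-bit normal-map encoding c = (n + 1) / 2 * 255 and the nine-column point layout described above (the function names and the end-to-end usage in the comments are illustrative, not the original script):

```python
import numpy as np

def rgb_to_normal(rgb):
    """Map 8-bit channel values in [0, 255] to normal components in [-1, 1]."""
    return np.asarray(rgb, dtype=np.float64) / 255.0 * 2.0 - 1.0

def replace_normals(points):
    """points: (N, 9) array with columns X Y Z R G B Nx Ny Nz.
    Overwrites Nx Ny Nz with the normals decoded from R G B."""
    out = np.asarray(points, dtype=np.float64).copy()
    out[:, 6:9] = rgb_to_normal(out[:, 3:6])
    return out

# Hypothetical end-to-end usage with the files named in the text:
# pts = np.loadtxt("PC_oldNormals.txt", delimiter="\t")
# np.savetxt("PC_newNormals.txt", replace_normals(pts),
#            fmt="%.6f", delimiter="\t")
```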
Finally, the altered point cloud was imported into CloudCompare, Poisson Surface Reconstruction was applied to both PC_oldNormals and PC_newNormals, and the resulting meshes were inspected.
The results showed slight improvements, especially in areas with small details; most notably the detail on the bull's belly and the wing's feathers.
On the other hand, the replacement introduced noise in areas with steep angles, i.e. areas where the normal map has low z/blue values. This could be improved by skipping such points during normal replacement.
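The suggested improvement could look like the following sketch: keep the original photogrammetry normal wherever the RTI blue channel (the encoded Nz) is low. The function name, the column layout (X Y Z R G B Nx Ny Nz), and the cut-off value are assumptions for illustration.

```python
import numpy as np

def replace_normals_filtered(points, z_threshold=64):
    """Replace photogrammetry normals with RTI normals, but keep the
    original normal wherever the RTI blue channel falls below
    z_threshold (a hypothetical cut-off on the 0-255 scale), i.e. on
    steep surfaces where the RTI normal is unreliable."""
    out = np.asarray(points, dtype=np.float64).copy()
    reliable = out[:, 5] >= z_threshold       # B column encodes Nz
    out[reliable, 6:9] = out[reliable, 3:6] / 255.0 * 2.0 - 1.0
    return out
```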
A possible continuation of these efforts could be a script that scans each point's neighbors' normal values and corrects the point's z-coordinate to match the expected slope, thereby actively improving the point cloud's accuracy.
To be continued!