RTI – accuracy tests

General considerations

First, let's consider the assumptions the PTM and RTI/HSH algorithms make about lighting, camera and subject:

  • A constant light intensity is expected; this is approximated in the real capturing process by a constant distance between light and subject.
  • This light source is assumed to be infinitely far from the subject, i.e. all light rays illuminating the object's pixels are parallel to each other and therefore share the same direction vector.
  • The subject's surface is expected to be monochrome and Lambertian, i.e. perfectly diffuse, with no specular highlights, refraction or light bouncing between surfaces.

Any deviations from this model are not considered in the surface normal calculation.
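
For reference, these assumptions amount to a plain Lambertian image-formation model under a distant directional light of constant intensity. A minimal, purely illustrative sketch of that model (not part of any RTI implementation):

```python
# Idealized model assumed by PTM/RTI normal estimation: a perfectly diffuse
# (Lambertian) surface lit by a distant directional light of constant intensity.
# Illustrative only; names and values are not taken from any RTI software.
import numpy as np

def lambertian_intensity(normal, light_dir, albedo=0.5):
    """Pixel intensity = albedo * max(0, n . l) for unit vectors n and l."""
    n = np.asarray(normal, dtype=float)
    l = np.asarray(light_dir, dtype=float)
    n /= np.linalg.norm(n)
    l /= np.linalg.norm(l)
    return albedo * max(0.0, float(np.dot(n, l)))

# Example: a surface tilted 45 degrees, lit from the zenith
print(lambertian_intensity((1, 0, 1), (0, 0, 1)))  # ~0.354
```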

To test the absolute accuracy of the RTI files, I constructed a virtual environment in Blender, eliminating all possible external error sources.
Aside from the ability to provide the above-mentioned ideal lighting and surface characteristics, a virtual simulation approach offers further advantages:

  • exact and known position of the light source
  • perfectly uniform light angle distribution
    (vertices of the upper half of a reference icosphere serve as anchor points for the light positions, with the light always pointing at the center of the scene)
  • use of a test subject with known and controlled topography and surface properties
    (icosphere; grey, Lambertian surface)
  • elimination of FOV distortion and depth-of-field blur through the use of an orthographic camera

 


As for rendering pipelines inside Blender, I chose the Internal renderer. Although the Cycles renderer produces more photorealistic results, it does so at the cost of the simple mathematical lighting models that the Internal renderer provides, which are far more convenient for establishing the ideal lighting conditions required for this test.
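
For illustration, a scene of this kind can be set up through Blender's 2.7x Python API roughly as follows; object names and numeric values are assumptions of the sketch, not the exact settings used here:

```python
# Sketch of the virtual test scene in Blender 2.7x (Internal renderer):
# a smooth grey Lambertian icosphere viewed by an orthographic camera from
# straight above. Names and numeric values are assumptions, not exact settings.
import bpy

scene = bpy.context.scene
scene.render.engine = 'BLENDER_RENDER'          # Blender Internal

# Test subject: highly subdivided, smooth-shaded icosphere
bpy.ops.mesh.primitive_ico_sphere_add(subdivisions=6, size=1.0, location=(0, 0, 0))
subject = bpy.context.object
bpy.ops.object.shade_smooth()

# Grey, purely diffuse (Lambertian) material: no specular component at all
mat = bpy.data.materials.new("GreyLambert")
mat.diffuse_color = (0.5, 0.5, 0.5)
mat.diffuse_shader = 'LAMBERT'
mat.diffuse_intensity = 1.0
mat.specular_intensity = 0.0
subject.data.materials.append(mat)

# Orthographic camera looking straight down: no FOV distortion, no defocus
cam_data = bpy.data.cameras.new("OrthoCam")
cam_data.type = 'ORTHO'
cam_data.ortho_scale = 4.0
cam = bpy.data.objects.new("OrthoCam", cam_data)
cam.location = (0.0, 0.0, 10.0)                 # default orientation looks along -Z
scene.objects.link(cam)
scene.camera = cam
```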

I wrote a Python script that handles light placement, frame creation, and the export of a light position file for the RTIBuilder.
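
The script itself is not reproduced here, but its core logic can be sketched roughly as follows; the object names, output paths and the Track To constraint on the lamp are assumptions of the sketch, not necessarily the exact setup used:

```python
# Sketch (Blender 2.7x API): place the lamp at each upper-hemisphere vertex of a
# reference icosphere, render one frame per position and write a light position
# (.lp) file. Object names and paths are assumptions.
import os
import bpy

scene = bpy.context.scene
dome = bpy.data.objects["LightDome"]   # reference icosphere used as light dome (assumed name)
lamp = bpy.data.objects["Lamp"]        # lamp with a Track To constraint aimed at the scene origin (assumed setup)
out_dir = bpy.path.abspath("//rti_frames/")   # output folder next to the .blend file (assumed)

# Keep only the dome vertices on the upper hemisphere (z >= 0)
world_verts = [dome.matrix_world * v.co for v in dome.data.vertices]
positions = [p for p in world_verts if p.z >= 0.0]

lp_lines = []
for i, pos in enumerate(positions):
    lamp.location = pos                           # the constraint keeps it pointing at the center
    filename = "frame_%03d.png" % i
    scene.render.filepath = os.path.join(out_dir, filename)
    bpy.ops.render.render(write_still=True)       # one frame per light position
    d = pos.normalized()                          # unit light direction for the LP file
    lp_lines.append("%s %.6f %.6f %.6f" % (filename, d.x, d.y, d.z))

# .lp format: first line = number of images, then one "imagefile x y z" per line
with open(bpy.path.abspath("//ground_truth.lp"), "w") as f:
    f.write("%d\n" % len(lp_lines))
    f.write("\n".join(lp_lines) + "\n")
```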

Light position calculation accuracy

First I thought it best to check some basic aspects of the RTIBuilder's LP-file generation process:

  • How do the values in .LP relate to light positions in x, y, z?
  • Are the values really normalized to the -1 to +1 range?
  • How accurate are those calculations?
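
The first two questions can be checked mechanically by parsing the generated LP-file; the .lp format puts the image count on the first line, followed by one "imagefile x y z" entry per image. A small sketch (the file name is an assumption):

```python
# Quick sanity check of an .lp file: first line holds the image count, each
# following line is "imagefile x y z". The file name is an assumption.
def read_lp(path):
    with open(path) as f:
        count = int(f.readline().split()[0])
        entries = []
        for _ in range(count):
            name, x, y, z = f.readline().split()
            entries.append((name, float(x), float(y), float(z)))
    return entries

for name, x, y, z in read_lp("calculated.lp"):
    in_range = all(-1.0 <= v <= 1.0 for v in (x, y, z))
    length = (x * x + y * y + z * z) ** 0.5
    print("%s  in [-1,+1]: %s  |v| = %.6f" % (name, in_range, length))
```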

An initial test involved checking the general light position calculation accuracy of the HSH algorithm employed by the RTIBuilder. A highly subdivided and smoothed icosphere served as the black reflective sphere; its material and rendering properties were chosen to keep the specular highlight as small as possible, further aiding the precision of the HSH algorithm's highlight detection.

Five pictures were produced under extreme lighting angles: horizontal N, S, E, W and zenith; only the exact values -1.000000, 0.000000 and +1.000000 were used as light coordinates (see blended image left).

The corresponding real light positions were noted and the pictures were processed in the RTIBuilder.
The Sphere detection was surprisingly accurate in position and radius, off by only a tenth of a per cent of the image width.
The Highlight detection correctly identified all highlights on the sphere.
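
For context, the highlight-based calibration rests on simple mirror geometry: under an (approximately) orthographic view, the surface normal at the highlight follows from the highlight's offset from the sphere center, and the light direction is the view direction reflected about that normal. A sketch of that relation (illustrative only, not RTIBuilder's actual code):

```python
# Mirror geometry behind highlight-based light calibration (a sketch, not
# RTIBuilder's code): under an orthographic view V = (0, 0, 1), the normal at
# the highlight follows from its offset from the sphere center, and the light
# direction is V reflected about that normal: L = 2(N.V)N - V.
# (Image-space y flip ignored for brevity.)
import math

def light_from_highlight(hx, hy, cx, cy, radius):
    nx = (hx - cx) / radius
    ny = (hy - cy) / radius
    nz = math.sqrt(max(0.0, 1.0 - nx * nx - ny * ny))   # normal at the highlight
    n_dot_v = nz                                         # V = (0, 0, 1)
    return (2 * n_dot_v * nx, 2 * n_dot_v * ny, 2 * n_dot_v * nz - 1.0)

# Highlight at the sphere center -> light from the zenith
print(light_from_highlight(500, 500, 500, 500, 100))     # (0.0, 0.0, 1.0)
```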

A few interesting observations upon inspection of the calculated LP-file:

  • values on all axes nearing -1 or +1 showed a very low error (~0.03%)
  • values nearing 0 showed higher errors (~0.6% on the x- and y-axes, ~1.8% on the z-axis)
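
These deviations can be quantified by comparing the ground-truth .lp exported from Blender against the one calculated by the RTIBuilder. A small sketch of such a comparison (file names are assumptions, and both files are assumed to list the images in the same order):

```python
# Compare a ground-truth .lp (exported from Blender) with the .lp calculated by
# RTIBuilder from the highlights, reporting the maximum per-axis deviation.
# File names are assumptions; both files must list the images in the same order.
def read_lp(path):
    with open(path) as f:
        n = int(f.readline().split()[0])
        return [tuple(map(float, f.readline().split()[1:4])) for _ in range(n)]

truth = read_lp("ground_truth.lp")
calc = read_lp("calculated.lp")

max_err = [0.0, 0.0, 0.0]
for t, c in zip(truth, calc):
    for axis in range(3):
        max_err[axis] = max(max_err[axis], abs(t[axis] - c[axis]))

# Deviations expressed as a percentage of the full [-1, +1] value range
# (one possible convention for the error figures quoted above).
print("max error x/y/z: %.3f%% / %.3f%% / %.3f%%"
      % tuple(e / 2.0 * 100.0 for e in max_err))
```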

To visualize these deviations, a second light source was introduced into the virtual environment and positioned according to the calculated values; the green reflection represents our real light positions, the red reflection represents the calculated ones:

[Figure: real (green) vs. calculated (red) highlight positions on the reference sphere]

On visual inspection, these errors seem negligible and may be attributable solely to floating-point imprecision.

Further tests with 91 uniformly distributed light angles (see below left) painted a clearer picture of the absolute error in the light position calculation (see below right):

[Figures: distribution of the 91 light angles (left) and absolute error of the calculated light positions (right)]

Normal map accuracy

Since the test object's geometric properties are known and quantifiable, it was possible to extract an ideal normal map of the object's surface and compare it directly to the RTI and PTM normal maps extracted using the RTIViewer:

[Figure: ideal normal map compared with the normal maps derived from the RTI and PTM files]
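
Since the subject is a perfect sphere seen through an orthographic camera, the ideal reference can also be generated analytically rather than baked out of Blender. A sketch of that alternative (image size and sphere placement are assumptions, not the values used here):

```python
# Analytic reference normal map for a sphere under orthographic projection:
# at pixel (x, y) the normal is ((x-cx)/r, (cy-y)/r, sqrt(1 - ...)), mapped into
# RGB as (n + 1) / 2. Image size and sphere placement are assumptions.
import numpy as np

size, cx, cy, r = 512, 255.5, 255.5, 200.0
y, x = np.mgrid[0:size, 0:size].astype(float)
nx = (x - cx) / r
ny = (cy - y) / r                      # flip: image rows grow downwards
inside = nx * nx + ny * ny <= 1.0
nz = np.sqrt(np.clip(1.0 - nx * nx - ny * ny, 0.0, 1.0))

normal_map = np.zeros((size, size, 3))
normal_map[..., 0] = (nx + 1.0) / 2.0  # red   = x
normal_map[..., 1] = (ny + 1.0) / 2.0  # green = y
normal_map[..., 2] = (nz + 1.0) / 2.0  # blue  = z
normal_map[~inside] = 0.0              # background

# e.g. save with: import imageio; imageio.imwrite("ideal_normals.png", (normal_map * 255).astype(np.uint8))
```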

In general, the normal map extracted from the RTI file exhibits fewer errors than the one from the PTM file. Both show higher errors in the blue channel (the z-coordinate of the surface normal), which seems to correlate with the higher errors for the z-values in the previous test. Another possible explanation could be a systematic problem in the way these fringes are illuminated. These differences are best visualized by a mean of all captured images (see below).

[Figure: mean of all captured images]
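
The channel-wise differences can also be quantified numerically by differencing each extracted normal map against the ideal reference. A rough sketch (file names are assumptions, and the maps are assumed to share resolution and orientation):

```python
# Per-channel deviation of the extracted normal maps from the ideal reference.
# A sketch only: file names are assumptions, and all images must match in
# size and orientation.
import numpy as np
import imageio

ideal = imageio.imread("ideal_normals.png").astype(float) / 255.0
for name in ("rti_normals.png", "ptm_normals.png"):
    test = imageio.imread(name).astype(float) / 255.0
    diff = np.abs(test[..., :3] - ideal[..., :3])
    print(name, "mean abs error R/G/B:",
          ["%.4f" % diff[..., c].mean() for c in range(3)])
```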

Since the topmost surfaces on the sphere are illuminated by almost every lighting angle, a more precise normal calculation for those surfaces might be possible. On the other hand, the lowermost surfaces are often not illuminated at all, thus limiting the lighting angles usable for normal calculation.
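
This coverage argument is easy to quantify: for each surface orientation one can count how many of the 91 light directions actually illuminate it (n·l > 0). A small sketch, reading the light directions from the ground-truth .lp exported earlier (the file name is an assumption):

```python
# Count how many of the captured light directions illuminate a surface element
# whose normal is tilted by a given angle from the vertical (n . l > 0 = "lit").
# Sketch only; the .lp file name is an assumption.
import math

with open("ground_truth.lp") as f:
    next(f)                                 # skip the image count on the first line
    dirs = [tuple(map(float, line.split()[1:4])) for line in f if line.strip()]

for tilt_deg in (0, 30, 60, 90):
    t = math.radians(tilt_deg)
    normal = (math.sin(t), 0.0, math.cos(t))        # normal tilted away from the zenith
    lit = sum(1 for d in dirs if sum(a * b for a, b in zip(normal, d)) > 0.0)
    print("tilt %3d deg: lit by %d of %d light directions" % (tilt_deg, lit, len(dirs)))
```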