Fusing Scattered Images with Multiresolution Point-Based Model
Keung-Tat Lee
One way to create a realistic three-dimensional digital model is to capture it directly from a real object. Information, including geometry and images, is acquired from the real object, and the model is then constructed by fusing the scattered images onto the geometry. To reproduce the appearance of the original object, existing methods usually fit the data to a parameterized reflectance model such as the Lambertian model or the Torrance-Sparrow model. However, these models are usually restricted to specific surface properties, so a more robust approach is needed.
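As a concrete illustration of this restriction (standard background, not taken from the thesis), the Lambertian model assumes a constant BRDF,

\[ f_r(\omega_i, \omega_o) = \frac{\rho}{\pi}, \]

where \(\rho\) is the diffuse albedo. Under this assumption the reflected radiance is independent of the viewing direction, so specular or glossy materials cannot be represented.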
To address this problem, a new system is proposed in this thesis. Instead of fitting a parameterized reflectance model, we interpolate the scattered radiance samples (image pixels) over a spherical function for each surface point. As a result, we can approximate the reflected radiance seen from different directions. In addition, for efficient storage, the radiances are encoded into spherical harmonic coefficients. During rendering, the reflected radiance is decoded from the coefficients according to the viewing direction. Thus, the appearance of the object is preserved, and the method is independent of the surface properties of the object.
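A minimal sketch of this encode/decode step, assuming real spherical harmonics up to band two (nine coefficients per surface point), scalar radiance, and roughly uniform sample directions; the names Dir, shBasis, encode, and decode are hypothetical, and the thesis's actual basis order and fitting scheme may differ:

    #include <array>
    #include <cmath>
    #include <vector>

    const double kPi = 3.14159265358979323846;

    struct Dir { double x, y, z; };  // unit direction toward the viewer

    // Real spherical harmonic basis up to band l = 2 (9 terms).
    std::array<double, 9> shBasis(const Dir& d) {
        return {
            0.282095,
            0.488603 * d.y,
            0.488603 * d.z,
            0.488603 * d.x,
            1.092548 * d.x * d.y,
            1.092548 * d.y * d.z,
            0.315392 * (3.0 * d.z * d.z - 1.0),
            1.092548 * d.x * d.z,
            0.546274 * (d.x * d.x - d.y * d.y)
        };
    }

    // Encode: project the scattered radiance samples of one surface
    // point onto SH coefficients. The Monte Carlo weight 4*pi/N assumes
    // roughly uniform sample directions; truly scattered data would need
    // per-sample solid-angle weights or a least-squares fit instead.
    std::array<double, 9> encode(const std::vector<Dir>& dirs,
                                 const std::vector<double>& radiance) {
        std::array<double, 9> c{};
        for (size_t j = 0; j < dirs.size(); ++j) {
            auto y = shBasis(dirs[j]);
            for (int i = 0; i < 9; ++i)
                c[i] += radiance[j] * y[i];
        }
        const double w = 4.0 * kPi / dirs.size();
        for (double& ci : c) ci *= w;
        return c;
    }

    // Decode: reconstruct the radiance seen from viewing direction v.
    double decode(const std::array<double, 9>& c, const Dir& v) {
        auto y = shBasis(v);
        double L = 0.0;
        for (int i = 0; i < 9; ++i) L += c[i] * y[i];
        return L;
    }

At render time, each visible surface point calls decode with the current viewing direction, so only the nine coefficients per point need to be stored rather than all of the original samples.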
Furthermore, to display the model interactively, we represent the object as a multiresolution point-based model and render it with level-of-detail control. A multiresolution point-based viewer is implemented. Using the viewer, the model is displayed at low resolution when viewed from far away or while being moved, and at high resolution when idle. As a result, an interactive frame rate is achieved.
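A small sketch of such a level-of-detail policy, under the assumption that level 0 is the coarsest representation and that detail drops by one level each time the viewing distance doubles; the function name and the log2 heuristic are illustrative, not the thesis's exact policy:

    #include <algorithm>
    #include <cmath>

    // Choose a resolution level for the multiresolution point-based
    // model: coarse while the user is interacting or the object is far
    // away, full detail when the viewer is idle and close.
    int chooseLevel(bool interacting, double distance,
                    double fullDetailDistance, int maxLevel) {
        if (interacting)
            return 0;  // coarsest model keeps the frame rate interactive
        // Drop one level each time the distance doubles past the range
        // at which full detail is still visible on screen.
        int drop = static_cast<int>(
            std::log2(std::max(1.0, distance / fullDetailDistance)));
        return std::clamp(maxLevel - drop, 0, maxLevel);
    }

When the user stops interacting, the viewer can progressively refine toward maxLevel, so the coarse rendering is only ever shown transiently.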