Jan Vlietinck

30 December 2003

http://users.belgacom.net/xvox/

Figure 1: Depth adaptive tessellation with levels of detail LOD i, LOD i+1 and LOD i+2

Displacement mapping is a geometrical technique that allows intricate surface detail with moderate storage requirements. Unlike texture mapping, displacement mapping deforms the basic object geometry and creates a new dense triangle mesh for the object. With trilinear displacement mapping the mesh density can be made dependent on the object depth, and geomorphing allows the mesh density to be interpolated smoothly between different levels of detail.

Unfortunately, at the end of 2003, graphics processors do not support hardware displacement mapping. They lack a primitive processor to tessellate the dense triangle mesh, and vertex texturing to access the displacement map.

Still, it proves possible to implement a form of hardware displacement mapping with the available hardware. The tessellation can be emulated with vertex morphing, and the vertex texturing can be emulated with vertex streams. This article describes how to do this.

Displacement mapping adds geometrical detail to a surface by displacing it along the surface normal. Typically the surface is subdivided into patches, described by some parametric representation of control vertices and normals. Rendering a patch consists of tessellating it into triangles, with the triangle vertices and normals calculated from the parametric description. Prior to rendering, the vertices are displaced along the normal by a scalar sampled from the displacement map:

V’(u,v) = V(u,v) + d(u,v) * N(u,v)

Here we limit the patch to a planar square. In this simple case the displaced vertex can be expressed as:

V’(u,v) = ( u, v, d(u,v) ).
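For the planar square the displaced vertex therefore only needs a height lookup per (u, v). A minimal CPU-side sketch in C++; the DisplacementMap type and its nearest-neighbour sampling are illustrative assumptions, not from the article (the article's trilinear scheme would additionally filter between mipmap levels):

```cpp
#include <vector>

// Illustrative displacement map: a w x h grid of height values.
// Nearest-neighbour sampling is used here for brevity.
struct DisplacementMap {
    int w, h;
    std::vector<float> d;  // row-major heights
    float sample(float u, float v) const {  // u, v in [0, 1]
        int x = static_cast<int>(u * (w - 1) + 0.5f);
        int y = static_cast<int>(v * (h - 1) + 0.5f);
        return d[y * w + x];
    }
};

struct Vec3 { float x, y, z; };

// Displaced vertex of the planar square: V'(u,v) = ( u, v, d(u,v) ).
Vec3 displace(const DisplacementMap& map, float u, float v) {
    return { u, v, map.sample(u, v) };
}
```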

The square is tessellated into triangles. In order to make the tessellation depth adaptive, multiple tessellation resolutions of the square exist as is shown in figure 1. Each resolution corresponds to a different displacement mipmap. These mipmaps are given a number called the Level Of Detail (LOD). In this particular scheme, an increment of the LOD with one corresponds to a doubling of the number of triangles.

As a square moves closer to the eye, it is morphed from a mesh with n triangles into a mesh with 2*n triangles. A vertex shader performs this depth adaptive morphing. As the vertex shader cannot create extra vertices, the morphing is always done with 2*n triangles, starting with n true and n degenerate triangles. With this technique, the vertex shader appears to create new vertices. The morphing is achieved by linear interpolation between a vertex (ui, vi, di) at LOD i and a vertex (ui+1, vi+1, di+1) at LOD i+1.

The coordinates (u, v) are taken from a first vertex stream and the displacements d from a second vertex stream. This is done because the (u, v) coordinates can be reused for each displacement mapped square, reducing memory use. Because of the geomorphing, the vertex shader needs input from two levels of detail: (ui, vi) and (ui+1, vi+1) from the first stream, and di and di+1 from the second stream.

The displacements can be packed into one 32 bit word, storing each as a 16 bit short. Per vertex of a mipmap this requires 4 bytes of storage in a vertex buffer.
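The packing can be sketched on the CPU side as follows (C++; the function names and the choice of which displacement occupies the low half-word are illustrative assumptions, as the article does not specify a layout):

```cpp
#include <cstdint>

// Pack the two 16 bit displacements dl (at RLOD l) and dl1 (at RLOD l+1)
// into one 32 bit word: 4 bytes per vertex in the second vertex stream.
uint32_t pack_displacements(uint16_t dl, uint16_t dl1) {
    return static_cast<uint32_t>(dl) | (static_cast<uint32_t>(dl1) << 16);
}

uint16_t unpack_dl(uint32_t packed)  { return static_cast<uint16_t>(packed & 0xFFFFu); }
uint16_t unpack_dl1(uint32_t packed) { return static_cast<uint16_t>(packed >> 16); }
```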

With the depth adaptive tessellation we want to achieve that displaced squares are rendered with triangles whose size on screen is independent of the depth of the square. Thus if a triangle at LOD i is split into two triangles at LOD i+1, the area of these three triangles should be the same on screen. The area of a triangle on screen is proportional to the inverse of its squared depth in eye space. So what is the relation between the depth zi at LOD i and the depth zi+1 at LOD i+1, given that the triangle at LOD i+1 has half the area in object space? The answer comes from the equation:

1 / zi^2 = 1 / (2 * zi+1^2), or zi+1 = zi / sqrt(2)

For numerical convenience we introduce the reversed LOD, RLOD. This number means the same as LOD but has a reversed order, RLOD 0 being the most detailed level. In eye space, the depths of RLOD i = 0, 1, 2, … can thus be found at z values of

z0, sqrt(2) * z0, 2 * z0, 2 * sqrt(2) * z0, 4 * z0, …

or

zi = 2^(i/2) * z0, i = 0, 1, 2, …
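Numerically, the RLOD switch depths can be tabulated with a one-liner (a sketch; z0 is the depth of the finest level):

```cpp
#include <cmath>

// Eye space depth of reversed level of detail i: z(i) = 2^(i/2) * z0,
// so each RLOD step moves the switch depth out by a factor sqrt(2).
double rlod_depth(int i, double z0) {
    return std::pow(2.0, i / 2.0) * z0;
}
```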

Given z of a vertex in eye space, the corresponding RLOD i can be found as

i = 2 * log2( z / z0 )

or

i = log2( z^2 / z0^2 )

In fact this gives us a floating point number for the level of detail per vertex, leading to a continuous level of detail. The integer part can be used to select the mipmaps at RLOD i and RLOD i+1, and the fractional part can be used to morph between these two mipmaps.
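A CPU-side sketch of this per-vertex computation (C++; the struct and function names are illustrative, and the shader later in the article performs the same split implicitly with a saturate instead of an explicit floor):

```cpp
#include <cmath>

struct Rlod {
    int   level;  // integer part: selects the mipmaps at RLOD i and i+1
    float morph;  // fractional part: morph factor between the two mipmaps
};

// Continuous reversed level of detail from the eye space depth z:
// i = 2 * log2( z / z0 ).
Rlod rlod_from_depth(float z, float z0) {
    float i = 2.0f * std::log2(z / z0);
    float level = std::floor(i);
    return { static_cast<int>(level), i - level };
}
```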

Figure 2: Morphing patterns a) and b) for mipmaps at RLOD l and l+1, with l even

In the vertex stream we also store an RLOD, as indicated by figure 2. Prior to rendering, the square patch is subdivided into subsquares. For each subsquare, the subdivision is chosen so that the RLOD of the vertices is guaranteed to lie in the range [l, l+2] or in the range [l+1, l+3], with l even. This corresponds to the two morphing patterns a) and b).

Vertices with a stored RLOD value of l are morphed by interpolating between the stored (ul, vl, dl) and (ul+1, vl+1, dl+1) with the morph factor f = saturate( i - l ), where i is the RLOD calculated from the vertex depth z.
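The morph factor and the interpolation can be sketched as follows (C++; a clamp stands in for the shader's saturate, and the function names are illustrative):

```cpp
#include <algorithm>
#include <cmath>

// f = saturate( i - l ): 0 while the continuous RLOD i is still below the
// stored l (the vertex uses level l fully), 1 once i reaches l+1 (the
// vertex has fully morphed to the coarser level l+1).
float morph_factor(float z, float z0, float l) {
    float i = 2.0f * std::log2(z / z0);            // continuous RLOD from depth
    return std::min(std::max(i - l, 0.0f), 1.0f);  // saturate
}

// Geomorph of one vertex component between its stored values at RLOD l and l+1.
float geomorph(float a_l, float a_l1, float f) {
    return a_l + (a_l1 - a_l) * f;
}
```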

All of the above information results in the following vertex shader written in HLSL:

struct VS_INPUT
{
    float4 d1_d2     : POSITION;  // displacements at RLOD l and l+1
    float4 u1v1_u2v2 : NORMAL;    // (u, v) coordinates at RLOD l and l+1
    float  lod       : COLOR;     // stored RLOD l of the vertex
};

struct VS_OUTPUT
{
    float4 position : POSITION;
    float2 tex0     : TEXCOORD;
};

vector depth : register(c0);  // encodes the eye space depth computation
vector m1    : register(c1);  // columns of the transformation matrix;
vector m2    : register(c2);  // m1 corresponds to the homogeneous 1
vector m3    : register(c3);
vector m4    : register(c4);
vector c5    : register(c5);  // c5.z scales the (u, v) coordinates

VS_OUTPUT main( const VS_INPUT input )
{
    VS_OUTPUT output;

    // vertex at RLOD l
    vector v1;
    v1.xy = input.u1v1_u2v2.xy * c5.z;
    v1.zw = input.d1_d2.xw;

    // difference towards the vertex at RLOD l+1
    vector v2;
    v2.xy = input.u1v1_u2v2.zw * c5.z - v1.xy;
    v2.z  = input.d1_d2.y - v1.z;

    // continuous RLOD i = 2 * log2( z / z0 ), morph factor f = saturate( i - l )
    float t = saturate( 2 * log2( dot( v1, depth ) ) - input.lod );

    // geomorph between the two levels of detail
    v1.xyz = v2.xyz * t + v1.xyz;

    // transform to clip space
    vector p = m1;
    p = v1.x * m2 + p;
    p = v1.y * m3 + p;
    p = v1.z * m4 + p;

    output.position = p;
    output.tex0 = v1.xy;

    return output;
}