Detail Textures
- the basic idea is that, since geospecific
textures are rarely high-resolution enough, you can render additional high-frequency
detail with geotypical textures, to
avoid having a big blurry surface
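- a minimal sketch of that idea in C++ (SampleBase, SampleDetail, and the repeat factor are illustrative stand-ins, not any particular engine's API): the small geotypical detail texture is tiled far more densely than the geospecific base, then used to modulate it

    #include <algorithm>
    #include <cmath>

    struct RGB { float r, g, b; };

    // Stand-ins for real texture lookups; a real engine would sample the
    // low-resolution geospecific base map and a small greyscale detail
    // image, with u,v wrapping as with GL_REPEAT.
    RGB   SampleBase(float /*u*/, float /*v*/) { return { 0.3f, 0.5f, 0.2f }; }
    float SampleDetail(float u, float v)
    { return 0.5f + 0.25f * (std::sin(40.0f * u) + std::sin(40.0f * v)); }

    // Combine the base color with high-frequency detail.  kRepeat controls
    // how densely the detail texture tiles across the base; scaling by 2*d
    // (with d averaging ~0.5) keeps the overall brightness unchanged.
    RGB ShadeTexel(float u, float v, float kRepeat = 64.0f)
    {
        RGB   base = SampleBase(u, v);
        float d    = SampleDetail(u * kRepeat, v * kRepeat);
        float m    = 2.0f * d;
        return { std::min(base.r * m, 1.0f),
                 std::min(base.g * m, 1.0f),
                 std::min(base.b * m, 1.0f) };
    }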
Distance-based blending
- Jason Booth of Turbine Games
said:
- "Why not use a single texture, essentially noise, over your terrain
and then color the vertices of your polygons. Then, when close, multipass
that texture with your actual texture (using the alpha values at the vertices
to blend in the texture smoothly). Over a set number of frames blend the
two from 100% vertex/noise based to 100% texture. Then use another pass
to blend in a smaller, detail noise texture to add noise at the pixel level
and avoid the big blur syndrome."
- John Ratcliff of Verant put it
this way:
- "Synthesize a high resolution detail map for the polygons immediately
around the camera position, to show a great deal of perceived local detail.
For everything else, mountains and such off in the distance, the base texture
map works fine."
- the vtlib implementation re-uses
the distance from camera to each vertex, already computed by the CLOD algorithm,
to do an alpha-fade of the detail texture
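- a minimal sketch of such a distance-based fade (the constants and the function name are illustrative, not the actual vtlib code)

    #include <algorithm>

    // Fade the detail texture out with distance so it never pops: fully
    // visible inside fade_start, gone beyond fade_end.  The per-vertex
    // distance to the camera is whatever the CLOD algorithm has already
    // computed for that vertex.
    float DetailAlpha(float dist_to_camera,
                      float fade_start = 100.0f,   // meters, illustrative
                      float fade_end   = 300.0f)
    {
        float t = (dist_to_camera - fade_start) / (fade_end - fade_start);
        return 1.0f - std::clamp(t, 0.0f, 1.0f);  // 1 = full detail, 0 = none
    }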
Masking
- you need some way to specify what areas are covered with which ground type(s)
- this is commonly done with a bitmap mask, giving an 'amount' 0-255 for each
type, for the entire terrain surface (a lookup sketch appears after this list)
- another way would be to use a polygonal representation for the coverage,
which is a better fit with how data is actually obtained from GIS layers, and
allows very sharp transitions without the cost of a large mask bitmap
- however, so far I have only heard of people (game developers) implementing
the bitmap mask approach
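- a sketch of the bitmap-mask lookup mentioned above, assuming one 8-bit mask per ground type stretched over the whole terrain extent (structure and names are illustrative)

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // One 8-bit coverage mask per ground type, covering the whole terrain.
    struct GroundMask {
        int width, height;                 // mask resolution, far coarser than the terrain
        std::vector<std::uint8_t> amount;  // row-major; 0 = absent, 255 = fully covered
    };

    // How strongly one ground type covers world position (x, z), assuming
    // the terrain spans [0, terrain_size) in both axes.
    std::uint8_t CoverageAt(const GroundMask& m, float x, float z, float terrain_size)
    {
        int ix = std::clamp(int(x / terrain_size * m.width),  0, m.width  - 1);
        int iz = std::clamp(int(z / terrain_size * m.height), 0, m.height - 1);
        return m.amount[iz * m.width + ix];
    }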
Blending: How to Render
- Once you've got a geotypical texture and a mask for each ground type, the decision
is whether to do the blending on the CPU or on the graphics card
- With the CPU, you can blend all of the ground textures together into a
single composite texture, which simplifies the rendering down to a single texture
pass (a compositing sketch appears after this list). This rapidly exceeds texture
memory unless texture LOD is well-managed, so that composite textures are not
computed until necessary; that probably works best for applications with a near
field of view and slow-moving viewpoints. Distant terrain becomes complicated to
handle, since its textures must be synthesized from lower-resolution downsamples
of the ground type masks.
- The alternative is to use the graphics card to do the blending. The
entire terrain mesh is drawn repeatedly each frame, once for each potential ground
type, using that type's mask (see the multi-pass sketch after this list). Ideally,
these passes are combined as much as possible using the card's multi-texture
capabilities.
- The performance tradeoffs are complex, involving CPU consumption vs. bus
bandwidth vs. reliance on multi-texture hardware. These have been discussed
in great depth on the 3DGameDev Algorithms mailing list.
- some game developers refer to this subject as "Splatting"
- Multi-pass issues
- you should consider the Z-buffer problem of drawing the surfaces over
each other
- Seumas says he does it this way:
- he alters the front & back clipping planes slightly
before drawing the second pass, which solves the overdraw-Z buffer problem
- his detail texture is dark and monochromatic, using alpha to vary
the amount of darkening
- with the vtlib implementation (see below) I haven't seen any Z-buffer
problems, so there's been no need to play with clipping planes or other
tricks
- there is API support for multitexturing in Direct3D, in OpenGL 1.1
extensions, and in OpenGL 1.2, but there are still some older cards around
that won't have it
- if blending with a low-resolution texture which already contains color information,
should the detail texture be monochromatic or not?
- e.g. for grass, does it make sense to use a detail texture containing an
RGB image of grass, alpha-blended with the base color texture, or just supply
a grey detail texture and rely on the base texture for the 'green' color?
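- a sketch of the CPU-side compositing described above: every ground type is blended into a single composite texel, weighted by its mask, and doing this for every texel of a tile yields the one texture that is then drawn in a single pass (the types and names are illustrative)

    #include <vector>

    struct RGB { float r, g, b; };

    // One entry per ground type; the two members stand in for sampling the
    // type's tiled geotypical texture and reading its coverage mask at (u, v).
    struct GroundType {
        RGB   color;      // stand-in for the tiled texture sample
        float coverage;   // stand-in for the mask value, 0 = absent, 1 = full

        RGB   Texture(float, float) const { return color; }
        float Mask(float, float) const    { return coverage; }
    };

    // Blend all ground types into one composite texel, normalizing the mask
    // weights so they sum to 1.
    RGB CompositeTexel(const std::vector<GroundType>& types, float u, float v)
    {
        float total = 0.0f;
        for (const GroundType& t : types)
            total += t.Mask(u, v);

        RGB out = { 0.0f, 0.0f, 0.0f };
        if (total <= 0.0f)
            return out;
        for (const GroundType& t : types) {
            float w = t.Mask(u, v) / total;
            RGB   c = t.Texture(u, v);
            out.r += w * c.r;  out.g += w * c.g;  out.b += w * c.b;
        }
        return out;
    }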
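- a sketch of the multi-pass approach with fixed-function OpenGL, assuming each ground type is supplied as an RGBA texture whose alpha channel already holds its mask (in practice the mask usually sits in a second texture unit with its own texture coordinates, so the color part can tile independently); DrawTerrainMesh and the texture handles are placeholders, and glDepthFunc(GL_LEQUAL) with depth writes disabled is used here instead of Seumas' clipping-plane adjustment to avoid Z-fighting between the coplanar passes

    #include <GL/gl.h>

    // Placeholders supplied by the application: the full terrain geometry and
    // one RGBA texture per ground type (color in RGB, coverage mask in alpha).
    extern void   DrawTerrainMesh();
    extern GLuint g_type_textures[];
    extern int    g_num_types;

    // Multi-pass "splatting": the whole mesh is drawn once per ground type,
    // and each type's mask controls how much of that pass shows through.
    void DrawSplattedTerrain()
    {
        glEnable(GL_TEXTURE_2D);

        // Pass 1: base ground type, opaque, fills the depth buffer.
        glDisable(GL_BLEND);
        glDepthFunc(GL_LESS);
        glBindTexture(GL_TEXTURE_2D, g_type_textures[0]);
        DrawTerrainMesh();

        // Passes 2..N: blend each further type over the result.  GL_LEQUAL
        // lets the coplanar passes pass the depth test instead of Z-fighting,
        // and depth writes are off since the first pass filled the buffer.
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glDepthFunc(GL_LEQUAL);
        glDepthMask(GL_FALSE);
        for (int i = 1; i < g_num_types; ++i) {
            glBindTexture(GL_TEXTURE_2D, g_type_textures[i]);
            DrawTerrainMesh();
        }

        // Restore default state.
        glDepthMask(GL_TRUE);
        glDepthFunc(GL_LESS);
        glDisable(GL_BLEND);
    }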
See also
vtlib implementation of detail texture