
Shadow Mapping


Dr. Dobb's Journal, July 2002

Sergei is the author of 3D Graphics Programming: Games and Beyond (Sams, 2000). He can be contacted at [email protected].


The term "shadow map" refers to a multipass technique for creating dynamic shadows. Implementing this algorithm normally requires specialized hardware support. However, the shadow map algorithm I present here takes advantage of commonly existing support for attenuated point lights and perspective texture mapping to implement a variation of shadow maps. It is applicable to a wide variety of accelerated hardware and can even be implemented using OpenGL (without any specialized extensions). In fact, I've built a sample OpenGL-based implementation of this algorithm in my freely available Small Dynamic Shadows Library (SDSL; available electronically, see "Resource Center," page 5), which also contains implementations for four other dynamic shadowing algorithms.

The Traditional Approach

The classic shadow map algorithms assume Z-buffer support. You first compute the image of the scene from the position of the light source. The Z-buffer of this image is saved as a texture and mapped onto the scene as viewed by the camera. The Z values stored in the texture correspond to the points visible from the light source, which are exactly the points it illuminates. If, for any point, you can compute its distance to the light source and compare it with the corresponding texel value from the texture (the one mapped onto that point), you can determine whether the point is illuminated by the light source. If the distances are the same, it is illuminated; if they differ, something else is closer to the light source and the point is shadowed. In Figure 1, for instance, point B is shadowed since its distance to the light source is larger than the distance recorded in the texture (namely, the distance to point C). This is not the case for point A, which is illuminated.
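In code, the per-point test reduces to a single comparison. The following C sketch is purely illustrative (the Vec3 type, the depth_at lookup, and the bias term are my assumptions, not the article's listing; a small bias is commonly added to mask precision errors in the comparison):

    #include <math.h>

    typedef struct { float x, y, z; } Vec3;

    static float dist(Vec3 a, Vec3 b)
    {
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return sqrtf(dx*dx + dy*dy + dz*dz);
    }

    /* depth_at is assumed to return the distance recorded in the
       light-view Z-texture at the texel onto which p projects. */
    static int is_lit(Vec3 p, Vec3 light, float (*depth_at)(Vec3), float bias)
    {
        /* equal distances: p itself was visible from the light */
        return dist(p, light) <= depth_at(p) + bias;
    }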

An Alternative Approach

Although elegant, this algorithm does require some special hardware support, specifically to perform the distance comparisons. However, you can approximate this algorithm using commonly available operations.

Illumination calculations for attenuated point light sources let you color objects based on their distance to the light source. Blending several such images together then lets you mark the pixels that should be shadowed.

Consider this algorithm: You first compute the image of the scene from the position of the light source. The light source shines with only ambient white light that is linearly attenuated with distance. Most renderers expose three attenuation coefficients, and the intensity of the attenuated light is computed as in Example 1.
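Example 1 is not reproduced here; presumably it is the standard three-coefficient attenuation equation used by OpenGL and most fixed-function renderers:

    attenuation = 1 / (kc + kl*d + kq*d^2)

where d is the distance from the light source to the point being lit, and kc, kl, and kq are the constant, linear, and quadratic coefficients.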

To attenuate linearly with distance, you can set the constant and quadratic coefficients to zero, whereas the linear attenuation coefficient has to be set to a value that depends on how large the scene is; that is, on the distance at which this light should stop producing any effect.
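In OpenGL, this setup is a handful of glLight calls. A minimal sketch, assuming a single light and a scene_radius parameter of my own choosing:

    #include <GL/gl.h>

    /* Ambient white light whose intensity falls off with distance only:
       zero constant and quadratic terms, and a linear term sized so the
       light fades out across the whole scene. */
    void setup_attenuated_light(const GLfloat pos[4], float scene_radius)
    {
        const GLfloat white[] = { 1.0f, 1.0f, 1.0f, 1.0f };

        glLightfv(GL_LIGHT0, GL_POSITION, pos);  /* w = 1: attenuation only
                                                    applies to positional
                                                    lights */
        glLightfv(GL_LIGHT0, GL_AMBIENT, white);
        glLightf(GL_LIGHT0, GL_CONSTANT_ATTENUATION,  0.0f);
        glLightf(GL_LIGHT0, GL_LINEAR_ATTENUATION,    1.0f / scene_radius);
        glLightf(GL_LIGHT0, GL_QUADRATIC_ATTENUATION, 0.0f);
        glEnable(GL_LIGHTING);
        glEnable(GL_LIGHT0);
    }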

Thus, with such illumination, pixels that are closer are whiter, whereas pixels that are further away from the light source are darker (see Figure 2).

As Figure 3 shows, it is not difficult to find the transformation matrix that produces Figure 2. You also want to fit all shadow casters into the image in such a way that they occupy the maximum number of pixels. Thus, the k axis of the light source's viewing coordinate system aims at the middle of the set of shadow casters. You can choose the i axis to lie in the XZ plane of the world-coordinate system (arbitrarily, since orientation in the image doesn't matter) and find the last axis as mutually perpendicular to the first two; see Example 2. For any object, the transformation includes the rotation into the aforementioned system, the translation away from the light source, and the perspective transformation, which depends on the size of the scene and the size of the frame buffer.
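Example 2 is not reproduced here, but the construction it describes can be sketched directly from the text (the Vec3 helpers are mine, not the article's listing):

    #include <math.h>

    typedef struct { float x, y, z; } Vec3;

    static Vec3 norm3(Vec3 v)
    {
        float len = sqrtf(v.x*v.x + v.y*v.y + v.z*v.z);
        Vec3 r = { v.x/len, v.y/len, v.z/len };
        return r;
    }

    static Vec3 cross3(Vec3 a, Vec3 b)
    {
        Vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
        return r;
    }

    /* k aims from the light at the center of the shadow casters; i lies
       in the world XZ plane (perpendicular to k); j completes the
       right-handed triple. Degenerate if k points straight up or down,
       in which case any horizontal i will do. */
    void light_basis(Vec3 light, Vec3 center, Vec3 *i, Vec3 *j, Vec3 *k)
    {
        Vec3 d = { center.x - light.x, center.y - light.y, center.z - light.z };
        *k = norm3(d);
        Vec3 xz = { k->z, 0.0f, -k->x };
        *i = norm3(xz);
        *j = cross3(*k, *i);
    }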

The transformation can be described as the concatenation of the three matrices in Example 3. You may also have to apply a viewport transform as the final step to move the resulting image to the middle of the frame buffer.
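With GL and GLU, one plausible realization of this concatenation piggybacks on gluLookAt (the rotation into the light's basis plus the translation away from the light) and gluPerspective (the perspective transform). The parameter names here are my assumptions:

    #include <GL/gl.h>
    #include <GL/glu.h>

    void setup_light_view(const double light[3], const double center[3],
                          double fov_deg, double z_near, double z_far,
                          int tex_size)
    {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluPerspective(fov_deg, 1.0, z_near, z_far);  /* sized to the scene  */

        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        gluLookAt(light[0], light[1], light[2],       /* eye: the light      */
                  center[0], center[1], center[2],    /* aim at the casters  */
                  0.0, 1.0, 0.0);                     /* any up vector works */

        glViewport(0, 0, tex_size, tex_size);         /* viewport transform  */
    }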

Next, you compute two images from the position of the camera. In the first, you project the texture computed during the previous step from the light source onto all objects in the scene. This is done with the help of procedural texture-coordinate generation. Most modern graphics systems provide this mechanism, whereby a texture-generation matrix is applied to the spatial coordinates of an object and the result is used as texture coordinates. Essentially, you specify as the texture-generation matrix a matrix similar to the one used to obtain the image from the position of the light source. The difference may have to do with the different viewport transforms used; after all, texture space is commonly limited in range to [0,1] or [-1,1]. Thus, this first image maps the texture onto all objects in the scene as projected from the light source, with no illumination. The second image illuminates the scene with the same attenuated light source as in the first step. Figure 4 shows the two images, which contain the approximation of the distance information to the light source: in the left image, the color of the point closest to the light source also colors all pixels further along the ray; in the right image, every pixel is colored depending on its own distance only.
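In OpenGL, this projection can be realized with eye-linear texture-coordinate generation plus the texture matrix stack. A sketch, assuming light_projection and light_view hold the matrices saved from the light-view pass, and assuming it is called while only the camera's view transform is on the modelview stack (so the generated coordinates are effectively world coordinates):

    #include <GL/gl.h>

    void setup_projective_texgen(const GLfloat light_projection[16],
                                 const GLfloat light_view[16])
    {
        static const GLfloat sp[] = { 1, 0, 0, 0 }, tp[] = { 0, 1, 0, 0 },
                             rp[] = { 0, 0, 1, 0 }, qp[] = { 0, 0, 0, 1 };

        /* Identity eye planes: texgen reproduces the vertex position,
           which the texture matrix then carries into the light's clip
           space. */
        glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
        glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
        glTexGeni(GL_R, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
        glTexGeni(GL_Q, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
        glTexGenfv(GL_S, GL_EYE_PLANE, sp);
        glTexGenfv(GL_T, GL_EYE_PLANE, tp);
        glTexGenfv(GL_R, GL_EYE_PLANE, rp);
        glTexGenfv(GL_Q, GL_EYE_PLANE, qp);
        glEnable(GL_TEXTURE_GEN_S);
        glEnable(GL_TEXTURE_GEN_T);
        glEnable(GL_TEXTURE_GEN_R);
        glEnable(GL_TEXTURE_GEN_Q);

        /* The 0.5 scale/bias is the viewport-transform difference noted
           above: it remaps the light's clip range [-1,1] into [0,1]. */
        glMatrixMode(GL_TEXTURE);
        glLoadIdentity();
        glTranslatef(0.5f, 0.5f, 0.0f);
        glScalef(0.5f, 0.5f, 1.0f);
        glMultMatrixf(light_projection);
        glMultMatrixf(light_view);
        glMatrixMode(GL_MODELVIEW);
    }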

If you subtract the value of the corresponding pixel in the right image from the value of every pixel in the left image, you obtain the shadow mask. If the result of the subtraction is zero, the point was illuminated in the same way in both images; its view of the light source was unobstructed, and thus it is illuminated. If, on the other hand, the result is positive, some other surface was obstructing the light source and the pixel is shadowed. By scaling the values up and clamping from above, you get a binary shadow mask where all shadowed pixels are white and illuminated pixels are black (see Figure 5). Negative values indicate the region that was not mapped with the computed texture in the left image of Figure 4. You can consider these points to be illuminated, and thus adjust the criterion so that points whose result of subtraction is less than or equal to zero are illuminated.
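The scale-and-clamp step maps naturally onto OpenGL's accumulation buffer, whose GL_RETURN operation clamps to [0,1] on the way back to the color buffer. A sketch, with the two render callbacks and the scale factor k assumed:

    #include <GL/gl.h>

    /* accum = k*left - k*right; GL_RETURN clamps, so negative results
       (unmapped regions) come out black (illuminated) and positive
       differences saturate toward white (shadowed). */
    void build_shadow_mask(void (*render_textured_pass)(void),
                           void (*render_attenuated_pass)(void),
                           GLfloat k)
    {
        render_textured_pass();        /* left image of Figure 4          */
        glAccum(GL_LOAD, k);
        render_attenuated_pass();      /* right image of Figure 4         */
        glAccum(GL_ACCUM, -k);
        glAccum(GL_RETURN, 1.0f);      /* frame buffer now holds the mask */
    }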

Finally, you can draw the scene with normal illumination and subtract from this image the shadow mask. That is, from every pixel of the image you subtract the value of the corresponding pixel of the shadow mask. Figure 6 illustrates this process. With OpenGL, the subtraction of images can be done with the help of the accumulation buffer.
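The final composition admits the same accumulation-buffer treatment (again, the render callbacks are assumed):

    #include <GL/gl.h>

    /* Normally lit scene minus the shadow mask, clamped at zero. */
    void compose_final_image(void (*render_lit_scene)(void),
                             void (*render_shadow_mask)(void))
    {
        render_lit_scene();
        glAccum(GL_LOAD, 1.0f);
        render_shadow_mask();          /* e.g., redraw the saved mask */
        glAccum(GL_ACCUM, -1.0f);
        glAccum(GL_RETURN, 1.0f);
    }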

Conclusion

There are several immediately noticeable drawbacks to this algorithm. First, the range of color values is very narrow; usually a color value is only 8 bits per component. Although this is problematic with OpenGL, some other, more flexible renderers let you manipulate the illumination calculations more intimately and use different attenuation coefficients per color component, thus using more bits to approximate the distance. Still, it is surprising how much you can do with just 8 bits.

A second problem is the inevitable presence of artifacts caused by the different interpolation of texture coordinates and color values. This can be alleviated somewhat by changing the criterion for which points are illuminated. Originally, this criterion was to consider points whose result of subtraction is less than or equal to zero as illuminated. You can also include points whose results are smaller than some small positive number, since such points often indicate an artifact. Additional artifacts may appear on open models illuminated from the inside: a pixel illuminated through the back face of a polygon is not distinguished from a pixel illuminated from the front. Thus, if a light shines into an opening in an object, you may observe a light spot on the side of the object opposite the opening. Better modeling helps with this problem. Finally, the most significant drawback is the necessity of four rendering passes. Again, if the system is somewhat more flexible, it is possible to do passes two and three simultaneously. Yet the cost remains high.

On the upside, the algorithm is very general, inherently produces self-shadowing, and doesn't limit the shape of models in the scene. Nor does it require building additional primitives on the fly (as is the case with a shadow-volumes algorithm, for example). Although it requires multiple passes, most of them are fairly cheap, and it is not difficult to imagine this algorithm being used in real-time applications (video games, for instance) with perhaps only slightly more powerful graphics hardware than we have today.

DDJ

