Photorealism in Computer Graphics


November 1988

by Steve Upstill

Steve Upstill is a graphics researcher for Pixar and can be reached at 3240 Kerner Blvd., San Rafael, CA 94901.


With the phenomenal increase in computer power of recent years, computer-graphic imagery (CGI) has emerged as one of the easiest ways to consume excess cycles. For a long time the novelty of creating pictures on a computer was enough to justify the effort. Like Johnson's dog, who walked on its hind legs, "It is not done well; but one is surprised to find it done at all."

But expectations for the quality of CGI have increased. With viewers jaded by a barrage of imagery in television commercials and network logos, that computer look grows old. The ambitions of computer-graphic researchers, as published at SIGGRAPH and in other forums, have risen proportionately; the current goal is to produce synthetic images comparable in realism to photographs.

Figure 1, which depicts a transparent light bulb, shows an example of the kind of complexity required by photorealistic computer graphics. The light bulb includes a variety of light sources and surface types, texture-mapped imagery, transparency, reflections, a glow, and so on. It also includes small details, like the roughness of the surface of the glass, which escape conscious notice but nonetheless lend important depth to its realism.

Figure 1: A transparent light bulb illustrates the complexity required by photorealistic computer graphics

A quick glance around any natural environment reveals a mind-boggling variety of textures, reflections, diffusions, and optical "special effects": the kinds of phenomena nature throws at us willy-nilly. Ask a random graphics hacker to produce such a scene as a digital image, and the reaction would likely be a dropped jaw, a derisive snort, or an impassioned rationalization. The least likely response would be "Sure, right away."

The Traditional Approach

There is a fundamental flaw in the way programmers have approached the problem of realism in computer graphics; the flaw is typical of the way science proceeds. Starting with the crudest imaginable imagery (such as wire-frame drawings and pen plots), rendering methods have improved over the years to handle flat polygonal surfaces, and shading techniques have been developed for convincing metallic surfaces and texture mapping, each of which does indeed make pictures look better. As they have appeared, these methods have been incorporated into rendering and shading programs, which have tended to grow in size and complexity.

As software engineering, this approach is fundamentally flawed, but computer graphics has nonetheless embraced it implicitly. A typical rendering system simulates optical reality using a monolithic program controlled by a selection of options, parameters, and maps of various kinds. New techniques are introduced by hacking the initial implementation, until the program gets so complex that no one can deal with it productively. It is then reimplemented from scratch, and the whole cycle begins anew. Even if this cycle could be sustained indefinitely, it would still fall short of its goal, because physical reality is much too complex ever to be encompassed by a monolithic rendering system.

A Different Approach

The problem with traditional approaches to realism in computer graphics is the assumption that all the functionality that will ever be needed can be packed into a single program and accessed by manipulating options, flags, and parameters. But physical reality in all its richness cannot be reproduced by twiddling knobs.

The idea of the RenderMan Shading Language (see the sidebar "The RenderMan Specification"), on the other hand, is that a rendering system needs to be extensible: it should provide the programmer with a solid basic system to which new tools can be added as the need arises. Not only can the programmer customize the system to meet unanticipated needs, but libraries of tools can be developed by third-party developers independent of both the end user and the basic system.

In short, the shading language is a door to extensibility in rendering. Using the shading language, a programmer writes a small procedure that performs one of several functions in the rendering process. For example, a light source is defined as a shader whose function is to calculate the light striking a surface with a particular orientation at a particular point. A surface shader has the task, given the impinging light, of returning the light reflected from the surface toward the eye.

Once defined, a shader is embedded in a pre-existing rendering system that takes care of performing geometric transformations, calculating hidden surfaces, providing shaders with their parameters and invoking them, and more. A shader has a very tightly defined task, and so it is typically only a few lines of code. It is very much the tail wagging the dog. The key to the usefulness of the shading language is finding the right set of tasks for shaders to perform, making each task as simple and orthogonal as possible.
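To make that scale concrete, here is a minimal sketch of a complete surface shader in the spirit of the examples later in this article (the name simplematte and its parameters are illustrative, not part of the standard):

surface
simplematte( float Ka = 1, Kd = .5 )
{
     point Nf;

     /* Orient the surface normal toward the viewer, then sum the
        ambient and diffuse reflections from all light sources */
     Nf = faceforward( normalize(N), I );
     Ci = Cs * (Ka*ambient() + Kd*diffuse(Nf));
}

Everything else -- finding the surface point, determining visibility, delivering the result to the image -- is the renderer's business.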

Design

Fundamental to the design of the shading language is the distinction between shape and shading. Shape concerns the geometry of a scene to be rendered: the surfaces used to describe the objects in the scene, their placement in the world, the positioning of a camera and light sources to illuminate them, and so on. The geometry of the scene is used to project the image data onto an image plane and determine which surfaces are visible from a given viewpoint.

Shading, on the other hand, concerns the appearance of the objects in the scene, how they look. What color is that wall? Is that telephone shiny or matte? Is that real wood grain or synthetic? Is the material on that chair cloth or leather or vinyl?

Dealing with shape in the context of rendering is a fairly well understood process. Projective geometry and hidden-surface removal can be sealed into a black box with a minimum of interference from users. It makes sense to do so because hidden-surface removal is both troublesome to implement and uninteresting to users. Shading, by contrast, demands a maximum of accessibility, because it exerts a powerful influence over the quality of imagery, it can be made fairly straightforward, and it is interesting to users.

Most of the visual interest in the world, the detail and small variations that give an image a sense of reality, can be expressed in shading. The crude shape of a chair does not make it seem real; the play of light on its fabric and the highlights on the wood finish, among other factors, do.

Modularity

The second important facet of the shading language is that it breaks the shading process into functionally independent units, simplifying each one and allowing shaders of different kinds to be mixed and matched without mutual interference. Each of these units is implemented as a procedure in the shading language, termed a shader.

Figure 2 shows a model of the shading process in terms of shaders. Each block shows a type of shader. The only interaction between one module and another is the data items (light color, primarily) that move between modules. From the standpoint of the model as a whole, the internal operation of a module is completely unconstrained. It doesn't matter how the surface module arrives at its reflected color; the only important thing is that it produce a reflected color. You might think of shaders as "white boxes." The usual view of programming procedures is that of fixed "black boxes," which have well-defined inputs and outputs and unknown internals, so that they can be assembled into large systems based only on that external view. A shader, by contrast, fits into a predefined system (the renderer), and the user's freedom lies in implementing the module itself.

Figure 2: Each block represents both a subtask of the shading process and a shader used to implement that subtask

[Diagram: the shader evaluation pipeline. The external and internal volume shaders take the reflected and transmitted ray colors and deliver attenuated reflection and transmission colors to the surface shader; light source shaders deliver light colors; the displacement shader passes the displaced surface to the surface shader; the resulting surface color then passes through the atmosphere shader to become the apparent surface color.]

Shader Types

The RenderMan Interface specifies eight types of shaders, distinguished by purpose, the kinds of output they produce, and the inputs they use. The first five are concerned with what might traditionally be called shading, and the examples later in this article are chosen from these. The remaining three, not discussed further here, are applications of shaders to other parts of the picture-production process.

Light Source --A light source shader is given the position of a point on a surface and a direction vector. It returns the color and opacity of the light striking that point from that direction. Unlike most other shaders, there may be several light sources extant at one time. They are maintained in a list, and any set of them can be turned on or off for any surface in a scene.

Volume --A volume shader is associated with an object, a closed collection of surfaces. As light passes through the object, an interior volume shader is called to modify the light, presumably according to the material making up the object. When the light emerges from the object, an exterior volume shader is called to make any changes to the light in the immediate vicinity of the object.

Displacement --The purpose of a displacement shader is to provide detail over the surface of an object, small variations which would be difficult to specify geometrically. A displacement shader takes the position of a point on a surface and the normal to the surface, warping the surface by moving the point.

Surface --When light strikes a surface, it bounces around in the material of the surface and emerges in some direction, usually with a different color than it had when it arrived. A surface shader has the job of computing the reflected light in a particular direction (usually, but not always, toward the eye), given a point on the surface, the orientation of the surface, and a set of light sources.

Atmosphere --In moving from a surface toward the camera, light moves through an atmosphere (fog, for example), which can introduce other artifacts. An atmosphere shader implements those changes by taking a color and direction of light moving from one point to another, modifying it if necessary.

The three shaders mentioned next are applications of the shading language to processes which, strictly speaking, don't concern shading.

Deformation --The geometric transformation capabilities of the RenderMan Interface include nonlinear transformations of geometric coordinates. These are specified with a deformation shader, which takes a position on a surface and outputs a new position.

Projection --Normally a renderer projects the geometry of a scene onto an image plane using a straightforward orthographic or perspective projection. Other projections can be specified using a projection shader, which takes a position in world space and outputs a position in screen space.

Imager --The mapping from floating-point color values to the output values of a pixel can be controlled by an imager shader, which takes an RGB triple on input and places three values, of arbitrary meaning, on output. An imager shader might be used, for example, to convert RGB output into a device-dependent format, such as NTSC or four-color separation.

Characteristics of a Shading Language

The development and runtime environments of the shading language make writing shaders much easier than one might think. A variety of tools (provided by the language) and services (provided by the renderer) often reduce the size of a shader to a few lines of code.

Rendering Environment--When a shader is called, all the work of receiving and transforming geometric data and calculating hidden surfaces is already done. A shader has only a single, tightly constrained role to play.

Precalculated Geometry--The geometric information used by a shader is provided as global variables, which are preset every time the shader is called.

Special Data Types --The shading language provides two special data types, point and color, for manipulating geometric and shading information. The language includes special operators for point data, and the standard programming-language operators are overloaded to operate on both types.

Varying Variables --A common procedure in surface shading is to bind a set of values to the vertices of a surface (a polygon, say) and interpolate the values across the surface. The same capability is provided in the shading language, but that fact is irrelevant to the shader itself. The interpolation takes place behind the scenes, and a surface shader operates the same whether a variable is bound to the surface as a whole or varies from one point to another.

Integration Constructs --Besides the usual programming language control constructs, such as for and if-then-else, the RenderMan Shading Language supports three special constructs that make it simple to control spatial integration of light emerging from light sources and impinging on a surface.

Filtered Map Access --Texture maps and other forms of two-dimensional, multi-channel data can be accessed from shaders as simple function calls, with prefiltered return values. Any multichannel value can be stored in a map and then accessed by a shader to control any aspect of shading.

Function Library --The shading language provides an extensive library of math, color, optical, geometric, and noise functions. The common shading functions for calculating ambient, diffuse, specular, and Phong reflections are also predefined.
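As a hedged sketch of the last two items, the shader below modulates a diffuse surface with a prefiltered texture lookup. The shader name and the map name grain.tx are illustrative, and the exact calling convention shown for texture() is an assumption rather than a quotation from the specification:

surface
papersketch( float Kd = .8; )
{
     point Nf;

     Nf = faceforward( normalize(N), I );
     /* Fetch a prefiltered color from a map at the surface's texture
        coordinates and use it in place of a constant surface color */
     Ci = Kd * color texture( "grain.tx", s, t ) * diffuse(Nf);
}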

A Typical Shader

The RenderMan Shading Language will be familiar to anyone experienced with an algebraic programming language, particularly C. It's interesting mostly by contrast, so the best way to present it is to dissect a typical shader.

Example 1 shows marble(), a surface shader. This shader is called whenever a programmer needs the color of the light reflecting from a surface at a given point. This shader can be roughly divided into two sections. The first section, ending with the surfcolor = . . . statement, computes the reflectivity of the surface by using noise to perturb the color of the surface. The second part invokes all relevant light sources to determine the amount of light striking the surface from each, summing the resulting eyeward reflections. The result can be seen applied to the light bulb in Figure 3.

Example 1: marble() is called whenever a programmer needs the color of the light reflecting from a surface at a given point

Figure 3: The light bulb with a "marble" surface shader

The first distinction between the shading language and C is that marble() is not a function or a procedure but a shader; that fact is denoted by the keyword surface preceding the declaration, indicating that this shader is devoted to computing surface reflections. Shaderhood means that the function can be invoked by a program using the RenderMan Interface and linked into the rendering process at runtime.

A shader is invoked in the interface by creating an instance of the function, giving values to its instance variables (in Example 1, Kd and Ka). Different instances of a shader (bound, say, to different surfaces) can have different values in their instance variables. For example, the marble() shader might be instanced in a RenderMan program by the routine call

RiSurface( "marble", "Kd", .3, RI_NULL);

That would give the instance variable Kd the value .3, which it would have every time this instance of the shader was called. Unlike a C function's parameters, a shader's instance variables are initialized with default values, so a shader may be instanced without supplying values for all of them. In this example instance, Ka would retain its default value of .1. The RenderMan Shading Language supports the standard C scalar type float, but no others. The only structured data types are built into the language: color, giving a light or reflectance color, and point, giving a point in three-dimensional space.

A number of variables used in marble() are neither local to the shader nor instance variables: P, Cs, I, N, and Ci. They are more important than normal global variables, for they are the implicit parameters of the shader, giving the position in three-space of the point being shaded (P), the surface normal at that point (N), the reflective color of the surface (Cs), and so on. All are set before the shader is called. The output of the shader, the color of the reflected light, is passed back to the renderer in the global variable Ci.

The RenderMan Shading Language has predefined extensions in both operators and functions. The functions faceforward() and normalize() in marble(), for example, are standard in the language. The functions ambient() and diffuse() calculate the contributions of all ambient and diffuse light sources in the scene impinging on the surface point.

Arithmetic operations are overloaded to work on points and colors where appropriate. In the declaration of freq, the point P is multiplied by 4; the scalar constant is promoted to a point by placing its value in each component of a point before the multiplication. Standard control constructs for looping (while, do, for) and conditional execution (if-then-else) are included in the shading language, as the for loop in marble() shows.
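For instance, in a fragment like the following (q and c are illustrative locals, not part of the language):

point q;
color c;

q = 4 * P;                             /* 4 promoted to the point (4, 4, 4) */
q = q + point (1, 0, 0);               /* componentwise point addition */
c = 0.5 * Cs + 0.5 * color (1, 0, 0);  /* colors combine the same way */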

A shader has no return value. It communicates its results by setting one or more global variables. The surface shader calculates the light leaving a surface, and it leaves its result in the global color variable Ci. If the surface is not opaque, the shader can set the global color variable Oi to values between 0 and 1.

Global Variables

The global variables available for access by shaders are listed in Table 1, below.

Table 1: Global variables available to shaders

Type      Name      Storage Class       Purpose

color     Cs        varying/uniform     Surface color
color     Os        varying/uniform     Surface opacity
point     P         varying             Surface position
point     dPdu      varying             Change in position with u
point     dPdv      varying             Change in position with v
point     N         varying             Surface shading normal
point     Ng        varying/uniform     Surface geometric normal
float     u,v       varying             Surface parameters
float     du,dv     varying/uniform     Change in u,v across element
float     s,t       varying             Surface texture coordinates
point     L         varying/uniform     Direction from surface to light source
color     Cl        varying/uniform     Light color
color     Ol        varying/uniform     Light opacity
point     I         varying             Direction from eye to surface point
color     Ci        varying             Reflected color of light from surface
color     Oi        varying             Opacity of surface
point     E         uniform             Position of the eye

Surface Color and Transparency: Cs and Os represent the current surface color (reflectance) and opacity, as defined via the interface from an application program. They may be fixed for every point on the surface (storage class uniform) or vary across the surface (varying).

Surface Position and Normal: The point value P represents the position of the point being shaded, and Ng is the geometric normal vector, perpendicular to the surface, at that point. The shading normal vector N is by default equal to Ng, but may be different for shading purposes, perhaps after applying a bump map.

Parameter Space: The floating-point values u and v give the position of the current point on the current surface in parameter space. The points dPdu and dPdv are parametric derivatives, giving the change in surface position P per unit u and unit v, respectively. dPdu and dPdv are useful for evaluating the rate of change in the surface with u and v, the surface's parametric velocity. The normal vector Ng is just the cross product of dPdu and dPdv.
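In shading-language terms (the ^ operator is the cross product, and ng is an illustrative local):

point ng;

/* The geometric normal is parallel to the cross product of the
   parametric derivatives */
ng = dPdu ^ dPdv;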

Texture Space: The floating-point values s and t give the texture-space coordinates of the current point on the surface. They may be used to index into a texture map or bump map (see the section on built-in functions, which follows), and so on, for modulating surface properties coherently.

Reflection Variables: A surface shader reports its reflected light back by setting the color variable Ci, and reports the opacity of the surface via color Oi. The latter is set to 1.0 before the shader is called, indicating a perfectly opaque surface, so the shader needn't bother with it if that condition holds. marble() is opaque, so it only concerns itself with reflected color Ci.

Light Variables: The color Cl gives the color of light arriving at the surface at a given point, the color Ol its opacity, and the point L gives the direction from which it is arriving. Although the variables mentioned are always defined when a shader is called, Cl, Ol, and L are redefined for each light source. The illuminance construct for surface shaders "loops" over all light sources, setting Cl, Ol, and L once for each light source.
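A hedged sketch of the construct at work, essentially a hand-written diffuse calculation (the shader name is illustrative):

surface
handdiffuse( float Kd = 1; )
{
     point Nf;
     color C;

     Nf = faceforward( normalize(N), I );
     C = 0;
     /* Gather light from every source within 90 degrees of the surface
        normal; Cl and L are reset on each pass through the body */
     illuminance( P, Nf, PI/2 ) {
          C = C + Cl * (normalize(L) . Nf);
     }
     Ci = Kd * Cs * C;
}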

Eye Position: When a shader is called, the point E is set to the position of the eye in world space. It is of particular interest for surface shading, because the direction (P - E) gives the viewing direction at the surface point. The marble() function uses only diffuse and ambient reflections, which are independent of viewing direction, so E is not used there.

Surface Elements: Two other global variables are provided in support of surface elements. Some renderers operate by dividing surfaces into pieces small enough to be treated as planar and rendered with a single color. Some shaders may find it handy to know how large a surface element is being shaded. For these purposes the global floating-point values du and dv give the change in the parametric surface parameters u and v across the element. Of course, this assumes that the boundaries of the element form a rectangle in parameter space.

The rest of this article gives examples of three of the most common types of shaders: light source, displacement, and surface.

A Light Source Shader

One of the most endearing aspects of the shading language is that it is simple to write shaders, so you can have fun dreaming up special-purpose shaders. Example 2 shows a special-purpose light source shader similar to one used in the Pixar film Tin Toy. That film takes place in a room in a house lit by sunlight through a window; the panes of the window cast bright patches on the scene. The windowlight() shader casts conventional parallel sunlight, masked by the edges and crosspieces of the window. A still from the film is shown in Figure 4. Note how the shadow from the window falls realistically over the objects in the scene.

Example 1

surface
marble( float Kd = .5, Ka = .1;
        color veincolor = 0 )
{
     float khi;
     point fnormal;
     color surfcolor;
     point freq = 4 * P;
     float turbulence = 0;
     float amplitude = 1, octave;

     /* Make sure the eye-vector points the right way */
     fnormal = faceforward( normalize(N), I );

     /* Accumulate a 3-dimensional noise function over 6 octaves,
        weighting each by 1/f. */
     for (octave = 0; octave < 6; octave = octave + 1) {
          turbulence = turbulence +
               amplitude * abs(.5 - noise(freq));
          amplitude = amplitude * .5;
          freq = freq * 2;
     }

     turbulence = 0.5 + 0.5 * sin(6*(xcomp(P) + turbulence));
     /* sharpen peaks */
     turbulence = turbulence * turbulence * turbulence;

     /* map discontinuously to vein colors */
     khi = clamp((turbulence - 0.5)*4, 0, 1);

     surfcolor = (1 - khi)*Cs + khi*veincolor;
     /* reflect using the forward-facing normal computed above */
     Ci = surfcolor*(Ka*ambient() + Kd*diffuse(fnormal));
}

Example 2: A special-purpose light source shader

light
windowlight( point from = point (3, 1, -1), /* Center of the window */
                   to =   point (0, 0, 0);
             float intensity = 1,
                   order = 2,      /* (# of panes up and down)/2    */
                   fuzz = .02,     /* blurred region around panes   */
                   framewid = .1,  /* width of a frame member       */
                   panewid = .5;   /* width of the glass in a pane  */
             color lightcolor = color (1, .9, .6),
                   darkcolor =  color (.05, .2, 1); )
{
     float halfframeplus, halfframeminus;
     float windowwid;
     point wfrom, wto;
     point wL, wP;
     float shade, yabs, zabs, ymod, zmod;

     /* where pane-to-frame transition begins */
     halfframeplus = framewid/2 + fuzz;
     halfframeminus = framewid/2 - fuzz;
     windowwid = framewid + panewid;

     /* Move window center, sunlight vector back to world space */
     wfrom = transform( "world", from );
     wto = transform( "world", to );
     wL = wto - wfrom;

     wP = transform( "world", P );
     /* Project surface position onto the x = xcomp(wfrom) plane */
     wL = wL * ((xcomp(wfrom) - xcomp(wP)) / xcomp(wL));
     wP = wP + wL - wfrom;

     /* absolute distance from window center */
     yabs = abs( ycomp(wP) );
     zabs = abs( zcomp(wP) );
     if ( max(yabs, zabs) > (windowwid*order) ) { /* Outside window? */
          shade = 0;
     } else {
          /* Modulus reduces insideness to a single pane */
          ymod = mod( yabs, windowwid );
          zmod = mod( zabs, windowwid );
          shade =
               smoothstep( halfframeminus, halfframeplus, ymod ) *
               smoothstep( halfframeminus, halfframeplus, windowwid - ymod ) *
               smoothstep( halfframeminus, halfframeplus, zmod ) *
               smoothstep( halfframeminus, halfframeplus, windowwid - zmod );
     }
     L = to - from; /* always the same (needed for surface shading) */
     Cl = mix( darkcolor, lightcolor, shade );
}


Figure 4: A still image from "Tin Toy," a computer-generated film

The version of windowlight() shown here is instanced by providing a from point (the center of the window in three-space) and a to point (which gives the direction of the sunlight). The operating assumption is that the window lies in a wall in the x=xcomp(from) plane in world space. The basic operation is this: a point on a surface being shaded is cast back onto that plane along the direction of the light. The resulting point is subtracted from the window center to get the distance in y and z between the projected point and the center of the window. If the projected point is within a pane, shade becomes 1 and the output color is lightcolor; if it is not within a pane, shade is 0 and the output is darkcolor. An interpolation function is used around the edges of the panes to blur the edges of the window.

Here is a solid use for point-transformation routines. Because shaders operate in a coordinate space which is a priori undefined, and we want to operate in world space (to make the plane of the window predictable), we have to transform points.

Another new item in windowlight() is the appearance of the smoothstep() function. This is built into the shading language; its syntax is

smoothstep( min, max, val );

All arguments are float expressions. If val <= min, smoothstep() returns 0; if val >= max, it returns 1. In between, a smooth cubic interpolation eases from one to the other. smoothstep() provides the fuzzy shadows in the window light.

The last function in windowlight() is on the last line. mix() performs a linear interpolation between lightcolor and darkcolor based on shade, which should always lie between 0 and 1.
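Both functions are built into the language, but their behavior can be sketched as shading-language functions (the my- names are illustrative, for exposition only):

float mysmoothstep( float minv, maxv, val )
{
     float t;

     if (val <= minv)
          return 0;
     if (val >= maxv)
          return 1;
     /* ease with the cubic 3t^2 - 2t^3, whose slope is zero at both ends */
     t = (val - minv) / (maxv - minv);
     return t*t*(3 - 2*t);
}

color mymix( color a, b; float t )
{
     return (1 - t)*a + t*b;   /* linear interpolation from a to b */
}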

Figure 4 illustrates a number of other effects available using the shading language. The cellophane on the toy box, for example, is a flat surface that was given a wrinkled appearance by a displacement shader. The floor, the design on the box, and the wall all employed texture maps. A special-purpose surface shader was used to provide the window-shaped highlight on the toy; no ray tracing was required.

A Displacement Shader

The light bulb image of Figure 3 used a variety of special-purpose shaders. One of these was a displacement shader, threads(), which displaced the surface of a simple cylinder to form a spiral thread. It used the surface parameters of the cylinder: the global variable u gave the relative position of a surface point around the cylinder, and v gave the relative position along it. Both ranged from 0 to 1. The shader is shown in Example 3.

Example 3: threads() uses a sinusoidal modulation of the surface based on u and v

displacement
threads( float freq = 5.0,
               amplitude = 0.1,
               phase = 0.0,
               offset = 0.0; )
{
     point nDel;
     float scale;

     /* surface displacement occurs along nDel */
     nDel = normalize(N);

     /* The amount of displacement is given by a sinusoid along the
        length of the cylinder, with the place around the cylinder
        providing an additional phase factor for spiral threads */
     scale = (sin(PI*2*(v*freq + u + phase)) + offset) * amplitude;

     if (v > 0.95)  /* Damp the oscillation to 0 at the ends */
          scale = scale * (1.0 - v) / .05;
     else if (v < 0.05)
          scale = scale * v / .05;

     nDel = nDel * scale;
     P = P + nDel;
     N = calculatenormal(P);
}

The shader threads() uses a sinusoidal modulation of the surface based on u and v. The primary modulation is in v, with the frequency freq given as an instance variable. u is used as a phase factor, so that the threads really do spiral. There is another explicit phase factor, phase, which effectively rotates the threads about the cylinder. The instance variable offset provides a DC level for the oscillation relative to the cylinder's surface, and amplitude scales the oscillation to control the depth of the threads.

For niceness, the oscillation is damped at the top and bottom of the cylinder, converging on 0 at the extremities to provide a perfectly circular opening at either end.

A Surface Shader

Because a surface shader has the ability to manipulate opacity, it has a powerful potential to manipulate the apparent geometry of a surface. For the light bulb picture, I needed a spiral filament, but I didn't want to take the time to define it as some kind of bicubic surface, so I wrote the filament() shader to make a simple cylinder look like a filament.

It uses the surface parameters of the cylinder to create a spiral function on the surface, much as the threads() displacement shader does. Rather than modulate the surface, though, it performs a simple test: is the surface point within (instance variable) width of the spiral? If so, the surface is opaque and given a color that by default approximates tungsten. If not, the surface is transparent.

The filament() shader is shown in Example 4. The filament itself consists of a cylinder with a cone at either end, so that the filament spirals down to a point for joining with its support wires. filament() computes the distance from the spiral using the global texture coordinates s and t rather than the surface parameters u and v. I could have mated the filament on the cones to the cylinder by defining a texture space inside the application to extend over all three objects. Instead I did it by manipulating the phase instance variables until the filaments joined.

Example 4: filament() computes the distance from the spiral using global texture coordinates s and t rather than surface parameters u and v

surface
filament( float freq = 5.0, phase = 0.0, width = 0.3;
          color filcolor = color (1, .84, .76); )
{
     float distance; /* Distance from the spiral */

     distance = mod( (t * freq + s + phase), 1.0 );
     if (distance < width) {
          /* Inside the filament */
          Ci = filcolor;
          Oi = 1.0; /* Make it opaque */
     } else {
          Oi = 0.0; /* Make it transparent */
     }
}

The glow around the filament was provided by another special shader. Applied to a sphere (elongated here by a nonuniform scale operation), it simply manipulated the opacity of the sphere to peak in the center and fall off to zero at its edges.
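The glow shader itself isn't reproduced in the article, but a hedged guess at its structure follows (the name glowsketch and its parameters are illustrative). Where the sphere faces the viewer (its center, as seen from the eye), the normal and the viewing direction nearly coincide; at the silhouette they are perpendicular, so opacity peaks in the middle and fades to zero at the edges:

surface
glowsketch( float attenuation = 2;
            color glowcolor = color (1, .8, .4); )
{
     float falloff;

     /* N . -I is near 1 where the surface faces the eye and near 0
        at the silhouette */
     falloff = normalize(N) . normalize(-I);
     falloff = max( falloff, 0 );   /* ignore back-facing points */
     falloff = pow( falloff, attenuation );
     Oi = falloff;
     Ci = falloff * glowcolor;
}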

This filament also sags a little in the middle. This catenary "droop" was obtained using a simple displacement shader.
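Again the original shader isn't shown, but the idea admits a brief sketch, with a sinusoid standing in for the true catenary (the shader name, the sag parameter, and the assumption that "down" can be passed in as a point are all illustrative):

displacement
droop( float sag = 0.05;
       point down = point (0, -1, 0); )
{
     /* Sag most at the middle of the filament (v = .5) and not at all
        at the supported ends (v = 0 and v = 1) */
     P = P + down * (sag * sin(PI * v));
     N = calculatenormal(P);
}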

Conclusion

Two kinds of professional programmers can profit from language implementations such as the RenderMan Shading Language. The first is the programmer who needs to reproduce reality in databases of colors and textures for special-purpose applications such as product design, architectural rendering, and industrial material specification. The RenderMan standard and its shading language make it reasonably simple to compile, maintain, and use such databases. There are opportunities waiting both for putting these databases before the public and for providing material-manipulation front ends for them.

The other kind of programmer who can benefit from the shading language is the one on the receiving end of that process. Given a reasonably easy, uniform way to include complex, interesting light and material characteristics in nominally simple scenes, the realism and visual appeal of computer-graphic images should see a radical improvement.

The RenderMan Specification

The fundamental goal of the RenderMan specification, developed by Pixar and announced earlier this year, is to make it possible for programmers to create photorealistic digital images that are comparable in quality to those produced with a camera. Up to this point most modeling systems have concerned themselves only with the shape (geometry) of an object. The RenderMan specification makes it possible for programmers to more easily address the visual attributes (surface characteristics) of an object as well.

To accomplish this goal the RenderMan three-dimensional scene description interface consists of three parts: 1. The shading language (described by Steve Upstill in the accompanying article); 2. A set of geometric primitives for specifying the shape of objects; and 3. The "glue" for attaching surface attributes (defined by the shading language) to that shape (defined by the geometric primitives). Among the geometric primitives are those for polygons and patches; the shading language provides surfaces, textures, atmospheres, and lights.

The interface procedures are described in C and are accessible from programs written in C. Over the coming months Pixar will release additional interfaces for other languages. Furthermore, the specification is written so that PHIGS, PHIGS+, and other existing graphic standards can be readily interfaced to RenderMan. The specification is device- and display-independent.

So far several major players in the computer-graphics and workstation market have announced support for the RenderMan specification, including Sun, Apollo, NeXT, MIPS Computer, DEC, Symbolics, and Prime, as well as Autodesk, Industrial Light & Magic (Lucasfilm), Walt Disney, and Pacific Data Images. The last is a leader in television animation and graphics.

The RenderMan specification is available to developers from Pixar for $15. Although the specification document is copyrighted, Pixar will grant free licenses to developers who create products that meet the RenderMan specification. Copies of the specification can be obtained from Pixar at 3240 Kerner Blvd., San Rafael, CA 94901; 415-258-8100. --eds.
