Aug00: Augural Image Zooming

Els is a member of the Department of Mathematics at the U.S. Naval Academy and a research scientist with Pegasus Imaging. He can be contacted at [email protected].


Figure 1 is an image represented as a 192×300 matrix of pixels, each with a specified color value. Figure 2 is a smaller image, represented as a 96×150 matrix of pixels, thus containing only one-fourth the quantity of data of Figure 1. Figure 3 is the same 96×150 image as Figure 2, but covering the same area as Figure 1, so that each pixel has four times the area of the pixels in Figure 1. Even though Figure 3 is coarser than Figure 1 (that is, of lower resolution rather than smaller), it is obvious that they represent the same scene.

Figure 3 was derived from Figure 1. Each coarse pixel of Figure 3 corresponds to a 2×2 group of four fine pixels from Figure 1; and the color value (having three components red, green, and blue) of the coarse pixel is the average of the color values of the four fine pixels. The coarse image, Figure 3, created by shrinking the fine image, Figure 1, is identical up to rounding error with an image of the same resolution created directly from the given scene.

The derivation of a finer image from a coarser one is referred to as "image zooming." Unlike shrinking, zooming in an exact sense is theoretically impossible, because many different finer images correspond to a single coarser image. So in zooming an image, you don't expect to reproduce the finer image exactly -- rather you seek to choose the most plausible of all the possible finer images corresponding to the coarser.

Image zooming has a variety of applications. Multitudes of images travel daily over the Internet. Due to the limited capacity of the transmission channel relative to the quantity of data in an image, transmission of a single image often requires seconds or minutes. This delay can be alleviated by sending images in a progressive format: a low-resolution version of the image to begin with, followed by additional data providing a gradual increase in resolution. The recipient can then get an early impression of an image and interrupt the transmission if desired. Skillful zooming improves the appearance of the image while it has been only partially transmitted. You might zoom an image to move it from one device to another with higher resolution. A computer monitor may display an image at a resolution of 72 pixels per inch, while a printer may provide resolution of 300 pixels per inch. Printing an image displayed on the monitor entails either printing the image at a smaller size or increasing the resolution.

In the U.S., digital television (DTV) is coming into real use. The new format provides resolution more than double the old NTSC. However, much programming has been recorded on NTSC-compatible media, such as VHS videotape. Viewing such media on a high-resolution television entails either using only a small portion of the screen or zooming the video signal.

Various image-zooming approaches are already in use. The most primitive, but also one of the most popular, is resampling or pixel replication. Resampling consists of simply repeating the pixel values of the image as necessary to achieve the desired size. Figure 3 could be zoomed by resampling to the same resolution as Figure 1, but with no change whatsoever in appearance. The resampling method is fast, but leaves much to be desired in quality. Figure 3 shows objectionable features (jaggies) along sharp edges, such as the edge of the model's dress, or the edges between her right shoulder or right cheek and the background. Smooth areas such as the dimple in her throat or the right side of her forehead show less prominent artifacts.

Figure 4 is a slightly more sophisticated approach, produced by linearly interpolating to determine the new pixel values. This eliminates most of the objectionable features seen in smooth regions of Figure 3. Moreover, jaggies on sharp edges are much alleviated. However, edges and other regions of detail show blurring.

In this article, I present a fundamentally new method for zooming images, which I call the "augural method." My goal in developing the augural method was to zoom images with a minimum of computation, while keeping smooth regions smooth and sharp edges sharp. The method described here serves to double the scale of an image; the same principles have been applied to zoom an image to any desired size.

Principles of the Augural Method

For starters, I assume that the coarse image to be zoomed is a digital photograph of a real scene; for example, Figure 1 depicts a woman wearing a black dress. The transition from real scene to digital image may have comprised several steps involving cameras and scanners; but I assume that the ultimate effect is equivalent to laying a rectangular grid in front of the scene and filling each grid cell with the average color value of the corresponding region of the scene. Each grid cell then becomes one pixel of the image. A finer image of the same scene can be created simply by using a grid with smaller rectangles.

The augural method is thus inappropriate for certain types of images: synthesized images containing features such as aliased edges, which would not appear in a digital photograph of a real scene; or dithered images, where the pixel values no longer represent average color values of the corresponding grid cells. The method is applicable, however, to synthetic images (such as ray-traced images) created using an underlying ideal model.

Figure 5 is a magnified portion of an ideal scene, together with pixelizations at various resolutions. Region I shows the ideal scene. Region F shows a fine pixelization, while Region CR shows a coarse pixelization at half the resolution of Region F, each pixel in Region CR covering the same area as four pixels in Region F. Region C represents an intermediate stage whose pixels cover the same area as just two pixels from Region F. In Regions F, C, and CR, each pixel value equals the average value of the underlying ideal image over the corresponding rectangle. But each pixel value in Region C can also be calculated as the average of the corresponding pair of pixel values from Region F. Similarly, each pixel value in Region CR equals the average of the corresponding pair of pixel values in Region C.

The goal of zooming is to derive a good approximation to the fine pixelization in Region F from a coarse pixelization like that in Region CR. I chose to consider the simpler problem of producing the intermediate pixelization in Region C. Essentially, the same process can then be applied to the intermediate pixelization (with a 90 degree change in orientation) to produce the fine pixelization.

The essence of the augural method is to start with a coarse pixelization, hypothesize a model for the underlying ideal structure in each neighborhood of the real scene, analyze the coarse pixel values to refine the details of the model, and use this model to calculate the corresponding pixel values in the fine image. Deriving a model for the underlying image structure is a deep, subtle problem without a complete solution. The augural method, by compromises and makeshifts, makes a crude judgment that nonetheless proves quite useful.

Images are two dimensional. The first compromise of the augural method is to operate in only one dimension at a time. The resolution of the image is doubled in two steps; first the vertical resolution is doubled (Region CR to Region C), and then the horizontal (Region C to Region F). Moreover, when doubling the vertical (or horizontal) resolution, each column (or row) of the image is considered in isolation, and doubled in resolution. (The underlying image-structure model, however, is considered in two dimensions.) Thus, each coarse pixel corresponds to a pair of fine pixels. The following discussion is couched in terms of doubling the length of a row; but doubling a column is essentially the same except for orientation.

I describe the method as applied to 8-bit gray-scale images, each pixel value a number in the range 0 to 255. Color images can be zoomed quite satisfactorily by zooming the red, green, and blue color planes separately. Other levels of precision can be handled by straightforward modifications.

I denote the coarse pixel values in a row from left to right by C[0], C[1],..., C[w-1], where w is the width of the coarse row. I must calculate estimated fine pixel values F[0], F[1],..., F[2w-1] (twice as many fine pixels as coarse pixels in a row). The coarse pixel with value C[n] covers the same region of the image as the two fine pixels with values F[2n] and F[2n+1]. As already noted, C[n] equals the average of F[2n] and F[2n+1]: C[n]=(F[2n]+F[2n+1])/2.

Let D[n]=F[2n+1]-F[2n]; then F[2n] and F[2n+1] can be calculated from C[n] and D[n]: F[2n]=C[n]-D[n]/2, F[2n+1]=C[n]+D[n]/2.

The problem now is how to determine the value of D[n] corresponding to a particular C[n]. Splitting the coarse pixel with value C[n] (henceforth called the "home" pixel) entails predicting D[n] and then using it to find F[2n] and F[2n+1]. I proceed across the row from left to right, so when splitting the home pixel with value C[n], not only are all coarse pixel values of the row available, but also available are the fine pixel values from F[0] to F[2n-1]. To predict the value of D[n], I utilize the three values F[2n-1], C[n], and C[n+1] (and eventually F[2n-2]).

I imposed some abstract mathematical conditions upon the zooming process. The first condition is variation preservation. The variation of the row is the total amount of fluctuation up and down as one travels across the row. The variation of the original coarse row is Vcoarse=|C[1]-C[0]|+|C[2]-C[1]|+...+|C[w-1]-C[w-2]|; the variation of the calculated fine row is Vfine=|F[1]-F[0]|+|F[2]-F[1]|+...+|F[2w-1]-F[2w-2]|.

It is always true that Vfine ≥ Vcoarse; variation preservation is the additional condition that Vfine = Vcoarse. I imposed this condition as a preventative against zooming artifacts: the introduction of detail into areas such as smooth regions, where the eye expects none. This seems visually more disturbing than the absence of detail where expected. Sophisticated zooming tools such as fractals or wavelets sometimes generate such artifacts, which as a rule entail a variation increase. Variation preservation also eliminates the possibility of pixel-value overflow or underflow.

You can show that variation is preserved if for each D[n] the resulting F[2n] and F[2n+1] satisfy: |F[2n]-F[2n-1]|+|F[2n+1]-F[2n]|+|C[n+1]-F[2n+1]| ≤ |C[n]-F[2n-1]|+|C[n+1]-C[n]|.

The second condition is that the zooming process be invariant when all pixel values are modified by addition of or multiplication by a constant. The rationale is that adding a constant to all pixel values has an effect akin to twiddling the brightness knob on the computer monitor, and multiplying all pixel values by a constant is like twiddling the contrast knob. Both transformations change the colors of objects in the image without affecting structural features. So, if you define a new set CT of coarse pixel values by the transformation CT[n]=pC[n]+q, the corresponding fine pixel values FT obtained by zooming CT should be related to the original fine pixel values F by: FT[n]=pF[n]+q. The transformed fine-pixel differences DT then satisfy: DT[n]=FT[2n+1]-FT[2n]=(pF[2n+1]+q)-(pF[2n]+q)=pD[n].

This condition lets me set p and q to any convenient values for the purpose of splitting the home pixel with value C[n]; I can use standard settings for "brightness" and "contrast" of the image, which will simplify my task. I choose to let: p=1/(C[n+1]-F[2n-1]) and q=-F[2n-1]/(C[n+1]-F[2n-1]), the case C[n+1]=F[2n-1] being moot because variation preservation then requires that D[n]=0. Then: FT[2n-1]=0, CT[n]=(C[n]-F[2n-1])/(C[n+1]-F[2n-1]), CT[n+1]=1. The fine-pixel difference D[n] can be calculated as: D[n]=DT[n]/p=(C[n+1]-F[2n-1])DT[n].

The prediction of D[n] from the values of F[2n-1], C[n], and C[n+1] now reduces to a single input variable, the "augury": A=CT[n]=(C[n]-F[2n-1])/(C[n+1]-F[2n-1]), and a single output variable, the "declivity": D=DT[n]=D[n]/(C[n+1]-F[2n-1]).

The declivity will be calculated as a certain function G, the "geminator," of the augury: D=G(A).

Premises and Consequences of Geminators

A geminator arises from an assumption (which I call a "scenario") concerning the type of structure found in the image. Figure 6 is part of an image with certain features of particular interest highlighted. Each highlighted region represents two coarse pixels (having values C[n] and C[n+1]) with an adjacent fine pixel (having value F[2n-1]). The middle (home) pixel with value C[n] is to be split into two fine pixels with values F[2n] and F[2n+1]. Recall that the only information available is the average value over the rectangular region that is each pixel, as in Figure 7.

The image may contain structures quite complex at the given resolution level (such as the yellow region in Figure 6). Faithfully zooming these regions is most likely impossible, and I did not try. Rather, I chose to consider simple features such as smooth areas (red) and sharp, straight edges at various orientations to the pixel grid (red, green, and blue). These features are simple enough to be feasible of analysis, occur frequently in most images, and are visually important.

Consider first the blue-shaded region in Figure 6, an example of a sharp, straight edge passing through the home pixel perpendicular, or almost so, to the pixel row. I call this the "perpendicular-edge" scenario. In this scenario, the declivity can be calculated as a function of the position of the edge within the home pixel. The augury can also be calculated as a function of this position; in fact, the augury equals the distance between the edge and the left boundary of the home pixel. Thus, the declivity can be calculated as a function of the augury, yielding the geminator for the perpendicular-edge scenario.

The result is the function GP(A)=2A for 0 ≤ A ≤ 0.5; GP(A)=2-2A for 0.5 ≤ A ≤ 1. Figure 8 shows the graph of GP (in blue), and also indicates its peak value, which occurs when the perpendicular edge exactly bisects the home pixel.

Values of A less than 0 or greater than 1 do not occur in the perpendicular case; however, for such values of A, variation preservation requires that GP(A)=0. In fact, the variation-preservation condition translates into a straightforward condition on the geminator G for any scenario: 0 ≤ G(A) ≤ GP(A); in other words, the graph of G is constrained to lie within the variation-preserving zone shown as light gray in Figure 8. This zone takes the single value 0 for values of A less than 0 or greater than 1.

Figure 10 is the result of zooming using GP. For comparison, Figure 9 shows zooming by resampling. Repeated zooming exaggerates the special characteristics of a given zoomer. I therefore zoomed a small portion of the image (bounded by the blue rectangle) twice to facilitate detailed comparison.

Figure 10 cannot be considered a success, though it does have some interesting features. Vertical and horizontal edges indeed retain sharpness. Smooth areas break up into patches of constant color. And diagonal edges acquire a stair-step appearance. Repeated zooming accentuates all of these effects. The failure of this zoomer, derived from a sharp-edge scenario, to handle smooth areas properly is unsurprising. However, its performance on diagonal edges is disappointing.

Consider next a smooth area of the image (such as the model's cheek, red in Figure 6), which I call the "smooth" scenario. You can calculate the geminator in the smooth scenario to be GS(A)=(A+1)/5, constrained to stay within the variation-preserving zone (red in Figure 8). Figure 8 also indicates the special case of a fixed rate of brightness change, with resulting augury A=3/7 and resulting declivity D=2/7.

The "gentle-edge" scenario supposes a sharp, straight edge just slightly inclined to the pixel row (also red in Figure 6). Though visually wholly unlike the smooth scenario, this scenario yields exactly the same geminator GS.

Figure 11 is the result of zooming using GS. Exactly as expected, it performs well in smooth areas, but on edges shows blurring similar to linear interpolation.

Because neither the perpendicular-edge scenario nor the smooth scenario yielded satisfactory results on diagonal edges, I next investigated the "diagonal-edge" scenario, wherein a sharp edge runs diagonally through the home pixel (green in Figure 6). For the first time I needed to take aspect ratio into account: Splitting a square pixel into two narrow rectangular halves requires a different geminator from splitting a narrow rectangular pixel into two square halves. In both cases, I determined the geminator by calculating augury and declivity from the areas of various trapezoids and triangles, and then solving for the declivity in terms of the augury.

The case of splitting a narrow rectangular pixel yields the geminator GD(A)=4A-4A²-0.25 for 0.25 ≤ A ≤ 0.75, GD(A)=GP(A) elsewhere. The case of splitting a square pixel into two narrow rectangular halves yields a different geminator HD. I omit here the formula for HD because of its complexity, but Figure 8 shows the graphs of both GD and HD. Figure 8 indicates also the peak values for GD and HD (both occurring when the diagonal edge bisects the home pixel), as well as other critical values for both functions.

Figure 12 is the result of zooming using GD and HD. This zoomer not only performs well on diagonal edges, but by good fortune also yields acceptable results on perpendicular edges (unlike GP, which performed strangely on diagonal edges). In smooth areas, GD and HD yield a faceted appearance similar to GP.

Clearly what is needed is a zoomer that combines the performance of GD and HD on edges with the performance of GS on smooth areas. The elements F[2n-1], C[n], and C[n+1] of the augury are simply not sufficient information to distinguish between the smooth and diagonal-edge scenarios. I thus resorted to using the value of F[2n-2] to decide between these two scenarios when splitting a given pixel.

The diagonal-edge scenario supposes that a sharp edge is traveling diagonally across the home pixel. In such a case, D[n-1] (the difference F[2n-1]-F[2n-2]) should be close to zero compared to the difference C[n+1]-F[2n-1]. On the other hand, in the smooth scenario, the difference F[2n-1]-F[2n-2] should be about 2/7 of the difference C[n+1]-F[2n-1].

The criterion I used is this: the diagonal-edge geminator GD or HD is used if |D[n-1]|<0.05|C[n+1]-F[2n-1]|; the smooth geminator GS is used otherwise.

Figure 13 is the result of zooming using GD and HD in combination with GS. This combination zoomer keeps smooth areas smooth and sharp edges sharp.

Zooming with Integer Arithmetic

Most image formats specify pixel values as integers, and for better speed and wider applicability I wanted a zooming algorithm that operates entirely in integer arithmetic. To this end, I made several adaptations to the hitherto-described method.

First, recall: C[n]=(F[2n]+F[2n+1])/2, D[n]=F[2n+1]-F[2n]. In integer arithmetic, division by 2 in the calculation of C[n] entails losing the lowest precision bit. However, you can still recover F[2n] and F[2n+1] from C[n] and D[n] by: F[2n]=C[n]-(D[n]>>1), F[2n+1]=C[n]+((D[n]+1)>>1). Here, I shift right by one bit rather than divide by two so that the result is rounded down rather than towards zero.

One advantage of distilling the three values F[2n-1], C[n], and C[n+1] down to the single value A is that you can use a lookup table to evaluate the geminator, a function of a single variable, whereas a lookup table with three input variables might be completely infeasible. Thus, the computational complexity of the geminator is of no concern.

However, consider the range of possible values of the augury: A=(C[n]-F[2n-1])/(C[n+1]-F[2n-1]). With pixel values ranging from 0 to 255, the augury A ranges in magnitude from 1/255 to 255, even excluding zero and infinite values. To reduce the lookup-table size needed, I manipulated augury and declivity on a logarithmic scale. This also eliminates multiplication and division operations from the algorithm.

In integer arithmetic, the augury is represented by the variable LgA, calculated as: LgA=Lg(C[n]-F[2n-1])-LgE, where LgE=Lg(C[n+1]-F[2n-1]). The function Lg() is a modified logarithm, calculated by means of a lookup table. Modifications are necessary for three purposes: to handle zero arguments, handle negative arguments, and represent the output as an unsigned char.

To handle zero arguments, I exploited the truncation error in calculating C[n] (or C[n+1]) as an average of two fine pixel values. This error causes the value to be underrepresented by 0.5 about half the time, underrepresented by 0.25 on average. The argument to Lg(), being always a difference between a coarse and a fine pixel value, is also underrepresented by 0.25 on average. To compensate for this, Lg() adds 0.25 before taking the logarithm, thus eliminating the occasion for taking the logarithm of 0.

To handle negative arguments, I adopted the convention of representing the logarithm of a positive number by the logarithm rounded to the nearest even integer and representing the logarithm of a negative number by the logarithm of the absolute value, rounded to the nearest odd integer -- exploiting the fact that the parity of a difference of integers is even if and only if the two arguments have the same parity, just as the sign of a quotient of real numbers is positive if and only if the two arguments have the same sign.

Finally, I normalized the function Lg() by subtracting a constant so that Lg(0)=0, and hence all values of Lg() are nonnegative. This constant cancels out when the two outputs from Lg() are subtracted in the calculation of LgA.

Thus, the actual formula for Lg(t) is: Lg(t)=log_B|t+0.25|-log_B 0.25, rounded to the nearest odd integer for negative t and rounded to the nearest even integer otherwise. Larger values for the logarithmic base B yield more compact lookup tables at the expense of precision of calculation. I recommend the value B=1.027653151, which allows Lg and the other lookup tables to be byte valued, while keeping as much precision as possible, so that Lg(255)=254 and Lg(-255)=253.

The declivity D is represented on a logarithmic scale by the variable LgDelta, related to D by the formula: LgDelta=log_B D-log_B 0.05, rounded to the nearest even integer for positive values of D and to the nearest odd integer for negative values. The fine-pixel differences D[n] and D[n-1] are represented on a logarithmic scale by the variable LgD, related to D[n] (or D[n-1] as occasion demands) by: LgD=log_B D[n]-log_B 0.25-log_B 0.05, likewise rounded to an even or odd integer according to the sign of D[n]. These various versions of the logarithmic scale allow LgD to be calculated as: LgD=LgDelta+LgE, while the criterion |D[n-1]|<0.05|C[n+1]-F[2n-1]| for choosing the diagonal-edge versus the smooth scenario translates into simply: LgD<LgE.

The actual fine-pixel difference D[n] must be recovered from the logarithmic LgD in order to calculate the fine pixel values; this is done by the modified exponential function Xp() (also calculated by lookup table), given by: Xp(LgD)=±(0.25)(0.05)B^LgD, rounded to the nearest integer, negatively valued for odd LgD, and positively valued otherwise.

Finally, the geminators GD, HD, and GS must be adapted to the appropriate logarithmic scale on both input and output. The logarithmic equivalents are functions LgG_D(), LgH_D(), and LgG_S() (all calculated by lookup tables). LgG_D() is related to GD by the formula: LgG_D(LgA)=log_B(GD(B^LgA)/0.05); with corresponding relations for LgH_D and LgG_S. Zero values for the geminator are represented on the logarithmic scale by a value small enough that the eventual value of D[n] is guaranteed to be zero.

In summary, a single coarse pixel with value C[n] is split into two fine pixels with values F[2n] and F[2n+1] by the following sequence of steps. LgD holds the value that remains after splitting the previous pixel.

LgE=Lg(C[n+1]-F[2n-1]);

LgA=Lg(C[n]-F[2n-1]);

LgDelta=LgD<LgE?LgGH_D(LgA):LgG_S(LgA);

LgD=LgDelta+LgE;

D[n]=Xp(LgD);

F[2n]=C[n]-(D[n]>>1);

F[2n+1]=C[n]+((D[n]+1)>>1);

Here LgGH_D represents either LgG_D for horizontal zooming or LgH_D for vertical zooming. All functions are calculated by lookup tables. The algorithm design completely eliminates the need for range checking on lookup table input.

Sample Code

I've implemented the augural method described here in ZOOMTEST, a sample program (available electronically; see "Resource Center," page 5). The program consists of the files IMAGE.H, BMP.H, ZOOM.H, ZOOM.CPP, BMP.CPP, and ZOOMTEST.CPP. This is intended as an example program rather than a finished software product, so issues such as error checking and interfacing have been kept as sparse and unobtrusive as possible. The source code is distributed under the terms of the GNU General Public License.

ZOOMTEST has the usage syntax: ZOOMTEST <infile> <outfile>. Input and output files are 24-bit BMP format.

The 24-bit color format is handled by zooming the red, green, and blue color planes separately. The edges of the image (where F[2n-1] or C[n+1] would fall beyond the edge of the image) are handled by effectively setting F[2n-1] or C[n+1] to C[n] as necessary.

The augural method requires only a few rows of the image held in memory at any given time. The design of the example program exploits this. I defined an abstract base class Image (in IMAGE.H). The basic idea of an Image is that it writes a specified number of image rows to a specified location in memory on demand. Classes derived from Image are BmpFile (defined in BMP.H and BMP.CPP), which allows rows to be read from a 24-bit BMP file; and HZoom and VZoom (both defined in ZOOM.H and ZOOM.CPP), which double the horizontal and vertical resolution of an image, respectively.

ZOOMTEST.CPP provides a simple command-line interface. It opens a BMP file, zooms it vertically and then horizontally, and then writes the resulting image to a new BMP file. (VZoom should precede HZoom, because their geminators differ. HZoom followed by VZoom will produce inferior results.)

The core of the zooming process is embodied in the procedure SplitPxl() found in ZOOM.CPP. This procedure actually splits a red, green, or blue pixel component.

Enhancements

The principles outlined here can be further developed. More detailed calculations allow the resolution to be increased by an arbitrary factor, rather than simply doubled. A more sophisticated analysis of image structure using two-dimensional information yields more precise and complete location of edges for better image quality. A commercial version of the augural zoomer includes these enhancements, and also incorporates Intel machine code for greater speed. (For more information, send e-mail to [email protected].)

DDJ

