It is often said that it's only when you try to explain something to someone else that you come to realize that there are holes in your understanding of the topic in question. Such was the case when it came to presenting the concept of rounding in my most recent book, How Computers Do Math (John Wiley & Sons Inc.), which features a virtual 8-bit computer-calculator called the DIY Calculator. As soon as I started my research into rounding, I was surprised to discover the wide range of algorithms in use. I was even more surprised to find that even the simplest technique--truncation, also known as chopping--yields different results when applied to signed versus unsigned binary representations.
Let's start the ball rolling by considering the various rounding schemes in the context of the sign-magnitude decimal numbers we know and love so well. The most fundamental fact associated with rounding is that it involves transforming some quantity from a greater precision to a lesser precision; for example, suppose that we average out a range of prices and end up with a dollar value of $5.19286. In this case, rounding the more precise value of $5.19286 to the nearest cent would result in $5.19, which is less precise.
Given a choice, we would generally prefer to use a rounding algorithm that minimizes the effects of this loss of precision, especially in the case where multiple processing iterations--each involving rounding--can result in "creeping errors."
By this we mean that errors increase over time due to performing rounding operations on data that has previously been rounded. However, in the case of hardware implementations targeted toward tasks such as digital signal processing (DSP) algorithms, we also have to be cognizant of the overheads associated with the various rounding techniques so as to make appropriate design trade-offs (see Fig. 1).
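To make this concrete, here is a minimal Python sketch (the dollar figure comes from the example above; the use of Python's decimal module and the 0.45 test value are my own illustrative choices). It rounds $5.19286 to the nearest cent and then shows a simple "creeping error": rounding a value that has already been rounded can give a different answer than rounding the original value just once.

from decimal import Decimal, ROUND_HALF_UP

# Round the average price of $5.19286 to the nearest cent.
price = Decimal("5.19286")
print(price.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))   # 5.19

# "Creeping error": round 0.45 to one decimal place first, then to an integer.
x = Decimal("0.45")
once = x.quantize(Decimal("1"), rounding=ROUND_HALF_UP)          # 0
twice = x.quantize(Decimal("0.1"), rounding=ROUND_HALF_UP)       # 0.5
twice = twice.quantize(Decimal("1"), rounding=ROUND_HALF_UP)     # 1
print(once, twice)   # 0 1 -- the intermediate rounding introduced an error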
Round-toward-nearest: This is perhaps the most intuitive of the various rounding algorithms. In this case, values such as 3.1, 3.2, 3.3 and 3.4 would round down to 3, while values of 3.6, 3.7, 3.8 and 3.9 would round up to 4. The trick, of course, is to decide what to do in the case of the halfway value 3.5. In fact, round-toward-nearest may be considered to be a superset of two complementary options known as round-half-up and round-half-down, each of which treats the 3.5 value in a different manner as discussed below.
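One way to see round-toward-nearest as a superset is to code the easy cases once and leave the halfway case as a plug-in rule. The Python sketch below (the round_to_nearest function and its tie_break parameter are my own, purely illustrative names) rounds every non-halfway value to its nearest integer and hands the 3.5-style ties to whichever half-rule the caller supplies.

import math

def round_to_nearest(x, tie_break):
    # Round x to the nearest integer; exact .5 cases are delegated to tie_break.
    floor_x = math.floor(x)
    frac = x - floor_x
    if frac < 0.5:
        return floor_x        # e.g. 3.1 .. 3.4 -> 3
    if frac > 0.5:
        return floor_x + 1    # e.g. 3.6 .. 3.9 -> 4
    return tie_break(x)       # the interesting case: 3.5, -3.5, ...

The round-half-up, round-half-down, round-half-even and round-half-odd schemes discussed below differ only in the tie-breaking rule they would plug into a routine like this.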
Round-half-up: This algorithm, which may also be referred to as arithmetic rounding, is the one that we typically associate with the rounding we learned at grade school. In this case, a halfway value such as 3.5 will round up to 4. One way to view this is that, at this level of precision and for this particular example, we can consider there to be 10 values that commence with a 3 in the most-significant place (3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8 and 3.9). On this basis, it intuitively makes sense for five of the values to round down and for the other five to round up; that is, for the five values 3.0 through 3.4 to round down to 3, and for the remaining five values--3.5 through 3.9--to round up to 4.
The tricky point with the round-half-up algorithm arrives when we come to consider negative numbers. In the case of the values –3.1, –3.2, –3.3 and –3.4, these will all round to the nearest integer, which is –3; similarly, in the case of values like –3.6, –3.7, –3.8 and –3.9, these will all round to –4. The problem arises in the case of –3.5 and our definition as to what "up" means in the context of "round-half-up." Based on the fact that a value of 3.5 rounds up to 4, most of us would intuitively expect a value of –3.5 to round to –4. In this case, we would say that our algorithm was symmetric for positive and negative values.
However, some applications (and mathematicians) regard "up" as referring to positive infinity. Based on this, –3.5 will actually round to –3, in which case we would class this as being an asymmetric implementation of the round-half-up algorithm. For example, the round method of the Java Math Library provides an asymmetric implementation of the round-half-up algorithm, while the round function in Matlab provides a symmetric implementation. Just to keep us on our toes, the round function in Visual Basic for Applications 6.0 actually implements the round-half-even (banker's rounding) algorithm discussed below.
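The symmetric/asymmetric distinction is easier to see in code. The following Python sketch (the function names are mine, and this is not the Java, Matlab or Visual Basic source) implements both flavors of round-half-up: the asymmetric form always pushes halfway values toward positive infinity, while the symmetric form works on the magnitude and then restores the sign. Note that the add-0.5-and-floor trick assumes exact halfway values; with real floating-point data near .5 the result can be inexact.

import math

def round_half_up_asymmetric(x):
    # Halfway values go toward positive infinity: 3.5 -> 4, -3.5 -> -3.
    return math.floor(x + 0.5)

def round_half_up_symmetric(x):
    # Halfway values go away from zero: 3.5 -> 4, -3.5 -> -4.
    return int(math.copysign(math.floor(abs(x) + 0.5), x))

print(round_half_up_asymmetric(3.5), round_half_up_asymmetric(-3.5))   # 4 -3
print(round_half_up_symmetric(3.5), round_half_up_symmetric(-3.5))     # 4 -4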
Round-half-down: This acts in the opposite manner to its round-half-up counterpart. In this case, a halfway value such as 3.5 will round down to 3. Once again, we run into a problem when we come to consider negative numbers, depending on what we assume "down" to mean. In the case of a symmetric implementation of the algorithm, a value of –3.5 will round to –3. By comparison, in the case of an asymmetric implementation of the algorithm, in which "down" is understood to refer to negative infinity, a value of –3.5 will actually round to –4.
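In the same Python vein (again, the function names are just illustrative), the two interpretations of round-half-down are mirror images of the round-half-up pair: the asymmetric form pushes halfway values toward negative infinity, while the symmetric form pushes them toward zero.

import math

def round_half_down_asymmetric(x):
    # Halfway values go toward negative infinity: 3.5 -> 3, -3.5 -> -4.
    return math.ceil(x - 0.5)

def round_half_down_symmetric(x):
    # Halfway values go toward zero: 3.5 -> 3, -3.5 -> -3.
    return int(math.copysign(math.ceil(abs(x) - 0.5), x))

print(round_half_down_asymmetric(3.5), round_half_down_asymmetric(-3.5))   # 3 -4
print(round_half_down_symmetric(3.5), round_half_down_symmetric(-3.5))     # 3 -3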
As a point of interest, the symmetric versions of rounding algorithms are sometimes referred to as Gaussian implementations. This is because the theoretical frequency distribution known as a Gaussian distribution--which is named for the German mathematician and astronomer Carl Friedrich Gauss (1777-1855)--is symmetrical about its mean value.
Round-half-even: If halfway values are always rounded in the same direction (for example 3.5 rounds to 4 and 4.5 rounds to 5), the result can be a bias that grows as more rounding operations are performed. One solution toward minimizing this bias is to sometimes round up and sometimes round down.
In the case of the round-half-even algorithm (which is often referred to as "banker's rounding" because it is commonly used in financial calculations), halfway values are rounded toward the nearest even number. Thus, 3.5 will round up to 4 and 4.5 will round down to 4. The round-half-even algorithm is, by definition, symmetric for positive and negative values, so both –3.5 and –4.5 will round to –4.
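A minimal round-half-even sketch in Python follows (the helper name is mine). Python's built-in round function and the decimal module's ROUND_HALF_EVEN mode already implement banker's rounding, so the hand-rolled version is only there to make the tie-breaking rule explicit.

import math

def round_half_even(x):
    # Halfway values go to the nearest even integer: 3.5 -> 4, 4.5 -> 4.
    floor_x = math.floor(x)
    frac = x - floor_x
    if frac < 0.5:
        return floor_x
    if frac > 0.5:
        return floor_x + 1
    # Exactly halfway: pick whichever neighbor is even.
    return floor_x if floor_x % 2 == 0 else floor_x + 1

for v in (3.5, 4.5, -3.5, -4.5):
    print(v, round_half_even(v), round(v))   # the built-in round agrees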
Round-half-odd: This is the theoretical counterpart to the round-half-even algorithm, in which halfway values are rounded toward the nearest odd number. In this case, 3.5 will round to 3 and 4.5 will round to 5 (similarly, –3.5 will round to –3 and –4.5 will round to –5). In practice, however, the round-half-odd algorithm is never used, because halfway values such as 0.5 and –0.5 will never round to zero, and rounding to zero is often a desirable attribute for rounding algorithms.
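For completeness, the corresponding round-half-odd sketch is shown below (again, the function name is mine). The last line shows why the scheme is avoided: the halfway values 0.5 and -0.5 land on their odd neighbors 1 and -1, so a halfway value can never round to zero.

import math

def round_half_odd(x):
    # Halfway values go to the nearest odd integer: 3.5 -> 3, 4.5 -> 5.
    floor_x = math.floor(x)
    frac = x - floor_x
    if frac < 0.5:
        return floor_x
    if frac > 0.5:
        return floor_x + 1
    # Exactly halfway: pick whichever neighbor is odd.
    return floor_x if floor_x % 2 != 0 else floor_x + 1

print(round_half_odd(3.5), round_half_odd(4.5))    # 3 5
print(round_half_odd(0.5), round_half_odd(-0.5))   # 1 -1 (never 0)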