The C++ Standard allows three possible representations for signed integer types (one's complement, two's complement, and signed magnitude), and overflow checking must allow for all three. The only issue that complicates the code is the possibility that an integer type may not be symmetrical around zero. To make the checks as efficient as possible, the code assumes that there is at most one extra value beyond the symmetric range, and that it is negative. Although this is not explicitly stated in the Standard, I believe implementations failing the assumption are either non-existent or at least exceedingly rare.
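The kind of pre-check this assumption permits can be sketched as follows. This is not the library's code, and the function name is illustrative; the point is that the test relies only on numeric_limits and makes no assumption that min() == -max():

```cpp
#include <cassert>
#include <limits>

// A pre-check for signed addition that avoids undefined behavior.
// It does not assume the range is symmetric around zero; it only
// requires that any extra value lies on the negative side, as
// described above.
template <typename T>
bool add_would_overflow(T a, T b)
{
    const T lo = std::numeric_limits<T>::min();
    const T hi = std::numeric_limits<T>::max();
    if (b > 0)
        return a > hi - b;   // a + b would exceed the maximum
    if (b < 0)
        return a < lo - b;   // a + b would fall below the minimum
    return false;            // adding zero never overflows
}
```

Because the subtractions hi - b and lo - b stay within the representable range, the check itself cannot overflow.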
Eliminating unnecessary checks is achieved using templates and specialization. An arithmetic operator is specialized on whether overflow is possible and on the signedness of its operands. If no overflow is possible, the operation is simply executed; otherwise, the operation must check for overflow. The operator is specialized on signedness because overflow can be detected more efficiently with unsigned values: unsigned operations wrap around (modulo 2^N) instead of overflowing, so the check can be made after the fact. For signed values, the check must come first, because allowing the overflow to occur causes undefined behavior. The actual steps taken by the compiler are best illustrated by an example. Given the following statements:
typedef ranged_type<unsigned, 0, 100> R;
R x, y, z;  // The variables default to the minimum of
            // the range, which in this case is 0.
            // Sometime later these variables are
            // initialized; their actual values do not
            // matter for this example.
const R a = 5, b = 7, c = 1;
R r = ((x + a) * (y + b)) / (z - c);
The constants a, b, and c are initialized to their respective literals. They are range checked, and these checks may or may not be removed by the compiler's optimizer. On typical 32-bit machines, char is 8 bits, short is 16 bits, and int and long are both 32 bits. The compiler determines that the maximum number of digits required for the two sums is eight (the maximum number of digits of the two operands, plus one). A base type must be chosen for the intermediate value.
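The digit arithmetic described above can be sketched at compile time with a small recursive template. This is my own illustration, not the library's code, and the name digits_for is hypothetical:

```cpp
#include <cassert>

// Count the binary digits needed to represent a non-negative
// maximum value: digits_for<100>::value is 7, digits_for<5>::value
// is 3, and digits_for<0>::value is 0.
template <unsigned long Max>
struct digits_for
{
    static const int value = 1 + digits_for<(Max >> 1)>::value;
};

template <>
struct digits_for<0>
{
    static const int value = 0;
};

// A sum needs at most one digit more than the wider of its
// operands.  For x + a the operand maxima are 100 and 5, so the
// sum is assigned max(7, 3) + 1 = 8 digits.
static const int sum_digits =
    (digits_for<100>::value > digits_for<5>::value
         ? digits_for<100>::value
         : digits_for<5>::value) + 1;
```

The same digit counts drive every decision in the walkthrough that follows.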
An unsigned char would be a natural choice, but the traits class defines a static constant member called min_intermediate_bits. This member sets the minimum number of bits to use when choosing the base; it defaults (in this case) to the number of digits an unsigned int can hold. The purpose is to limit the amount of casting done as each operation is evaluated. If unsigned char or short were used, then for each operation the operands would be promoted to unsigned int (if not already) and subsequently cast back to unsigned char or short, until the maximum number of digits reached that of unsigned int. So unsigned int is chosen as the base type for the intermediate. Since an unsigned int can hold an eight-digit value, no overflow can occur and no check is done for either sum.
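The specialization scheme from earlier can be sketched as a boolean template parameter, computed from the digit counts, that selects between an unchecked and a checked operation. Again, the names are illustrative rather than the library's own:

```cpp
#include <cassert>
#include <stdexcept>

// Overflow impossible (as for the two sums above): just do it.
template <bool CheckNeeded>
struct add_impl
{
    static unsigned apply(unsigned a, unsigned b) { return a + b; }
};

// Overflow possible: unsigned wrap-around is well defined, so the
// check can be made after the fact -- the result wrapped if and
// only if it is smaller than one of the operands.
template <>
struct add_impl<true>
{
    static unsigned apply(unsigned a, unsigned b)
    {
        unsigned r = a + b;
        if (r < a)
            throw std::overflow_error("ranged addition overflowed");
        return r;
    }
};
```

For the sums in the example, the compiler would instantiate the add_impl<false> form, so no run-time cost is paid.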
Next is the multiplication. Given that the operands are each eight-digit values, an intermediate with at least 16 digits is needed. As before, although unsigned short would fit the bill, the compiler chooses unsigned int. The product will fit, so again no check is done.
Now comes the subtraction. Both operands are no more than seven digits, so the intermediate needs at least seven as well. This is a special case: even though unsigned int is chosen and can hold an eight-digit value, a check is still necessary because the difference may be negative. If signed int had been used, no check would have been needed. To keep the code simple, the current implementation defines all intermediate values using types with the same signedness as the base type of the ranged type.
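With an unsigned intermediate, a negative difference cannot be represented, so the check has to precede the operation. A minimal sketch (the function name is mine, not the library's):

```cpp
#include <cassert>
#include <stdexcept>

// The subtraction check for an unsigned intermediate: compare the
// operands first, since z - c would silently wrap if z < c.
inline unsigned checked_sub(unsigned a, unsigned b)
{
    if (b > a)
        throw std::range_error("ranged subtraction went negative");
    return a - b;
}
```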
The last of the arithmetic operations is the division. The dividend has a maximum of sixteen digits and the divisor has a maximum of seven, so the quotient will have a maximum of sixteen digits. This is another special case: a check for divide-by-zero has to be made. Finally, the result is assigned to the variable r. Since the lower bound is zero and the base type is unsigned, no check is made against it; the upper bound does have to be range checked, because the maximum value of an unsigned int is greater than 100.
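The final two steps can be sketched the same way. Both function names below are illustrative, and the hard-coded bound 100 comes from the example's range:

```cpp
#include <cassert>
#include <stdexcept>

// The divide-by-zero check made before the quotient is formed.
inline unsigned checked_div(unsigned dividend, unsigned divisor)
{
    if (divisor == 0)
        throw std::domain_error("ranged division by zero");
    return dividend / divisor;
}

// The assignment back into a ranged_type<unsigned, 0, 100>: the
// lower bound is 0 and the type is unsigned, so only the upper
// bound needs checking.
inline unsigned assign_checked(unsigned value)
{
    if (value > 100)
        throw std::range_error("value exceeds the range 0..100");
    return value;
}
```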
As sub-expressions are evaluated, the number of digits in the intermediate values grows, and the compiler will use larger integer types. It will even use implementation-specific types such as GCC's long long. This may be undesirable, so the traits type also defines a static constant member called max_intermediate_bits, which puts an upper limit on the number of digits when the compiler chooses the base type for an intermediate. It defaults to the number of digits in unsigned int, as appropriate.
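The type selection itself can be sketched with a small compile-time conditional. This is a simplified stand-in for the library's mechanism (a fuller version would also honor min_intermediate_bits and max_intermediate_bits and would extend to narrower and wider types):

```cpp
#include <cassert>
#include <limits>

// A minimal compile-time if-then-else over types.
template <bool Cond, typename Then, typename Else>
struct select { typedef Then type; };

template <typename Then, typename Else>
struct select<false, Then, Else> { typedef Else type; };

// Choose a base type for an intermediate that needs Bits digits:
// unsigned int when it is wide enough, otherwise unsigned long.
template <int Bits>
struct unsigned_intermediate
{
    typedef typename select<
        Bits <= std::numeric_limits<unsigned int>::digits,
        unsigned int,
        unsigned long
    >::type type;
};
```

In the walkthrough above, every intermediate needed at most sixteen digits, so unsigned_intermediate would always yield unsigned int.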