The Case Against int, Part 1
For most of us, int is probably the first type we learned in C or C++. Nevertheless, I have a hard time coming up with contexts in which it is genuinely useful beyond classroom exercises.
To see why, start by figuring out the value of n after executing

auto n = 200 * 200; // What is the value of n?
The correct answer is not simply 40000: depending on the implementation, n is either 40000 or undefined. The problem is that the C++ standard defines int as having "the natural size suggested by the architecture of the execution environment," and permits that size to be as small as 16 bits. Therefore, there is no guarantee that an implementation will support an int value larger than 32767.
The problem is compounded by the fact that today, it is hard to imagine a processor with an int smaller than 32 bits, except for very small embedded processors; and programs written for those processors will have other constraints in addition to the size of int. As a result, we have a situation in which all of the computers that people are likely to encounter offer a useful range for int, but if we take advantage of that range, our programs are not portable, at least not officially.
int was intended to be a type that could hold array indices, so its values were intended to be commensurate with the size of the computer's memory. Unfortunately, around 1980, some Bell Labs folks ported a C compiler to processors that happened to have 16-bit words and 32-bit addresses. At the time, most people expected int to be 16 bits. The compiler writers did not want to impose extra overhead on people who wrote int expecting 16-bit arithmetic, so they kept 16-bit int values. If you wanted to convert a pointer to an integer and retain its value, you had to convert it to long.
Meanwhile, C was being ported to other processors that had true 32-bit words. Sometimes, 16-bit arithmetic was actually more expensive than 32-bit arithmetic. On those processors, 32 bits was the natural size for int.

So the notion that int was a machine-dependent type entered the C vernacular well before C++ was even widely known. And of course, the desire to keep C++ compatible with C argued against changing this machine dependency.
Today the problem is just as bad. The growing number of 64-bit computers means that compiler writers again face an unpleasant choice: If int is commensurate with the size of a pointer, it has to be 64 bits; but what about the additional overhead for programs that expect int to be 32 bits? As a result, today's C++ programmers probably don't have to worry about int being 16 bits, but they may have to worry about int being 64 bits, either because they don't want the overhead or because they do want the extra capacity.
What is a programmer to do? The problem results from decades of history, so one should not expect the solution to be simple. It will probably come as no surprise that I suggest you begin by understanding what problem you are trying to use int to solve. Next week, I'll consider some answers to that question and their implications.