# CRC: The C Stands for Confusion

The Cyclic Redundancy Check (or CRC) is a pretty universal method of producing a check word to detect errors, especially in stored or transmitted data. It isn't hard to understand how it works, but it seems to be hard to specify exactly how to make a particular CRC and, even more so, how much CRC you need for a particular purpose.

Consider the simplest CRC: a parity bit. In principle, it is simple. If you and I agree to validate data with even parity, then any data word will have an extra bit appended so that the total number of 1 bits is even. So, for a 4-bit word, 0000 gets a parity bit of 0 (zero is considered even), and 1011 gets a parity bit of 1.

If you think about it, this is guaranteed to catch any single-bit error. In fact, it will detect any odd number of bit errors. Using 1011 as an example, if any one of the bits flips (including the parity bit), the parity will be incorrect and the receiver can deduce that an error occurred. If two bits flip, though, the parity will still check out (for example, if 1011 turns into 1000, the parity is still 1). A three-bit error would again trigger an error detection.
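The parity scheme above can be sketched in a few lines of Python (the function names are mine, purely for illustration):

```python
def even_parity(bits):
    """Parity bit that makes the total count of 1 bits even."""
    return sum(bits) % 2

def check(codeword):
    """A received codeword is consistent if its count of 1 bits is even."""
    return sum(codeword) % 2 == 0

data = [1, 0, 1, 1]
codeword = data + [even_parity(data)]   # 1011 gets a parity bit of 1
assert check(codeword)

one_bit_error = [0, 0, 1, 1, 1]         # flip the first bit: detected
assert not check(one_bit_error)

two_bit_error = [1, 0, 0, 0, 1]         # 1011 -> 1000: parity still checks out
assert check(two_bit_error)
```

The last assertion is exactly the failure mode described above: an even number of flips slips right past a single parity bit.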

It is pretty clear that parity (or a 1-bit CRC) is only good if you want to catch single-bit errors. In theory, you should be able to determine from your bit error rate how common 2-bit errors are and decide whether that probability is acceptable. If it isn't, you have to add more check bits.

The parity case just works out the way it does, but if you have a longer CRC, you have to do some math (or at least look something up in a table). If you really want to study the basic algorithm, check out Wikipedia, which has a pretty good write-up, or the classic "A Painless Guide to CRC Error Detection Algorithms." Here's the gist: Every CRC has a generator "polynomial." The algorithm divides the data packet (treated as a polynomial) by the generator polynomial, using binary division with no carries. You don't use the quotient; the remainder from the division is the CRC. The receiver repeats the process to determine if the CRC matches the data. If it doesn't, that implies an error occurred.

Don't get hung up on the word *polynomial*. The idea is that each bit is the coefficient of a polynomial. So, for a 4-bit CRC with polynomial 1001, the polynomial terms are X^3 and 1. There will always be a 1 at the end of the generator polynomial. In addition, there's usually an implied 1 at the start of the generator (but not always; keep reading). The implementation is simple and involves an `XOR` operation and a shifting of the generator in a way that mimics long division.
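Here's what that shift-and-XOR long division looks like bit by bit, using the worked example from the Wikipedia CRC article (message 11010011101100, divisor 1011, remainder 100). This is an illustrative sketch written for clarity, not speed:

```python
def crc_remainder(data_bits: str, divisor_bits: str) -> str:
    """Remainder of carry-less (XOR) long division; divisor includes its leading 1."""
    n = len(divisor_bits) - 1                      # width of the CRC
    reg = [int(b) for b in data_bits] + [0] * n    # append n zeros for the remainder
    div = [int(b) for b in divisor_bits]
    for i in range(len(data_bits)):
        if reg[i]:                                 # "subtract" (XOR) only when the top bit is set
            for j, d in enumerate(div):
                reg[i + j] ^= d
    return "".join(str(b) for b in reg[-n:])

crc = crc_remainder("11010011101100", "1011")      # "100"
# Appending the CRC makes the whole codeword divide evenly (all-zero remainder):
assert crc_remainder("11010011101100" + crc, "1011") == "000"
```

That zero-remainder check at the end is the "reverse process" a receiver performs to validate the data.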

The reason I'm not that interested in the algorithm itself is that you'll rarely write it yourself. There are plenty of libraries (and FPGA IP) out there that do the work. But that's part of the problem. Because it is so often abstracted, people don't always make smart choices about both selecting and specifying CRCs.

The first problem is selecting the right CRC. You already saw that parity is only reasonable for a certain number of bit errors. The same is true of any length CRC. The math is pretty stout, but it stands to reason that longer CRCs can protect more data against more bit errors. However, it isn't quite that simple. The selection of the polynomial makes a big difference as well.

Rather than go through the math, I'll refer you to the excellent paper "CRC Polynomial Selection for Embedded Networks" by Philip Koopman and Tridib Chakravarty. If you don't want to wade through the analysis, check out the table on page 6. It shows which CRCs can detect all errors of a certain size or smaller (related to the Hamming distance) for a certain number of bits. For example, a 10-bit CRC (with polynomial 0x327) can catch all 1- and 2-bit errors (HD=3) for up to 1,013 bits.

Obviously, if you have 1,000 bits and are only worried about catching 2-bit errors, a 32-bit CRC is overkill. Yet you see that all the time. You also see that some well-known systems use generator polynomials that are not very good. The paper covers up to 16-bit CRCs, although the same author has another paper that covers 32-bit CRCs less exhaustively.

So now you know how to pick a good CRC in length and the polynomial. What else is there? Well, you still have to agree between the generator and the checker exactly how things will be done. The bit size and the polynomial are part of the equation. But even that's subject to interpretation. There are at least three different ways polynomials are commonly expressed. The Wikipedia list sidesteps the issue by listing all three!
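For example, the CCITT 16-bit polynomial x^16 + x^12 + x^5 + 1 appears in tables as 0x1021 (normal notation, with the leading x^16 term implied), as 0x8408 (the bit-reversed form used by LSB-first implementations), or as 0x8810 (Koopman's notation, which keeps the x^16 term and implies the trailing 1 instead). The conversions between them are mechanical:

```python
NORMAL = 0x1021                  # x^16 + x^12 + x^5 + 1, leading x^16 term implied

def reflect(value: int, width: int) -> int:
    """Reverse the bit order of a value of the given width."""
    out = 0
    for _ in range(width):
        out = (out << 1) | (value & 1)
        value >>= 1
    return out

reversed_form = reflect(NORMAL, 16)       # 0x8408, for LSB-first code
koopman_form = (0x10000 | NORMAL) >> 1    # 0x8810: restore x^16, drop x^0
assert reversed_form == 0x8408 and koopman_form == 0x8810
```

Same polynomial, three different hex constants, and plenty of opportunity for two implementations to "disagree" while both being right.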

And once you agree what the polynomial means, there are several other parameters you have to agree upon. To make matters more confusing, there are some well-known programs that say they implement a particular CRC, but they do so using a different set of parameters than other well-known programs that supposedly implement the same algorithm.

As an example, consider the CCITT CRC-16. Everyone agrees the polynomial is 0x1021 (using the notation I usually use). However, you need to agree on the other implementation parameters:

- The direction the bits are processed in (MSB first or LSB first)
- The initial value
- If the output is reversed before final processing
- If the output is `XOR`ed with a constant at the end of the calculation

The direction matters only because the sender and the receiver have to agree to get the same answer. The other parameters accommodate different schemes to identify common errors that CRCs don't always catch. For example, if the initial value is zero, any number of leading zeros will not factor into the CRC calculation (that is, 00001 and 001 will have the same CRC).
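All of those knobs fit in one parameterized, bit-at-a-time CRC-16 sketch (unoptimized, for illustration only). The check values for the string "123456789" below are the ones published in the usual CRC catalogs for three common 0x1021 variants, and the last two assertions show the leading-zero effect just described:

```python
def crc16(data: bytes, poly=0x1021, init=0xFFFF, reflect=False, xor_out=0x0000) -> int:
    reg = init
    for byte in data:
        if reflect:                          # LSB-first: reverse each input byte
            byte = int(f"{byte:08b}"[::-1], 2)
        reg ^= byte << 8
        for _ in range(8):                   # one shift/XOR step per bit
            reg = ((reg << 1) ^ poly if reg & 0x8000 else reg << 1) & 0xFFFF
    if reflect:                              # reflected variants reverse the output, too
        reg = int(f"{reg:016b}"[::-1], 2)
    return reg ^ xor_out

msg = b"123456789"
assert crc16(msg, init=0xFFFF) == 0x29B1                 # "CCITT-FALSE"
assert crc16(msg, init=0x0000) == 0x31C3                 # XMODEM
assert crc16(msg, init=0x0000, reflect=True) == 0x2189   # Kermit
# With an all-zero initial value, a leading zero byte is invisible to the CRC:
assert crc16(b"\x00\x01", init=0) == crc16(b"\x01", init=0)
assert crc16(b"\x00\x01", init=0xFFFF) != crc16(b"\x01", init=0xFFFF)
```

Three "CCITT" CRCs, one polynomial, three different answers, which is exactly why the parameters have to be spelled out.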

A very competent tool to test CRC generation is jacksum. It is written in Java, so it should run where you need it. It implements several common CRC algorithms and even comments on the ambiguity in what CCITT CRC-16 means in its documentation. To avoid the problem, the author gives you a way to specify all of the parameters for any CRC you like. That means you get to pick which version of the CCITT you want to use.

Of course, CRCs are mostly of interest today in hardware and very small systems, or where you are matching some standard. If you have computing power, more sophisticated hashing algorithms are around (and jacksum will do those, too). But it pays to not just *use* a tool like CRC, but to *understand* it, so that it can be applied in a way that makes sense (or can be replaced by something else that makes more sense). It isn't sufficient to simply "use CRC" to do data protection.