Entropy
Entropy is a measure of data purity: the lower the entropy, the purer (more uniform) the data. For a column with n rows, let mx be the number of occurrences of some value x. The probability that x occurs in the column is then Px = mx/n. The overall entropy of the column can be calculated as shown in Figure 6. The result is a real number greater than or equal to zero that describes the distribution of values: an entropy of zero indicates that every value in the column is the same, while increasing entropy denotes more distinct values, a more even distribution, or both.
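Figure 6 is not reproduced here, but the description matches the standard Shannon entropy formula, H = -Σ Px log(Px), summed over the distinct values x in the column. A minimal sketch, assuming that formula with a base-10 logarithm (the base that reproduces the worked values later in this sidebar); the function name is my own:

```python
import math
from collections import Counter

def column_entropy(column):
    """Shannon entropy of a column of values, using a base-10 logarithm."""
    n = len(column)
    counts = Counter(column)  # m_x for each distinct value x
    return -sum((m / n) * math.log10(m / n) for m in counts.values())

# A column where every value is identical has zero entropy.
print(column_entropy(["a"] * 8))
```

A more varied column yields a larger number, up to the log(n) bound discussed next.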
The upper bound depends on the number of rows, but it is easy to find. Calculate the entropy as if every record were unique, so that Px = 1/n for each of the n rows; the sum then simplifies to log(n), which is the maximum entropy a column of n rows can have.
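To illustrate, here is a quick check (my own, with a base-10 logarithm assumed throughout) that the all-unique case reaches that bound: summing -(1/n) log(1/n) over n rows gives exactly log(n).

```python
import math

n = 1000
# Entropy when every row is unique: each Px = 1/n, summed over n rows.
all_unique = -sum((1 / n) * math.log10(1 / n) for _ in range(n))
print(all_unique, math.log10(n))  # both are approximately 3.0 for n = 1000
```

So a 1,000-row table has entropy values ranging from 0 up to 3, while a 10-row table tops out at 1.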
Once you know the range you're working with, you can measure the entropy of every column in the table and choose candidates based on their relative entropy. Consider a reasonably well-distributed column where the values for Px are 20 percent, 50 percent, 20 percent, and 10 percent. The entropy of this distribution is 0.53. A distribution of 10 different values, each covering 10 percent of the rows, yields an entropy of 1. In selecting your columns, you may want to choose only those whose entropy falls within a given range.
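Both figures can be checked directly from the probabilities; a base-10 logarithm is what reproduces 0.53 and 1 (the helper name here is my own):

```python
import math

def entropy_from_probs(probs):
    """Entropy of a distribution given as probabilities Px (base-10 log)."""
    return -sum(p * math.log10(p) for p in probs)

skewed = entropy_from_probs([0.20, 0.50, 0.20, 0.10])  # approximately 0.53
uniform = entropy_from_probs([0.10] * 10)              # approximately 1.0
print(round(skewed, 2), round(uniform, 2))
```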
Do this calculation for each column in a given table and you can compare the values to determine which columns are the most evenly distributed. Note, however, that because the maximum entropy depends on the number of rows in the table, you can't directly compare columns from tables with different numbers of rows.
D.R.S.