# The Wavelet Packet Transform

*Dr. Dobb's Journal*, April 1994


## Extending the wavelet transform

### Mac A. Cody

The wavelet transform enables analysis of data at multiple levels of resolution (also known as "scale"). In addition, transient events in the data are preserved by the analysis. When the wavelet transform (WT) is applied to a signal in the time domain, the result is a two-dimensional, time-scale domain analysis of the signal. The transform has proven useful for the compression and analysis of signals and images.

The fast wavelet transform (FWT) is an efficient implementation of the discrete wavelet transform (DWT). The DWT is the WT as applied to a regularly sampled data sequence. The transform of the data exhibits discrete steps in time on one axis, and discrete steps of resolution on another. The algorithm and a C language implementation of the FWT were presented in my article, "The Fast Wavelet Transform" (DDJ, April 1992).

Time and frequency resolution trade off across the levels of the DWT: the time scale on each level is twice that of the level below it and half that of the level above it; see Figure 1(b). This characteristic of the DWT poses problems when attempting to localize higher frequencies. Discrimination of frequency is sacrificed for time localization at the higher levels in the transform.

It turns out that the DWT is actually a subset of a far more versatile transform, the wavelet packet transform (WPT). Developed by Dr. Ronald R. Coifman of Yale University, the WPT generalizes the time-frequency analysis of the wavelet transform. It yields a family of orthonormal transform bases of which the wavelet transform basis is but one member.

In this article I'll develop the wavelet packet transform algorithm from its roots in the wavelet transform algorithm. After that, I'll present C code to implement the algorithm.

### From a Humble Root a Tree Shall Grow

In the fast wavelet transform algorithm, the sampled data set is passed through the scaling and wavelet filters (the convolution operation). They are, respectively, low-pass and high-pass filters with complementary bandwidths, also known as a quadrature mirror filter (QMF) pair. The outputs of both filters are decimated (downsampled) by a factor of two. The high-pass filtered data set is the wavelet transform detail coefficients at that level of scale of the transform. The low-pass filtered data set is the approximation coefficients at that level of scale. Due to the decimation, both sets of coefficients have half as many elements as the original data set.
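As a concrete illustration of this kernel operation (independent of the listings later in the article, which handle aperiodic boundaries more carefully), here is a minimal sketch using the two-tap Haar QMF pair; `haar_kernel` is a hypothetical name, and for two-tap filters no data extension is needed:

```c
#include <math.h>

/* One application of the orthonormal transform kernel with the Haar QMF
   pair: low-pass and high-pass filtering of x, each decimated by two.
   n is the (even) length of x; approx and detail each receive n/2 values. */
void haar_kernel(const double *x, double *approx, double *detail, int n)
{
    const double s = 1.0 / sqrt(2.0); /* normalization for orthonormality */
    for (int i = 0; i < n / 2; i++) {
        approx[i] = s * (x[2 * i] + x[2 * i + 1]); /* low-pass, decimated  */
        detail[i] = s * (x[2 * i] - x[2 * i + 1]); /* high-pass, decimated */
    }
}
```

For a locally constant signal the detail coefficients vanish, which is why the high-pass output isolates transients.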

You can turn the tree algorithm on end, with the initial data input at the top and the detail and approximation coefficients fanning out towards the bottom. The fast wavelet transform algorithm can now be viewed as a partial graph of a binary tree (the significance of this will be seen shortly); see Figure 2(b). The flow of the algorithm moves down and to the left, forming new levels of the transform from the approximation coefficients at higher levels. The detail "branches" are not used for further calculations.

Observe that the wavelet transform operation can be stopped at any level while working down the tree. The resulting "partial" transform is still a valid orthonormal transform. For example, if the unit interval for a data set were 32 points (2^5), the corresponding transform would have five levels (4 through 0). If the transform operation were stopped at level 2, the transform would have only three levels, but the approximation and detail coefficients of the transform would correspond exactly to a wavelet transform with a unit interval of 8 (2^3) samples; see Figure 2(b).
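The unit-interval arithmetic can be checked with a few lines of C; `wpt_levels` is a hypothetical helper (not part of the article's listings) that counts how many complete transform levels a data set of length n supports, one per factor of two:

```c
/* Number of complete transform levels available for a data set of
   length n: a unit interval of 2^J samples supports J levels. */
int wpt_levels(unsigned n)
{
    int j = 0;
    while (n > 1 && (n & 1u) == 0) { /* strip factors of two */
        n >>= 1;
        j++;
    }
    return j;
}
```

A 32-point data set yields five levels, and a 40-point data set (40 = 5 * 2^3) yields three.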

The implication of this observation is that the QMF pair is an orthonormal transform kernel, just as the butterfly operation is the kernel of the FFT. As long as the filters are designed to be orthonormal wavelet filters and the original data set meets the unit-interval requirement described above, repeated applications of the kernel will always yield an orthonormal transform.

Now, the set of detail and approximation coefficients at each level of the transform forms a pair of subspaces of the approximation coefficients of the next-higher level of scale and, ultimately, of the original data set. The subspaces created by the wavelet transform roughly correspond to the frequency subbands shown in Figure 1(a). These subspaces form a disjoint cover of the frequency space of the original data set. In other words, the subspaces have no elements in common, and the union of the frequency subbands spans the frequency range of the original data set.

What Coifman proposed is that any set of subspaces forming a disjoint cover of the original data set constitutes an orthonormal basis. The wavelet transform basis is then but one of a family of orthonormal bases with different subband intervals. As with the wavelet transform basis, each disjoint cover roughly corresponds to a covering of the frequency space of the original signal. Coifman dubbed this family a "wavelet packet library." The various orthonormal bases are formed by arbitrary applications of the orthonormal transform kernel to the detail coefficients as well as the approximation coefficients of higher transform levels.

The application of the transform kernel to both the detail and approximation coefficients results in an expansion of the structure of the fast wavelet transform tree algorithm. The tree algorithm for the wavelet packet transform can be represented as a full binary tree; see Figure 3. As read from left to right, the a and d symbols at each node indicate the sequence of orthonormal transform kernel filter operations that yields each particular subspace of the original data set. Each node in the transform tree also represents a particular wavelet packet. The transform coefficients computed at each node are a correlation of the original data set with a waveform function representing the wavelet packet.

For example, the sequence aaad in Figure 3 represents four operations of the orthonormal transform kernel and identifies one of 48 possible wavelet packets. (The combination of all possible translations in time and dilations in scale for the wavelet packets is J*2^J; in this instance J equals 4.) The first three are low-pass filter/decimation operations performed by the transform kernel. The fourth is a high-pass filter/decimation operation. This subspace should be recognizable as exactly the level 0 detail coefficients of the wavelet transform; the wavelet packet it represents corresponds to the wavelet function of the wavelet transform. Likewise, the wavelet packet represented by aaaa is the scaling function of the wavelet transform.
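The a/d naming maps directly onto positions in a full binary tree: reading the path left to right as a binary number, with a = 0 (low-pass branch) and d = 1 (high-pass branch), gives the node's index within its level, which is consistent with the listings' convention of placing the children of node j/2 at indices j and j+1. The helper below is illustrative, not part of the listings:

```c
/* Map a wavelet packet path such as "aaad" to its node index within its
   level of the full binary transform tree: a = 0 (low-pass branch),
   d = 1 (high-pass branch), most significant bit first. */
int wp_node_index(const char *path)
{
    int idx = 0;
    for (const char *p = path; *p; p++)
        idx = (idx << 1) | (*p == 'd');
    return idx;
}
```

The level of the node is simply the path length minus one under this convention.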

### Packets, Graphs, and Bases

The wavelet transform basis is actually a subset of a family of bases formed by the wavelet packet transform. The heavy lines in Figure 3 indicate the graph forming the wavelet basis. Note that the wavelet basis consists of the subspaces d, ad, aad, aaad, and aaaa. The sequences a, aa, and aaa are intermediate steps leading to the generation of the subspaces of the wavelet basis at the lower levels. Since the orthonormal transform kernel can be arbitrarily applied to either approximation or detail branches of the tree, J*2^J graphs representing different orthonormal bases can be created; see Figure 4.

The variety of orthonormal bases which can be formed by the WPT algorithm, coupled with the infinite number of wavelet and scaling functions which can be created, yields a very flexible analysis tool. The flexibility of the WPT versus the FWT can be compared to having a complete set of sockets for a ratchet rather than a single socket to attach to it. The ratchet (algorithm) works the same regardless of the socket (basis) that is chosen. The flexibility of the tool is in choosing the appropriate socket (basis) for the particular nut (problem). The choice of wavelet and scaling functions is then analogous to selecting from English, metric, or Torx socket sets for use with the ratchet. The WPT allows tailoring of the wavelet analysis to selectively localize spectral bands in the input data as well as to correlate the signal to the wavelet. Not only can the best wavelet be chosen to analyze a particular signal, but the best orthonormal basis can as well. In signal-processing terminology, the various bases of the wavelet packet transform can be used as arbitrary adaptive tree-structured filter banks.

### Piece-wise Convolutions and Traversing the Tree

The implementation of the WPT is itself a generalization of the FWT routine presented in my previous article. As with the FWT, the kernel operations are the decimating and interpolating convolutions, as presented in Listing One (page 101). The convolutions performed are actually piece-wise convolutions, due to the discrete nature of the data. The routine DualConvDec2 replaces the routines ConvolveDec2 and Dotp in the FWT code. The routine DualConvInt2Sum replaces ConvolveInt2, DotpOdd, and DotpEven in the inverse FWT code.

Both convolution routines are designed to operate upon aperiodic data of finite length. The data does not represent an infinitely repeating pattern and is assumed to be surrounded by zero-valued data; see Figure 5. To support this data model, each data array is appended with additional data storage equal to the length of the wavelet filter, minus one (the shaded elements). The extra data is filled with the terminating convolution values as the wavelet filters "slide off" the end of the data set. Extending the convolution data by this additional amount during the decomposition process of the forward transform ensures perfect reconstruction of the original input data by the inverse transform. DualConvDec2 also uses partial dot products at both ends of the convolution to simulate the implied zero-valued data (the dotted lines) outside of the data array. The dotted lines on the filter elements indicate coefficients not used in the partial dot product calculations. DualConvInt2Sum does not need to do this, since all information necessary for reconstruction is contained within the extended data arrays. Note that DualConvInt2Sum performs only the odd-valued dot product at the beginning of the convolution for the initial, reconstruction data point.

The WPT data is stored in the structure WPTstruct, defined in Figure 6(a). The structure contains storage for the number of levels in the transform, the length of the original, untransformed data array, and a pointer to a two-dimensional matrix of data arrays. The size of the matrix is dependent upon three factors. These are the number of levels in the transform, the length of the original data array, and the length of the transform filters. The length of the data array is itself affected by the length of the transform filters; see Figure 6(b).

Figure 6(c) shows the matrix structure as the wavelet packet binary tree. The data-array pointers are allocated memory from the heap as necessary to form the disjoint cover for the chosen orthonormal basis. Those array pointers not required for the disjoint cover are set to zero. The example shown in the figure represents a three-level wavelet basis for an input of 40 data points with transform filters containing six coefficients.
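Figure 6(a) and the header wp_types.h are not reproduced here, so the following is only a guess at their contents based on the description above. The REAL_TYPE typedef is assumed to be double, and `wpt_array_len` is a hypothetical helper computing the per-node array length: the decimated length at that level plus the filtlen - 1 element extension described in the text.

```c
typedef double REAL_TYPE; /* assumed; wp_types.h defines the actual type */

typedef struct {
    short levels;      /* number of levels in the transform            */
    short length;      /* length of the original, untransformed data   */
    REAL_TYPE ***data; /* data[level][node] -> coefficient array       */
} WPTstruct;

/* Physical length of the coefficient array at a node 'level' steps below
   the input: the decimated data length plus the filtlen - 1 extension. */
short wpt_array_len(short length, short level, short filtlen)
{
    return (short)((length >> (level + 1)) + filtlen - 1);
}
```

Under these assumptions, the level-0 arrays in the Figure 6(c) example (40 input points, six-coefficient filters) would hold 20 + 5 = 25 elements each.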

The routine DualConvDec2 is used by AWPT, the forward wavelet packet transform routine; see Listing Two (page 101). AWPT accepts pointers to the input data array, the WPT data structure, and the transform filter arrays, as well as the length of the filters. The transform routine works down the levels of the binary tree, performing convolutions on the data arrays. Each level of the binary tree is traversed by taking adjacent pairs of array pointers as the destination nodes for the low-pass and high-pass convolutions. If the array pointer for the low-pass convolution is zero, the convolutions are not performed, since the destinations are not part of the current disjoint cover.

At the highest level (where i equals 0) the input data array is the source for the convolutions. On each subsequent level, the sources are the arrays on the previous level. The appropriate array in the binary tree is determined by dividing the current j index by 2. After each convolution operation, the data length is divided by two, in order to keep track of the effect of the decimation operation during the convolutions.

DualConvInt2Sum is used by IAWPT, the inverse wavelet packet transform routine. IAWPT accepts pointers to the source WPT data structure, the output data array, the transform filter arrays, and the length of the filters. The inverse transform routine works up the levels of the binary tree, performing convolutions on the data arrays and reconstructing the higher-resolution data on each level. Each level of the binary tree is traversed in the same fashion as was AWPT. The destination arrays are on the next-higher level of the matrix, and they are selected by dividing the j index by 2. At the highest level (i equals 0), the destination array is the output data array. After each convolution operation, the data length is doubled to keep track of the effect of the interpolation operation during the convolutions.

The code listings presented here are written in ANSI C and have been tested with Borland Turbo C 2.0. They should compile on any compiler that is compliant with ANSI C. I've also written a wavelet packet transform demonstration program which is available electronically; see "Availability" on page 3. The electronic version includes the demo program, sample data files, support drivers, and documentation.

### Conclusion

The wavelet packet transform generalizes the discrete wavelet transform and provides a more flexible tool for the time-scale analysis of data. All of the advantages of the fast wavelet transform are retained since the wavelet basis is in the repertoire of bases available with the wavelet packet transform. Given this, the wavelet packet transform may eventually become a standard tool in signal processing.

### References

Cody, Mac A. "The Fast Wavelet Transform." Dr. Dobb's Journal (April 1992).

Coifman, Ronald R., Yves Meyer, and Victor Wickerhauser. Wavelet Analysis and Signal Processing. New Haven, CT: Yale University, 1991, preprint.

#### [LISTING ONE]

/* CONVOLVE.C */

#include "convolve.h"

/* DualConvDec2 - Convolution of data array with decomposition filters
followed by decimation by two.
Input(s): REAL_TYPE *src    - Pointer to source data sample array.
REAL_TYPE *htilda - Pointer to lowpass filter array.
REAL_TYPE *gtilda - Pointer to highpass filter array.
short srclen      - length of source data array.
short filtlen     - length of filter arrays.
Output(s): REAL_TYPE *adst  - Pointer to approximation data sample array.
REAL_TYPE *ddst  - Pointer to detail data sample array.
*/
void DualConvDec2(REAL_TYPE *src, REAL_TYPE *adst, REAL_TYPE *ddst,
                  REAL_TYPE *htilda, REAL_TYPE *gtilda, short srclen, short filtlen)
{
    short i, j, brklen, endlen;
    REAL_TYPE *head_src, *lp_fltr, *hp_fltr;
    REAL_TYPE adot_p, ddot_p;

    /* src must provide srclen + filtlen - 1 accessible elements: the
       nominal data plus the extension described in the text */
    brklen = 1; /* initial break in dot product is after first element */
    /* perform truncated dot products until filter no longer hangs off the
       beginning of the array; decimation by two handled by two-element
       shift; break count increases by two on every iteration */
    for (j = 0; j < filtlen; j += 2, brklen += 2)
    {
        head_src = src + j; /* point to appropriate initial element at head */
        lp_fltr = htilda;   /* set up pointer to lowpass filter */
        hp_fltr = gtilda;   /* set up pointer to highpass filter */
        adot_p = *head_src * *lp_fltr++;   /* initial lowpass product of head */
        ddot_p = *head_src-- * *hp_fltr++; /* initial highpass product of head */
        for (i = 1; i < brklen; i++) /* perform remaining products of head */
        {
            adot_p += *head_src * *lp_fltr++;
            ddot_p += *head_src-- * *hp_fltr++;
        }
        *adst++ = adot_p; /* save the completed lowpass dot product */
        *ddst++ = ddot_p; /* save the completed highpass dot product */
    }
    endlen = srclen + filtlen - 2; /* last index of the extended array */
    /* perform convolution to the physical end of the array
       with a simple dot product loop */
    for (; j <= endlen; j += 2)
    {
        head_src = src + j; /* point to appropriate initial element at head */
        lp_fltr = htilda;   /* set up pointer to lowpass filter */
        hp_fltr = gtilda;   /* set up pointer to highpass filter */
        adot_p = *head_src * *lp_fltr++;   /* initial lowpass product */
        ddot_p = *head_src-- * *hp_fltr++; /* initial highpass product */
        for (i = 1; i < filtlen; i++) /* perform remaining products */
        {
            adot_p += *head_src * *lp_fltr++;
            ddot_p += *head_src-- * *hp_fltr++;
        }
        *adst++ = adot_p; /* save the completed lowpass dot product */
        *ddst++ = ddot_p; /* save the completed highpass dot product */
    }
    /* perform convolution off the physical end of the array
       with a partial dot product loop, like at the beginning */
    for (brklen = filtlen - 2, j = 2; brklen > 0; brklen -= 2, j += 2)
    {
        head_src = src + endlen; /* point to last element in array */
        lp_fltr = htilda + j;    /* skip filter taps over implied zeros */
        hp_fltr = gtilda + j;
        adot_p = *head_src * *lp_fltr++;   /* initial lowpass product */
        ddot_p = *head_src-- * *hp_fltr++; /* initial highpass product */
        for (i = 1; i < brklen; i++) /* perform remaining products */
        {
            adot_p += *head_src * *lp_fltr++;
            ddot_p += *head_src-- * *hp_fltr++;
        }
        *adst++ = adot_p; /* save the completed lowpass dot product */
        *ddst++ = ddot_p; /* save the completed highpass dot product */
    }
} /* End of DualConvDec2 */
/* DualConvInt2Sum - Convolution of data array with reconstruction
filters with interpolation by two and sum together.
Input(s): REAL_TYPE *asrc   - Pointer to approximation data sample array.
REAL_TYPE *dsrc   - Pointer to detail data sample array.
REAL_TYPE *h      - Pointer to lowpass filter array.
REAL_TYPE *g      - Pointer to highpass filter array.
short srclen      - length of source data array.
short filtlen     - length of filter arrays.
Output(s): REAL_TYPE *dst  - Pointer to output data sample array.
*/
void DualConvInt2Sum(REAL_TYPE *asrc, REAL_TYPE *dsrc, REAL_TYPE *dst,
                     REAL_TYPE *h, REAL_TYPE *g, short srclen, short filtlen)
{
    short i, j, endlen;
    REAL_TYPE *head_asrc, *head_dsrc, *lp_fltr, *hp_fltr;
    REAL_TYPE dot_pe, dot_po;

    endlen = srclen + filtlen - 2; /* last index of the extended arrays */
    filtlen /= 2;         /* adjust filter length value for interpolation */
    j = filtlen - 1;      /* start with filter 'covering' head of array */
    head_asrc = asrc + j; /* point to initial element at head */
    head_dsrc = dsrc + j; /* point to initial element at head */
    lp_fltr = h + 1;      /* set up pointer to odd lowpass filter elements */
    hp_fltr = g + 1;      /* set up pointer to odd highpass filter elements */
    /* initial lowpass and highpass odd product */
    dot_po = *head_asrc-- * *lp_fltr + *head_dsrc-- * *hp_fltr;
    lp_fltr += 2;  hp_fltr += 2; /* skip over even filter elements */
    for (i = 1; i < filtlen; i++) /* perform remaining products */
    {
        dot_po += *head_asrc-- * *lp_fltr + *head_dsrc-- * *hp_fltr;
        lp_fltr += 2;  hp_fltr += 2; /* skip over even filter elements */
    }
    /* save the completed lowpass and highpass odd dot product; only the
       odd product exists for the initial reconstruction data point */
    *dst++ = dot_po;
    /* perform remaining convolution with a simple dual dot product loop,
       producing an even and an odd output point per source position */
    for (j++; j <= endlen; j++)
    {
        head_asrc = asrc + j; /* point to appropriate initial element at head */
        head_dsrc = dsrc + j; /* point to appropriate initial element at head */
        lp_fltr = h;          /* set up pointer to lowpass filter */
        hp_fltr = g;          /* set up pointer to highpass filter */
        /* initial lowpass and highpass even product */
        dot_pe = *head_asrc * *lp_fltr++ + *head_dsrc * *hp_fltr++;
        /* initial lowpass and highpass odd product */
        dot_po = *head_asrc-- * *lp_fltr++ + *head_dsrc-- * *hp_fltr++;
        for (i = 1; i < filtlen; i++) /* perform remaining products */
        {
            dot_pe += *head_asrc * *lp_fltr++ + *head_dsrc * *hp_fltr++;
            dot_po += *head_asrc-- * *lp_fltr++ + *head_dsrc-- * *hp_fltr++;
        }
        /* save the completed lowpass and highpass even dot product */
        *dst++ = dot_pe;
        /* save the completed lowpass and highpass odd dot product */
        *dst++ = dot_po;
    }
} /* End of DualConvInt2Sum */

#### [LISTING TWO]

/* AWPT.C */

#include "wp_types.h"
#include "awpt.h"
#include "convolve.h"

/* AWPT - Aperiodic Wavelet Packet Transform: Data is assumed to be
non-periodic, so convolutions do not wrap around arrays.
Convolution data past end of data is generated and retained
to allow perfect reconstruction of original input.
Input(s): REAL_TYPE *indata - Pointer to input data sample array.
REAL_TYPE *htilda - Pointer to lowpass filter array.
REAL_TYPE *gtilda - Pointer to highpass filter array.
short filtlen     - Length of filter arrays.
Output(s): WPTstruct *out - Pointer to transform data structure.
Note: Structure pointed to by 'out' contains:
out->levels - Number of levels in transform (short).
out->length - Length of input data sample array (short).
out->data   - Pointer to pointer of arrays of data (REAL_TYPE ***).
*/
void AWPT(REAL_TYPE *indata, WPTstruct *out,
          REAL_TYPE *htilda, REAL_TYPE *gtilda, short filtlen)
{
    short i, j, levels, nodes, datalen;
    REAL_TYPE *src;

    levels = out->levels;
    datalen = out->length; /* source length at highest level is input length */
    /* loop for all levels, halving the data length on each lower level */
    for (i = 0, nodes = 2; i < levels; i++, nodes <<= 1)
    {
        for (j = 0; j < nodes; j += 2)
        {
            if (out->data[i][j] == 0) /* skip nodes outside the disjoint cover */
                continue;
            if (i == 0)      /* ... source for highest level is input data */
                src = indata;
            else             /* ... source is corresponding node of higher level */
                src = out->data[i-1][j >> 1];
            DualConvDec2(src, out->data[i][j], out->data[i][j+1],
                         htilda, gtilda, datalen, filtlen);
        }
        datalen /= 2; /* input length for next level is half this level */
    }
} /* End of AWPT */
/* IAWPT - Inverse Aperiodic Wavelet Packet Transform: Data is assumed to be
non-periodic, so convolutions do not wrap around arrays.
Convolution data past end of data is used to generate perfect
reconstruction of original input.
Input(s): WPTstruct *in     - Pointer to transform data structure.
REAL_TYPE *htilda - Pointer to lowpass filter array.
REAL_TYPE *gtilda - Pointer to highpass filter array.
short filtlen     - Length of filter arrays.
Note: Structure pointed to by 'in' contains:
in->levels - Number of levels in transform (short).
in->length - Length of output data sample array (short).
in->data   - Pointer to pointer of arrays of data (REAL_TYPE ***).
Output(s): REAL_TYPE *outdata - Pointer to output data sample array.
*/
void IAWPT(WPTstruct *in, REAL_TYPE *outdata,
           REAL_TYPE *htilda, REAL_TYPE *gtilda, short filtlen)
{
    short i, j, levels, nodes, datalen;
    REAL_TYPE *dst;

    levels = in->levels;
    datalen = in->length >> levels; /* data length at the lowest level */
    /* loop for all levels, doubling the data length on each higher level;
       destination of all but the highest branch of the reconstruction
       is the next higher node */
    for (i = levels - 1, nodes = 1 << levels; i >= 0; i--, nodes >>= 1)
    {
        for (j = 0; j < nodes; j += 2)
        {
            if (in->data[i][j] == 0) /* skip nodes outside the disjoint cover */
                continue;
            if (i == 0) /* ... destination for highest level is output data */
                dst = outdata;
            else        /* ... destination is corresponding node of higher level */
                dst = in->data[i - 1][j >> 1];
            DualConvInt2Sum(in->data[i][j], in->data[i][j+1], dst,
                            htilda, gtilda, datalen, filtlen);
        }
        datalen *= 2; /* data length for next level is twice this level */
    }
} /* End of IAWPT */
End Listings
