 # Performance Library Basics

Lori Matassa and Max Domeika are the authors of "Break Away with Intel Atom Processors: A Guide to Architecture Migration"

With each successive generation of processors, fewer developers want to program in assembly language. Further, every generation has required a new round of assemblers, compilers, and developer expertise. Tools vendors recognized the need to provide, with every processor release, a storehouse of optimized code that developers could draw upon. Thus performance libraries targeting several application domains have been developed. In this article, we discuss the basics of performance libraries and how different areas of computer science benefit.

Performance libraries are constructed by first identifying common functions used in different application areas, then producing optimized routines to perform the required functionality. These optimized routines are built from lower level primitive operations ordered in such a way as to maximize performance on a given hardware platform.

These routines can be classified into domain-specific groupings. For instance, groupings pertinent to discussing Intel's Integrated Performance Primitives (IPP) are:

• Matrix and vector mathematics
• Signal processing and digital filtering
• Cryptography
• Image compression and decompression
• Video encode and decode
• Image processing and object recognition
• String processing

### Matrix and Vector Mathematics

Matrix and vector mathematics are the building blocks for higher level library functionality and are also employed in all manner of scientific and engineering computation. Thus, many performance libraries provide basic linear algebra capabilities offering optimized functions such as those for dot products, matrix multiplication, inverse, and normalization. BLAS is the prototypical matrix and vector mathematics library.
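As a rough illustration of the primitives such a library optimizes, here is a NumPy sketch of the operations named above (dot product, matrix multiplication, inverse, and normalization). NumPy stands in for a BLAS-style library here; this is not IPP's actual API.

```python
import numpy as np

# The kinds of primitives a BLAS-style library provides in optimized form.
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
dot = np.dot(a, b)            # dot product: 1*4 + 2*5 + 3*6 = 32

M = np.array([[2.0, 0.0],
              [0.0, 4.0]])
Minv = np.linalg.inv(M)       # matrix inverse
prod = M @ Minv               # matrix multiplication; should be the identity

unit_a = a / np.linalg.norm(a)  # normalization to a unit-length vector
```

In practice NumPy itself delegates these calls to an underlying BLAS/LAPACK implementation, which is exactly the layering the article describes: common functions on top, optimized primitives underneath.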

### Signal Processing and Digital Filtering

Signal processing is the analysis of, typically, time series data measurements of some physical events. These events can be sound, images, or other sampled representation of a physical occurrence. The types of processing on these signals include filtering, analysis, and transformations. Applications for signal processing include audio content creation and speech recognition.

Digital filters take a series of inputs, manipulate them in some way without destroying all of the original information, and produce a series of outputs. This "some way" generally preserves most characteristics of the original data but removes others; often it involves noise removal or information extraction. The inputs are often samples from some sensor or medium, such as audio from a microphone or CD player, or samples of a carrier wave from an antenna; however, regardless of their source and meaning, by the time the samples are filtered they are expressed as numerical entries in an array.
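A minimal example of this idea is a moving-average filter: it smooths noise while preserving the broad shape of the input. This sketch uses NumPy and is illustrative only; it is not an IPP routine.

```python
import numpy as np

def moving_average(x, k=3):
    """Simple time-domain digital filter: each output sample is the
    mean of k consecutive inputs (a crude low-pass noise smoother)."""
    kernel = np.ones(k) / k
    return np.convolve(x, kernel, mode="valid")

samples = np.array([1.0, 1.0, 4.0, 1.0, 1.0])  # a noisy spike
smoothed = moving_average(samples, k=3)
# The spike's energy is spread out and attenuated: [2.0, 2.0, 2.0]
```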

Filters are classified based upon a number of different properties:

• Linear versus nonlinear. A linear filter is one that meets the following criteria:
  • Additivity: adding two inputs and then filtering produces the same result as filtering each and then adding the two outputs: F(x1 + x2) = F(x1) + F(x2).
  • Homogeneity: multiplying the input by a constant and then filtering produces the same result as filtering and then multiplying the output by that constant: F(ax1) = aF(x1).

• Time-invariant versus time-varying. Time-invariant filters are defined as systems for which shifting the input in time shifts the output by the same amount without otherwise affecting it. Linearity and time-invariance simplify both the study and implementation of filters. The core Intel IPP filter operations are linear and time-invariant: a linear filter can be expressed as an array of coefficients, and a time-invariant filter is one for which that array does not change over the life of the filter. Frequency-based filters are inherently time-invariant.
• Causal versus noncausal. A causal function is one for which the output at time n only refers to inputs from time n or earlier.
• Real time versus batch mode. If the software system is taking input signals and producing outputs for immediate consumption, it might be thought of as a real-time system. By contrast, systems that have the entire signal available at once and filter it are termed batch-mode.
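The linearity and time-invariance properties above can be checked numerically. The sketch below uses a fixed-coefficient convolution, which is the canonical example of a linear, time-invariant filter; the coefficients are arbitrary, chosen only for illustration.

```python
import numpy as np

# A convolution with fixed coefficients is a linear, time-invariant filter.
coeffs = np.array([0.25, 0.5, 0.25])
F = lambda x: np.convolve(x, coeffs, mode="full")

x1 = np.array([1.0, 2.0, 3.0, 4.0])
x2 = np.array([0.5, -1.0, 2.5, 0.0])
a = 3.0

# Additivity: F(x1 + x2) == F(x1) + F(x2)
additive = np.allclose(F(x1 + x2), F(x1) + F(x2))
# Homogeneity: F(a * x1) == a * F(x1)
homogeneous = np.allclose(F(a * x1), a * F(x1))

# Time invariance: delaying the input by one sample delays the
# output by one sample, without otherwise changing it.
delayed_input = np.concatenate(([0.0], x1))
time_invariant = np.allclose(F(delayed_input)[1:], F(x1))
```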

### Fourier Transforms

The Fourier transform converts a signal indexed by time into a signal indexed by frequency. The inverse Fourier transform converts this signal back into the original time-based signal. The same information is contained in both the time-based and frequency-based versions of the signal, but the information is arranged differently.

The Fourier transform calculates the set of sinusoids that best represent the original signal. For this reason the sinusoid is called the basis function of the Fourier transform. A related transform, the Discrete Cosine Transform (DCT), also uses sinusoids, but transforms can and do use square waves, triangle waves, and so on as their basis functions. The use of the sinusoid by the Fourier transform matches a typical physical or electrical oscillation and is therefore appropriate to break down frequency in audio and other physical systems.

The version of this transform that is relevant to software-based filtering is the Discrete Fourier Transform (DFT). The DFT is very similar to the Fourier series, in that it breaks a signal into a series of frequencies.
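NumPy's FFT routines compute the DFT, and a small experiment shows the decomposition into frequencies: a pure sinusoid concentrates all of its energy in a single frequency bin. The sample rate and frequency here are arbitrary illustrative choices.

```python
import numpy as np

# Sample a pure 5 Hz sinusoid at 64 Hz for one second.
fs = 64
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 5 * t)

# The DFT (computed via the FFT algorithm) breaks the signal
# into its frequency components.
spectrum = np.fft.fft(signal)
magnitudes = np.abs(spectrum)

# All the energy lands in the 5 Hz bin (plus its mirror image
# at fs - 5 = 59 Hz, a property of real-valued inputs).
peak_bin = int(np.argmax(magnitudes[:fs // 2]))
```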

Appealing characteristics of the Fourier transform with respect to digital filtering include:

• Linearity. The Fourier transform is a linear function. One consequence is that a time-domain signal can be interpreted as the sum of several independent frequency components. If each single spike transforms to a sine wave, then several spikes transform to the sum of the corresponding sine waves.
• Reciprocity. The Fourier transform is reversible. It is possible to reconstruct the original signal from the information in the Fourier coefficients. If it weren't possible, the DFT wouldn't be useful for filtering, only for analysis.
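Both properties are easy to demonstrate numerically. This sketch (NumPy, with an arbitrary random test signal) checks linearity and then the round trip through the forward and inverse DFT.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(128)
y = rng.standard_normal(128)

# Linearity: the transform of a sum is the sum of the transforms.
linear = np.allclose(np.fft.fft(x + y), np.fft.fft(x) + np.fft.fft(y))

# Reciprocity: the inverse DFT reconstructs the original signal
# from its Fourier coefficients (up to floating-point error).
roundtrip = np.fft.ifft(np.fft.fft(x)).real
```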

### Classifying Filters

Filters are often classified as "pass" or "stop": a pass filter attenuates all frequencies outside a given range, while a stop filter attenuates the frequencies within it. This is generally modified with "low," "high," or "band," indicating that the range is the low end of the frequency spectrum, the high end, or a contiguous range of frequencies in the middle. For example, a high-pass filter tries to remove all but a specified range of high frequencies, while a band-stop filter tries to eliminate a range of frequencies. A band-stop filter is often referred to as a "notch" filter, particularly when the band in question is narrow.
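A crude low-pass filter can be built directly in the frequency domain by zeroing the bins above a cutoff. This is a teaching sketch, not a production design (real filters use smooth transition bands rather than a hard cutoff); the sample rate and frequencies are arbitrary.

```python
import numpy as np

def fft_lowpass(signal, fs, cutoff_hz):
    """Crude frequency-domain low-pass filter: zero every
    frequency bin above the cutoff, then transform back."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[freqs > cutoff_hz] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

fs = 64
t = np.arange(fs) / fs
low = np.sin(2 * np.pi * 3 * t)    # 3 Hz component (passed)
high = np.sin(2 * np.pi * 20 * t)  # 20 Hz component (stopped)
filtered = fft_lowpass(low + high, fs, cutoff_hz=10)
# Only the 3 Hz component survives.
```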

### Time-Domain and Two-Dimensional Filters

Frequency-domain filtering is conceptually simple and intuitive. Time-domain filtering, on the other hand, can be mathematically simple but generally has less-than-intuitive results. Frequency-domain filtering typically consists of simple multiplications to enhance or attenuate certain frequencies. In the time domain, that simple element-by-element multiplication with a frequency spectrum becomes a convolution with a signal. The filter that was designed to modify the frequencies becomes an almost random-looking sequence of filter coefficients.

The same mathematical operations that produce filters in one-dimensional signals can be applied to two-dimensional images, generally by a simple expansion of the formula. Image filters can be designed and analyzed using the Fourier transform, and filters can be executed in the frequency domain or using convolution.
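To show the one-to-two-dimensional expansion concretely, here is a 3x3 mean filter, the two-dimensional analogue of the moving average: each output pixel is the average of its neighborhood, which blurs (low-pass filters) an image. Hand-rolled in NumPy for self-containment; a library would provide an optimized 2-D convolution instead.

```python
import numpy as np

def mean3x3(img):
    """Two-dimensional mean filter: each output pixel is the mean
    of its 3x3 neighborhood (zero-padded at the borders)."""
    padded = np.pad(img, 1)
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + 3, j:j + 3].mean()
    return out

image = np.zeros((5, 5))
image[2, 2] = 9.0           # one bright pixel
blurred = mean3x3(image)    # its value is spread evenly over a 3x3 area
```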

