Full-Text Searching & the Burrows-Wheeler Transform

Here's an indexing method that lets you find any character sequence in the source text using a structure that can fit the entire source text and index into less space than the text alone.


December 01, 2003
URL:http://www.drdobbs.com/windows/parallel-linq/windows/full-text-searching-the-burrows-wheeler/184405504

When it comes to full-text indexing, we usually think of methods such as inverted indices that break text on word boundaries, consequently requiring search terms to be whole words only. Yet all of us probably have had the experience of searching for not-quite-words — C++, VM/CMS, SQL*Plus, 127.0.0.1, <blink>, and the like — that were skipped by an inverted index, or broken into less-distinctive pieces. The same goes when you are working with data such as DNA sequences, where you need to quickly find any sequence of symbols.

In this article, I examine an indexing method that lets you find any character sequence in the source text — in time only proportional to the sequence length — using a structure that can compress the entire source text and index into less space than the text alone. This technique is exceptionally fast at detecting and counting occurrences of any string in the source text. The fact that you can build a string match incrementally—adding one character at a time and seeing the result at each step—gives you the flexibility to explore variable patterns such as regular expressions with maximum effectiveness.

Finding any character sequence in source text—and fast!

In "Fast String Searching with Suffix Trees" (DDJ, August 1996), Mark Nelson addressed full-text indexing using suffix trees, while in "Data Compression with the Burrows-Wheeler Transform" (DDJ, September 1996) he focused on the use of the Burrows-Wheeler Transform (BWT) for compression. Although the BWT is commonly known as a data-compression technique, researchers have found that block-sorted data has a structure that lends itself naturally to search, while using space close to its minimal compressed size. This was first demonstrated in the FM index (see "Opportunistic Data Structures with Applications," by Paolo Ferragina and Giovanni Manzini, Proceedings of the 41st IEEE Symposium on Foundations of Computer Science, 2000). In short, the same transformation that yields high compression ratios by grouping similar substrings together also lets you find arbitrary substrings with little overhead.

Block Sorting Reviewed

When block sorting, you construct a list of n blocks, each consisting of the length-n source text S (usually terminated by a special end-of-string character $) cyclically shifted by zero to n-1 positions. When you sort the blocks, you get a matrix M; see Figure 1(a). The first column of M is called F, and the last, L. F has a simple structure, containing all the characters of S in sorted order, with duplicates. Column L has a more complex structure that contains enough information to reconstruct the original string, and usually forms the basis for BWT compression.

Figure 1(a): Block-sorted text. Start with the source string "abracadabra." Append an end-of-file metacharacter $, with the property that $<a-z. Cyclically shift the string from 0 to n-1 places, and sort the resulting list. Each row is called a "block," containing original text left-shifted 0 to n-1 places. M is the 12×12 block-sorted matrix for string "abracadabra$," F is the first column of the matrix, L is the last column.
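To make the layout concrete, here is a minimal sketch (my own illustration, not part of bwtindex.cc) that builds the sorted blocks for "abracadabra$" the naive way and reads off the F and L columns:

#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

int main() {
    std::string s = "abracadabra$";          // source text plus end marker
    size_t n = s.size();

    std::vector<std::string> m;              // the n cyclic shifts ("blocks")
    for (size_t i = 0; i < n; ++i)
        m.push_back(s.substr(i) + s.substr(0, i));
    std::sort(m.begin(), m.end());           // block-sort: the rows of M

    std::string f, l;                        // first and last columns
    for (const std::string& row : m) {
        f += row.front();
        l += row.back();
    }
    std::cout << "F: " << f << "\nL: " << l << "\n";
}

Running it prints F as "$aaaaabbcdrr" and L as "ard$rcaaaabb". The quadratic space this naive version needs is exactly what the character-and-link representation below eliminates.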

In its naive form, M contains n² characters, but a simple trick represents each block by its first character and a link to the block starting one character to the right; see Figures 1(b) and 1(c). To decode a block, you read its first character, follow its link to the next block, read its character and link, and repeat the process until the desired number of characters have been read. This character-and-link representation slashes spatial complexity from O(n²) to O(n).

Figure 1(b): Replacing abracadabra$ with "a" and a pointer to bracadabra$a.

Figure 1(c): Twelve blocks replaced with F column and 12 pointers to suffixes.

The links act as a permutation on M, which I call FL because it permutes the orderly F column to the higher-entropy L column. FL is the permutation caused by shifting M one column to the left and resorting it; each row i moves to a new position FL[i].
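The permutation is easy to derive once the blocks are sorted. The sketch below (again my own, with a hypothetical buildFL helper rather than anything from bwtindex.cc) records the text offset at which each sorted block starts, then links each row to the row whose block starts one character further to the right:

#include <algorithm>
#include <numeric>
#include <string>
#include <vector>

std::vector<int> buildFL(const std::string& s) {
    int n = (int)s.size();

    // offsets of the rotations, sorted by the rotation text (the block sort)
    std::vector<int> offset(n);
    std::iota(offset.begin(), offset.end(), 0);
    std::sort(offset.begin(), offset.end(), [&](int a, int b) {
        return s.substr(a) + s.substr(0, a) < s.substr(b) + s.substr(0, b);
    });

    // rank[p] = row of M holding the block that starts at text offset p
    std::vector<int> rank(n);
    for (int row = 0; row < n; ++row) rank[offset[row]] = row;

    // FL[row] = row of the block starting one character to the right
    std::vector<int> fl(n);
    for (int row = 0; row < n; ++row)
        fl[row] = rank[(offset[row] + 1) % n];
    return fl;
}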

Since the F column is a sorted list of characters, the next space saver is to change from explicitly storing the F column to simply recording the starting position of each character's range in F, using an array that I call "C"; see Figure 1(c). At a given position i in M, you look through C to find the section c of F that contains i. The F() method in bwtindex.cc applies this idea. Also see Figure 1(d).

Figure 1(d): Discard F column and use C to identify characters.
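Here is one way the C array and an F() lookup can be realized; this is a sketch under my own naming, not the actual code from bwtindex.cc. C is built by counting characters, and the character at row i is recovered by finding the range that covers i:

#include <string>
#include <vector>

// C[ch] = first row of M whose block starts with character ch
std::vector<int> buildC(const std::string& s) {
    std::vector<int> count(256, 0);
    for (unsigned char ch : s) ++count[ch];
    std::vector<int> c(257, 0);
    for (int ch = 0; ch < 256; ++ch) c[ch + 1] = c[ch] + count[ch];
    return c;                       // c[ch]..c[ch+1]-1 is ch's range in F
}

// F(i): recover the first character of block i from C alone
unsigned char F(const std::vector<int>& c, int i) {
    int ch = 0;
    while (c[ch + 1] <= i) ++ch;    // linear scan; a binary search also works
    return (unsigned char)ch;
}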

By storing only FL and C, you have a reasonable—but not minimal—representation of M, and you can decode successive characters of the source from left to right. The decode method in bwtindex.cc shows how to carry out this iteration. See Figure 1(e).

Figure 1(e): Decoding from any position. Chase pointers and translate each position to char using C. The decode method shows this technique in bwtindex.cc.
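The forward decode is then a short loop. This is a sketch of the idea behind the decode method rather than the article's code; given FL, C, a starting row, and a length, it reproduces that many characters of the source:

#include <string>
#include <vector>

std::string decode(const std::vector<int>& fl, const std::vector<int>& c,
                   int row, int len) {
    std::string out;
    for (int k = 0; k < len; ++k) {
        int ch = 0;
        while (c[ch + 1] <= row) ++ch;   // C tells us which character row falls in
        out += (char)ch;
        row = fl[row];                   // follow the link one character to the right
    }
    return out;
}

With FL and C built as in the earlier sketches for "abracadabra$", the unshifted block sits at row 3 of M, so decode(fl, c, 3, 12) reproduces "abracadabra$".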

Useful Properties of the Permutation

Figure 2 shows how FL is order-preserving on blocks that start with the same character. That is, given two blocks i and j that both start with c, lexical comparison implies that if i<j, then FL[i]<FL[j]. This is one of the core elements of the BWT.

Figure 2: This map is called FL because it rearranges the F column into the L column. FL is order-preserving for strings that start with the same character, so each character's section in FL contains an ascending list of integers. Here, you see the "fan-out" for blocks prefixed by "a."

This order-preserving property means that FL consists of sections of integers in ascending order, one section for each character. You can search one of these sections for a target value quickly, using binary search.

Pattern Matching

If you pick an arbitrary pattern string P, say "abr," one way to find all occurrences of it is to search the sorted blocks in M, finding the range of blocks that start with a, then narrowing it to blocks prefixed by "ab," and so on, extending the pattern from left to right. This method is workable, but a more efficient algorithm (first developed by Ferragina and Manzini) works in the opposite direction, extending and matching the pattern one character to the left at each turn.

To understand this method inductively, first consider how to match one character c, then how to extend a single character beyond a pattern that has already been matched. The answer to the first problem is easy, since you know blocks in the range C[c]...C[c+1]-1 start with c. I call this range "Rc."

To left-extend a pattern match, consider the string cP formed by prepending a character c onto the already matched string P. Starting from the range of blocks prefixed by P, map FL inversely to find the interval of blocks prefixed by cP, as follows.

Given the next character c and the range RP of blocks prefixed by P, you need to find the range RcP of blocks prefixed by cP. You know two things about RcP: it lies inside Rc, since every block in it starts with c; and FL maps every block in it into RP, since stripping the leading c from cP leaves a block prefixed by P.

My approach is to start with the widest possible range Rc, and narrow it down to those entries that FL maps into RP. Because of the sorting, you know that entries prefixed by cP form a contiguous range. Since FL is order-preserving on Rc, you can find RcP as follows: binary-search Rc for the first position i where FL[i] is at least the start of RP, and for the last position j where FL[j] is at most the end of RP.

The resulting [i,j] range is RcP, the range of blocks prefixed by cP. Figures 3(a), 3(b), 3(c), and 3(d) show this narrowing-down process; the refine method implements this algorithm in bwtindex.cc.

Figure 3(a): Start by finding the range of all blocks starting with "r."

Figure 3(b): To find Rbr, find bs that precede rs. I show two copies of F for clarity. Working right-to-left, the copy on the right represents the previous range, and the one on the left is new. To search: 1. Take all blocks starting with b; 2. search this set for the first i where FL[i]>=startp, and the last j where FL[j]<=endp; everything in this [i,j] range must map into the target interval between startp and endp. This [i,j] interval identifies all bs that are followed by r; in other words, all blocks starting with "br." i and j become the new startp/endp for the next step.

Figure 3(c): Extending "br" to "abr." The next char is "a," so you start with the range of F containing as. Narrow this range down from all a's to a's followed by "br," again by inverting FL from the "br" range into the "a" range. This process can continue indefinitely, left-extending the pattern one character at each step. For any pattern, this process will give you the start and end (and count) of a range of matches in M. From any position in this range, you can decode forward to show the context of each match, but you don't know the offset of each match in the source file.

Figure 3(d): What you don't know is the distance of each match to end/start of text.

The result at each step in this process is a start/end position for a range of blocks prefixed by the pattern matched so far. The difference between these positions is the number of matches in the text, and starting from any position in this range, you can decode characters in and following each match.
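Putting the pieces together, here is a hedged sketch of the whole backward search; refine here is my own rendering of the idea, not the refine method from bwtindex.cc. matchChar seeds the range with Rc, refine narrows Rc to the entries that FL maps into the current range, and countMatches walks the pattern from right to left:

#include <algorithm>
#include <string>
#include <vector>

// Rows prefixed by the single character ch: Rc = [C[ch], C[ch+1]-1].
// Returns false if ch never occurs in the text.
bool matchChar(const std::vector<int>& c, unsigned char ch,
               int& start, int& end) {
    start = c[ch];
    end = c[ch + 1] - 1;
    return start <= end;
}

// Narrow Rc to the rows whose blocks continue with the current match,
// i.e. those FL maps into [start,end]. FL is ascending within Rc, so a
// binary search finds the new boundaries.
bool refine(const std::vector<int>& fl, const std::vector<int>& c,
            unsigned char ch, int& start, int& end) {
    int lo = c[ch], hi = c[ch + 1];          // Rc as a half-open range
    auto first = std::lower_bound(fl.begin() + lo, fl.begin() + hi, start);
    auto last  = std::upper_bound(fl.begin() + lo, fl.begin() + hi, end);
    if (first == last) return false;         // nothing prefixed by ch + pattern
    start = (int)(first - fl.begin());
    end   = (int)(last - fl.begin()) - 1;
    return true;
}

// Count occurrences of pattern p: match its last character, then
// left-extend one character at a time.
int countMatches(const std::vector<int>& fl, const std::vector<int>& c,
                 const std::string& p) {
    if (p.empty()) return 0;
    int start, end;
    if (!matchChar(c, (unsigned char)p.back(), start, end)) return 0;
    for (int k = (int)p.size() - 2; k >= 0; --k)
        if (!refine(fl, c, (unsigned char)p[k], start, end)) return 0;
    return end - start + 1;
}

For "abracadabra$", countMatches(fl, c, "abr") reports two occurrences, one for each "abr" in the text, and the final [start,end] range is exactly the interval shown in Figure 3(c).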

The Location Problem

There is one valuable piece of information you haven't found: the exact offset of each match within the original text. I call this the "location problem," because there is virtually no information in a sorted block to tell you how far you are from the start or end of the text, unless you decode and count all the characters in between.

There are a number of solutions to the location problem that I won't address here except to say that all of them require extra information beyond FL and C, or any BWT representation. The simple but bulky solution is just to save the offset of each block in an array of n integers, reducing the problem to a simple lookup, but adding immensely to the space requirement. The problem is how to get the equivalent information into less space.

Some approaches rely on marking an explicit offset milepost at only a few chosen blocks, so you quickly encounter a milepost while decoding forward from any block. Others use the text itself as a key, to index locations by unique substrings. Another possibility lets you jump ahead many characters at a time from certain blocks, so as to reach the end of the text more quickly while counting forward. The variety of possible solutions makes it impossible to cover them here.
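As one illustration of the milepost idea only (the article deliberately leaves the choice open, and this is not code from bwtindex.cc), you might record the text offset for every k-th position and then count FL hops from a match until a recorded row is reached:

#include <unordered_map>
#include <vector>

struct Mileposts {
    std::unordered_map<int, int> at;   // row -> text offset, sampled rows only
    int n = 0;
};

// `offset` is the sorted-offset array from the buildFL sketch above.
Mileposts buildMileposts(const std::vector<int>& offset, int k) {
    Mileposts m;
    m.n = (int)offset.size();
    for (int row = 0; row < m.n; ++row)
        if (offset[row] % k == 0) m.at[row] = offset[row];
    return m;
}

// Locate the text offset of the block at `row` by walking right via FL
// until a sampled row is hit; each hop moves one character to the right,
// so at most about k hops are needed.
int locate(const std::vector<int>& fl, const Mileposts& m, int row) {
    int steps = 0;
    while (!m.at.count(row)) {
        row = fl[row];
        ++steps;
    }
    return (m.at.at(row) - steps + m.n) % m.n;
}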

A Word About Compression

Recall that I promised a full-text index that consumes only a few bits per character, but so far you've only seen a structure taking at least one int per character—hardly an improvement. However, the integers in FL have a distribution that makes them highly compressible. You already know FL contains long sections of integers in ascending order. Another useful fact is that consecutive entries often differ by only one; in normal text, as many as 70 percent of these differences are one, with the distribution falling off rapidly with magnitude. My own experiments using simple differential and gamma coding have shrunk FL to fewer than 4 bits per character, and more sophisticated methods (see "Second Step Algorithms in the Burrows-Wheeler Compression Algorithm," by Sebastian Deorowicz; Software: Practice and Experience, Volume 32, Issue 2, 2002), have shrunk FL to even more competitive levels.

The practical problem with compression is that elements of FL then vary in size, so finding an element FL[i] requires scanning from the beginning of the packed array. To eliminate most of the scanning, you need to use a separate bucket structure, which records the value and position of the first element of each bucket. To find FL[i], you scan forward from the beginning of the closest bucket preceding i, adding the encoded differences from that point until position i is reached. The process is laborious, but does not affect the higher level search and decoding algorithms.
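To show the flavor of the packing (a simple possibility of my own, not the exact coding behind the sub-4-bit figures above), the sketch below zig-zags the successive FL differences so the occasional backward jump at a section boundary stays encodable, Elias-gamma codes them into a bit vector, and keeps one bucket per B entries so a lookup scans at most B codes:

#include <cstdint>
#include <vector>

struct PackedFL {
    std::vector<uint8_t> bits;               // gamma-coded zig-zag differences
    size_t nbits = 0;
    struct Bucket { size_t bitPos; int prev; };
    std::vector<Bucket> buckets;             // one per B entries
    int B = 32;
};

static void putBit(PackedFL& p, int b) {
    if (p.nbits % 8 == 0) p.bits.push_back(0);
    if (b) p.bits[p.nbits / 8] |= (uint8_t)(1 << (p.nbits % 8));
    ++p.nbits;
}

static int getBit(const PackedFL& p, size_t pos) {
    return (p.bits[pos / 8] >> (pos % 8)) & 1;
}

// Elias gamma: for v >= 1, write floor(log2 v) zero bits, then v in binary.
static void gammaPut(PackedFL& p, uint32_t v) {
    int len = 0;
    for (uint32_t t = v; t > 1; t >>= 1) ++len;
    for (int i = 0; i < len; ++i) putBit(p, 0);
    for (int i = len; i >= 0; --i) putBit(p, (v >> i) & 1);
}

static uint32_t gammaGet(const PackedFL& p, size_t& pos) {
    int len = 0;
    while (getBit(p, pos) == 0) { ++len; ++pos; }
    uint32_t v = 0;
    for (int i = 0; i <= len; ++i) { v = (v << 1) | getBit(p, pos); ++pos; }
    return v;
}

PackedFL pack(const std::vector<int>& fl, int B = 32) {
    PackedFL p;
    p.B = B;
    int prev = 0;
    for (size_t i = 0; i < fl.size(); ++i) {
        if (i % (size_t)B == 0) p.buckets.push_back({p.nbits, prev});
        int d = fl[i] - prev;                                   // usually +1 in text
        uint32_t zz = d >= 0 ? (uint32_t)d * 2                  // zig-zag: signed
                             : (uint32_t)(-d) * 2 - 1;          //   to unsigned
        gammaPut(p, zz + 1);                                    // gamma needs v >= 1
        prev = fl[i];
    }
    return p;
}

// Recover FL[i]: jump to the nearest bucket, then decode forward.
int lookup(const PackedFL& p, size_t i) {
    const PackedFL::Bucket& b = p.buckets[i / p.B];
    size_t pos = b.bitPos;
    int val = b.prev;
    for (size_t k = (i / p.B) * p.B; k <= i; ++k) {
        uint32_t zz = gammaGet(p, pos) - 1;
        int d = (zz & 1) ? -(int)((zz + 1) / 2) : (int)(zz / 2); // undo zig-zag
        val += d;
    }
    return val;
}

Smaller buckets speed up lookup at the cost of more header space; the trade-off is the usual time-versus-space knob for this kind of structure.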


Kendall is a software engineer living in San Francisco and can be contacted at [email protected].
