The SQLite Database Engine

SQLite, an open-source embedded relational database system packed into a small C library, is ideal for managing and processing data.


March 01, 2004
URL:http://www.drdobbs.com/the-sqlite-database-engine/184401773


Have you ever found yourself writing a program where a simple B-Tree library didn't seem enough, but a large RDBMS was overkill? What you need in these situations is a library that lets your program handle its data all by itself, but do a little more than just store and fetch. Perhaps you need to write a throw-away script to do some text or log-file processing, but you have a huge mountain of data to slog through and complex calculations to go with it. Or your program needs to work equally well on Windows, UNIX/BSD/Linux, Mac, and perhaps a couple of embedded platforms.

What you need in these situations is SQLite, an open-source embedded relational database system packed into a small C library. It is ACID compliant; supports a large subset of SQL92, along with indexes, transactions, views, triggers, and in-memory databases; and offers a wide variety of interfaces. Currently, there are SQLite interfaces for ODBC, .NET, Perl, Python, Java, Tcl, Ruby, Delphi, Objective C, PHP, Visual Basic, and languages you may have never heard of. SQLite is compact, implemented in a single library of less than 25,000 lines of ANSI C, and its source code is uncopyrighted and free to use for any purpose. It is also fast, portable, and scalable; it runs on Windows, Linux, BSD, Solaris, and OS X, and has been ported to both embedded systems and mainframes. Its database format is binary compatible between machines with different byte orders and scales up to 2 terabytes (2^41 bytes) in size. The source code, precompiled binaries, documentation, and other SQLite information are available at http://www.sqlite.org/.

SQLite is ideal for managing and processing data within standalone applications that either don't need or cannot connect to multiuser databases. It improves upon simple B-Tree databases such as gdbm by adding relational capabilities, while offering the same functionality if you need it. Since it is the perfect companion to a capable scripting language, whipping up a Perl or Python script to aggregate, slice, and dice data can be done with minimal time and effort.

SQLite was originally developed by D. Richard Hipp to replace the need for large commercial database servers in standalone applications. Hipp wanted to produce a self-contained program that could run anywhere, regardless of what other software was (or wasn't) installed on the host system. SQLite 1.0 used GNU's gdbm B-Tree library as its storage manager. For licensing and performance reasons, Hipp then replaced gdbm with his own B-Tree implementation, which supported transactions and stored records in key order, allowing optimizations such as logarithmic-time MIN() and MAX() functions and indexed queries with inequality constraints. SQLite has since grown considerably in both features and users. As coauthor, along with Gerhard Haring, of the Python extension to SQLite (PySQLite), I have been amazed to see more than 7000 downloads of our extension on SourceForge. SQLite is currently the highest rated database on Freshmeat.net.

SQLite consists of eight layers that work together to take a query and produce a useful result from a database, either in the form of an alteration or a materialized result. The first four layers (interface, tokenizer, parser, and code generator) take the query and turn it into a mini-program. This program is written in a kind of assembly language, which is passed to the next layer, called the "virtual database engine" (VDBE). The VDBE, SQLite's virtual machine, is designed specifically for database operations and is the sole means through which all queries are processed. Everything that can be done in SQLite can be expressed as a series of the VDBE's 128 op codes, from opening files, reading indexes, and processing records to firing triggers, manipulating schema, and committing transactions. One by one, the VDBE executes each step in the mini-program, eventually fulfilling the query's request. If you had the patience and inclination, you could actually write your own VDBE program to fulfill your requests rather than use SQL.

Listing 1 is the VDBE program generated to execute a select statement. For every SQL statement, you can obtain the generated VDBE program that fulfills it by prefacing it with EXPLAIN. The VDBE mini-program orchestrates all layers below it, which consist of the storage system, page cache, and OS abstraction layer. The storage system is an efficient B-Tree implementation based on that described by Donald Knuth. The page cache is an adjustable region of memory that SQLite uses to store frequently used pages in order to minimize disk seeks. At the bottom is the OS abstraction layer, which serves to group all OS idiosyncrasies in one place and fit SQLite to the various architectures and operating systems it supports.

Programming with SQLite

There are several ways you can program with SQLite in C/C++. One approach is to use the ODBC interface developed by Christian Werner. However, SQLite includes a C API that requires only about three to five functions to do everything.

Over the years, three variations have taken form within SQLite's C interface. The original interface used a callback function. When you executed a query, you first registered this function, which would be called for every row fetched in the query. On top of this was a wrapper that hid the callback function and felt more like most other client APIs. Later versions of SQLite brought an improved API that is a happy medium between the previous two, but more flexible and intuitive. This API is now considered to be the standard, and is the one I cover in this article.

There are five steps to working with SQLite:

  1. Connect.

  2. Prepare a query.

  3. Process the results.

  4. Finalize the query.

  5. Disconnect.

Listing 2 is a complete example that illustrates these steps. SQLite databases are maintained in a single file; that is, all objects associated with a particular database (indexes, tables, schema, triggers, and so on) are packaged together in a single operating system file. This is the file you connect to with sqlite_open().

SQLite has built-in functions, such as avg(), sum(), min(), max(), and count(), to name a few. All of these are implemented using an API that you can use to extend SQLite, creating your own custom functions and aggregates, which can be called from within its SQL. For example, you could add support for obtaining the system time and do things such as select CURRENT_TIME(). Listing 3 illustrates extending SQLite's statistical functions to include computing the area under a Gaussian distribution for a given mean and variance. Before the extension function can be used, it must first be registered in the database, as in Listing 4.

You can also create your own aggregates in SQLite using a similar approach. In this case, you register two functions: one is called for each record in the set, and the other is called once at the end of the set to perform the final computation. Examples of implementing aggregates can be found in the SQLite source file func.c.
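
To make this concrete, here is a minimal sketch (not one of the article's listings) of a my_mean() aggregate written against the SQLite 2.x C API; the struct and function names are my own:

#include <stdlib.h>
#include <sqlite.h>

/* Running state for the aggregate; SQLite keeps one per invocation. */
typedef struct { double sum; int n; } MeanCtx;

/* Step function: called once for each record in the set. */
void mean_step(sqlite_func *ctx, int argc, const char **argv)
{
   /* sqlite_aggregate_context() returns zeroed state on the first call. */
   MeanCtx *p = (MeanCtx*)sqlite_aggregate_context(ctx, sizeof(MeanCtx));
   if(argc < 1 || argv[0] == 0)
      return; /* ignore NULLs */
   p->sum += strtod(argv[0], 0);
   p->n++;
}

/* Finalize function: called once at the end of the set. */
void mean_finalize(sqlite_func *ctx)
{
   MeanCtx *p = (MeanCtx*)sqlite_aggregate_context(ctx, sizeof(MeanCtx));
   if(p->n > 0)
      sqlite_set_result_double(ctx, p->sum / p->n);
}

Registering it mirrors Listing 4, except that both callbacks are supplied:

sqlite_create_aggregate(db, "my_mean", 1, mean_step, mean_finalize, NULL);

after which select my_mean(rating) from project behaves like any built-in aggregate.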

Features

Like most databases, SQLite supports auto-increment columns (along with a way to retrieve the generated values) and binary data (BLOBs). In terms of auto-increment columns, SQLite is similar to MySQL. If you declare a column with type INTEGER PRIMARY KEY, SQLite automatically selects the next largest value for that column if no value is specified on INSERT. The value used in this case can then be obtained with the function sqlite_last_insert_rowid(). Binary data is handled with two functions: sqlite_encode_binary() and sqlite_decode_binary().
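
As a quick sketch, assuming a hypothetical table created as create table emp(emp_id INTEGER PRIMARY KEY, name TEXT), retrieving the auto-assigned key might look like this:

#include <stdio.h>
#include <sqlite.h>

/* Hypothetical helper: insert a row, letting SQLite pick emp_id. */
void insert_employee(sqlite *db, const char *name)
{
   char *err_msg = 0;
   char sql[256];
   /* emp_id is omitted, so SQLite assigns the next largest value.
      (Real code should quote name safely, for example with the %q
      format of sqlite_exec_printf.) */
   sprintf(sql, "insert into emp(name) values('%s')", name);
   if(sqlite_exec(db, sql, 0, 0, &err_msg) != SQLITE_OK)
   {
      fprintf(stderr, "insert failed: %s\n", err_msg);
      sqlite_freemem(err_msg);
      return;
   }
   printf("new emp_id: %d\n", sqlite_last_insert_rowid(db));
}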

SQLite supports transactions (although currently not nested transactions) through the use of a journal file. As records are modified, the database pages containing the original values are swapped to the journal file. In the event of a rollback, SQLite copies the original pages back into the main database file. This approach also allows for automatic recovery in the event of system crashes. Each time a client connects to a database, SQLite first looks for an associated journal file. If one is found, SQLite assumes a crash has taken place and proceeds to restore the contents of the journal to the database. Once completed, the client is then allowed to work with the database.

SQLite includes a nonstandard conflict-resolution clause (ON CONFLICT on table definitions, OR on individual statements) to arbitrate conflict resolution. A conflict in this sense is a constraint violation. The default behavior (ABORT) is to back out all changes made by the offending statement and proceed with the transaction. You can change the default ABORT to REPLACE, IGNORE, FAIL, or ROLLBACK (listed in order of increasing severity). From the documentation, REPLACE works as follows:

When a UNIQUE constraint violation occurs, the preexisting rows that are causing the constraint violation are removed prior to inserting or updating the current row. Thus the insert or update always occurs. The command continues executing normally. No error is returned. If a NOT NULL constraint violation occurs, the NULL value is replaced by the default value for that column. If the column has no default value, then the ABORT algorithm is used.

IGNORE causes the conflicting operation to simply be skipped, and the operations in the SQL statement continue. For example, if the 100th record modified in an update statement encounters a constraint violation, then it proceeds to record 101 and keeps on going without a peep. FAIL halts the statement but preserves the previous 99 updates. ABORT is like FAIL but restores all previous 99 updates to their original values. Finally, ROLLBACK halts the statement and aborts the entire transaction.

Conflict resolution can be defined in three scopes: object creation (tables and indexes), transaction, and statement. If a statement has no resolution defined, SQLite looks to the transaction. If nothing is defined there, it looks to the object. If still undefined, it defaults to ABORT. In Listing 5, I create an employee table with a unique name field and set its conflict resolution to ROLLBACK. I populate it with three employees, then try to insert the third again. Not only does SQLite forbid this, it also aborts the transaction, as ROLLBACK should. Next, I override the table's conflict resolution by setting a different resolution at the transaction level. This time it works: SQLite deletes the preexisting record and replaces it with the contents of the INSERT statement. Finally, I do this again, but play around with setting resolution at the statement level.

Triggers can be written for INSERT, DELETE, and UPDATE operations, including the update of specific columns. Listing 6 illustrates the use of SQLite triggers. Conflict resolution is also applicable here, although it might throw you. Resolution can be specified in the trigger statements; however, resolution in the calling statement, if defined, will take precedence. From there resolution proceeds up the chain as explained earlier. While SQLite does not support materialized views, triggers may be defined on views so that they appear as modifiable. In this case, the modification is defined solely by the logic set forth in the trigger. That is, no modification to the base tables is performed other than what the trigger is programmed to do. Listing 7 illustrates creating a view and an update trigger on top of it.

SQLite comes with an array of pragmas that can be used to set various aspects of runtime behavior. Setting a pragma is done by executing it as SQL, such as PRAGMA vdbe_trace=ON. There are pragmas for performance tuning, such as cache_size and synchronous. cache_size controls how much memory SQLite allocates for its page cache. The larger the cache, the more pages are kept in RAM, which helps reduce disk seeks and therefore increase overall performance. synchronous controls whether or not SQLite flushes data to disk at critical moments such as on transaction commits. Turning it off reduces disk writes (increasing speed) but does so at the risk of corrupting the database in the event of a system crash or power loss.

There are pragmas to control what kind of information is returned to the C API client functions, such as full_column_names, which qualifies column names with their table names, and show_datatypes, which returns column type information with fetched records. There are pragmas for maintenance, debugging, and other tasks. One final thing to note is that pragmas have different scopes: some affect only the current session, while others affect the database itself and all subsequent sessions. Oftentimes a given setting has one pragma that controls its value at session scope and another at database scope. For example, default_synchronous affects the entire database and all sessions that connect to it, while synchronous affects only the current session.
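
As an illustration (the particular values are arbitrary), pragmas are ordinary SQL and can be issued through sqlite_exec right after connecting:

#include <sqlite.h>

/* Sketch: tune a freshly opened connection with pragmas. */
void tune_session(sqlite *db)
{
   /* Session scope: bigger page cache; skip fsync on commit (faster,
      but risks corruption on a crash or power loss). */
   sqlite_exec(db, "PRAGMA cache_size=10000;", 0, 0, 0);
   sqlite_exec(db, "PRAGMA synchronous=OFF;", 0, 0, 0);
   /* Session scope: return column datatypes with fetched records. */
   sqlite_exec(db, "PRAGMA show_datatypes=ON;", 0, 0, 0);
   /* Database scope: persists for this and all subsequent sessions. */
   sqlite_exec(db, "PRAGMA default_synchronous=NORMAL;", 0, 0, 0);
}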

SQLite has a host of extensions and adaptors that make it available for use in other languages. Listings 8 and 9, for example, illustrate Perl and Python, respectively.

Limitations

While SQLite is scalable with respect to database size (up to 2 terabytes), it is not designed for high concurrency. It has coarse-grained locking that allows a single writer or multiple readers at the database level. Thus, while you can get high concurrency with read-only applications, such is not the case for writes. It is possible for multiple clients to be connected to a single database, but each writer will block as long as another writer has the database locked for writing. According to a poll on the SQLite mailing list, most users found SQLite to be fast enough that write blocking is negligible, and preferred not to complicate the code by adding finer-grained concurrency.
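
For clients that do write, the usual coping strategy is to have SQLite wait out the lock rather than fail immediately. A rough sketch (the update statement is just an example against the article's project table):

#include <stdio.h>
#include <sqlite.h>

/* Sketch: tolerate the single-writer lock from a writing client. */
void bump_rating(sqlite *db)
{
   char *err_msg = 0;
   /* Retry a locked database for up to 2 seconds before giving up
      with SQLITE_BUSY. */
   sqlite_busy_timeout(db, 2000);
   int rc = sqlite_exec(db,
      "update project set rating=rating+1 where project_id=1;",
      0, 0, &err_msg);
   if(rc != SQLITE_OK)
   {
      fprintf(stderr, "update failed: %s\n",
              rc == SQLITE_BUSY ? "database still locked by another writer"
                                : err_msg);
      if(err_msg)
         sqlite_freemem(err_msg);
   }
}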

SQLite does not enforce data types in any way. While there has been some healthy debate on this topic, it is in the final analysis considered to be a feature. There are some instances (such as sorting) in which SQLite does make distinctions between text and numbers, and, within numbers, between integers and floating-point values. This is done automatically based on the values present in the column, not on the type declared in the schema. You might ask, then, what the types declared in the schema actually do. The short answer is nothing. However, the type names you declare there are passed to your program through sqlite_step, as in Listing 2, where you can cast the data to whatever type you like. Still, it is important to remember that there is no type checking, so it is up to you to ensure that the string value for a column declared as float is in fact capable of being converted to a float. SQLite lets you declare that column as type donkey, if you wish. As far as it is concerned, what you do with the text representation of a donkey type is your business.
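
A quick illustration (a throwaway table of my own devising): the declared types below place no restriction on what is stored.

#include <sqlite.h>

/* Sketch: declared column types are advisory only in SQLite. */
void typeless_demo(sqlite *db)
{
   sqlite_exec(db, "create table t(weight float, species donkey);", 0, 0, 0);
   /* Both inserts succeed; SQLite stores the text it is given. */
   sqlite_exec(db, "insert into t values(3.14, 'Eeyore');", 0, 0, 0);
   sqlite_exec(db, "insert into t values('not a number', 42);", 0, 0, 0);
   /* It is up to the application to verify that weight really converts
      to a float before casting the fetched string. */
}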

Another limitation of SQLite is that it does not have the sophisticated planners and optimizers that you might find in large multiuser databases. Thus, you must take a more active role in tuning large and/or complex queries that perform many joins. Summing up, all of these limitations should not be too surprising, as SQLite is an embedded database meant to serve programs, not users. Thus, while it is faster than most other relational databases for many operations, it's not realistic to say that SQLite is therefore a suitable replacement for those databases. It simply depends on what you are trying to accomplish.

Conclusion

Given its speed, portability, small footprint, easy-to-use APIs, powerful features, language support, and liberal license, SQLite is a tool all programmers should have in their arsenals. It is a unique open-source project that has done much to address the need for simple storage and data management for applications of every stripe, big and small, working in many different environments.


Michael Owens is a chemical engineer turned programmer and coauthor of PySQLite, the Python extension to SQLite. He can be contacted at [email protected].



Listing 1: VDBE program for a simple query.

SQLite version 2.8.0
Enter ".help" for instructions

sqlite> explain select projectname_short from project
   ...> order by rating desc limit 10;
0          ColumnName   0          0        projectname_short
1          Integer     -10         0
2          MemStore     0          1
3          Integer      0          0
4          OpenRead     0          3        project
5          VerifyCook   0          1476
6          Rewind       0          13
7          Column       0          3
8          SortMakeRe   1          0
9          Column       0          11
10         SortMakeKe   1          0 
11         SortPut      0          0
12         Next         0          7
13         Close        0          0
14         Sort         0          0
15         SortNext     0          19
16         MemIncr      0          19
17         SortCallba   1          0
18         Goto         0          15
19         SortReset    0          0
20         Halt         0          0





Listing 2: C API example.

#include <stdlib.h>
#include <stdio.h>
#include <sqlite.h>
typedef struct sqlite_vm sqlite_vm;
int main()
{
   const char* db_name = "db";
   sqlite *db; /* The database handle */
   char *sql = "select projectname_full as name, rating, license "
               "from project order by rating desc limit 10";
   const char *tail; /* Points to next SQL statement to process, if any. */
   char *err_msg; /* Last error message, if any. */

   sqlite_vm *pvm; /* Virtual machine for executing query. */
   db = sqlite_open(db_name, 0, &err_msg);
   if(db==0)
   {
      fprintf(stderr, "Can't open database: %s\n", err_msg);
      exit(1);
   }
   /* Compile SQL, allocate a virtual machine for processing. */
   int ret = sqlite_compile(db,sql,&tail,&pvm,&err_msg);
   if(ret != SQLITE_OK)
   {
      fprintf(stderr, "Compile failed: %s\n", err_msg);
      sqlite_freemem(err_msg);
      exit(1);
   }
   int i, ncols;
   const char** fields;
   const char** col_defs;
   ret = sqlite_step(pvm, &ncols, &fields, &col_defs);
   /* Print Column Names */
   printf("%35s %5s %40s\n\n", col_defs[0], col_defs[1], col_defs[2]);
   /* Print Column Datatypes */
   printf("%35s %5s %40s\n\n", col_defs[3], col_defs[4], col_defs[5]);
   /* Print Result Set */
   while(ret == SQLITE_ROW)
   {
      printf("%35s %5s %40s\n", fields[0], fields[1], fields[2]);
      ret = sqlite_step(pvm, &ncols, &fields, &col_defs);
   }
   ret = sqlite_finalize(pvm, &err_msg);
   if(ret != SQLITE_OK)
   {
      fprintf(stderr, "Finalize failed: %s\n", err_msg);
      sqlite_freemem(err_msg);
      exit(1);
   }
   sqlite_close(db);
   return 0;
}





Listing 3: Creating a custom function.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <sqlite.h>

/* SeriesExpansion() uses reference parameters, so compile this file as C++. */
double GetNormalCurveArea(double x1, double x2, double mu, double sigma);
short SeriesExpansion(double &x, double &cum);

void normal_curve_area(sqlite_func* context, int argc, const char **argv)
{
   char* endptr;
   char result[65];
   double x1, x2, mu, sigma;
   if(argc != 4)
   {
      return;
   }
   x1 = strtod(argv[0], &endptr);
   if((x1==0) && (argv[0]==endptr))
      return;
   x2 = strtod(argv[1], &endptr);
   if((x2==0) && (argv[1]==endptr))
      return;
   mu = strtod(argv[2], &endptr);
   if((mu==0) && (argv[2]==endptr))
      return;
   sigma = strtod(argv[3], &endptr);
   if((sigma==0) && (argv[3]==endptr))
      return;
   sprintf(result, "%f", GetNormalCurveArea(x1,x2,mu,sigma));
   sqlite_set_result_string(context, result, -1);
}
double GetNormalCurveArea(double x1, double x2, double mu, double sigma)
{
   /* Maclaurin Series Expansion for exp(-x2/2)
   Michael Owens
   Description: This function takes two random variables, a lower 
   limit (x1) and an upper limit (x2), on a Gaussian distribution and 
   computes the total area between them.
   User Input Parameters: 
   x2: upper limit
   x1: lower limit
   mu: population mean
   sigma: variance
   Nomenclature:
   sz: dummy variable for series expansion
   z = (x-mu)/sig
   cum: the cumulative value of z, or integral
   cum1 is the area from -r1 to 0 while
   cum2 is the area from 0 to r2.
   Limitations:
   The Limiting Values of z: A value greater than z=5 will give exactly 50% of
   the normal curve to four decimal places, and larger values will only
   encumber series convergence, therefore any values greater than 4 will be
   reset to 4.
   */
   double bound = 4.2;
   double z1 = (x1 - mu) / sigma;
   double z2 = (x2 - mu) / sigma;
   if (z1 < -bound)
      z1 = (double)-bound;
   if (z1 > bound)
      z1 = (double)bound;
   if (z2 < -bound)
      z2 = (double)-bound;
   if (z2 > bound)
      z2 = (double)bound;
   double cum1 = fabs(z1);
   double cum2 = fabs(z2);
   // Use absolute values for computing terms
   x1 = fabs(z1);
   x2 = fabs(z2);
   // Computations
   // Maclaurin Series: term by term addition
   // Area of lower limit
   if(cum1)
      SeriesExpansion(x1,cum1);
   else
      cum1 = 0;
   // Area of upper limit
   if(cum2)
      SeriesExpansion(x2,cum2);
   else
      cum2 = 0;
   // Determine the total area:
   double Area;
   if ((z2 + z2) < (fabs(z2 + z2))) // if z2 is negative
      Area = cum1 - cum2; // then z1 must be negative too.
   else
      if ((z1 + z1) < (fabs(z1 + z1))) // z2 is positve and if z1 negative
         Area = cum1 + cum2;
      else
         Area = fabs(cum2 - cum1); // if z1 is positive
   // Limiting area from origin to +infinity
   double CA;
   CA = pow(2*3.1415926535, 0.5);
   // Normalized area
   Area = Area/CA; // Area from origin to lower limit.
   return Area;
}
short SeriesExpansion(double &x, double &cum)
{
   double SeriesTerm;
   double j = 10;
   for (int i = 1; j > 0.0001; i++)
   {
      int f = i;
      double factorial = f;
      if(f-1)
      {
         while(f-1)
            factorial *= --f;
      }
      if(!factorial)
         return 0;
      SeriesTerm = (pow(-1,i));
      SeriesTerm *= (double)1/((2*i)+1);
      SeriesTerm *= (double)pow(x,(2*i+1));
      SeriesTerm *= (double)1/((pow(2,i)*factorial));
      cum += SeriesTerm;
      j = fabs(SeriesTerm);
   }
   return 1;
}





Listing 4: Using a custom function.

#include <stdio.h>
#include <sqlite.h>

/* Custom function from Listing 3. */
void normal_curve_area(sqlite_func* context, int argc, const char **argv);

int main(int argc, char **argv)
{
   sqlite *db;

   const char *tail;
   sqlite_vm *pvm;
   char *err_msg;
   int ncols;
   const char** fields;
   const char** col_defs;
   db = sqlite_open("db", 0, &err_msg);
   sqlite_create_function(db, "normal_curve_area", 4, normal_curve_area, NULL);
   const char* sql = "select normal_curve_area(-2.35, 2.35, 0, 1)";
   sqlite_compile(db, sql, &tail, &pvm, &err_msg);
   sqlite_step(pvm, &ncols, &fields, &col_defs);
   printf("Area=%s\n", fields[0]);
   sqlite_finalize(pvm, &err_msg);
   sqlite_close(db);
   return 0;
}





Listing 5: Controlling conflict resolution.

SQLite version 2.8.2
Enter ".help" for instructions
sqlite> -- Create table;
sqlite> create table emp(name text UNIQUE ON CONFLICT ROLLBACK);
sqlite> -- Populate;
sqlite> insert into emp values('Larry');
sqlite> insert into emp values('Moe');
sqlite> insert into emp values('Curly');
sqlite> -- generate a UNIQUE constraint violation;
sqlite> insert into emp values('Curly');
SQL error: uniqueness constraint failed
sqlite> -- try to commit, won't work as previous resolution rolled back transaction.
sqlite> commit;
SQL error: cannot commit - no transaction is active
sqlite> -- Set REPLACE at transaction scope.
sqlite> begin on conflict replace;
sqlite> -- try again: this time it will work
sqlite> insert into emp values('Curly');
sqlite> commit;
sqlite> -- Play around with statement level resolution;
sqlite> begin on conflict replace;
sqlite> -- ABORT will stop us, but leave transaction running.
sqlite> insert or ABORT into emp values('Curly');
SQL error: uniqueness constraint failed
sqlite> -- FAIL will stop us, but leave transaction running.
sqlite> insert or FAIL into emp values('Curly');
SQL error: uniqueness constraint failed
sqlite> -- IGNORE will silently fail, but leave transaction running.
sqlite> insert or IGNORE into emp values('Curly');
sqlite> -- default transaction scope is REPLACE, will push it through.
sqlite> insert into emp values('Curly');
sqlite> commit;
sqlite>





Listing 6: Trigger examples.

-- Log deleted projects
CREATE TRIGGER on_delete_proj BEFORE DELETE ON project
FOR EACH ROW
BEGIN
   insert into removed values(old.project_id,old.projectname_full);
END;
-- Track version changes. Set conflict to overwrite matching records.
CREATE TRIGGER on_update_proj BEFORE UPDATE OF latest_version ON project
FOR EACH ROW
BEGIN
   insert OR REPLACE into versions
   values( new.project_id, old.latest_version, new.latest_version );
END;





Listing 7: Views with triggers.

CREATE VIEW above_average AS SELECT projectname_full, rating
FROM project
WHERE rating > (SELECT AVG(rating) FROM project)
ORDER BY rating DESC;
-- Make the view updatable according to following trigger
CREATE TRIGGER on_update_above_average
INSTEAD OF UPDATE ON above_average
FOR EACH ROW
BEGIN
   UPDATE project SET rating=new.rating
   WHERE projectname_full=new.projectname_full;
END;
-- Now the following update to the view will work:
UPDATE above_average SET rating=10 WHERE projectname_full='Gentoo Linux';





Listing 8: SQLite from Perl.

use DBI;
my $dbh = DBI->connect("dbi:SQLite:dbname=db","","");
my $cursor;
my @rec;
my $SQL = "select projectname_full as name, rating "
           . "from project order by rating desc";
$cursor = $dbh->prepare($SQL);
$cursor->execute();
while(@rec = $cursor->fetchrow_array)
{
   print "$rec[0], $rec[1]\n";
}
$cursor->finish;
$dbh->disconnect;





Listing 9: SQLite from Python.

import sqlite

conn = sqlite.connect(db="db", mode=077)
cursor = conn.cursor()
SQL = """select projectname_full as name,
         rating from project
         order by rating desc"""
cursor.execute(SQL)
row = cursor.fetchone()
while row is not None:
   print "%14s, %15s" % (row['name'], row['rating'])
   row = cursor.fetchone()
conn.close()


        
