Speedy Buffering

Here's a technique for speeding up disk access for many applications without buying faster, more expensive hardware.


by Bruce Tonkin
March 1991

Bruce develops and sells software for TRS-80 and MS-DOS computers. He can be reached at T.N.T. Software Inc., 34069 Hainesville Road, Round Lake, IL 60073.


Disk reads and writes can slow nearly any application. It's easy to tell a user to buy a faster hard drive or a better controller, or to move from floppies to a hard drive. Unfortunately, many users have a strange preoccupation with money -- especially their own.

There are ways to appreciably speed disk access times for many applications, without any outlay of cash. One of the easiest is a buffering method I use with QuickBasic.

First, I'll suppose the data file is random access and is being read or written in record-number order. The method I use can be extended to other cases fairly easily: if there is a reasonable chance that you'll access multiple records from the same buffer, you needn't read the file sequentially at all -- but what constitutes "reasonable" will depend on the application, the drive used, and the memory available.

Second, I'll suppose you have at least some memory available. That's not always the case, but it usually is.

The method involves two parts. First, the file you want to buffer is opened with a record length that is a multiple of the true record length; I usually use a record length of about 16K bytes. Next, a dummy file is opened with a record length equal to the true record length of the actual file.
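In outline, the setup looks like this (a sketch; "data.fil" and the variable names are illustrative, and Listing One gives the working version):

DEFINT A-Z
rl = 100                              'true record length of the data file
buffersize = (1 + 16384 \ rl) * rl    'about 16K, a whole number of records
blocks = buffersize \ rl              'true records per buffer

OPEN "r", 1, "data.fil", buffersize   'the real file, read in big blocks
OPEN "r", 2, "real.$$$", rl           'dummy file, one true record long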

The large-record file is fielded in chunks of one true record apiece, and the small file is fielded the way you want to read the actual records. An adapted version of this method can be used with binary files and user-defined types; I'll leave that to you, though a sketch follows.
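For the curious, here is a minimal sketch of that adaptation using QuickBasic 4.5's TYPE...END TYPE, assuming a 100-byte true record; the type and variable names are mine, not part of Listing One:

DEFINT A-Z
TYPE BigBlock
   bytes AS STRING * 16000        '160 true records per block
END TYPE
DIM blk AS BigBlock

rl = 100                          'true record length
blocks = LEN(blk.bytes) \ rl      'true records per block
OPEN "data.fil" FOR RANDOM AS #1 LEN = LEN(blk)

'fetch record r by way of the block buffer
whichblock = 1 + (r - 1) \ blocks
IF whichblock <> lastblock THEN
   GET #1, whichblock, blk
   lastblock = whichblock
END IF
record$ = MID$(blk.bytes, ((r - 1) MOD blocks) * rl + 1, rl)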

Now, suppose you want to read record R. You call a subroutine that checks whether record R is currently in the buffer. If it is, that record is transferred from the large buffer to the small dummy one. If not, the chunk of the file containing record R is read first, and then the transfer is made. Listing One presents a program that illustrates the method.
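For example, with 100-byte records and a 16,000-byte buffer (160 records per block), record 250 lies in block 2, which covers records 161 through 320; within that block it is chunk 1 + (250 - 1) MOD 160 = 90.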

I tested the program against a 231K data file read from a 1.44-Mbyte floppy disk and from a Plus Development 40-Mbyte HardCard (28 ms) using various record lengths. Table 1 shows the times I observed while running in the QuickBasic 4.5 environment.

Table 1: Buffering results (record lengths in bytes; times in seconds)

  Buffer   Drive   Record Length   Buffered Time   Unbuffered Time
  -----------------------------------------------------------------
    8K     1.44     100 to 500    11.87           92.82
    8K     1.44     1000          11.65           50.64
    8K     1.44     2000          10.60           28.45
   16K     1.44     50 to 500      8.98           92.81
   16K     1.44     1000           9.05           50.64
   16K     1.44     2000           8.85           28.45
    8K     40MB      100           3.56           10.38
    8K     40MB      200           3.02           10.39
    8K     40MB      500           2.96           10.38
    8K     40MB     1000           2.91           10.38
    8K     40MB     2000           2.75            6.16
   16K     40MB      100           3.19           10.43
   16K     40MB      200           2.80           10.43
   16K     40MB      500           2.75           10.38
   16K     40MB     1000           2.70           10.38
   16K     40MB     2000           2.63            6.15
   24K     40MB      100           2.86           10.38
   24K     40MB      200           2.69           10.33
   24K     40MB      500           2.47           10.38
   24K     40MB     1000           2.53           10.38
   24K     40MB     2000           2.47            6.15

The times show a substantial speedup with buffering: roughly a factor of eight to ten on the floppy drive, and three to four on the hard drive. The floppy with buffering is actually faster than the hard drive without it. The optimal buffer size seems to be about 16K for either drive; the further gain from 24K is marginal. Buffered times vary little with record length, but unbuffered times drop substantially as record lengths increase beyond 500 (512) bytes on the floppy and beyond 1000 (1024) bytes on the hard drive.

If you use this method to write a file, the final buffer flush can append null records past the true end of the data. So long as your application can treat null records as end-of-file, that shouldn't be a problem.
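On the write side, a minimal sketch of the flush looks like this, reusing the names from Listing One and assuming the write loop maintains whichblock and whichchunk the way getrec does (newrecord$ is hypothetical):

'records are LSET into chunks; each full block goes out in one PUT
LSET chunk$(whichchunk) = newrecord$
IF whichchunk = blocks THEN PUT 1, whichblock

'at the end of the run, blank any unused chunks and flush the last,
'partial block -- the blanked chunks are the null records noted above
FOR j = whichchunk + 1 TO blocks
   LSET chunk$(j) = STRING$(rl, 0)
NEXT j
PUT 1, whichblock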

Buffering is simple, and can give enormous speedups. Watch your applications for opportunities to use it!


[LISTING ONE]



DEFINT A-Z
CLS
LINE INPUT "Name of file to read: "; filename$
INPUT "Record length: "; rl
buffersize = (1 + 16384 \ rl) * rl     'about 16K in whole records; change 16384 to test other buffer sizes
blocks = buffersize \ rl
DIM chunk$(1 TO blocks)      'each chunk will be one record.

'open the random file with a record length of buffersize
OPEN "r", 1, filename$, buffersize
filesize& = CLNG(LOF(1)) \ rl    'number of true records in the file

'set up the buffer in record-length sized chunks
FOR i = 1 TO blocks
   FIELD #1, dummy AS dummy$, rl AS chunk$(i)
   dummy = dummy + rl
NEXT i

'open up a dummy buffer to hold the actual records
OPEN "r", 2, "real.$$$", rl
FIELD #2, rl AS all$

t1! = TIMER
FOR i = 1 TO filesize&
   GOSUB getrec    'read the buffered records
NEXT i
t1! = TIMER - t1!  'save the time taken
CLOSE              'close the files and delete the dummy file.
KILL "real.$$$"

OPEN "r", 1, filename$, rl   'now open and read unbuffered
FIELD #1, rl AS all$
t2! = TIMER
FOR i = 1 TO filesize&
   GET 1, i
NEXT i
t2! = TIMER - t2!
CLOSE
PRINT "Buffered time: "; t1!
PRINT "Unbuffered time: "; t2!
END

getrec:
whichblock = 1 + (i - 1) \ blocks      'which block holds record i
IF whichblock <> lastblock THEN        'block not in memory: read it
   GET 1, whichblock
   lastblock = whichblock
END IF
whichchunk = 1 + (i - 1) MOD blocks    'position of record i in the block
LSET all$ = chunk$(whichchunk)         'move one record to the dummy buffer
RETURN


Copyright © 1991, Dr. Dobb's Journal
