Inside OS/2 Software Motion Video


Using threads to synchronize audio and video data

Les Wilson

Les is a senior programmer in IBM's OS/2 Multimedia Software group. He was the project leader of the team that invented and developed the software motion-video support in OS/2 2.1. The synchronization algorithm described here was invented by Steve Hancock and Bill Lawton. Les can be reached at IBM Corp., 1000 NW 51st St., Boca Raton, FL 33431.


Until recently, digital video's huge demands for processing power and data storage were major hurdles for PC developers. To a great extent, recent advances in hardware, CD-ROM, and image-compression technologies have enabled us to make gains in the race for realistic digital video. Fully synchronizing audio and visual data, however, is one of the technical challenges yet to be solved. You know the problem: "Out-of-sync" audio and video in foreign monster films where English-speaking voices are dubbed over a non-English-speaking actor's moving lips. For low-budget entertainment, we've been generally tolerant of this lack of synchronization. However, system providers must address the timing and synchronization problems to ensure the serious use and acceptance of their systems.

IBM's Multimedia Presentation Manager/2, Apple's QuickTime, and Microsoft's Video for Windows all provide users with the ability to create and manipulate digital video and audio data. To do software motion-video playback, such a system must first locate the data for presentation and transport it from its current location to the playback system. Depending on the way the data was created, the task of locating and transporting the data can be simple or complex. In the simple case, the system opens the file and reads a buffer of data. In the complex case, the system traverses data structures and retrieves the required data using indirect pointers to other files.

Next, the system segregates and moves each type of data to the appropriate processor for that data. Audio data goes to the audio subsystem, video goes to the software decompressor, and so on. Synchronization of the data presentation occurs at the target end of the pipeline; see Figure 1.

Types of Synchronization

There are two types of synchronization: free-running and monitored. The free-running technique cues up the audio and video data at the target processor, kicks them off, crosses its fingers, and hopes for the best. Sometimes it works, sometimes it doesn't. Systems that use this technique alone often exhibit inconsistent audio and video sync. This is especially common when processing high-motion scenes in the video that cause the target video processor to lag behind the audio. Other interference from device contention can also cause a given target processor to lose synchronization. Once a target processor is out of sync in a free-running system, nothing other than chance will bring it back in sync.

Monitored systems add a policing activity to free-running target processors. In these systems, target processors detect when they're out of sync and employ appropriate techniques to resynchronize the processed data. Timing compensation occurs whenever a target processor is either ahead of or behind the desired location in the data. To compensate, target processors adjust their processing speed depending on the complexity of the data, the processing power available on the system, and interference from outside activities.

Monitored systems react to the complex interactions occurring in the system. For audio/visual data, this type of system constantly resynchronizes what is seen with what is heard. This doesn't help the foreign actor with the wrong lip movements for the English-language sound track, but it does ensure that the "thud" is heard when the monster hits the ground.
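
The distinction can be sketched in a few lines of C. The names and tolerance value below are hypothetical; the point is only that a monitored slave periodically measures its drift against the master and computes a correction, while a free-running slave never looks.

/* Minimal sketch of monitored synchronization (hypothetical names).
 * A slave processor periodically compares its own position in the data
 * against the master's position and adjusts its pacing accordingly.    */
#include <stdio.h>
#include <stdlib.h>

#define TOLERANCE_MS 100L   /* allowable drift before we correct        */

typedef struct {
    long masterTimeMs;      /* where the master (e.g., audio) is        */
    long slaveTimeMs;       /* where this slave (e.g., video) is        */
} SYNC_STATE;

/* Returns the correction (in ms) the slave should apply to its next
 * scheduled presentation time: negative = speed up, positive = slow down. */
long MonitorSync(const SYNC_STATE *s)
{
    long drift = s->slaveTimeMs - s->masterTimeMs;
    if (labs(drift) <= TOLERANCE_MS)
        return 0L;          /* in tolerance: behave like free-running   */
    return -drift;          /* out of tolerance: pull slave toward master */
}

int main(void)
{
    SYNC_STATE s = { 5000L, 5180L };   /* video 180 ms ahead of audio   */
    printf("correction: %ld ms\n", MonitorSync(&s));   /* prints -180   */
    return 0;
}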

Using Interleaved Data

Both free-running and monitored systems are affected by how the data is organized at the source. When audio and video data is evenly distributed, or interleaved, it flows easily into the system with minimal overhead. That is, as long as there's sufficient data for the target processors, the source data can be read in a single sequential stream. Free-running systems work best when audio and video data are interleaved. Interleaving is also very good when data is on slow devices such as CD-ROM.

While interleaving can help synchronization, it shouldn't be mandatory. Some file formats allow audio and video data to be "clumped" at the beginning or the end of a file. Other file formats allow the data to be distributed in other local and remote files. Either way, multimedia systems must ensure adequate processing time is allocated to prefetch the required data to effect on-time delivery to the target processors.
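
As a sketch of why interleaving keeps overhead low, consider a single read loop over a hypothetical chunked file (this is not an actual file format): because audio and video chunks arrive in presentation order, one sequential pass feeds both target processors.

/* Sketch of consuming an interleaved stream (hypothetical chunk layout).
 * Audio and video chunks alternate in presentation order, so a single
 * sequential read loop can feed both target processors.                */
#include <stdio.h>

typedef enum { CHUNK_AUDIO, CHUNK_VIDEO } CHUNK_TYPE;

typedef struct {
    CHUNK_TYPE    type;
    unsigned long size;     /* bytes of payload following the header    */
} CHUNK_HEADER;

/* Placeholders: skip the payload where a real system would queue it.   */
static void DeliverAudio(FILE *f, unsigned long size) { fseek(f, (long)size, SEEK_CUR); }
static void DeliverVideo(FILE *f, unsigned long size) { fseek(f, (long)size, SEEK_CUR); }

void PlayInterleaved(FILE *f)
{
    CHUNK_HEADER hdr;
    /* One pass over the file: no seeking back and forth between
     * separately "clumped" audio and video regions is required.        */
    while (fread(&hdr, sizeof hdr, 1, f) == 1) {
        if (hdr.type == CHUNK_AUDIO)
            DeliverAudio(f, hdr.size);   /* hand to audio subsystem     */
        else
            DeliverVideo(f, hdr.size);   /* hand to software decompressor */
    }
}

int main(void)
{
    FILE *f = fopen("movie.dat", "rb");  /* hypothetical interleaved file */
    if (f != NULL) {
        PlayInterleaved(f);
        fclose(f);
    }
    return 0;
}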

Using Multiple Threads for Synchronization

The OS/2 data-streaming model employs the concept of source and target stream handlers. Chains of stream handlers process and move data at discrete points in a stream of data. As the stream handler moves data from point to point, it performs the required data-specific timing and processing operations. In this way, the stream handler becomes a convenient place to encapsulate data-dependent and timing-dependent operations.

OS/2's Multimedia Presentation Manager implements this streaming model with several independently dispatched threads. Each stream handler is a thread that controls the processing of data through a certain point in the stream. Additionally, a centralized buffer-management and timing thread called the "synchronization stream manager" (SSM) provides services to each stream handler for handling buffers and monitoring its own processing against that of other stream handlers. Together, these threads move, filter, and present data with monitored synchronization, as illustrated by the data and control flows diagrammed in Figure 2.
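
The stream-handler and SSM interfaces themselves are too rich for a short example, but the underlying threading model is ordinary OS/2 multithreading. Here is a minimal, hypothetical sketch of dispatching one handler thread; the real handlers additionally register with the SSM for buffers and sync pulses.

/* Minimal sketch of dispatching an independent stream-handler thread
 * under OS/2.  Illustrates only the threading model, not the SSM.      */
#define INCL_DOSPROCESS
#include <os2.h>
#include <stdio.h>

static VOID APIENTRY VideoHandlerThread(ULONG param)
{
    /* In the real system: get full buffers, decompress, report progress
     * to the SSM, and sleep until the next frame time.                 */
    printf("video stream handler running, arg=%lu\n", param);
}

int main(void)
{
    TID    tid;
    APIRET rc;

    rc = DosCreateThread(&tid, VideoHandlerThread, 0UL /* param */,
                         CREATE_READY, 32768UL /* stack bytes */);
    if (rc != NO_ERROR)
        printf("DosCreateThread failed, rc=%lu\n", rc);

    DosSleep(100);          /* give the handler thread a chance to run  */
    return 0;
}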

Here's what happens during playback of a software motion-video file:

  1. The application initiates the playback operation using the media-control interface (MCI) API (a sketch of this step follows the list). These operations identify the source file and the operation(s) to be performed (play, pause, seek, and so on).
  2. The system loads the movie and starts the play.
  3. The digital-video media-control driver (MCD) uses the Multimedia I/O services (MMIO) to find and open the file. The MCD is isolated from whether the file is local or remote.
  4. MMIO identifies the file format and uses one of its pluggable I/O procedures (IOPROC) to handle any file-format dependent operations. This allows support of additional file formats without modification of the MCD.
  5. The IOPROC opens the file, identifies it, examines the contents, and determines the type of video it contains.
  6. The IOPROC loads and initializes the appropriate software decompressor.
  7. The MCD initiates the required stream handlers, and allocates the buffer management, timing services, and hardware resources required by the contents of the file.
  8. When ready, the multitrack stream handler (MTSH) reads the audio and video data from the file.
  9. The MTSH identifies and splits the data into its output streams.
  10. The system cues the streams.
  11. The MCD starts target stream handlers and controls the playback as requested by the application.
  12. As the video stream handler runs, it tells the pluggable video decompressor to reconstruct and display the frame.
  13. On systems that allow direct access to the display (such as OS/2), the video decompressor reconstructs the image directly into its window.

    Otherwise, the image is reconstructed in system memory and displayed by the video stream handler.
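
The sketch promised in step 1: driving playback through the MCI string interface. This is a minimal example, not production code. The file name "movie.avs" and the alias are made up, error handling is reduced to a single check, and the header conventions and mciSendString prototype shown here should be verified against the MMPM/2 toolkit.

/* Hypothetical application-side playback via MCI string commands.      */
#define INCL_OS2MM
#include <os2.h>
#include <os2me.h>
#include <stdio.h>

int main(void)
{
    CHAR  achReturn[128];
    ULONG rc;

    /* Open the movie file and give it an alias for later commands.     */
    rc = mciSendString("open movie.avs alias movie wait",
                       achReturn, sizeof(achReturn), 0, 0);
    if (rc == 0) {          /* 0 indicates success                      */
        /* Start playback; the MCD, MMIO, IOPROC, decompressor, and
           stream handlers described above do the rest.                 */
        mciSendString("play movie wait", achReturn, sizeof(achReturn), 0, 0);
        mciSendString("close movie wait", achReturn, sizeof(achReturn), 0, 0);
    } else {
        printf("open failed, rc=%lu\n", rc);
    }
    return 0;
}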

As the system runs, it dispatches threads according to priorities and scheduling algorithms. Each point in the stream performs its part of the entire task to move data from the source to the target. As each stream handler does its work, it records its progress so that the SSM can monitor the data stream and provide synchronization services.

Depending on a hardware platform's display hardware, the time required to display a frame can vary greatly. The less efficient the display subsystem, the less processing power is available for other activities in the system. To solve the problems of inefficient display subsystems, OS/2 2.1 allows its video decompressors to bypass the graphics subsystem and access the display adapter directly. When implemented by the adapter's OS/2 display driver, this bypass dramatically improves the performance of software motion video. (See the accompanying text box "About IBM's Ultimotion" for an example of the performance levels achieved by this technology.) Given such variation in display performance, the synchronization system must be built to compensate for it and still deliver synchronized video and audio data.

Synchronizing Stream Handlers

One of a stream handler's responsibilities is to report its progress to the SSM. In turn, the SSM monitors each stream and provides tolerance checks of a given "slave" stream against a "master" stream. In OS/2, the video stream is a slave stream and the audio stream is the master stream.

When there's sufficient processing power to handle a movie's frame rate, the video stream handler displays a frame, calculates the time to the next frame, and sets a timer for that duration. As long as the system and audio times remain in tolerance, the system behaves like a free-running system. However, this rarely lasts long, and adjustments to the video output timing soon become necessary. To achieve synchronization between the two streams, the slave stream adjusts itself to the master stream. For video, this adjustment is made in the calculation of when to display the next frame.
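
Here is a minimal sketch of that in-tolerance path, using the same system-timer call as the listing at the end of this article. Times are kept in milliseconds for simplicity; the real handler works in the stream's own time base, and the function and parameter names are illustrative only.

/* Sketch of the free-running case: compute when the next frame is due
 * from the authored frame rate and sleep until then.                   */
#define INCL_DOSPROCESS
#define INCL_DOSMISC
#include <os2.h>

void WaitForNextFrame(ULONG ulFramesPerSec, ULONG *pulTimeNextFrame)
{
    ULONG ulNow;
    ULONG ulInterval = 1000UL / ulFramesPerSec;   /* ~66 ms at 15 fps   */

    *pulTimeNextFrame += ulInterval;

    /* Millisecond count since boot, the same source the listing uses.  */
    DosQuerySysInfo(QSV_MS_COUNT, QSV_MS_COUNT, &ulNow, sizeof(ULONG));

    if (ulNow < *pulTimeNextFrame)
        DosSleep(*pulTimeNextFrame - ulNow);   /* still ahead: wait     */
    else
        DosSleep(0);                           /* behind: just yield    */
}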

Figure 3 shows a flowchart of the algorithm. However, the details of the algorithm may be better illustrated with an analogy. Consider two postal workers with delivery routes containing the same number of mailboxes. Each postal worker tries to deliver the mail to each box at the same time the other worker delivers the mail to the corresponding box. Postal-worker A (audio) delivers mail at a large apartment complex. The mailboxes have a central location and an efficient and predictable means for mail delivery. Worker A calls in his progress to the main post office (SSM) on a regular basis. Postal-worker V (video) delivers mail in a nearby suburb. This route has rural mailboxes, and the worker must drive from mailbox to mailbox. At the end of each block, postal-worker V calls in (to SSM) and reports his current box number.

In general, each stop along worker V's route is predictable. However, as in real life, he may deliver mail too quickly and get ahead of worker A by the end of a block. When worker V realizes he shot ahead or lagged behind worker A, worker V adjusts his delivery rate so that he delivers to the next box at the same time that worker A is expected to deliver to the corresponding box. Should anything keep A from his or her "appointed task," this behavior ensures synchronization of V to A at the end of each block.

Conversely, if worker V lags behind worker A, worker V adjusts his rate of delivery until he catches up. If the difference is small, worker V attempts to catch up by eliminating any unneeded waiting at each box. If worker V is already delivering at the fastest possible rate (that is, he has already eliminated waiting between boxes) and chronically lags behind, a more drastic change in delivery is required. In this analogy, we let postal worker V race down the block and simply toss the mail out the window at each mailbox. Worse yet, we may forget delivery altogether and just drive to the end of the block. (For video, the actual technique used depends on the capabilities of the video decompressor. In either case, the basic idea is to try and drop frames so that the video correlates with the audio.)

Listing One is a C implementation of this algorithm using the OS/2 system and synchronization stream manager (SSM) APIs. The routine that calculates the next video-frame decompression time (that is, the next time for worker V to deliver at a mailbox) is called CalcNextFrameIval. The input parameters to this routine are pointers to instance structures. The pointer psib points to the SSM timing information for this thread. The pointer pMovie points to the video stream handler's instance data for the movie being played. Local variables exist for calculating the various error values used by the algorithm as well as flags used for controlling the path through the algorithm.

First, the code gets the current time using the system timer. Based on the movie's authored frame rate, the variable TimeNextFrame is incremented to reflect when the next frame should be displayed. Using these two pieces of information, the fVideoTooSlow Boolean and the VideoTimeError variables are set. These reflect how far off, and in which direction (ahead or behind), the video is relative to the system timer. Next, the algorithm tests whether SSM is reporting that the slave stream (video) is out of tolerance. When this test fails, the algorithm drops directly into the last section of code. If the video is not too slow, the thread sleeps until the calculated TimeNextFrame. If the time remaining is less than the system-timer granularity, a quick yield is done to let higher-priority threads execute. This helps prevent the file-system threads from starving.

Returning to where the SSM "out-of-tolerance" check is successful (video and audio are out of tolerance), the algorithm goes on to set the error values and synchronization flags.

Using the information about the relative position of the master (audio) and slave (video) streams, the variable TimeNextFrame is calculated. It's important to note that at this point, the algorithm works to force the video in sync with the audio and not in sync with where it computes it "should" be.

The comments in the code detail the precise conditions tested and the way TimeNextFrame is calculated. However, regardless of which stream lags behind the other, the algorithm uses the same scheme. When video lags behind audio, TimeNextFrame is set behind the current time. This has the effect of making the video play faster. Conversely, when the audio lags behind video, TimeNextFrame is set ahead of the current time, which has the effect of slowing down the video. The amount by which TimeNextFrame is set ahead or behind the current time is always calculated to be the difference between the audio time and the system timer. In addition to forcing the video to synchronize to the audio, this cross check also adjusts for possible differences in the timing sources. (For example, SSM and the system timer use different physical timers, and the two clocks drift.)
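
To make the adjustment concrete, consider an illustrative case (the numbers are made up): suppose the system timer says the next frame isn't due for another 30 time units (so fVideoTooSlow is FALSE and VideoTimeError is 30), but SSM reports that the video is 90 units behind the audio (AudioSynchError is 90). The code then subtracts AudioSynchError + VideoTimeError, or 120, from TimeNextFrame, leaving the next frame 90 units overdue relative to the current time, exactly the audio error. The recomputed error then drives the sleep-or-drop decision that follows, and the video runs as fast as it can until the SSM tolerance check is satisfied again.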

Once TimeNextFrame is recalculated, the algorithm drops into the same code discussed earlier that takes action based on the local flags and error values. If it isn't time to display the next frame, the thread sleeps for the calculated interval. If the time is past, the algorithm stores this information in the movie-instance structure for use by the frame-dropping algorithm.

Dropping Frames to Catch Up

At first glance, dropping frames to make video "catch up" seems simple and straightforward. However, most software algorithms use temporal compression and only store "delta frames"--those portions of a frame that have changed since the last frame. If these frames are dropped, portions of the movie that should have changed aren't changed from frame to frame and unpleasant visual artifacts are displayed. To counter this, the compression algorithm frequently compresses the entire frame and inserts it into the video stream. This intraframe compressed frame is called an "I-frame." When displayed, I-frames repaint the entire video frame and repair any artifacts.

Since frame dropping depends on the compression algorithm, the frame-dropping logic in the stream handler defers the actual drop processing to the decompressor. If the compressed data stream cannot tolerate frame dropping, the decompressor simply ignores the information and the system continues as best it can.
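
Here is a sketch of the drop decision a decompressor might make. The frame-type names and structures are hypothetical; real decompressors keep richer per-frame state, but the rule is the same: skip only delta frames, never I-frames.

/* Sketch of a frame-dropping decision (hypothetical types).            */
#include <stdio.h>

typedef enum { FRAME_INTRA, FRAME_DELTA } FRAME_KIND;

typedef struct {
    FRAME_KIND    kind;
    unsigned long ulFrameNumber;
} FRAME_INFO;

/* Returns 1 if the decompressor may skip this frame to catch up.       */
int CanDropFrame(const FRAME_INFO *pFrame, int fBehindAudio)
{
    if (!fBehindAudio)
        return 0;                     /* in sync: display everything    */
    if (pFrame->kind == FRAME_INTRA)
        return 0;                     /* never drop an I-frame          */
    return 1;                         /* a delta frame may be dropped   */
}

int main(void)
{
    FRAME_INFO delta = { FRAME_DELTA, 42UL };
    FRAME_INFO intra = { FRAME_INTRA, 45UL };
    printf("%d %d\n", CanDropFrame(&delta, 1), CanDropFrame(&intra, 1)); /* 1 0 */
    return 0;
}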

Summary

A fully threaded system provides efficient tools to process multimedia data. These tools effectively help address multimedia-related problems in areas such as buffer management, data movement, data filtering, and synchronization. As a platform for multimedia applications, OS/2 provides a rich set of synchronization and video-output services. Through the use of threads, system-timing services and OS/2's multimedia synchronization services, the synchronization algorithm presented here can provide an independent video-timing mechanism for when an audio track is not present. It can also correct for the following: clock drifts in system and audio timers, dynamically and statically varying display subsystem efficiency, chronic video-synchronization loss, and catastrophic audio failure. It is an example of how multiple timing services can be combined to provide a solution to an age-old problem.

Figure 1: Typical data flow in multimedia systems.

Digital Video Compression

Digital-video compression is the act of taking a raw digitized image and reducing the amount of storage required to represent the image. There are two compression domains: spatial (or intraframe) compression, which tries to eliminate redundancies within a given frame; and temporal compression, which tries to eliminate redundancies over intervals of time (frames). An algorithm's resulting compression ratio is determined by the degree to which it is able to exploit redundancies and irrelevancies in each of these domains.

There are also two types of compression: lossy and lossless. As its name implies, lossless video compression compacts the source data without losing any of the information it contains. For video data, this means that the compression retains all the image detail. When the image is decompressed, the result is identical to the original. Lossless algorithms are well suited for compressing computer-generated images and are commonly used for storing video animations. To date, however, lossless video algorithms are still too computationally complex for software playback and have limited applicability when the objective is to achieve high compression and frame rates.

Lossy algorithms compact the source data by discarding information that contributes little to the perceived image, in addition to exploiting redundancy from one frame to the next. Examination of a reconstructed frame from any of today's software algorithms reveals the use of this technique. The advantage of this approach is that a respectable representation of the original is retained while satisfying the requirements for high compression ratios and low computational complexity.

When examining software-compression algorithms, one must consider several characteristics; see Table 1. Because these characteristics are interrelated, you should avoid stressing one at the expense of another. Consider the frame size, which is determined at movie-creation time. Frame size is the width and height (in pixels) that the compression algorithm stores in the movie. In a video digitizer that uses 16-bit color (65,536 colors), the raw data for each frame of a 320x240 movie is 153,600 bytes. At 15 frames per second, the raw data is 15 times that, or 2,304,000 bytes. Assuming 22 Kbytes per second for an audio track, a video-compression algorithm must achieve an average compression ratio of 18:1 to play back at single-speed CD-ROM rates.

The frame rate, determined at movie-creation time, is usually expressed in frames per second. The frame rate determines how smoothly the motion is perceived by the viewer. Studies show that most people dislike frame rates below 12 frames per second. Most movies made for software motion video are 15 frames per second and higher.

Data rate expresses how much bandwidth is required for a movie. A given movie's frame size, frame rate, and compression ratio determine how much space is required for a given interval of time. Usually expressed in bytes per second, the data rate simply expresses the average number of bytes used for one second of movie. A movie's data rate must not exceed the storage device's data rate. If it does, the device won't deliver the data fast enough, and the movie will break up.

A 16-bit color, 320x240 movie running at 15 frames per second takes 2,304,000 bytes (320x240x2x15) for one second of uncompressed (raw) data. Using a compressor that averages 18:1 compression, one second of video data can be reduced to 128,000 bytes. Adding a 22,000-byte audio track, the resulting movie has a data rate of 150,000 bytes per second.
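
The same arithmetic, written out as a small C program with the numbers used above:

/* Data-rate arithmetic from the example in the text.                   */
#include <stdio.h>

int main(void)
{
    long width = 320, height = 240, bytesPerPixel = 2, framesPerSec = 15;
    long compressionRatio = 18, audioBytesPerSec = 22000;

    long rawPerSec   = width * height * bytesPerPixel * framesPerSec; /* 2,304,000 */
    long videoPerSec = rawPerSec / compressionRatio;                  /* 128,000   */
    long moviePerSec = videoPerSec + audioBytesPerSec;                /* 150,000   */

    printf("raw: %ld, compressed video: %ld, movie data rate: %ld bytes/sec\n",
           rawPerSec, videoPerSec, moviePerSec);
    return 0;
}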

The compression algorithm determines a movie's computational complexity, which is a measure of how much processing power is required to decompress and display a movie's frames. The density at which an algorithm encodes the data, the technique required to reconstruct each image, and the volume of data per frame determine an algorithm's computational complexity. As an algorithm's playback computational complexity increases, a given processor's ability to decompress it is diminished. To bring it back in balance, the volume of data must be reduced using a smaller frame size, lower frame rate, or both. In this way, a given algorithm can be evaluated on the basis of how large a frame rate and size can be achieved without compromising the appearance of the image.

Using the previous example, we can calculate that the average number of bytes for each compressed frame is 10,000 (150,000 bytes/15 frames per second). Therefore, the decompression algorithm has 1/15 of a second to read the 10,000 bytes, reconstruct the frame, and display it. The higher a movie's frame rate, the less time there is to do this. The larger the frame size, the more pixels have to be displayed. The more dense or complex the compressed data, the more time is required to reconstruct the frame.

Compression complexity determines how an algorithm is used to make a movie. An algorithm that takes longer to compress a frame than it does to decompress a frame is called an "asymmetric," or "off-line" algorithm. Conversely, one that takes about as long to compress as to decompress is called a "symmetric," or "real-time" algorithm.

Off-line algorithms first get raw data from a file or frame-stepped device and then compress it. By their nature, these algorithms usually have the best image quality and compression ratios. Real-time algorithms are useful on live video sources that are not controlled one frame at a time. These algorithms compress the data on the fly as it is digitized. The advantage of these algorithms is that they eliminate the need for vast disk storage of raw data. Real-time algorithms usually trade off frame size and compression to achieve reasonable frame rates.

A movie's author determines frame rate, frame size, and data rate at movie-creation time. If played on a system that lacks sufficient power to handle the movie (for example, if the frame rate is too high), the scalability of an algorithm determines what the playback system can do to compensate. The degree to which a movie's characteristics (frame rate, size, resolution, or color depth) can be scaled down at playback time is called its "playback scalability." Table 2 shows how each characteristic of a movie can scale.

Image quality is largely a subjective measure of how well the algorithm retains the details of a movie. As lossy algorithms, today's software motion-video algorithms are perceived differently by different people. However, in general, as image detail increases, one or more of the other characteristics (such as computational complexity) are affected. Again, the overall quality of an algorithm is a function of how well it balances all of these characteristics.

Each video-compression algorithm tries to balance these characteristics to deliver the best video possible in its environment. Which algorithm is best for you depends on which characteristics and environments are most important to you. To some degree, most algorithms let you trade off one characteristic to improve another (for example, reduce the frame size so that you can increase the frame or data rate). Ultimately, before drawing any conclusions, make sure you see a good representation of movies for yourself.

--L.W.

Figure 2: Architecture of OS/2 software motion-video playback.

Table 1: Characteristics that must be considered in digital-video compression.

Characteristic              Description
Frame size                  Width and height (in pixels) of each frame.
Frame rate                  Number of frames over a certain interval (usually seconds).
Data rate                   Average number of bytes in a second of video.
Computational complexity    Amount of processing power required to deliver the video at its authored size and rate.
Compression complexity      Amount of time required to compress a second of video vs. its decompression time.
Playback scalability        Degree to which video playback can be degraded when the video is too complex for the system on which it's played.
Image quality               How well the original detail of the frame is retained.

Table 2: Characteristics of a movie and how it can scale during playback.

Characteristic    How it Scales
Frame Rate        Frames can be dropped to keep up with the audio track.
Frame Size        The output window can be reduced so less processing is required.
Resolution        Image detail can be skipped so that less processing is required.
Color Depth       Movies with more colors than are available can be mapped to displays with fewer colors.

Table 3: (a) Ultimotion's characteristics on a 150-Kbyte-per-second CD-ROM drive for 320x240 frame size; (b) Ultimotion's characteristics on a 150-Kbyte-per-second CD-ROM drive for 640x480 frame size; (c) Ultimotion's characteristics on a 300-Kbyte-per-second CD-ROM drive for 320x240 frame size.

Characteristic              Description
(a)
Frame Size                  320x240
Frame Rate                  15 frames per second
Data Rate                   150 Kbytes per second
Computational Complexity    25-MHz 386
Compression Complexity      Both off-line (8 seconds per frame) and real-time
Scalability                 Scales from 65,536 to 16 colors; frame rate: up to the authored rate; frame size: half, normal, and double size
(b)
Frame Size                  640x480
Frame Rate                  5 frames per second
Data Rate                   150 Kbytes per second
Computational Complexity    25-MHz 486
Compression Complexity      Off-line (5 seconds per frame)
Scalability                 Scales from 65,536 to 16 colors; frame rate: up to the authored rate; frame size: half, normal, and double size
(c)
Frame Size                  320x240
Frame Rate                  30 frames per second
Data Rate                   300 Kbytes per second
Computational Complexity    50-MHz 486DX
Compression Complexity      Off-line (approximately 5 seconds per frame) and real-time
Scalability                 Scales from 65,536 to 16 colors; frame rate: up to the authored rate; frame size: half, normal, and double size

About IBM's Ultimotion

Ultimotion is a video-compression algorithm optimized for software playback on a general-purpose microprocessor. From its inception, Ultimotion was designed to break through the "small-video-window" barrier (160x120 pixel window size) and deliver video at four times that size. The resulting algorithm delivers 320x240 movies playable from 150-Kbyte-per-second CD-ROM drives. Typical Ultimotion movies need only a 25-MHz 386 processor and an SVGA or XGA display adapter.

One of the factors that help achieve these levels of performance is that OS/2 uses a direct video-access technique now beginning to emerge in other systems. When supported by a display adapter's device driver, the video decompressor is given direct access to the display adapter. The system automatically detects the presence of the support and uses it without the knowledge of the decompressor. Using this high-speed access, 486-based machines are able to play Ultimotion's larger 320x240 movies at 30 frames per second.

Ultimotion is a software-only, video-compression algorithm that averages 18:1 compression. It uses both spatial and temporal compression and was designed to run on processors as low as a 25-MHz 386. On a single-spin CD-ROM drive (150 Kbytes per second), it exhibits the characteristics summarized in Table 3(a). Table 3(c) shows the characteristics on a double-spin CD-ROM drive (300 Kbytes per second).

Ultimotion delivers a respectable frame size and frame rate at relatively low data rates and processing power. Its organization of the compressed data enables efficient output display, clipping, doubling, and halving of the movie's frame size. Ultimotion movies can be created using either off-line or real-time algorithms across a wide range of frame sizes and frame rates.

--L.W.

Figure 3: Logic flow of audio/video synchronization algorithm.

[LISTING ONE]


RC CalcNextFrameIval ( PSIB         psib,
                       PMOVIE_STR  pMovie )
{
   LONG     AudioSynchError;
   LONG     VideoTimeError;
   LONG     lmsTimeError;
   BOOL     fVideoTooSlow = FALSE;
   BOOL     fVideoBehindAudio = FALSE;
   BOOL     fSynchPulse = psib->syncEvcb.ulSyncFlags&(SYNCPOLLING+SYNCOVERRUN);
   ULONG    CurrentTime;
   MMTIME   mmtimeMaster = psib->syncEvcb.mmtimeMaster;

   // get current time
   DosQuerySysInfo (QSV_MS_COUNT,QSV_MS_COUNT,&CurrentTime,sizeof(ULONG));

   // Update frame time
   pMovie->TimeNextFrame += pMovie->FrameInterval;


      //*********************************************
      // Determine if the video is ahead or behind
      // the frame rate specified for this stream.
      //*********************************************
      if (CurrentTime <= pMovie->TimeNextFrame) {

         //*****************************************************
         // Video is ahead according to system clock
         // Compute Video Error
         //*****************************************************
         VideoTimeError = (pMovie->TimeNextFrame - CurrentTime);
         fVideoTooSlow = FALSE;

      } else {
         //***************************************************
         // Video is behind according to system clock
         // Compute Video Error
         //***************************************************
         VideoTimeError = (CurrentTime - pMovie->TimeNextFrame);
         fVideoTooSlow = TRUE;

      } /* endif - who is ahead? */


      //*************************************
      // Is SSM reporting "Out of Tolerance"
      // Check for a Synch Pulse
      //*************************************
      if (psib->fSyncFlag == SYNC_ENABLED) {

         if (fSynchPulse && mmtimeMaster) {
            pMovie->ulSynchPulseCount++;   /* Accumulate count of sync pulses */


            /***********************************************************/
            /* Is SSM reporting Video Behind Audio?                     */
            /***********************************************************/
            if (mmtimeMaster >
           (psib->syncEvcb.mmtimeStart + psib->syncEvcb.mmtimeSlave))
            {  // Video is behind audio


               fVideoBehindAudio = TRUE;

               // Compute Audio error
               AudioSynchError = mmtimeMaster -
                    (psib->syncEvcb.mmtimeStart + psib->syncEvcb.mmtimeSlave);

               // Is Video behind according to system clock?
               //
               if (fVideoTooSlow) {
                   if ( AudioSynchError > VideoTimeError ) {
                    // Set the next frame time behind the current time
                    // so that the delta to the cur time = SSM Audio Error
                    // This will cause the video to speed up

                    pMovie->TimeNextFrame -=
                    ( AudioSynchError - VideoTimeError );


                  } else {
                    // Set the next frame time ahead of the current time
                    // so that the delta to the cur time = SSM Audio Error
                    // This will cause the video to slow down
                     pMovie->TimeNextFrame +=
                            ( VideoTimeError - AudioSynchError );

                  }

               } else { // Video OK by System Clock but SSM reports otherwise
                        // Set the next frame time behind the current time so
                        // that the delta to the current time = SSM Audio Error
                        // This will cause the video to speed up
                  pMovie->TimeNextFrame -= (AudioSynchError + VideoTimeError);

               } /* endif */

            } else {  // SSM reports Video ahead of Audio

               fVideoBehindAudio = FALSE;
               AudioSynchError =  (psib->syncEvcb.mmtimeStart +
               psib->syncEvcb.mmtimeSlave) -
                                        mmtimeMaster;

               // Is video behind according to system clock?
               if (fVideoTooSlow) {
                  //*********************************************************
                  // Video is behind according to system time, but video is
                  // running ahead of the audio (the audio must have started
                  // late or somehow broken up, fallen behind and can't get up
                  //*********************************************************
                  // Set the next frame time ahead of the current time
                  // so that the delta to the cur time = SSM Audio Error
                  // This will cause the video to slow down
                  pMovie->TimeNextFrame += AudioSynchError + VideoTimeError;

               } else {  // Video ahead according to system clock and SSM
                  //*********************************************************
                  // Video is keeping up or is ahead according to system time
                  // AND video is running ahead of audio.
                  //*********************************************************

                  if ( AudioSynchError > VideoTimeError ) {
                     // Video is further ahead than system clock indicated
                     // Set the next frame time ahead of the current time
                     // so that the delta to the cur time = SSM Audio Error
                     // This will cause the video to slow down
                     pMovie->TimeNextFrame += AudioSynchError-
                                                    VideoTimeError;
                  } else {
                     // Video not as far ahead as system clock indicated
                     // Set the next frame time behind the current time
                     // so that the delta to the cur time = SSM Audio Error
                     // This will cause the video to speed up
                     pMovie->TimeNextFrame -= VideoTimeError-
                                                    AudioSynchError;
                  }


               } /* endif */

            } /* endif */

            psib->syncEvcb.ulSyncFlags = 0;

            //*************************************************************
            // Recompute video time error based on updated TimeNextFrame
            //*************************************************************
            if (CurrentTime <= pMovie->TimeNextFrame) {
               VideoTimeError = (pMovie->TimeNextFrame -
               CurrentTime);
               fVideoTooSlow = FALSE;

            } else {
               VideoTimeError = (CurrentTime -
               pMovie->TimeNextFrame);
               fVideoTooSlow = TRUE;

            } /* endif - time exceeded. */

         } /* endif - SSM reporting "Out of Tolerance" */
      } /* endif - Listening to SSM */

      /************************************/
      /* Take action based on whether the  */
      /* video is running ahead or behind  */
      /************************************/
      if (!fVideoTooSlow) {

         lmsTimeError = VideoTimeError / 3;  //Convert error to milliseconds
         if (lmsTimeError > 32L) {
            // Block till next time to display a frame
            DosSleep(lmsTimeError);
            pMovie->ulLastBlockTime = CurrentTime;
         } else {
            // Too close to next frame time for system clock to be used
            // Be good and yield to higher priority thread if one around
            DosSleep(0);
            pMovie->ulLastBlockTime = CurrentTime;
         }

      } else {

         /********************************/
         /* Drop some frames if behind!  */
         /********************************/
                      :
                      :
       } /* endif - video too slow */

   // Update Frame Count
   pMovie->ulFrameNumber++;


   return (NO_ERROR);
}
End Listing



Copyright © 1994, Dr. Dobb's Journal

