# Creating Music Components in Java

### Important Relationships

To develop electronic music software, it is necessary to understand a number of mathematical/musical relationships. Some important ones are covered here.

Western music is based on twelve-tone equal temperament tuning, in which every pair of adjacent notes in the scale has an identical frequency ratio. Because an octave interval is defined as a doubling of frequency and there are 12 notes in an octave, the frequency ratio between adjacent notes equals the 12th root of 2, approximately 1.05946309435929..., an irrational number. Typically, equal temperament tuning is tuned relative to a standard pitch of 440 Hz, called "A 440."
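As a sanity check, the semitone ratio can be computed directly. This short sketch (the class name is mine) confirms that twelve equal steps of the 12th root of 2 make exactly one octave:

```java
public class TemperamentCheck {
    public static void main(String[] args) {
        // Frequency ratio between adjacent notes: the 12th root of 2
        double semitoneRatio = Math.pow(2.0, 1.0 / 12.0);
        System.out.println(semitoneRatio); // ~1.0594630943592953

        // Twelve semitone steps double the frequency: A 440 becomes A 880
        double freq = 440.0;
        for (int i = 0; i < 12; i++) {
            freq *= semitoneRatio;
        }
        System.out.println(freq); // ~880.0
    }
}
```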

The cent is a logarithmic unit of measure for musical intervals or differences. There are 1200 cents in an octave. Typically, cents are used to specify small intervals and in fact the interval of one cent is too small to be heard. It is generally agreed that 5 or 6 cents is the minimum frequency difference that most people can detect.
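The interval between two frequencies can be expressed in cents using a base-2 logarithm. Here is a small sketch (the class and method names are my own); note that the 1 Hz difference between 440 Hz and 441 Hz is only about 3.9 cents, just below the 5 to 6 cent threshold mentioned above:

```java
public class CentsCalculator {
    // Express the interval between two frequencies in cents
    public static double intervalInCents(double f1, double f2) {
        return 1200.0 * Math.log(f2 / f1) / Math.log(2.0);
    }

    public static void main(String[] args) {
        System.out.println(intervalInCents(440.0, 880.0)); // 1200.0 (one octave)
        System.out.println(intervalInCents(440.0, 441.0)); // ~3.93 cents
    }
}
```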

Because an octave is a 2:1 frequency ratio and there are 1200 cents in an octave, a semitone (the interval between two adjacent piano keys) or a half step (a one-fret movement on a guitar) is equal to 100 cents.

To sum up, to shift a frequency upward by one octave, you double the frequency. To shift a frequency downward one octave, you halve the frequency.

Sometimes, however, it is necessary to shift a frequency by a small amount. Given a desired frequency shift in cents, a frequency multiplier can be calculated as follows:

```
private static final int CENTS_PER_OCTAVE = 1200;

double frequencyMultiplier = Math.pow(2.0, (double) cents / CENTS_PER_OCTAVE);
```

MIDI (Musical Instrument Digital Interface) is an industry-standard protocol, defined in the 1980s, that enables electronic musical instruments to communicate, control, and synchronize with each other. All modern keyboard synthesizers map their keys to MIDI note numbers, which ensures interoperability between devices built by different manufacturers. When it is necessary to determine the frequency of a given MIDI note, the following code snippet can be used. Note that MIDI note number 69 corresponds to A 440.

```
private static final int REFERENCE_NOTE_NUMBER = 69;
private static final int REFERENCE_NOTE_FREQ = 440;
private static final int NOTES_PER_OCTAVE = 12;

// Convert a MIDI note number to a frequency in Hz
public float midiNoteNumberToFrequency(int mnn) {

    float noteOffset = (mnn - REFERENCE_NOTE_NUMBER) / (float) NOTES_PER_OCTAVE;
    return (float) (REFERENCE_NOTE_FREQ * Math.pow(2.0, noteOffset));
}
```
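The inverse conversion, from a frequency to the nearest MIDI note number, can be sketched the same way (this helper is my own addition, not part of the article's listings):

```java
public class FrequencyToNote {
    private static final int REFERENCE_NOTE_NUMBER = 69; // A 440
    private static final double REFERENCE_NOTE_FREQ = 440.0;
    private static final int NOTES_PER_OCTAVE = 12;

    // Convert a frequency in Hz to the nearest MIDI note number
    public static int frequencyToMidiNoteNumber(double frequency) {
        double noteOffset =
            NOTES_PER_OCTAVE * Math.log(frequency / REFERENCE_NOTE_FREQ) / Math.log(2.0);
        return REFERENCE_NOTE_NUMBER + (int) Math.round(noteOffset);
    }

    public static void main(String[] args) {
        System.out.println(frequencyToMidiNoteNumber(440.0));  // 69
        System.out.println(frequencyToMidiNoteNumber(261.63)); // 60 (middle C)
    }
}
```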

Modulation is an important process in electronic music and it comes in two basic forms: amplitude modulation (AM) and frequency modulation (FM). AM, as the name implies, manipulates the amplitude of samples. A volume control is an example of AM where samples are each multiplied by a constant factor derived from a volume control. With a factor of 1.0, the samples are unmodified; but as the factor approaches 0.0, the amplitude or volume of the samples is reduced until they are inaudible.

```
newSampleValue = originalSampleValue * factor;
```

In synthesizer applications, AM uses a periodic waveform from a low frequency oscillator (or LFO) to modulate the volume of samples, with the result being the volume going up and down in a controlled manner. In the realm of guitar effects, AM is called tremolo.
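A tremolo effect along these lines might be sketched as follows, using a sine-wave LFO to vary the gain factor; the 5 Hz rate and 0.5 depth are my own illustrative choices:

```java
public class TremoloSketch {
    public static void main(String[] args) {
        final double sampleRate = 22050.0; // matches the SamplePlayer's rate
        final double lfoRate = 5.0;        // tremolo speed in Hz (illustrative)
        final double depth = 0.5;          // modulation depth, 0.0 to 1.0

        double[] samples = new double[(int) sampleRate]; // one second of audio
        java.util.Arrays.fill(samples, 1.0); // stand-in for real audio samples

        for (int i = 0; i < samples.length; i++) {
            // The LFO swings the gain factor between (1.0 - depth) and 1.0
            double factor = 1.0
                - depth * 0.5 * (1.0 + Math.sin(2.0 * Math.PI * lfoRate * i / sampleRate));
            samples[i] *= factor; // amplitude modulation
        }
    }
}
```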

FM is a manipulation of the frequency (as opposed to the amplitude) of a signal source by some external source, typically an LFO. In the guitar effects realm, FM is called vibrato. An example of FM is a European siren sound, which can be produced by frequency modulating a signal source with a square wave; this causes the pitch to shift between two discrete frequencies.
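The siren can be sketched by alternating the oscillator frequency under control of a square-wave LFO. This is my own illustration; the two pitches and the 1 Hz LFO rate are arbitrary choices, and the phase accumulator keeps the waveform continuous across frequency changes:

```java
public class SirenSketch {
    public static void main(String[] args) {
        final double sampleRate = 22050.0;
        final double lfoRate = 1.0;    // square-wave LFO rate in Hz (illustrative)
        final double lowFreq = 440.0;  // the two siren pitches (illustrative)
        final double highFreq = 660.0;

        double phase = 0.0;
        double[] samples = new double[(int) (2 * sampleRate)]; // two seconds
        for (int i = 0; i < samples.length; i++) {
            // Square-wave LFO selects one of two discrete frequencies
            boolean high = Math.sin(2.0 * Math.PI * lfoRate * i / sampleRate) >= 0.0;
            double frequency = high ? highFreq : lowFreq;
            // Advance the phase by the instantaneous frequency (FM)
            phase += 2.0 * Math.PI * frequency / sampleRate;
            samples[i] = Math.sin(phase);
        }
    }
}
```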

### The Java Sound Platform

I think it important to hear the fruits of our labors, so to this end I have written the `SamplePlayer` class (Listing One), which uses the javax.sound.sampled package (part of the Java distribution since v. 1.3) to play a sample stream. Here an `AudioFormat` for a mono stream of 16-bit, signed, big endian samples at a rate of 22050 samples/second is created. From the format, a `DataLine.Info` object is created and, from that, a `SourceDataLine` object. Playing samples is then as easy as writing them to the `SourceDataLine`. Notice that the samples come from a provider object and that playback continues until either the done flag is set or the provider returns an end-of-stream indication (-1). `SamplePlayer` is a subclass of `Thread`, so it begins executing when its `start` method is called (which invokes `run`) and terminates under the conditions stated above. Notice that even though we have defined our sample format as 16-bit integers, the `SourceDataLine` class wants the data delivered as an array of bytes. This is somewhat inconvenient (because of little vs. big endian issues), but you have to work within the framework you are given.

##### Listing One: The SamplePlayer Class

```
package com.craigl.softsynth.consumer;

import com.craigl.softsynth.utils.SampleProviderIntfc;
import javax.sound.sampled.*;

public class SamplePlayer extends Thread {

    // AudioFormat parameters
    public  static final int     SAMPLE_RATE = 22050;
    private static final int     SAMPLE_SIZE = 16;
    private static final int     CHANNELS = 1;
    private static final boolean SIGNED = true;
    private static final boolean BIG_ENDIAN = true;

    // Chunk of audio processed at one time
    public static final int BUFFER_SIZE = 1000;
    public static final int SAMPLES_PER_BUFFER = BUFFER_SIZE / 2;

    public SamplePlayer() {

        // Create the audio format we wish to use
        format = new AudioFormat(SAMPLE_RATE, SAMPLE_SIZE, CHANNELS, SIGNED, BIG_ENDIAN);

        // Create dataline info object describing line format
        info = new DataLine.Info(SourceDataLine.class, format);
    }

    public void run() {

        done = false;
        int nBytesRead = 0;

        try {
            // Get line to write data to
            auline = (SourceDataLine) AudioSystem.getLine(info);
            auline.open(format);
            auline.start();

            while ((nBytesRead != -1) && (! done)) {
                // Pull a buffer of samples from the provider
                nBytesRead = provider.getSamples(sampleData);
                if (nBytesRead > 0) {
                    // Write the samples to the line for playback
                    auline.write(sampleData, 0, nBytesRead);
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (auline != null) {
                auline.drain();
                auline.close();
            }
        }
    }

    public void startPlayer() {
        if (provider != null) {
            start();
        }
    }

    public void stopPlayer() {
        done = true;
    }

    public void setSampleProvider(SampleProviderIntfc provider) {
        this.provider = provider;
    }

    // Instance data
    private AudioFormat format;
    private DataLine.Info info;
    private SourceDataLine auline;
    private boolean done;
    private byte [] sampleData = new byte[BUFFER_SIZE];
    private SampleProviderIntfc provider;
}
```

`SamplePlayer` is meant to operate in what I call a pull architecture. It is called this because, once started, the Java sound subsystem pulls samples from its provider at the rate needed for uninterrupted sound reproduction. A Java class becomes a sample provider by implementing the single method interface `SampleProviderIntfc` shown in Listing Two.

##### Listing Two: The Sample Provider Interface

```
package com.craigl.softsynth.utils;

public interface SampleProviderIntfc {

    int getSamples(byte [] samples);
}
```
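As an illustration (my own, not part of the article's code), a minimal provider could be a sine-wave oscillator that fills the buffer with 16-bit, signed, big endian samples to match the `SamplePlayer` format:

```java
import com.craigl.softsynth.utils.SampleProviderIntfc;

public class SineProvider implements SampleProviderIntfc {

    private static final double SAMPLE_RATE = 22050.0;
    private final double frequency;
    private double phase = 0.0;

    public SineProvider(double frequency) {
        this.frequency = frequency;
    }

    // Fill the buffer with 16-bit, signed, big endian samples
    public int getSamples(byte[] samples) {
        for (int i = 0; i < samples.length; i += 2) {
            short sample = (short) (Math.sin(phase) * Short.MAX_VALUE);
            samples[i]     = (byte) (sample >> 8);   // high byte first (big endian)
            samples[i + 1] = (byte) (sample & 0xFF); // low byte
            phase += 2.0 * Math.PI * frequency / SAMPLE_RATE;
        }
        return samples.length; // return -1 to signal end of stream
    }
}
```

Such a provider would be wired up with `player.setSampleProvider(new SineProvider(440.0))` before calling `startPlayer()`.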

### Conclusion

I have presented the background information necessary to understand digital signal processing at a basic level and provided code for playing a stream of samples. In upcoming articles over the next few weeks, I will show how to put this information to use by implementing various modules useful in electronic music. If you have an iPhone or iPod Touch, you can play with the PSynth app, which implements the techniques discussed in this article series.

### Resources

The following books contain valuable information on electronic music production and digital signal processing.

1. Computer Music: Synthesis, Composition, and Performance, Charles Dodge and Thomas A. Jerse, 1985/1997, Schirmer Books.
2. Elements of Computer Music, F. Richard Moore, 1990, Prentice-Hall.
3. A Programmer's Guide to Sound, Tim Kientzle, 1998, Addison-Wesley.
4. Digital Audio with Java, Craig A. Lindley, 2000, Prentice-Hall.
5. PSynth, an iPhone/iPod Touch synthesizer app available from iTunes.

Craig Lindley is a hardware engineer who, until recently, had been writing large-scale Java applications.

