Music and programming languages -- again
I've been learning about synthesizers lately.
One thing I've learned is just how pervasive synthesizers are: It's hard to see a movie or television show, or listen to the radio, without hearing at least some music played on synthesizers. So how does this phenomenon relate to programming?
A synthesizer is typically designed as a collection of modules, each of which contributes to the sound in some way, and all of which are connected together. One module generates the sound, another changes its character, another controls its volume, and so on. Even a simple synthesizer will have dozens of parameters that control its modules. Each of these parameters affects the sound--sometimes in obvious ways, other times more subtly.
Because there are so many parameters, just about every synthesizer on the market offers a collection of presets. A preset is a suite of parameter settings that produce a particular sound. Pick a preset that represents the sound you want, dial it in, and you're ready to go.
Some musicians never go past the presets on their particular synthesizers: They use what the manufacturer has provided, and that's it. Others go online and find parameter settings that other musicians have created for their synthesizer. Still others tweak the parameters themselves, and some of those musicians make their tweaks available to their peers. Finally, the most adventurous souls use modular synthesizers, which let you not only change the parameters but also reconfigure the modules--to the extent of changing which modules are present.
It occurs to me that synthesizers are like programs. A preset is like a library class: Someone else has done the work needed to make the device do what you might want; all you have to do is use it. Programmers exchange (and sell) library code in much the same way that musicians exchange presets. And modular synthesizers are like template libraries: Not only can you use library code, but you can also change the nature of the underlying data structures and how they connect to each other.
However, there is one enormous difference between the synthesizer world and the programming world: An awful lot of programmers start out working at the lowest level, and almost all musicians who play synthesizers start out at the highest level. In other words, a musician who starts using a new synthesizer usually begins by picking an interesting-looking preset and playing music with it. The next step might be to try to adjust the preset, and only after a lot of experience does a musician design new presets from scratch.
If the comments on Usenet and the design of so many popular textbooks are any guide, C++ programmers often still start out by learning to use new and delete (or even malloc and free) for variable-length arrays before learning how to use vectors. Most musicians who use (and buy) synthesizers, by contrast, seem to care much more about the variety of sounds available from the presets than they do about the internal architecture.
I wonder why there is such a profound difference in philosophy, and invite comments from readers.