Feature - The 1970s in the 21st century: synthesized music returns (via parallel processing)
Curtis Roads is a professor, vice chair, and graduate advisor in media arts and technology, with a joint appointment in music, at the University of California, Santa Barbara. He was formerly editor of the Computer Music Journal (published by MIT Press) and co-founded the International Computer Music Association. He is often a featured speaker at conferences such as Supercomputing.
Music is an interactive, concurrent process. A note or a chord sounds, then is replaced, gradually or sharply, softly or powerfully, by the next one. For electronically produced or enhanced music, real-time technical advances are critical to continued progress and exploration.
I fondly remember learning my first parallel programming language, Burroughs Extended Algol, in the 1970s. As a researcher in media arts and technology with a focus on music, I wrote programs that spawned thousands of parallel processes for computerized musical composition.
After this, the sequential computing paradigm began to dominate. This seemed like a step backwards — we had to write sequential loops for algorithms in which the exact sequence of events was irrelevant. Even so, as microprocessors became faster, we were able to overcome many real-time hurdles, including sound synthesis, concert hall reverberation, analysis of sound waves, pitch estimation, and granulation: dividing sound into many short snippets that allow shortened or prolonged replay with no change in pitch or quality.
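The core of granulation can be sketched in a few lines. The following is a minimal illustration (not code from any system mentioned in this article): grains are read from the input at one hop size and written to the output at another, with a Hann window smoothing each grain's edges, so the sound is prolonged without changing its pitch. All names and parameter values here are illustrative assumptions.

```python
import math

def granulate(samples, grain_size=512, hop_in=256, stretch=2.0):
    """Time-stretch a signal by replaying overlapping windowed grains.

    Grains are read every `hop_in` input samples but written every
    `hop_in * stretch` output samples, so the result is `stretch`
    times longer while each grain (and thus the pitch) is unchanged.
    """
    hop_out = int(hop_in * stretch)
    n_grains = max(1, (len(samples) - grain_size) // hop_in + 1)
    out = [0.0] * ((n_grains - 1) * hop_out + grain_size)
    for g in range(n_grains):
        start_in = g * hop_in
        start_out = g * hop_out
        for i in range(grain_size):
            # Hann window: fades each grain in and out to avoid clicks
            w = 0.5 - 0.5 * math.cos(2 * math.pi * i / grain_size)
            out[start_out + i] += samples[start_in + i] * w
    return out
```

Note that every grain is computed independently of the others — exactly the kind of loop whose "exact sequence of events" is irrelevant.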
More recently, my colleagues, graduate students, and I have turned our attention back to problems in areas where parallel machines could have a major impact. One of these is matrix modulation for control of sound synthesis. With modulation, a control signal is used to change the output of a sound-generating component so as to give it a more lifelike quality.
The massively parallel modular analog synthesizers of the 1970s combined signals from multiple components into a common audio output. The Arp (see image above) and other analog synthesizers implemented a matrix modulation control scheme.
The idea of matrix modulation is that any component that emits an output signal can be programmed to control, or modulate, any other component that accepts an input signal. This provides a flexible framework for control of synthesis processes, and enables automatic control of a variety of parameters while a human musician controls other parameters manually.
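One way to picture this framework is as a matrix of gains, with one row per modulation source and one column per modulatable parameter. A minimal sketch (my own illustration, not the design of any synthesizer named here; all class and method names are assumptions):

```python
class ModMatrix:
    """A modulation routing matrix: any source can drive any destination.

    gains[i][j] is the amount by which source i's output signal
    modulates destination parameter j; zero means no connection.
    """
    def __init__(self, n_sources, n_dests):
        self.gains = [[0.0] * n_dests for _ in range(n_sources)]

    def connect(self, src, dest, amount):
        self.gains[src][dest] = amount

    def apply(self, source_values, base_params):
        """Return the parameters after adding every scaled source signal."""
        out = list(base_params)
        for i, v in enumerate(source_values):
            for j, g in enumerate(self.gains[i]):
                out[j] += v * g
        return out
```

For example, connecting a low-frequency oscillator (source 0) to a grain-pitch parameter (destination 1) with `connect(0, 1, 0.5)` sweeps that pitch automatically while a musician adjusts other parameters by hand. The cost is also visible: `apply` runs every sample, touching every source-destination pair, which is why this subsystem can dominate the processor budget.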
Inspired by these specialized analog computers, my student David Thall and I developed a software synthesizer called EmissionControl that implements matrix modulation to synthesize granulated sounds. While granular synthesis can be a computationally intensive task on its own, we found that the matrix modulation subsystem was actually consuming about 80% of the processor cycles.
EmissionControl has only 17 parameters to control, and it is easy to imagine a more complicated synthesis process on the scale of the Arp 2500 with many more parameters. As much as this calls out for a multiprocessor solution, it would require partitioning the algorithm on-the-fly into pieces that can be run independently in order to avoid data dependencies — an interesting challenge.
The vast majority of today’s musical software does not take advantage of multi-core processors; this type of partitioning, also called multithreading, requires additional manual labor and is prone to human error. Systems that could automatically multithread would be a boon.
—Curtis Roads. This article is adapted from his presentation "The Hungry Music Monster"; he recently gave performances in Zurich, Switzerland, and Boston, Massachusetts.