January 1997

The Synth is Dead — Long Live the Synth!

by Paul D. Lehrman

January being NAMM-show month, I thought I’d talk a little about the state of synthesis. But first, I want to take this opportunity to point out, in case you haven’t heard, that next year’s Winter NAMM convention, for the first time I or anyone else I know can remember, will not be in Anaheim. They’re moving it to the Los Angeles Convention Center for two years, a place I studiously avoid, even when there’s an AES there. Why? Because Los Angeles will be, I kid you not, "safer".

It seems that the Anaheim Convention Center is due for a major reconstruction and expansion over the next couple of years. It will be open most of the time, but apparently the NAMM brass were worried that sending conventioneers into a hall full of scaffolding and dropcloths was dangerous. NAMM brass, obviously, have never been in your average modern nightclub, where scaffolds and dropcloths are an essential part of the decor. Regardless, they have decided that a convention center 10 (extremely hostile) blocks from any decent-sized group of hotels and restaurants is preferable to one in a town that, no matter how you feel about its food or accommodations, defines the term "family-oriented". Not to mention "cheap". It’s going to be a long couple of years.

Anyway, onwards. If you take a quick glance at the state of electronic music today, you might think it’s reached a dead end. All of the major synthesis methods of the past 20 years – FM, L/A, subtractive, additive, wavetable, and so on – have pretty much been subsumed by the great sampling monster. All of the major players in the business – Roland, Yamaha, E-Mu, Ensoniq, Kurzweil – are making ever-more powerful music machines whose power comes mainly from the amount of ROM samples loaded into them, and/or the amount of RAM sample memory that can be filled up by the user. Those libraries of clever patches that use various synthesis techniques to emulate old instruments or create exciting new ones, that used to circulate among user groups and in the classified ad pages of the music magazines, are now few and far between. Instead, the magazines are full of double-truck full-color ads touting the latest CD-ROMs full of hundreds of megabytes of samples – screaming guitars, funky basses, four-bar grooves, brass sections falling from airplanes, and exotic instruments from far-flung regions of the world – available either as audio files or pre-formatted for the sampler of your choice.

Granted, there is more to today’s electronic instruments than just their ability to blow samples in and out. Many of them offer a host of "traditional" synthesis features like complex multi-pole resonant filters, cross-modulation, multi-stage envelope control, and multiple modulation paths for real-time control of various parameters, which certainly makes them more interesting than the simple hit-the-key-and-let-it-loop sample-playback engines of the past. But the idea of creating a sound from scratch, using some kind of electronic or mathematical process that lets you create a structure from the ground up, seems to be a thing of the past.

Fortunately, however, that’s not true. There is actually some very interesting stuff going on in several corners of the synthesis world. I’m not talking about the "retro" analog synths that seem to be flooding the market, along with those combination coffee makers/mic preamps and tube CD players. I’m talking about a totally new approach to synthesis, one that has nothing to do with sampling, but which has only become practical in the last couple of years.

You’ve probably heard about "physical modeling", and in fact it’s been around for quite some time – the first demonstration I saw of it was at IRCAM, the French government-funded music and acoustics research center (someone please tell me how come we don’t have any of those), in 1984. Physical-modeling keyboard synths have been on the market for about three years, but it’s only now that they’re getting past the expensive/experimental stage (Yamaha’s first attempt at FM, remember, was not the DX7!), and are looking like they’re ready to become a major force in the electronic music world.

Physical modeling works by creating mathematical models of the components of real sound-producing physical objects, such as reeds, tubes, plucked or bowed strings, brass mouthpieces, violin bodies, etc., inside a computer processor, and then "stimulating" them and letting them interact with each other in real time. The amount of CPU horsepower needed to pull this trick off is substantial, which is why not long ago you could find it only in places like IRCAM. But Yamaha (who’ve been working on this for ten years now) managed to put a heavy-duty physical modeling engine into their VL1, now approaching its third birthday, and its younger, smaller brother, the VL7. Other companies have been experimenting with the technology, including Korg, with its Wavedrum and Prophecy; Technics, with its WSA1 "workstation" (the MIDI world uses this term even more freely than the audio world); and Roland, with its VG-8 guitar synthesizer, but Yamaha’s implementation remains by far the most interesting.
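To give a flavor of the idea without pretending to reproduce Yamaha’s proprietary VL engine, here is a minimal sketch of the best-known textbook physical model: the Karplus-Strong plucked string. A delay line stands in for the string (its length sets the pitch), a burst of noise is the "pluck" that stimulates it, and a simple two-sample averaging filter models the energy lost at the bridge. All names and numbers here are illustrative.

```python
# A minimal physical-modeling sketch: Karplus-Strong plucked string.
# The delay line is the "string"; noise is the "pluck"; averaging is
# the lossy "bridge". Not any commercial synth's actual algorithm.
import random

def pluck_string(frequency, sample_rate=44100, duration=0.5, damping=0.996):
    """Synthesize a decaying plucked-string tone as a list of samples."""
    period = int(sample_rate / frequency)   # delay-line length sets the pitch
    # "Stimulate" the model: fill the delay line with random noise
    line = [random.uniform(-1.0, 1.0) for _ in range(period)]
    out = []
    for _ in range(int(sample_rate * duration)):
        # Average of two adjacent samples = a crude low-pass filter,
        # so high frequencies die away first, as on a real string
        new_sample = damping * 0.5 * (line[0] + line[1])
        out.append(line.pop(0))
        line.append(new_sample)
    return out

samples = pluck_string(440.0)   # half a second of a decaying A-440 "string"
```

The charm of the approach shows even in this toy: change `damping` and you change the "material" of the string; change how you fill the delay line and you change how it was struck.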

What makes physical modeling so revolutionary comes down to two factors. One is that you can combine elements of different instruments that simply do not exist in the real world, to create new instruments that sound as if they could exist. Some of you may recall that one of the many remarkable instruments that Prof. (?) Peter Schickele, the "discoverer" of the apocryphal classical composer P.D.Q. Bach, brought to the world was the "tromboon": a bassoon reed and bocal inserted into a trombone slide. Since I was a bassoonist, and I had a friend who played the trombone, I was able to create one of these myself – and believe me, it sounded no less vile when I played it than when Schickele’s chamber ensemble did. But with a physical modeling synth, you can stick a virtual double-reed into a virtual brass horn and put a virtual slide on it, and you can then easily play it in tune, with appropriate vibrato, a realistic envelope, and maybe even, if you want to get fancy, a rubber plunger. (Of course, the visual element of multi-colored metal pipes emerging from the player’s lap in four different directions, like a set of IV tubes for a dying robot, is missing, but I consider that a small price to pay.)

The other factor is that physical modeling synths are real performance instruments. With most sampled sounds, when you hit the key, what happens next is pretty much pre-ordained, with maybe a little filter or vibrato action under the control of the player. With a physically-modeled sound, everything is up for grabs, and any part of the sound – reed pressure, tongue attack, pickup location, size of the tone holes, stiffness of the string – can be influenced in real time using just a few instructions. These can, of course, be handled by a wheel, a slider, a pedal, keyboard aftertouch, or in the case of the Yamaha synth, what you do with your breath, before, after, or during the keystroke. It means that understanding "matrix modulation", something that has been a big selling point for a lot of synthesizers but which few people bother to use effectively, is now crucial to getting a useful sound out of the instrument, and can lead to modes of expression that older synths can only dream about.
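The "matrix modulation" idea mentioned above can be sketched in a few lines: each physical controller (breath, aftertouch, a wheel) is routed to one or more model parameters through a table of depths, and the live values offset the stored patch. The parameter and controller names below are hypothetical illustrations, not the VL1’s actual parameter set.

```python
# A hypothetical modulation matrix: routing[source] is a list of
# (target_parameter, depth) pairs. Names are illustrative only.
routing = {
    "breath":     [("reed_pressure", 1.0), ("brightness", 0.4)],
    "aftertouch": [("embouchure", 0.8)],
    "mod_wheel":  [("vibrato_depth", 1.0)],
}

def apply_modulation(base_params, controllers, routing):
    """Offset each patch parameter by the weighted sum of its live sources."""
    params = dict(base_params)               # leave the stored patch intact
    for source, value in controllers.items():
        for target, depth in routing.get(source, []):
            params[target] = params.get(target, 0.0) + depth * value
    return params

patch = {"reed_pressure": 0.5, "embouchure": 0.5,
         "brightness": 0.2, "vibrato_depth": 0.0}
# Player blows moderately hard and nudges the wheel:
live = apply_modulation(patch, {"breath": 0.6, "mod_wheel": 0.25}, routing)
```

One breath value here pushes two parameters at once, with different depths, which is exactly why a single physical gesture on such an instrument can do so much more than a sampled patch’s lone vibrato knob.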

The downside of this, of course, is that these instruments take a while to learn to play. With everything that can be going on, it takes a great deal of practice to coordinate the various controls into coherent performances. You really have to think about what you're doing, and be able to mentally isolate the various physical gestures a wind or brass or string player makes, and translate them into the physical gestures the synth wants. A woodwind player might normally bite hard on the reed and at the same time squeeze the instrument to get a more penetrating sound, but you can't do that on a VL1, because breath control and aftertouch may be doing very different things. Non-wind players will run out of breath until they get used to the idea of playing phrases that stop once in a while. Overblowing a French horn can produce beautiful harmonics, or elephant-like noises, depending on how hard you blow. If that parameter is hooked up to a modulation wheel, then you have to learn how to move the wheel subtly, so as not to destroy the musical effect. It’s a good thing, perhaps, that the economics of the technology dictate that most physical modeling instruments are monophonic – they play but one note at a time. (A polyphonic version of the synth was announced when the VL1 was released, but I have seen no evidence of it since. Just as well.)

I doubt that it would surprise anyone to hear that Yamaha’s initial foray into physical modeling was expensive and not easy to program – the technology is great, but the user interface is the same old same-old: you still only get to adjust one parameter at a time. Furthermore, the core technology – how you juxtapose different instrument parts to make new instruments – was not accessible to the user. While the instruments certainly have plenty of parameters to fool with, the basic instrument models were fixed in stone. As one Yamaha engineer put it, "We could let people hit a saxophone with a hammer, but they wouldn’t like how it sounded."

This has now changed, with a new version of the operating system for the VL1 and VL7, and a Macintosh editing program that uses visual icons to create tromboons, as well as tubellos, flurinets, trumpitars, digeridistortolins, and all sorts of things Prof. Schickele (not to mention Dr. Seuss) would be proud of. The program is made to be as easy as possible: the instruments that are created are quite basic in nature, and the performance parameters accessible from them are very limited. For example, you can turn on and off breath control, but you can’t specify what it does. Nevertheless, it’s a great step forward, and a more complex "expert" editing program is reportedly in the works for those with even more courage.

As far as the expense is concerned, Yamaha has recently attacked that front with a small module called the VL70-m, which contains most of the functionality of the VL7 in a General MIDI module-sized package. (Of course, it’s not General MIDI, in that it can only play one note at a time, but that didn’t stop Yamaha from putting a computer interface on the back just like their GM/XG boxes. Go figure.) This little sucker lists for $800. It has the bulk of the synthesis features of its older siblings, although not quite the same control features, and by necessity the LCD screen is smaller and more cramped. But it will take a breath controller just like the big boys, and it even has an input for a Yamaha wind controller like the WX11, which the older models don’t, for really interesting performance possibilities.

Conquering complexity and price point, however, is only part of the battle of using an exciting new technology to keep true synthesis alive. The rest is getting people interested enough in learning how to play a new instrument. The more realistic a physical model is, the more physical control can be exerted on it to make it do expressive things, and therefore the more knobs, levers, wheels, pedals, and things to suck and blow into can be called into play. For musicians who are used to just playing keys, adding these other parameters can be a real challenge. While they’ve always been there as part of MIDI, few instruments in the past have demanded that musicians pay attention to them. Physical modeling instruments do just that – and the world of synthesis should be better off for it.

Next month, I’ll talk about the other direction synthesis is going: cheap, cheap, cheap. So cheap, in fact, that soon you may not need any hardware at all.


Paul D. Lehrman has been fooling around with electronic music for 30 years, and is still looking for the right sound.

These materials copyright ©1997 by Paul D. Lehrman and Intertec Publishing