Max Tutorial #8: A Keyboard-Controlled Synth

In this tutorial, we will apply what we’ve covered so far to build a piano keyboard-controlled synthesizer. We’ll start off with a standard synthesis chain as in the previous tutorials, this time using a sawtooth oscillator as our sound source. The frequency of the oscillator will be determined by an object called [kslider], short for “keyboard slider,” which generates a piano keyboard-like interface inside of the patch. Clicking on notes on the keyboard (with the patch locked) causes the corresponding MIDI note number to be passed out the left outlet. We can use a [t b i] object to send a bang to the [function] object, triggering the envelope, and the MIDI note number to [saw~] via the [mtof] conversion object.

If you have a MIDI controller available, you can use that instead of clicking on the on-screen keyboard. To add this functionality, we’ll need two new objects: [notein] and [gate]. The [notein] object detects incoming MIDI notes and passes them along. The [gate] object limits what information passes from its right inlet to its outlet. The left inlet opens and closes the gate: a zero closes it and a non-zero value opens it. In this case, the [gate] ensures that only the starts of notes pass through, and not the ends, which are determined automatically by the envelope we draw in the [function] object.

To explain exactly what’s happening with these two objects requires a brief technical tangent—feel free to skip ahead to the next paragraph if you prefer (especially if you don’t have a MIDI controller). Every MIDI note consists of two parts: a “note on” and a “note off.” The “on” and “off” messages for a single note will have the same MIDI note number; they are only distinguished by what is called their “velocity” value, which corresponds to the volume of the note. Any non-zero value is interpreted as the volume of a “note on,” and a value of zero is interpreted as a “note off.” By passing the velocity values from the middle outlet of [notein] to the left (control) inlet of the [gate] object, only the start of notes, corresponding to non-zero velocity values, will pass through, and velocity values of zero, “note offs,” will close the [gate]. If the “note offs” were passed through directly, we would hear a double attack for each note, since the software would have no way of determining which was a “note on” and which was a “note off.”
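
To make the logic concrete, here is a minimal Python sketch of the velocity-as-gate-control idea; the function and callback names are hypothetical stand-ins for illustration, not Max objects:

```python
# Minimal sketch of the [notein] -> [gate] logic: velocity acts as the
# gate's control value, so note-offs (velocity 0) never reach the synth.

def handle_midi_note(pitch, velocity, trigger_note):
    """Mimics [notein] feeding [gate]: velocity opens or closes the gate."""
    gate_open = velocity != 0      # non-zero velocity = "note on"
    if gate_open:
        trigger_note(pitch)        # only note starts pass through

# A note-on followed by its matching note-off only triggers once.
handle_midi_note(60, 100, lambda p: print(f"trigger pitch {p}"))  # fires
handle_midi_note(60, 0,   lambda p: print(f"trigger pitch {p}"))  # silent
```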

Once we have a basic synthesizer setup, we can expand it to include the [lores~] filter object. Instead of using an LFO to modulate the filter as in the previous tutorial, here we’ll use a second [function] object to generate an envelope to modulate the filter. As in the previous modulation-based tutorials, we use the [scale~] object to set a range of values—in this case, a range of frequencies for the filter cutoff from 500 to 1500 Hz. Then we connect the outlet of [scale~] to the cutoff frequency input of [lores~], connect the “b” outlet of the [t b i] object to trigger the second [function] object, and finally, lock the patch and draw an envelope shape for the filter frequency. When we play notes, we can hear the effect of the modulated filter as a change in timbre. Changing the direction and steepness of the envelope changes the way the filter affects the timbre.

Our final step will allow us to customize the range over which the filter sweeps. Choosing a fixed range of frequencies might make sense intuitively, but it results in an inconsistent sound because the filter is then indifferent to the specific pitches we play: pitches in different registers will have dramatically different timbres. To resolve this, we can set the frequency range as a multiple of the frequency played. We do this by using two [*] multiplication objects (note that we are using the “regular” multiplication object and not the audio-rate version [*~]). We pass the frequency from [mtof] into the left inlet of each, and use a floating-point number box on the right to determine the multiples. By trying different multipliers—along with different envelope shapes—it’s possible to get a wide range of sounds from this simple setup.
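
As a quick sketch of the arithmetic, with multiplier values standing in for whatever you might type into the two number boxes:

```python
# Derive the filter sweep range from the note's frequency instead of
# fixing it in Hz, so the timbre tracks the pitch across registers.

def filter_range(note_freq_hz, low_mult=1.0, high_mult=4.0):
    return note_freq_hz * low_mult, note_freq_hz * high_mult

print(filter_range(110.0))   # (110.0, 440.0)  -- low register
print(filter_range(880.0))   # (880.0, 3520.0) -- same relative sweep, higher up
```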

Max Tutorial #7: The Sound of the Ocean

This tutorial shows you how to recreate the sound of the ocean by modulating filters applied to a white noise source. As in the previous tutorial, we’ll use a low-frequency oscillator, or LFO, as the modulating signal. We’ll begin by setting up a basic subtractive synthesis chain: [noise~] as the sound source, [lores~] as the filter, [gain~] as the volume control, and [dac~] for output. (We call this type of synthesis “subtractive” because we begin with a rich sound source, noise, and subtract energy from it with the use of a filter.)

Instead of using constant values for the filter parameters of cutoff frequency and resonance, we will use constantly changing values to more accurately capture the unpredictable movement of the sea. As in the previous tutorial, we will use [cycle~], a sine wave oscillator, to generate our modulating signal. We then use the [scale~] object to match the range of the output of the oscillator, -1 to 1, to our desired range of frequencies, which in the video is set from 300 to 800 Hz. We can do the same for the resonance value, shifting the output range to 0.3 to 0.9, and even add changes in volume by inserting a [*~] object.
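
If you would like to hear something like one channel of this patch outside of Max, here is a rough numpy sketch. Note the big simplification: a plain one-pole lowpass with no resonance stands in for [lores~], and the resonance and volume LFOs are omitted:

```python
import numpy as np

sr = 44100
t = np.arange(sr * 5) / sr                    # five seconds
noise = np.random.uniform(-1, 1, len(t))      # [noise~]

# LFO at 0.11 Hz, mapped from (-1, 1) to (300, 800) Hz as [scale~] would do
cutoff = 550 + 250 * np.sin(2 * np.pi * 0.11 * t)

# Simplified stand-in for [lores~]: a one-pole lowpass with a
# time-varying coefficient (and no resonance control)
alpha = 1 - np.exp(-2 * np.pi * cutoff / sr)
out = np.zeros_like(noise)
y = 0.0
for n in range(len(noise)):
    y += alpha[n] * (noise[n] - y)            # y[n] = y[n-1] + a*(x[n] - y[n-1])
    out[n] = y
```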

The values chosen for many of these objects are subjective, and can be varied for different sonic and musical results. The frequency of the LFOs—set in the video to 0.11, 0.09, and 0.08 Hz, respectively—can certainly be changed for different results. Increasing these values a little will make the “ocean” sound more intense; increasing these values a lot will result in a completely different sound. One important principle to bear in mind in setting these values is choosing frequencies which are incommensurable—that is to say, frequencies which are not multiples or factors of one another. This principle, exploited by Brian Eno in his ambient music, ensures that the alignment of the different parameters will continually vary in unpredictable ways. This unpredictability is part of what makes the sound of the ocean more lifelike.
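
Strictly speaking, any two rates we can type in are rational numbers and therefore do eventually realign; the practical goal is simply to make that realignment period long. A quick Python check of how long two LFOs take to line up again:

```python
from fractions import Fraction
from math import gcd, lcm

def realign_seconds(hz_a, hz_b):
    """Seconds until two LFOs (rates in Hz) return to the same alignment."""
    pa, pb = 1 / Fraction(str(hz_a)), 1 / Fraction(str(hz_b))   # exact periods
    # lcm of two fractions: lcm of numerators over gcd of denominators
    return Fraction(lcm(pa.numerator, pb.numerator),
                    gcd(pa.denominator, pb.denominator))

print(realign_seconds(0.11, 0.09))   # 100 -- a long, hard-to-predict cycle
print(realign_seconds(0.1, 0.2))     # 10  -- a simple ratio repeats quickly
```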

The final step in this tutorial is adding depth to the sound by adding a second channel. We can simply copy everything we’ve created so far and paste it to the right. We can leave the ranges within the [scale~] object the same; we just want to make sure the LFO frequencies are different so that the parametric changes don’t line up between the left and right channels. Finally, we will assign the patch on the left to the left channel and the patch on the right to the right channel by connecting each to the respective inlet of [dac~]. We can control the overall volume by linking the two [gain~] objects, from the right outlet of the left [gain~] object to the inlet of the right [gain~] object. This way, when we slide the left [gain~] object, it controls the right [gain~] object as well, like a pair of stereo faders.

Max Tutorial #6: Modulating Oscillators with LFOs

In this tutorial, we’ll begin to apply some modulating techniques using a low-frequency oscillator or LFO. In Max, there is no distinction between regular oscillators and low-frequency oscillators. We use the same objects for both, and adjust the frequency range accordingly. In a voltage-controlled synthesizer, the voltage values output from an oscillator can be sent directly to the input of other modules in order to perform modulation. In digital systems like Max, however, objects don’t interpret the “voltage” or signal correctly unless we specify the range of values explicitly.

We can set the range of values for modulation by using the [scale~] object. The [scale~] object takes arguments for an input range and an output range and—you guessed it—scales them accordingly. We’ll start with a sawtooth oscillator as our sound source and a sine wave oscillator as our LFO. The change in amplitude of the sine wave, which goes smoothly up and down, will modulate the frequency of the sawtooth oscillator. Between [cycle~] and [saw~], we add the [scale~] object to convert from the signal output range of [cycle~], which goes from -1 to 1, to a range of frequencies. In this case, we’ll simulate a wide vibrato-like effect by choosing a range from 400 to 440 Hz. This is called frequency modulation.
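
Under the hood, this is just a linear mapping. A quick Python sketch of what [scale~] computes for each sample:

```python
def scale(x, in_lo, in_hi, out_lo, out_hi):
    """Linear mapping, as [scale~] performs: in_lo..in_hi -> out_lo..out_hi."""
    return out_lo + (x - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

print(scale(-1, -1, 1, 400, 440))  # 400.0 (LFO at its lowest point)
print(scale( 0, -1, 1, 400, 440))  # 420.0 (midpoint)
print(scale( 1, -1, 1, 400, 440))  # 440.0 (highest point)
```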

We can also use an LFO to modulate the amplitude, or volume, of the signal. At relatively slow speeds, this sounds like a tremolo effect. To make the effect more obvious, we’ll eliminate the frequency modulation and assign [saw~] a fixed frequency. Then we’ll move our [cycle~] and [scale~] objects over and connect them to the multiplication object [*~], which we’ve used in the past to control volume, much as a voltage-controlled amplifier, or VCA, does. We’ll also adjust the output range in the [scale~] object to 0 to 1, reflecting the way Max designates loudness. Just as before, the frequency of the modulating oscillator, [cycle~], determines how fast the effect is.
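
If you want to try the tremolo outside of Max, here is a rough numpy sketch of the same chain, with a naive (aliasing) sawtooth standing in for [saw~] and a scaled sine standing in for the [cycle~] and [scale~] pair:

```python
import numpy as np

sr = 44100
t = np.arange(sr * 3) / sr                  # three seconds

carrier = 2 * ((440 * t) % 1) - 1           # naive sawtooth, like [saw~ 440]
lfo = np.sin(2 * np.pi * 4 * t)             # 4 Hz LFO, like [cycle~ 4]
amp = (lfo + 1) / 2                         # rescale -1..1 to 0..1

tremolo = carrier * amp                     # the [*~] stage, our "VCA"
```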

The last section of this tutorial demonstrates how to modulate modulators. In other words, instead of a tremolo or vibrato effect with a constant speed, another level of modulation allows us to vary the speed of the effect over time. The architecture is simple: we replace the constant that previously determined the speed (the floating-point number box) with another [cycle~] and [scale~] pair, and adjust the output ranges accordingly.

In the first example, which expands the amplitude modulation technique, the tremolo effect will vary between 3 and 12 pulses per second. The amount of time between these extremes is determined by the number box at the top. The value given, 0.1 Hz, is too low to be heard as an audible frequency (hence “low-frequency” oscillator), but is slow enough that we can hear the change in the tremolo speed clearly. (0.1 Hz means that the oscillator completes one tenth of one cycle or “period” every second, and therefore a complete cycle between the two extremes once every ten seconds.)

We can plug in the same structure for frequency modulation, again adjusting the output of the [scale~] object as necessary. Once again, 0 to 1 is a good range for amplitude, but not frequency, so we’ll switch back to the frequency range we used before: 400 to 440 Hz. Remember to connect the output of the last [scale~] object into the frequency inlet of [saw~]. Once we’ve made these adjustments, because the floating-point number box at the top is still set to 0.1 Hz, we can hear that every ten seconds the vibrato goes from its fastest speed (12 pulses per second) to its slowest speed (3 pulses per second).
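
One implementation note if you ever rebuild this patch in code rather than in Max: when an oscillator’s frequency changes over time, you have to accumulate phase sample by sample instead of computing sin(2πft) directly. A rough numpy sketch of the variable-speed vibrato:

```python
import numpy as np

sr = 44100
t = np.arange(sr * 20) / sr

# Top-level LFO at 0.1 Hz sweeps the vibrato rate between 3 and 12 Hz.
rate = 7.5 + 4.5 * np.sin(2 * np.pi * 0.1 * t)

# With a time-varying rate, accumulate phase; writing sin(2*pi*rate*t)
# directly would be wrong once the rate changes over time.
phase = 2 * np.pi * np.cumsum(rate) / sr
vibrato_lfo = np.sin(phase)                     # -1..1

freq = 420 + 20 * vibrato_lfo                   # scaled to 400..440 Hz

# The carrier needs the same treatment: accumulate its phase too.
saw_phase = np.cumsum(freq) / sr
carrier = 2 * (saw_phase % 1) - 1               # sawtooth at varying pitch
```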

Max Tutorial #5: Expanding the Sequencer with Skips and Slides

In this tutorial, we will expand the sequencer we’ve been developing with additional functionality. We’ll start off with a very similar structure, but instead of eight steps, we’ll double it to sixteen. This means that both the [counter] and the [multislider] have to be updated as shown. We’ll use the sawtooth oscillator [saw~] again, as in Tutorial #3.

Once this is up and running, we’ll add a second [multislider] object to control a second parameter, as in the previous tutorial. The second [multislider] object will control the volume of each step. The range for each slider in the second [multislider] should be 0 to 1 with floating-point output, as shown in the video. (You can copy and paste objects by using command+C and command+V, or through the Edit menu.)

We’ll control the volume through the use of a second [*~] object, below the first one. We can think of volume control as a multiplication-based process. Each slider in the volume [multislider] has a value from 0 to 1: 1 is the maximum value, 0 is silence, 0.5 is half of the maximum value, and so on. So when we change the volume using [*~], we are multiplying the audio signal by the value given by [multislider]. We add the [line~] object so that changes in volume are processed smoothly at the audio rate. Note that if you drag one of the sliders all the way down, you can drop that step out of the pattern, creating more rhythmic interest.

There are two other expansions we’ll explore in this tutorial. The first is adding smoothness (also known as bend or slide) between pitches, so that they seem to glide into one another. We can implement this in a very simple way using a message box and an additional [line~] object feeding into [saw~]. As in the previous tutorial, the “$1” passes through the values from the object above (in this case, frequency values). The second number indicates how many milliseconds the synth should take to reach that value. In the video, the message “$1 50” means that the synth will introduce a slide of 50 milliseconds for every pitch. This may not sound like a lot, but the difference is clearly audible.
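
As a rough stand-in for what [line~] does when it receives a “$1 50”-style message, here is a little Python generator (the names and sample rate are just for illustration):

```python
def line(start, target, ms, sr=44100):
    """Rough stand-in for [line~] receiving "$1 <ms>": yields a linear
    ramp from start to target over ms milliseconds, one value per sample."""
    steps = max(1, int(sr * ms / 1000))
    for i in range(1, steps + 1):
        yield start + (target - start) * i / steps

# "220. 50" -> glide from the current frequency (440 Hz) to 220 Hz over 50 ms
ramp = list(line(440.0, 220.0, 50))
print(len(ramp), ramp[-1])   # 2205 samples, ending at 220.0
```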

To make the slide parameter more customizable, we can use the [join] object. The [join] object brings multiple elements together into a single piece of data. In this case, there are two elements, so we add an argument of “2” as shown. The frequency value passes into the left inlet and we can connect an integer number box to the right inlet to set and change the slide time. The attribute “@triggers 0” tells [join] to output the combined elements only upon input to the left inlet. This allows us to freely change the slide time without causing output, which would disrupt the rhythmic pattern. As you can hear in the video, the greater the slide time, the greater the modulation effect on the pitch.
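
Here is a toy Python mock-up of the hot/cold inlet behavior that “@triggers 0” gives us; the class is a hypothetical stand-in for illustration, not Max’s actual API:

```python
class Join:
    """Sketch of [join 2 @triggers 0]: the right inlet is "cold" (it stores
    its value silently); only input to the left inlet causes output."""
    def __init__(self, right_default=50):
        self.right = right_default

    def right_inlet(self, value):
        self.right = value            # set the slide time; no output

    def left_inlet(self, value):
        return [value, self.right]    # frequency in -> "freq time" list out

j = Join()
j.right_inlet(200)                    # change slide time without output
print(j.left_inlet(440.0))            # [440.0, 200] -> feeds [line~]
```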

The final expansion in this tutorial allows us to divide the sequence into smaller rhythmic units by choosing the step at which the sequence starts over. We can implement this by creating an integer number box that passes through the message “max $1” into the left inlet of the [counter] object. The “max” message sets the maximum count value for [counter]. As we change the number, the number of steps looped by the sequencer changes, creating distinct rhythmic patterns. We can verify this by adding a number box to show the output of [counter] directly.
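
A toy model of [counter] with a settable maximum shows why changing the “max” value reshapes the loop:

```python
class Counter:
    """Sketch of [counter 1 16] with a settable maximum (the "max $1" message)."""
    def __init__(self, lo=1, hi=16):
        self.lo, self.hi = lo, hi
        self.current = lo - 1

    def set_max(self, hi):            # what the "max $1" message does
        self.hi = hi

    def bang(self):                   # one tick from [metro]
        self.current = self.lo if self.current >= self.hi else self.current + 1
        return self.current

c = Counter()
c.set_max(3)                          # loop only the first three steps
print([c.bang() for _ in range(7)])   # [1, 2, 3, 1, 2, 3, 1]
```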

Max Tutorial #4: A Simple Drum Machine

In this tutorial we will use the sequencing techniques covered in previous tutorials to build a simple drum machine. The first difference in this tutorial is that instead of using an oscillator as a sound source, we will use a noise generator called [noise~]. We can connect [noise~] to an envelope generator in exactly the same manner as an oscillator, as illustrated in the video.

Next, we will use the [multislider] in order to customize the drum sound on each step. We will use a resonant low-pass filter to shape the noise sound. This type of filter has two parameters we can control: cutoff frequency and resonance. Accordingly, we will use two [multislider] objects so that we can control these two parameters independently. We will set up the first [multislider] (on the top) to control the cutoff frequency in exactly the same manner as in the previous tutorial (using MIDI note numbers).

The second [multislider], on the bottom, will control the resonance. The resonance parameter ranges from 0 to 1, so in addition to using eight sliders for our steps, we have to adjust the properties in the Inspector so that the range is from 0 to 1 (the Sliders Output Values setting remains floating-point since we want to use the decimal values between 0 and 1). Once we have created these two [multislider] objects, we can connect them to a [metro] and [counter 1 8] as before, using the “fetch $1” message.

Now it’s time to add the filter. The filter object is called [lores~]. From left to right, its inputs are audio in, cutoff frequency, and resonance. Therefore we connect the [noise~] source to the left inlet, the output of the [multislider] labeled “frequency” to the middle inlet (via [mtof], as before), and the output of the [multislider] labeled “resonance” to the right inlet. (Add labels or comments by pressing “c” when the patch is unlocked.)

Remember to use the right outlet of the [multislider] each time. You can straighten out the connecting wires by clicking on a wire and pressing command+Y (or going to Arrange -> Auto-Align).

We can connect several objects to the [function] object to trigger the envelope generator. In this case, I’ve connected the [counter] directly to the button, but the output of [metro]—or the right outlet of either [multislider], as before—would also work. Once we make these last connections, we can lock the patch, turn on the audio, and start to customize the drum pattern by changing the position of the sliders in the [multislider] objects and shaping the envelope in [function].

Max Tutorial #3: Customizing a Pitch Sequence

This tutorial builds on the concepts in Tutorial #2 by building a sequencer in which the pitch of each step of the sequence can be customized. We begin with the [metro] and [counter] objects to determine how many steps are in the sequence, and how fast we move through the sequence. Instead of using the [sel] object to trigger sound for each step, however, we’ll use a new object called the [multislider]. When you create the [multislider] object, as with [function], the name will disappear and you’ll be presented with a dark-colored rectangle.

The [multislider] object allows you to customize the number of sliders it contains. In our patch, each slider will represent a single step. As before, we’ll use eight steps total in our sequence. To set the number of sliders, we need to open the object Inspector. To access the Inspector, unlock the patch, click on the object so that it is highlighted, and then hover over the left side of the object. Click on the yellow circle that appears and choose Inspector. The Inspector will pop up on the right side of the screen.

Scroll down to the bottom of the Inspector, and you will find three parameters that we will change: Number of Sliders, Range, and Sliders Output Values. (If you don’t see these options, click All, next to Basic, Layout, and Recent at the top of the Inspector.) Set the Number of Sliders to eight. We will use standard MIDI note numbers for our pitches, so we change the range to the MIDI standard of 0 to 127 (separated by a space as in the video). Finally, because we are using MIDI, we are only interested in integers, so we change the Sliders Output Values accordingly.

Once we’re finished with the Inspector, we can return to the patch and resize the [multislider] so it will be easier to see. If we lock the patch and click inside of the [multislider], we see there are eight distinct sliders which can be arranged freely. The vertical position of each slider will represent the pitch of each sequencer step.

In order to connect the [counter] to the [multislider], we have to use a message box (shortcut “m”). Type “fetch $1” into the message box. In this case, “fetch” tells the [multislider] object to output the value of the slider corresponding with the current step as determined by the counter. The “$1” portion of the message is a “dummy variable” that is automatically replaced by whatever value goes into the message box. Since [counter] is sending out the numbers one through eight over and over again, the [multislider] interprets the incoming messages as “fetch 1,” “fetch 2,” “fetch 3,” etc. Each time it receives this message, it outputs the value of that slider, which we will use to control the pitch of our sequencer shortly. We can verify all of this by adding some integer boxes (“i”) and turning on [metro], as in the video.
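
Spelled out in Python, with an arbitrary list of MIDI note numbers standing in for the multislider’s positions, the round trip looks something like this:

```python
# Sketch of the control flow: the [counter] output is spliced into the
# message "fetch $1", and the multislider (here just a list) answers
# with the value stored at that step.

sliders = [60, 62, 64, 65, 67, 69, 71, 72]       # eight slider positions

def fetch(step):
    """Respond to a "fetch <step>" message (steps are numbered from 1)."""
    return sliders[step - 1]

for step in [1, 2, 3]:
    message = f"fetch {step}"                     # "$1" replaced by the count
    print(message, "->", fetch(step))             # fetch 1 -> 60, ...
```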

Next, we will create a simple synthesizer using the same objects as in the previous tutorial. The only change in this video is the use of a different oscillator—instead of a sine wave, we’ll use [saw~], which generates a sawtooth waveform. This doesn’t have any impact on the functionality of the patch—it just gives the sound a different, richer timbre.

The final step is connecting the two halves of the patch. We can think of the left half of the patch as the “control” part of the patch, and the right half of the patch as the “audio” part of the patch. There are two pieces of information we have to pass from left to right: when each note is triggered, and the pitch of each note. We can pass this information along using the [trigger] object, or [t] for short. The [t] object takes as its arguments the kind of information we wish to pass through. In this case, we want to send an integer (“i”) to set the pitch of the oscillator, and a “bang” (“b”) to trigger the envelope. The [t] object also helps us set the order in which information is sent: arguments are sent in order from right to left. So if we type in [t b i], the integer, representing pitch, will be sent first, followed by the bang for the envelope generator. Sometimes the order doesn’t matter, but it is best practice to update the pitch of a sound before actually triggering it.

All that remains is to connect the outlets of the [t b i] object to the oscillator and envelope generator. The bang should be connected to the envelope generator with a button as before. The integer, however, must be converted from a MIDI note number to a frequency value (in Hertz), as all of the oscillator objects expect frequency values to determine pitch. Therefore, we use an object called [mtof], which converts from MIDI note numbers to frequency before passing the pitch information through to the oscillator. Once this is complete, we can lock the patch, turn on the audio, turn up the volume, and explore the different possible patterns of our sequencer.
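
For reference, [mtof] performs the standard equal-temperament conversion (MIDI note 69 = 440 Hz). A small sketch, including the right-to-left firing order of [t b i] (the helper callbacks are hypothetical):

```python
# The MIDI-to-frequency conversion that [mtof] performs.
def mtof(midi_note):
    return 440.0 * 2 ** ((midi_note - 69) / 12)

# [t b i] fires right to left: set the pitch first, then send the bang.
def on_step(midi_note, set_freq, trigger_envelope):
    set_freq(mtof(midi_note))        # "i" outlet (rightmost) fires first
    trigger_envelope()               # "b" outlet fires second

print(round(mtof(60), 2))            # 261.63 Hz (middle C)
print(round(mtof(69), 2))            # 440.0 Hz
```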

Max Tutorial #2: Making a Clave

This tutorial expands on concepts from Max Tutorial #1 by adding sound to the sequence and creating a clave-like repeated musical pattern. In the first part of the video, we produce a simple, sustained sine tone by using the [cycle~] object, which is a sine wave oscillator. The signal flows through a [gain~] object, which controls the volume, and finally passes to the [dac~] object, a digital-to-analog converter, so that the sound will be audible through your speakers. The frequency of the sine wave is determined by a number box. This time we use “f” as a shortcut to create a floating-point number box so that we can enter decimal values (as opposed to the integer number box we used in the first tutorial).

In order to hear sound in Max, the audio must be turned on in the lower-right corner. The icon will turn blue when the audio is on. Normally the audio should be on when you’re performing or testing, and off when you’re programming. The next step is to build a simple synthesizer by applying an envelope to the sine wave. We can draw the shape of an envelope by using an object called [function], and then passing the output of the second-from-left outlet into an object called [line~]. To draw a shape in [function], the patch must be locked. Then simply click to place points (shift-click to delete a point), and make sure your starting and ending points are all the way to the bottom of the window (the points will turn “hollow”).

In a traditional synthesizer, the envelope generator and sound source are combined using a voltage-controlled amplifier, or VCA. In Max, we can use the multiplication object [*~] as shown in the video. Note that objects whose name ends with a tilde (“~”) are specifically audio objects, whereas other objects do not necessarily relate directly to audio, and so are called control objects. For instance, [function] does not have a tilde, so we have to convert its output to an audio signal by passing it through [line~]. We can trigger the envelope by connecting a button to the top of [function] and clicking on it.
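
As a sketch of the multiplication-as-VCA idea, here is a minimal numpy version of this patch, with a hand-drawn-style envelope approximated by line segments:

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr                            # one second

tone = np.sin(2 * np.pi * 440 * t)                # [cycle~ 440]

# A [function]-style shape approximated by segments: quick attack,
# slow decay, with both endpoints at zero (the "hollow" points).
env = np.interp(t, [0.0, 0.02, 1.0], [0.0, 1.0, 0.0])

beep = tone * env                                 # the [*~] stage, our "VCA"
```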

Finally, we can recreate the patch from the first tutorial in order to create an eight-step pattern. We can choose which steps of the pattern should have sound, and then connect them to the envelope generator via the numbered outlets of [sel], as shown. The result is a looping musical pattern. In future tutorials, we will expand this sequencing concept so that we can include different notes.

Max Tutorial #1: Building a Sequence with [metro]

This is the first in an instructional video series for using Max, a visual programming language for music and many other things. When you open up a new patch, note the lock icon in the lower-left corner. This allows you to lock and unlock the patch. Generally, a patch should be unlocked for editing and locked for testing and performing. The keyboard shortcut on a Mac is command+E, and control+E on a PC.

To create new items in the Max environment, first unlock the patch. Then type “n” to create a new object. This first object will be a metronome, called “metro.” Type “metro” to create the metronome object. The next object is a toggle, which we’ll use as an on/off switch for the metronome. Type “t” to create a toggle and connect it as shown in the video.

The next element is a number box, which will determine how fast the metronome clicks. There are two types of number boxes in Max: integer and decimal (or floating point). We’ll use the integer number box (the shortcut is “i”) since we don’t need decimal values. By default, we set the timing of the metronome by specifying how many milliseconds (1/1000th of a second) between clicks. (To change the number you have to lock the patch.)

Then we’ll unlock the patch again and press “b” to create a button. Buttons can be used to trigger things or to show when something has been triggered. We’ll use it to visualize the clicking of the metronome. When the patch is locked, you can type new values into the number box or use the mouse to drag the value up and down. Because the metronome measures time in milliseconds, lower values correspond to a faster tempo (less time between clicks) and higher values correspond to a slower tempo (more time between clicks).

In order to create a musical pattern, as with a sequencer, we have to be able to split the metronome pulse into separate steps or beats. We can do this by running the metronome into an object called [counter]. The arguments “1 8” indicate that the count will loop between 1 and 8, like an eight-step sequencer, at the speed dictated by the metronome. We can add a number box below the [counter] to visualize the steps.

Next, we can divide up the steps by using an object called [sel] (short for “select”). We’ll add an argument for each step (“1 2 3 4 5 6 7 8”). We can expand the size of the object by dragging the lower-right corner—this will help us be able to see all of the different outlets at the bottom of the object. By connecting a button to each of the first eight outlets of [sel], we can visualize the metronome click corresponding to each step. (We’ll ignore the last, rightmost outlet of [sel] for now.)

Finally, we can convert the metronome input control to tempo (in beats per minute, or BPM) instead of milliseconds. To convert from BPM to seconds, we divide 60 by the BPM (e.g. 60/60 = 1 second). To convert from BPM to milliseconds, we divide 60,000 by the BPM (60000/60 = 1000 milliseconds). The object we will use to divide is [!/] with an argument of “60000” to specify the numerator. Now we can set a BPM for the metronome using a number box and it is automatically converted to milliseconds. This will allow us to build sequence-based musical patterns in future videos.
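
The arithmetic is simple enough to state as a one-line function:

```python
def bpm_to_ms(bpm):
    """What [!/ 60000] computes: 60000 / bpm (the argument is the numerator)."""
    return 60000 / bpm

print(bpm_to_ms(60))    # 1000.0 ms between clicks
print(bpm_to_ms(120))   # 500.0 ms -- twice the tempo, half the interval
```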

Online Resources

A list of online and (mostly) open-access resources for electronic music. Please support these folks if you are able.

Tools

Software Environments

Virtual Instruments

Plug-Ins

Sound Libraries

Programming

Utilities

Guides and Tutorials

General Resources

Production and Mixing

Fantastic Synth Sounds and Where to Find Them

Deep Dives

Blogs

Music Theory

Creative Gating

It’s a simple concept: set a gate at a certain threshold of volume, and only sounds above that volume will pass through. It’s a straightforward way to get rid of persistent, low-volume irritations like hums, ambient sound, and background noise. So straightforward, in fact, that many people seem to think of gates more as “utility” effects than as musical ones. But just as compression (as in dynamic range compression) is now viewed as at least as much an art as a science, there is much to recommend the creative possibilities of the humble gate.

(For a quick primer, check out this two-part series from Sound on Sound.)

Even though gates are generally used to reduce the volume of undesirable parts of an audio signal such as background noise (such gates are known as “noise gates”), they are actually part of a class of effects known as dynamic range expanders. (The term “dynamics” is often used to refer to volume in music and audio.) How does expanding the dynamic range reduce volume? Well, by making quiet sounds even quieter, expanders actually increase the distance between the loudest sounds (i.e. those above the threshold and therefore unaffected by the gate) and the softest.

Imagine standing next to a drummer performing in a large, empty acoustic space. They play a drum beat that contains both loud sounds (for example, played on the kick and snare), and soft sounds (played on a closed hi-hat). The dynamic range is defined as the range or distance between the loudest sound (let’s say the thwack of the snare) and the softest sound (the tap of the hi-hat).

To expand the dynamic range, we physically move the hi-hat farther away from where we stand, so it sounds even quieter than before (let’s assume our drummer has extraordinarily long arms). If the softest sound before was the sound of a hi-hat a few feet away, now the softest sound is that of a hi-hat, say, fifty feet away. The loudest sound is unchanged (the snare is still right next to us), but there is more of a difference–a wider dynamic range–between the snare and the distant hi-hat, than between the snare and the close hi-hat.

To extend the analogy, compressors work in just the opposite fashion: we would take the loudest sound–the snare drum–and move it away from where we stand until it was closer in volume to the hi-hat right next to us. This rearrangement reduces or compresses the dynamic range, meaning that there is less of a difference between the volume of the snare and the hi-hat than before.

The most extreme version of the compressor, called a limiter, prevents any sounds above a given threshold from passing through (normal compressors decrease the volume of sounds above the threshold but do not impose a hard limit). Similarly, the most extreme version of the expander is the gate: instead of just making soft sounds softer, it makes them completely inaudible. In our analogy, it would be like taking the hi-hat out through the doors at the opposite end of the space and down the block.
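
In code, the difference between an expander and a gate is just the ratio taken to its limit. A sketch operating on levels in decibels (the threshold and ratio values are arbitrary):

```python
def downward_expand(level_db, threshold_db=-40.0, ratio=2.0):
    """Levels above the threshold pass unchanged; levels below it are
    pushed further down by `ratio`. A gate is the limiting case where
    the ratio is effectively infinite."""
    if level_db >= threshold_db:
        return level_db                              # loud sounds untouched
    return threshold_db + (level_db - threshold_db) * ratio

print(downward_expand(-50))                          # -60.0: quieter than before
print(downward_expand(-50, ratio=float("inf")))      # -inf: the gate slams shut
```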

That said, gates–and by extension, all expanders–are about much more than just changes in volume and dynamic range. One of the most famous creative gating effects is gated reverb, an important part of the huge sound of 1980s-era snare drums on many records. A conventional reverberation effect simulates the gradual decay in volume of a sound in a resonant space. Gated reverb involves aggressively cutting off the reverb before it has naturally decayed, resulting in a larger-than-life burst of energy that fades out quickly. Check out the snare on almost any record from the 1980s by Prince, Bruce Springsteen, or Phil Collins.

The “gated” in gated reverb refers to the use of a gate to dramatically cut off the reverb after it passes below a certain threshold (but well before it would naturally fade out). However, even though the gate is affecting the reverberated version of the audio, it’s actually responding to (or detecting) the volume of the drum before reverb is applied. The much shorter duration of the dry signal causes the gate to close on the reverberated signal before it has itself decayed, giving us the characteristically abrupt cutoff.

The principle of using one signal to control another is at the heart of many more familiar techniques, such as modulation with an LFO or side-chaining. When gates are involved, the process is often described as “envelope following.” Envelope following means that changes in the volume of a signal (such as the attack and decay of individual notes or drum hits) are linked to the opening and closing of a gate. The gate is then applied to other tracks, essentially allowing one to apply the rhythmic pattern of one track to another.
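
A sketch of this kind of external keying in Python: the same function covers gated reverb (key = dry drum, signal = reverb tail) and rhythmic keying (key = rhythm guitar, signal = string pad). The envelope follower here is deliberately crude, and the threshold value is arbitrary:

```python
import numpy as np

def keyed_gate(signal, key, threshold=0.1, sr=44100, smooth_ms=5.0):
    """Open the gate on `signal` whenever `key` is loud enough.
    key = dry drum, signal = reverb tail  -> gated reverb
    key = rhythm guitar, signal = strings -> keyed rhythmic chops"""
    alpha = 1 - np.exp(-1 / (sr * smooth_ms / 1000))   # one-pole smoothing
    env = np.zeros(len(key))
    e = 0.0
    for n, x in enumerate(np.abs(key)):                # rectify, then smooth
        e += alpha * (x - e)
        env[n] = e
    return signal * (env > threshold)                  # hard open/shut
```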

For example, in “Upside Down” by Diana Ross, the rhythm of the strings is triggered (or “keyed”) by Nile Rodgers’s guitar.

Another great example is “Everybody Dance” by Chic. Listen to the solo section, beginning around 4:00.

In electronic music, this is often achieved by applying a gate to a sustained sound (such as a synth pad) and triggering it with a more rhythmic track, such as a drum track or step sequencer. This control layer may or may not be audible, just as kick sounds used for aggressive side-chain compression in a mix may be completely different from the actual kick track. Sometimes the gate is even inverted to produce complementary rhythms: whenever the first layer is playing, the second layer is not, and vice versa.

And what about using a reversed gate on its own: how might one use a gate to pass only sounds below a certain volume threshold? This is not an application that comes up often, but it’s an interesting question. Let’s say one wanted to create a sustained pad-like background layer from a dynamic source (for instance, turning a drum beat into a wash of metallic cymbal sounds). One approach might be to use such a “reverse gate” to cut out the loudest attacks from the source, and then fill in the gaps with heavy reverb effects (and probably some compression as well). Although there exist specialized plug-ins that can do this, a bit of cleverness can easily flip the functionality of a standard gate.

Recall that two versions of the same signal with opposite phase cancel each other out when combined. We can use this principle to cancel out sounds above the threshold of the gate, leaving only those below the threshold. Simply copy the same audio to two tracks and invert the phase of one version. Then apply a gate to the inverted version. When the sound is above the threshold the gate will open, and the two out-of-phase versions of the track will play simultaneously, resulting in silence. When the audio is below the threshold the gate will remain closed, meaning that the only audio we hear is the original, in-phase audio without the gate applied. Unlike compression, which merely turns the loud sounds down, this technique removes the attacks completely, leaving silences where they once were.
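
Here is a toy numpy demonstration of the trick. It works sample by sample for clarity; a real gate would respond to a smoothed envelope, with attack and release times:

```python
import numpy as np

def reverse_gate(audio, threshold=0.5):
    """Keep only what falls below the threshold: sum the original with a
    phase-inverted, gated copy, which cancels the loud parts to silence."""
    inverted = -audio                                  # flip the phase
    gated = np.where(np.abs(audio) > threshold, inverted, 0.0)
    return audio + gated                               # loud samples cancel to 0

drums = np.array([0.9, 0.1, -0.8, 0.05, 0.7, -0.02])
print(reverse_gate(drums))    # [ 0.    0.1   0.    0.05  0.   -0.02]
```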

Gates can even be used to toggle between different inputs. The most famous example is the lead vocal on David Bowie’s “Heroes,” using a technique known as multi-latch gating devised by producer Tony Visconti. On this track, Bowie’s vocals are captured simultaneously by three microphones at varying distances, but the volume at which Bowie sings determines which microphone captures the sound. Each microphone input passes through a gate whose threshold is proportional to its distance from the singer. When Bowie sings quietly, the closest microphone captures his voice, but as his voice rises in intensity, the gate on a more distant microphone is opened, and the gate on the closer microphone is closed.
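
Schematically, the latching logic might look something like this (the threshold values are invented for illustration):

```python
def multi_latch(sung_level, thresholds=(0.0, 0.4, 0.7)):
    """Sketch of multi-latch gating: each mic's gate has a higher
    threshold the farther it stands from the singer; only the farthest
    mic whose threshold the voice exceeds stays open, muting the rest."""
    open_mic = 0
    for i, thresh in enumerate(thresholds):
        if sung_level > thresh:
            open_mic = i
    return open_mic

print(multi_latch(0.2))   # 0 -> quiet singing, closest mic
print(multi_latch(0.5))   # 1 -> louder, the middle mic takes over
print(multi_latch(0.9))   # 2 -> full voice, the distant room mic
```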

What ends up happening is that Bowie has to practically yell to be picked up by the farthest microphone, but the physical distance prevents the microphone from being overloaded while simultaneously adding more room ambience. In the context of the song, it allows for a powerful, emotional performance that is also somehow remote and alienated. It’s hard to describe, but it works–and at the center of it all is an elegant application of one of the simplest and perhaps most underrated tools.