## Music with Markov Chains

This is a quick tutorial showing how to make music using Markov chains in Python. (If you’re not familiar with Markov chains, this page gives a great, interactive introduction.) The general idea is that we can input melodies of a certain style, and our model will output similar melodies that reflect that style. We’ll use a sequence of integers representing notes in a melody to train our model (i.e. build a transition table), and then compare the output.

First, start up Python and import the following libraries:

```python
import numpy as np
import pandas as pd
import random
```

Next, we need melodies to use as input to train our model. You can compose them yourself, transcribe them, or use a dedicated toolkit like music21 to extract them from digital score files. Here I’ll just use the first few notes of a couple of familiar melodies. The integers are MIDI note numbers (transposed to C major):

```python
brother_john = [60, 62, 64, 60, 60, 62, 64, 60]
little_lamb = [64, 62, 60, 62, 64, 64, 64]
```

We want to keep the melodies in separate lists to avoid introducing “false” patterns between the last note of one melody and the first note of the next. For example, the last note of the first melody is 60 and the first note of the second melody is 64, but the pattern 60 64 never occurs in either melody, so we keep them separate!
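To make that concrete, here is a quick standalone check (not part of the model itself): the pair 60 → 64 appears only if we concatenate the two lists.

```python
brother_john = [60, 62, 64, 60, 60, 62, 64, 60]
little_lamb = [64, 62, 60, 62, 64, 64, 64]

def pairs(seq):
    # Every consecutive (current, next) pair in a melody
    return set(zip(seq[:-1], seq[1:]))

combined = pairs(brother_john + little_lamb)
separate = pairs(brother_john) | pairs(little_lamb)
print(combined - separate)  # {(60, 64)}: the "false" transition
```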

A Markov chain can be represented through what’s known as a transition table. The transition table specifies how likely we are to move from one state to another. This function will build our transition table:

```python
def make_table(allSeq):
    # Matrix size: one row/column for every note number up to the highest used
    n = max(max(s) for s in allSeq) + 1
    arr = np.zeros((n, n), dtype=int)
    for s in allSeq:
        # Tally each transition: rows are the next note, columns the current note
        for i, j in zip(s[1:], s[:-1]):
            arr[i, j] += 1
    return pd.DataFrame(arr).rename_axis(index='Next', columns='Current')
```

Then we call the function with whatever melodies we’d like to include as items in a list (any number of melodies). This builds the transition table, essentially “training” the model:

`transitions = make_table([brother_john, little_lamb])`

The next step is to generate a new sequence based on the table. So we need a new function:

```python
def make_chain(t_m, start_term, n):  # trans_table, start_state, num_steps
    chain = [start_term]
    for i in range(n - 1):
        chain.append(get_next_term(t_m[chain[-1]]))
    return chain
```

Inside of which we use the following nested function for each step:

```python
def get_next_term(t_s):
    # Weighted random choice: the transition counts serve as the weights
    return random.choices(t_s.index, weights=t_s)[0]
```

And finally, we’re ready to create our chain by calling the function using three arguments (transition table name, starting value, and length of sequence):

`make_chain(transitions, 60, 10)`

And we get something like this:

`>>> [60, 60, 62, 64, 60, 62, 64, 60, 60, 60]`

Try it a few times to see what kind of results you get. Switch up the starting value and train it on more melodies! Have fun!
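For convenience, here is the whole pipeline collected into one self-contained script. A seed is added so a single run is repeatable; remove it (or change it, the value is arbitrary) to get fresh results each time.

```python
import random

import numpy as np
import pandas as pd

brother_john = [60, 62, 64, 60, 60, 62, 64, 60]
little_lamb = [64, 62, 60, 62, 64, 64, 64]

def make_table(allSeq):
    n = max(max(s) for s in allSeq) + 1
    arr = np.zeros((n, n), dtype=int)
    for s in allSeq:
        for i, j in zip(s[1:], s[:-1]):  # (next, current) pairs
            arr[i, j] += 1
    return pd.DataFrame(arr).rename_axis(index='Next', columns='Current')

def get_next_term(t_s):
    return random.choices(t_s.index, weights=t_s)[0]

def make_chain(t_m, start_term, n):
    chain = [start_term]
    for _ in range(n - 1):
        chain.append(get_next_term(t_m[chain[-1]]))
    return chain

random.seed(2024)  # for repeatability only
transitions = make_table([brother_john, little_lamb])
print(make_chain(transitions, 60, 10))  # ten notes drawn from {60, 62, 64}
```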

## Experimental Genre Associations

This post summarizes one part of a larger digital humanities project on the use of the term "experimental" to describe music. For more on this project, including the data and code, see my GitHub.

The term “experimental” is used in discussions of musical genre in two contradictory ways: (1) to describe music that does not fit into any existing category, and (2) as a qualifier to describe music that occupies an aesthetically marginal position within a category (similar to the term “avant-garde”). To better understand the latter usage, I designed this quantitative project to identify the genre associations of musicians considered to be “experimental” using data from Wikipedia. I found that experimental musicians were most likely to also be categorized as rock music.

Because “experimental” is not consistently recognized as a genre in and of itself, instead of using genre labels I used a list of 184 experimental musicians from this page.

I used BeautifulSoup to parse the list and obtain the web addresses for each musician’s Wikipedia page. By scraping each musician’s Wikipedia page, I generated a list of 554 genre labels comprising 159 unique entries. I removed 93 results of “None,” in addition to one entry that was an editorial indication rather than a genre (“Edit section: Genres”), resulting in a dataset of 460 labels, of which 157 were unique.

As a final step, I consolidated the 157 unique labels into a handful of larger genre categories using substring matching. I began with well-established genre labels including hip hop/rap, pop, classical, rock, jazz, electronic, and dance. For hip hop/rap I combined the results of the substrings “hip hop” and “rap”; for electronic I used the substring “electro” (rather than “electronic”) so as to encompass words like “electroacoustic.” Next, I analyzed the list to determine if other terms were especially prevalent, and therefore warranted consolidation so as to be compared with the larger categories. I found that the terms metal, punk, and industrial were especially prevalent, and added these as well for a total of ten categories for comparison.
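The consolidation step can be sketched in a few lines of Python. The label list below is hypothetical (a stand-in for the scraped dataset), but the category substrings follow the ones described above.

```python
# Hypothetical labels standing in for the scraped dataset
labels = ["experimental rock", "art pop", "electroacoustic music",
          "hip hop", "trap", "industrial metal", "noise rock"]

categories = {
    "hip hop/rap": ["hip hop", "rap"],  # two substrings combined
    "electronic": ["electro"],          # also catches "electroacoustic"
    "rock": ["rock"],
    "pop": ["pop"],
    "metal": ["metal"],
}

# Count labels matching at least one substring per category
counts = {cat: sum(any(sub in lab for sub in subs) for lab in labels)
          for cat, subs in categories.items()}
print(counts)
# {'hip hop/rap': 2, 'electronic': 1, 'rock': 2, 'pop': 1, 'metal': 1}
```

Note that "trap" is counted under hip hop/rap here because it contains the substring "rap"; substring choices like this need care, since they can match more than intended.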

Musicians considered to be experimental were most likely to be associated with rock music subgenres by a wide margin, followed by electronic, pop, and metal. Remarkably, the number of rock-related labels (88) exceeded even the number of instances of the label “experimental” (70).

Of course, it bears mentioning that the sample size for this project is extremely small, and was drawn from a list that was generated manually, rather than automatically. Consequently, the list may be especially vulnerable to systemic biases of Wikipedia editors. Nevertheless, this brief study is a starting point for better understanding the application of an especially ambiguous term.

## Max Tutorial #10: Randomness as Control

In the previous tutorial, we used [random] to generate a random stream of notes. As you’ll recall, the possible outcomes were a function of two elements working together: a range determined by an integer into the right inlet of [random], and an offset determined by the [+] object. Because the random numbers represent the actual pitches (via MIDI note numbers) of the output, we can say that we are using randomness to generate musical material. Consequently, we would characterize this approach as “generative.”

Another way of using randomness is to “control” other parts of the patch. Instead of generating material, we use randomness to point to specific values that are already stored in other parts of the patch. One of the ways that we have previously stored values is in the sequencer patch, using the [multislider] object. We’ll start off the video by recreating the simple sequencer that we built in an earlier tutorial: using [counter] to control an eight-step sequencer where the pitches are determined by the values of [multislider]. Note that although this patch is technically simple and very similar to previous tutorials, the switch to thinking in terms of “control” functions involves some conceptual explanation (hence the length of this write-up).

We’ll recall that [counter 1 8] will always count up from 1 to 8 and then loop back to 1 again. This pattern ensures that we always move through the sequencer in exactly the same way, from step 1 to step 8. By using randomness, however, we can jump between steps unpredictably, meaning that we play the same notes as before, but in a constantly changing order. We’ll start off with a [random 8] object in place of the [counter]. However, since [random 8] gives us values between 0 and 7, we need to add a [+ 1] offset for the correct range of 1 to 8 (this is the range that [multislider] is expecting).

If we start up [metro] again, we hear a scrambled version of the original pattern that is constantly changing. Again, the notes are the same, but the order is different. We can take things a step further by customizing the “kind” of randomness we want. For example, when we think of something that sounds random, we often think of something that doesn’t repeat. The [random] object has no constraints on repetition: each random value is generated independently of the previous value, so it is perfectly possible to produce the same number twice (or, rarely, even more times) in a row. In musical terms, this means we hear the same note multiple times in a row—something we hear several times in this example.

If we want to specify random number generation without repetition, we can use a different object called [urn]. The [urn] object is similar to the [random] object, except that once it outputs a value it never outputs that value again. For [urn 8], that means that we’ll hear exactly 8 notes, and then silence. In order to continue to hear more notes, we must reset the object. Luckily, [urn] is built to make this easy.

When [urn] has gone through all of its notes, it sends a bang out its right outlet. This is a signal that [urn] needs to be reset or there will be no more output. To reset [urn], we have to send a message comprising the word “clear” to the left inlet. Therefore, to smoothly reset [urn] each time it runs through its entire range of numbers, we want to connect the right outlet with a clear message sent to the left inlet. This is one of the few exceptions to the rule that patch cables should never go “up” on screen. Here, the output of the [urn] object is actually feeding back into the input (albeit in a very specific way so as not to produce a feedback loop).

In making this feedback connection, we actually have to do two things: we have to reset the object with “clear,” but then we also have to send a bang if we want the rhythm to remain consistent. If we don’t send a bang here, we will have a silent beat each time we reset the object, since [urn] sends a bang out of the right outlet only after it has finished outputting all of its numbers from the left.

This is also a moment where the order of operations is very important. In the space of a single step of the sequencer, we must first clear the object, and then send the bang through. If the order were reversed, we would create a feedback loop in which the bang out the right outlet was continually passed into the left inlet to create a new bang out the right outlet, never actually resetting the object. For this reason, we use the trigger object [t] to force the order.

Recall that the [t] object executes from right to left: whatever we want to happen first must be to the right, and whatever last must be to the left. Therefore, we want our clear message on the right, and our bang on the left. One handy thing about the trigger object is that, in addition to passing data through (by using letters like “i” and “b”), the [t] object can also send one-word messages, like “clear.” Therefore, instead of connecting [t] to a separate message box, we can actually just type the word “clear” into [t] as shown.

The final object should be [t b clear], with both outlets connecting to the left inlet of [urn]. This ensures that each time [urn] runs through its range, it is automatically reset and a new sequence of eight random values begins again. Note that these values refer to the steps of the sequencer, meaning that they are “controlling” the sequencer, rather than generating musical material such as pitches. This also eliminates repeated notes, except possibly at the boundary where one cycle of [urn] ends and the next begins.
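The behavior of [urn] with the [t b clear] reset can be sketched in Python (an analogy, not Max code): draw from a shuffled pool without replacement, and refill the pool the moment it empties.

```python
import random

class Urn:
    """Draw values 0..n-1 without repetition; refill when exhausted,
    mirroring [urn]'s right-outlet bang driving a 'clear' and a new bang."""
    def __init__(self, n):
        self.n = n
        self.pool = []

    def next(self):
        if not self.pool:  # all values used: "clear" and refill
            self.pool = list(range(self.n))
            random.shuffle(self.pool)
        return self.pool.pop()

urn = Urn(8)
steps = [urn.next() + 1 for _ in range(16)]  # offset to steps 1..8, as with [+ 1]
print(steps)  # two back-to-back permutations of 1..8
```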

## Max Tutorial #9: Making Music with Randomness

This tutorial is a brief introduction to making music using randomness. First of all, it cannot be overstated that there are many different ways to use randomness to make music, and each method can yield incredibly different results. Randomness can be used to create a sense of chaos and unpredictability when applied in a certain way, but it can also be used to recreate the feel of human presence when employed as a balance against rhythmic quantization.

We’ll start with the [random] object, which is the main random number generator in Max. It has two inlets: on the right, it takes an integer to set the range, and on the left, it takes a bang to output a random number within that range. The range set by the integer has one important caveat: [random] begins counting from 0, not 1! This means that if we set the range to 10, as in the video, the possible numbers are 0 through 9, not 1 through 10. Likewise, by setting the range to 12 (as in the 12 notes of the chromatic scale), we get the numbers 0 through 11, and never 12 (though as we are counting from 0, there are still 12 possible results.)

This is a little counter-intuitive at first, but it’s not actually a limitation. By using what’s called an “offset” we can generate random numbers in any range we like. We establish an offset by using the addition object, [+]. The [+] object adds two numbers together. The number that triggers the calculation goes in the left inlet, which in this case is our random number. It is added to another number that should already be loaded in, either by typing after the “+” (as I have done, preceded by a space), or by sending an integer into the right inlet. Note that even if you have typed a number into the object itself, like “60”, sending a number into the right inlet later will replace 60 in the calculation, even though the “60” still appears in the object.

In the video, the range of [random] is set to 12 with an offset of [+ 60]. This means that the random numbers will be in the range of (0+60) to (11+60), or 60 to 71. In musical terms, this corresponds to the MIDI note numbers in the octave above middle C (MIDI note number 60). For all of the notes in the first two octaves above middle C, we can simply double the range of [random] to 24 without making any changes to the offset. By plugging this simple combination of objects into the synthesizer we have already built in previous tutorials, we can make a simple random pitch generator. Finally, we can add a [metro] to generate random numbers automatically instead of manually clicking on the button each time.
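The same range-plus-offset arithmetic can be sketched outside Max; using Python's standard library, [random 24] into [+ 60] behaves like:

```python
import random

# [random 24] outputs 0..23; [+ 60] shifts the result up to middle C
notes = [random.randrange(24) + 60 for _ in range(8)]
print(notes)  # eight MIDI note numbers in the range 60..83
```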

As an aside, you may have noticed that as the patches get more complicated, I have started paying more attention to the layout of the objects on screen. In general, it is good practice to design patches that are easy to read, and that depict the flow of information as clearly as possible. Take these three principles into consideration as you move forward:

1. Straighten patch cables by clicking on them and pressing command+Y. You can continue to drag cables around after straightening them, but straight lines are generally visually easier to follow than curved ones. In general, short cables are easier to follow than long cables (though sometimes long cables are unavoidable).

2. Connect objects in sequence from top to bottom. When patching, think about how information flows: it’s usually from the outlets (on the bottom) of one object to the inlets (on the top) of another object. This means that for the cleanest layout, objects that receive information should be placed below the objects sending information. If we reflect on the patch in the video, we can see a clear cause-and-effect relationship from top to bottom: (1) the toggle turns on the metronome, (2) the metronome outputs a bang, (3) the bang generates a random number, (4) the random number has an offset applied, (5) the random number triggers an envelope and changes the frequency of the oscillator, (6) the oscillator and envelope are multiplied together, and (7) the resulting sound is sent to the speakers.

3. Finally, also consider expanding objects horizontally to make your patch more legible by shortening cable length. When the patch is unlocked, you can drag the corners of any object to make it wider or narrower. In the patch in the video, because the [function] object is large, I prefer to keep it separated off to the right from the other elements. By dragging the [t i b] object to be wider, I ensure that the patch cable going to the [function] object is straight and as short as possible. I expand the [*~] object below in complementary fashion.

## Max Tutorial #8: A Keyboard-Controlled Synth

In this tutorial, we will apply what we’ve covered so far to build a piano keyboard-controlled synthesizer. We’ll start off with a standard synthesis chain as in the previous tutorials, this time using a sawtooth oscillator as our sound source. The frequency of the oscillator will be determined by an object called [kslider], short for “keyboard slider,” which generates a piano keyboard-like interface inside of the patch. Clicking on notes on the keyboard (with the patch locked) causes the corresponding MIDI note number to be passed out the left outlet. We can use a [t b i] object to send a bang to the [function] object, triggering the envelope, and the MIDI note number to [saw~] via the [mtof] conversion object.

If you have a MIDI controller available, you can use that instead of clicking on the on-screen keyboard. To add this functionality, we’ll need two new objects: [notein] and [gate]. The [notein] object detects incoming MIDI notes and passes them along. The [gate] object is used to limit what information passes from the right inlet to the outlet. The left inlet opens and closes the gate: a zero closes it and a non-zero value opens it. In this case, the [gate] ensures that only the start of notes are passed through, and not the ends of notes, which are automatically determined by the envelope we draw in the [function] object.

To explain exactly what’s happening with these two objects requires a brief technical tangent—feel free to skip ahead to the next paragraph if you prefer (especially if you don’t have a MIDI controller). Every MIDI note consists of two parts: a “note on” and a “note off.” The “on” and “off” messages for a single note will have the same MIDI note number; they are only distinguished by what is called their “velocity” value, which corresponds to the volume of the note. Any non-zero value is interpreted as the volume of a “note on,” and a value of zero is interpreted as a “note off.” By passing the velocity values from the middle outlet of [notein] to the left (control) inlet of the [gate] object, only the start of notes, corresponding to non-zero velocity values, will pass through, and velocity values of zero, “note offs,” will close the [gate]. If the “note offs” were passed through directly, we would hear a double attack for each note, since the software would have no way of determining which was a “note on” and which was a “note off.”
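As a sketch of that gating logic in Python (the event list here is hypothetical; a velocity of 0 marks a note off):

```python
# (note, velocity) pairs: velocity 0 is a "note off"
events = [(60, 100), (60, 0), (64, 90), (64, 0), (67, 70), (67, 0)]

# Like [gate] driven by velocity: only non-zero velocities pass through
note_ons = [note for note, vel in events if vel != 0]
print(note_ons)  # [60, 64, 67]
```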

Once we have a basic synthesizer setup, we can expand it to include the [lores~] filter object. Instead of using an LFO to modulate the filter as in the previous tutorial, here we’ll use a second [function] object to generate an envelope to modulate the filter. As in the previous modulation-based tutorials, we use the [scale~] object to set a range of values—in this case, a range of frequencies for the filter cutoff from 500 to 1500 Hz. Then we connect the outlet of [scale~] to the cutoff frequency input of [lores~], connect the “b” outlet of the [t b i] object to trigger the second [function] object, and finally, lock the patch and draw an envelope shape for the filter frequency. When we play notes, we can hear the effect of the modulated filter as a change in timbre. Changing the direction and steepness of the envelope changes the way the filter affects the timbre.

Our final step will allow us to customize the range over which the filter sweeps. Choosing a fixed range of frequencies might make sense intuitively, but it results in an inconsistent sound because it means that the filter is indifferent to the specific pitches we play. This means that pitches in different registers will have a dramatically different timbre. To resolve this, we can set the frequency range as a multiple of the frequency played. We do this by using two [* ] multiplication objects (note that we are using the “regular” multiplication object and not the audio rate version [*~]). We pass the frequency from [mtof] into the left inlet of each, and use a floating-point number box on the right to determine the multiples. By trying different multipliers—along with different envelope shapes—it’s possible to get a wide range of sounds from this simple setup.

## Max Tutorial #7: The Sound of the Ocean

This tutorial shows you how to recreate the sound of the ocean by modulating filters applied to a white noise source. As in the previous tutorial, we’ll use a low-frequency oscillator, or LFO, as the modulating signal. We’ll begin by setting up a basic subtractive synthesis chain: [noise~] as the sound source, [lores~] as the filter, [gain~] as the volume control, and [dac~] for output. (We call this type of synthesis “subtractive” because we begin with a rich sound source, noise, and subtract energy from it with the use of a filter.)

Instead of using constant values for the filter parameters of cutoff frequency and resonance, we will use constantly changing values to more accurately capture the unpredictable movement of the sea. As in the previous tutorial, we will use [cycle~], a sine wave oscillator, to generate our modulating signal. We then use the [scale~] object to match the range of the output of the oscillator, -1 to 1, to our desired range of frequencies, which in the video is set from 300 to 800 Hz. We can do the same for the resonance value, shifting the output range to 0.3 to 0.9, and even add changes in volume by inserting a [*~] object.

The values chosen for many of these objects are subjective, and can be varied for different sonic and musical results. The frequency of the LFOs—set in the video to 0.11, 0.09, and 0.08 Hz, respectively—can certainly be changed for different results. Increasing these values a little will make the “ocean” sound more intense; increasing these values a lot will result in a completely different sound. One important principle to bear in mind in setting these values is choosing frequencies which are incommensurable—that is to say, frequencies which are not multiples or factors of one another. This principle, exploited by Brian Eno in his ambient music, ensures that the alignment of the different parameters will continually vary in unpredictable ways. This unpredictability is part of what makes the sound of the ocean more lifelike.

The final step in this tutorial is adding depth to the sound by adding a second channel. We can simply copy everything we’ve created so far and paste it to the right. We can leave the ranges within the [scale~] object the same; we just want to make sure the LFO frequencies are different so that the parametric changes don’t line up between the left and right channels. Finally, we will assign the patch on the left to the left channel and the patch on the right to the right channel by connecting each to the respective inlet of [dac~]. We can control the overall volume by linking the two [gain~] objects, from the right outlet of the left [gain~] object to the inlet of the right [gain~] object. This way, when we slide the left [gain~] object, it controls the right [gain~] object as well, like a pair of stereo faders.

## Max Tutorial #6: Modulating Oscillators with LFOs

In this tutorial, we’ll begin to apply some modulating techniques using a low-frequency oscillator or LFO. In Max, there is no distinction between regular oscillators and low-frequency oscillators. We use the same objects for both, and adjust the frequency range accordingly. In a voltage-controlled synthesizer, the voltage values output from an oscillator can be sent directly to the input of other modules in order to perform modulation. In digital systems like Max, however, objects don’t interpret the “voltage” or signal correctly unless we specify the range of values explicitly.

We can set the range of values for modulation by using the [scale~] object. The [scale~] object takes arguments for an input range and an output range and—you guessed it—scales them accordingly. We’ll start with a sawtooth oscillator as our sound source and a sine wave oscillator as our LFO. The change in amplitude of the sine wave, which goes smoothly up and down, will modulate the frequency of the sawtooth oscillator. Between [cycle~] and [saw~], we add the [scale~] object to convert from the signal output range of [cycle~], which goes from -1 to 1, to a range of frequencies. In this case, we’ll simulate a wide vibrato-like effect by choosing a range from 400 to 440 Hz. This is called frequency modulation.
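The arithmetic behind this mapping is a plain linear interpolation; here is a Python sketch of the linear case (the actual [scale~] object has further options, such as exponential scaling):

```python
def scale(x, in_lo, in_hi, out_lo, out_hi):
    # Linear map from one range to another, like [scale~ -1 1 400 440]
    return out_lo + (x - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

print(scale(-1, -1, 1, 400, 440))  # 400.0
print(scale(0, -1, 1, 400, 440))   # 420.0
print(scale(1, -1, 1, 400, 440))   # 440.0
```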

We can also use an LFO to modulate the amplitude, or volume, of the signal. At relatively slow speeds, this sounds like a tremolo effect. To make the effect more obvious, we’ll eliminate the frequency modulation and assign [saw~] a fixed frequency. Then we’ll move our [cycle~] and [scale~] objects over and connect them to the multiplication object [*~], which we’ve used in the past to control volume similar to a voltage-controlled amplifier, or VCA. We’ll also adjust the output range in the [scale~] object to 0 to 1, reflecting the way Max designates loudness. Just as before, the frequency of the modulating oscillator, [cycle~], determines how fast the effect is.

The last section of this tutorial demonstrates how to modulate modulators. In other words, instead of a tremolo or vibrato effect with a constant speed, another level of modulation allows us to vary the speed of the effect over time. The architecture is simple: we replace the constant that previously determined the speed (the floating-point number box) with another [cycle~] and [scale~] pair, and adjust the output ranges accordingly.

In the first example, which expands the amplitude modulation technique, the tremolo effect will vary between 3 and 12 pulses per second. The amount of time between these extremes is determined by the number box at the top. The value given, 0.1 Hz, is too low to be heard as an audible frequency (hence “low-frequency” oscillator), but is slow enough that we can hear the change in the tremolo speed clearly. (0.1 Hz means that the oscillator completes one tenth of one cycle or “period” every second, and therefore a complete cycle between the two extremes once every ten seconds.)

We can plug in the same structure for frequency modulation, again adjusting the output of the [scale~] object as necessary. Once again, 0 to 1 is a good range for amplitude, but not frequency, so we’ll switch back to the frequency range we used before: 400 to 440 Hz. Remember to connect the output of the last [scale~] object into the frequency inlet of [saw~]. Once we’ve made these adjustments, because the floating-point number box at the top is still set to 0.1 Hz, we can hear that every ten seconds the vibrato goes from its fastest speed (12 pulses per second) to its slowest speed (3 pulses per second).

## Max Tutorial #5: Expanding the Sequencer with Skips and Slides

In this tutorial, we will expand the sequencer we’ve been developing for additional functionality. We’ll start off with a very similar structure, but instead of eight steps, we’ll double it to sixteen steps. This means that both the [counter] and the [multislider] have to be updated as shown. We’ll use the sawtooth oscillator [saw~] again, as in Tutorial #3.

Once this is up and running, we’ll add a second [multislider] object to control a second parameter, as in the previous tutorial. The second [multislider] object will control the volume of each step. The range for each slider in the second [multislider] should be 0 to 1 with floating-point output, as shown in the video. (You can copy and paste objects by using command+C and command+V, or through the Edit menu.)

We’ll control the volume through the use of a second [*~] object, below the first one. We can think of volume control as a multiplication-based process. Each slider in the volume [multislider] has a value from 0 to 1. 1 is maximum value, 0 is silence, 0.5 is half of the maximum value, etc. So when we change the volume using [*~], we are multiplying the audio signal by the value given by [multislider]. We add the [line~] object to process changes in volume at the audio rate. Note that if you drag one of the sliders all the way down, you can drop that step out of the pattern, creating more rhythmic interest.

There are two other expansions we’ll explore in this tutorial. The first is adding smoothness (also known as bend or slide) between pitches, so that they seem to glide into one another. We can implement this in a very simple way using a message box and an additional [line~] object into [saw~]. As in the previous tutorial, the “$1” passes through the values from the object above (in this case, frequency values). The second number indicates how many milliseconds the synth should take to reach that value. In the video, the message “$1 50” means that the synth will introduce a slide of 50 milliseconds for every pitch. This may not sound like a lot, but the difference is clearly audible.

To make the slide parameter more customizable, we can use the [join] object. The [join] object brings multiple elements together into a single piece of data. In this case, there are two elements, so we add an argument of “2” as shown. The frequency value passes into the left inlet and we can connect an integer number box to the right inlet to set and change the slide time. The text “@triggers 0” tells [join] to output the combined elements only upon input to the left inlet. This allows us to freely change the slide time without causing output, which would disrupt the rhythmic pattern. As you can see in the video, the greater the slide time, the greater the modulation effect on the pitch.

The final expansion in this tutorial allows us to divide the sequence into smaller rhythmic units by choosing the step at which the sequence starts over. We can implement this by creating an integer number box that passes through the message “max $1” into the left inlet of the [counter] object. The “max” message sets the maximum count value for [counter]. As we change the number, the number of steps looped by the sequencer changes, creating distinct rhythmic patterns. We can verify this by adding a number box to show the output of [counter] directly.

## Max Tutorial #4: A Simple Drum Machine

In this tutorial we will use the sequencing techniques covered in previous tutorials to build a simple drum machine. The first difference in this tutorial is that instead of using an oscillator as a sound source, we will use a noise generator called [noise~]. We can connect [noise~] to an envelope generator in exactly the same manner as an oscillator, as illustrated in the video.

Next, we will use the [multislider] in order to customize the drum sound on each step. We will use a resonant low-pass filter to shape the noise sound. This type of filter has two parameters we can control: cutoff frequency and resonance. Accordingly, we will use two [multislider] objects so that we can control these two parameters independently. We will set up the first [multislider] (on the top) to control the cutoff frequency in exactly the same manner as in the previous tutorial (using MIDI note numbers).

The second [multislider], on the bottom, will control the resonance. The resonance parameter ranges from 0-1, so in addition to using eight sliders for our steps, we have to adjust the properties in the Inspector so that the range is from 0 to 1 (the Sliders Output Values remains floating-point since we want to use the decimal values between 0 and 1). Once we have created these two [multislider] objects, we can connect them to a [metro] and [counter 1 8] as before, using the “fetch \$1” message.

Now it’s time to add the filter. The filter object is called [lores~]. From left to right, its inputs are audio in, cutoff frequency, and resonance. Therefore we connect the [noise~] source to the left inlet, the output of the [multislider] labeled “frequency” to the middle inlet (via [mtof], as before), and the output of the [multislider] labeled “resonance” to the right inlet. (Add labels or comments by pressing "c" when the patch is unlocked.)

Remember to use the right outlet of the [multislider] each time. You can straighten out the connecting wires by clicking on a wire and pressing command+Y (or going to Arrange -> Auto-Align).

We can connect several objects to the [function] object to trigger the envelope generator. In this case, I’ve connected the [counter] directly to the button, but the output of [metro]—or the right outlet of either [multislider], as before—would also work. Once we make these last connections, we can lock the patch, turn on the audio, and start to customize the drum pattern by changing the position of the sliders in the [multislider] objects and shaping the envelope in [function].

## Max Tutorial #3: Customizing a Pitch Sequence

This tutorial builds on the concepts in Tutorial #2 by constructing a sequencer in which the pitch of each step of the sequence can be customized. We begin with the [metro] and [counter] objects to determine how many steps are in the sequence, and how fast we move through the sequence. Instead of using the [sel] object to trigger sound for each step, however, we’ll use a new object called the [multislider]. When you create the [multislider] object, as with [function], the name will disappear and you’ll be presented with a dark-colored rectangle.

The [multislider] object allows you to customize the number of sliders it contains. In our patch, each slider will represent a single step. As before, we’ll use eight steps total in our sequence. To set the number of sliders, we need to open the object Inspector. To access the Inspector, unlock the patch, click on the object so that it is highlighted, and then hover over the left side of the object. Click on the yellow circle that appears and choose Inspector. The Inspector will pop up on the right side of the screen.

Scroll down to the bottom of the Inspector, and you will find three parameters that we will change: Number of Sliders, Range, and Sliders Output Values. (If you don’t see these options, click All, next to Basic, Layout, and Recent at the top of the Inspector.) Set the Number of Sliders to eight. We will use standard MIDI note numbers for our pitches, so we change the range to the MIDI standard range of 0 to 127 (separated by a space as in the video). Finally, because we are using MIDI, we are only interested in integers, so we change the Sliders Output Values accordingly.

Once we’re finished with the Inspector, we can return to the patch and resize the [multislider] so it will be easier to see. If we lock the patch and click inside of the [multislider], we see there are eight distinct sliders which can be arranged freely. The vertical position of each slider will represent the pitch of each sequencer step.

In order to connect the [counter] to the [multislider], we have to use a message box (shortcut “m”). Type “fetch \$1” into the message box. Here, “fetch” tells the [multislider] object to output the value of the slider corresponding to the current step as determined by the counter. The “\$1” portion of the message is a “dummy variable” that is automatically replaced by whatever value goes into the message box. Since [counter] is sending out the numbers one through eight over and over again, the [multislider] interprets the incoming messages as “fetch 1,” “fetch 2,” “fetch 3,” etc. Each time it receives this message, it outputs the value of that slider, which we will use to control the pitch of our sequencer shortly. We can verify all of this by adding some integer boxes (“i”) and turning on [metro], as in the video.
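The fetch mechanism amounts to 1-based list indexing. A small Python sketch with hypothetical slider values:

```python
# Simulating "fetch $1": the [counter] value is substituted for $1, and
# [multislider] replies with the value of that slider (steps count from 1).
slider_values = [60, 62, 64, 65, 67, 69, 71, 72]   # hypothetical pitches

def fetch(step):
    """Like sending the message 'fetch <step>' to [multislider]."""
    return slider_values[step - 1]                  # Max steps are 1-based

for step in [1, 2, 3]:
    print(fetch(step))                              # → 60, 62, 64
```

As [counter] cycles 1 through 8, the sequencer walks through all eight slider values in order, then starts over.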

Next, we will create a simple synthesizer using the same objects as in the previous tutorial. The only change in this video is the use of a different oscillator—instead of a sine wave, we’ll use [saw~], which generates a sawtooth waveform. This doesn’t have any impact on the functionality of the patch—it just gives the sound a different, richer timbre.

Now it’s time to connect the two halves of the patch. We can think of the left half of the patch as the “control” part of the patch, and the right half as the “audio” part. There are two pieces of information we have to pass from left to right: when each note is triggered, and the pitch of each note. We can pass this information along using the [trigger] object, or [t] for short. The [t] object takes as its arguments the kinds of information we wish to pass through. In this case, we want to send an integer (“i”) to set the pitch of the oscillator, and a “bang” (“b”) to trigger the envelope. The [t] object also helps us set the order in which information is sent: arguments are sent in order from right to left. So if we type in [t b i], the integer, representing pitch, will be sent first, followed by the bang for the envelope generator. Sometimes the order doesn’t matter, but it is best practice to update the pitch of a sound before actually triggering it.
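The right-to-left ordering of [t b i] can be sketched in Python as a function whose outputs fire in a fixed order; the recorded event list shows the pitch landing before the bang:

```python
# Sketch of [t b i]: the rightmost outlet ("i") fires first, then the
# leftmost ("b"), so pitch is set before the envelope is triggered.
def trigger_b_i(value, set_pitch, fire_envelope):
    set_pitch(value)        # "i" outlet: send the pitch first
    fire_envelope()         # "b" outlet: then send the bang

events = []
trigger_b_i(60,
            lambda n: events.append(("pitch", n)),
            lambda: events.append(("bang",)))
print(events)               # → [('pitch', 60), ('bang',)]
```

If the order were reversed, the envelope could briefly sound the previous step's pitch before the new one arrived.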

Finally, we connect the outlets of the [t b i] object to the oscillator and envelope generator. The bang should be connected to the envelope generator with a button as before. The integer, however, must be converted from a MIDI note number to a frequency value (in Hertz), as all of the oscillator objects expect frequency values to determine pitch. Therefore, we use an object called [mtof], which converts from MIDI note numbers to frequency before passing the pitch information through to the oscillator. Once this is complete, we can lock the patch, turn on the audio, turn up the volume, and explore the different possible patterns of our sequencer.
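The conversion [mtof] performs follows the standard equal-temperament formula, with A4 (MIDI note 69) tuned to 440 Hz. In Python:

```python
# The MIDI-to-frequency conversion that [mtof] performs:
# each semitone is a factor of 2^(1/12), anchored at A4 = note 69 = 440 Hz.
def mtof(note):
    return 440.0 * 2 ** ((note - 69) / 12)

print(round(mtof(69), 2))   # → 440.0
print(round(mtof(60), 2))   # → 261.63  (middle C)
```

This is why the [multislider] can store convenient note numbers (0-127) while the oscillator still receives the frequency values it expects.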