Blog

Balloon Drum Music #30

After completing the previous version of this piece on January 4, I thought it would be neat if I added the ability to slide some of the notes in the bass flute. Little did I know it would open up dozens of bugs in my python code. It took me a month and a half to sort out the issues. But now I can include slides in arbitrary note strings.

I had already created the ability to enter specific streams of notes in a text string.

inputs_array_64 = "n0o4u0e1v69d12 n3d4 n4 n5 n6o0 n7o4 d6n0 n1 n3 n2 n4 e8v78d1n6 n7"

This is similar to the way my old Pascal code worked. I specify notes with letters for each of the six features each note possesses (a rough parsing sketch follows the list):

  • n: note number, from 0 to 7 for the scale degree, which is later translated into one of the 256 note numbers in my 214 note tonality diamond array. In this example, we start with n0, the first note of the scale, move to n3, the fourth note of the scale, and so forth. If I change the root key of a scale, the same scale degrees select different notes from the 256 note diamond.
  • o: octave, a number from 1 to 8. Zero is used to indicate that the note is silent. The fifth note is silent in the example above.
  • u: up sample, which causes the program to use a different sample than the one calculated from the equivalent MIDI number. Each sample set has six distinct samples per octave: C4 uses one sample, and D4 uses the next one. Sometimes I want to pick a higher or lower sample for a sharper or mellower sound. More on that later.
  • e: envelope, which is where I pick the envelope to be used. I pre-made several envelopes in csound and then specify which to use. e1 is on-off, e8 is fast decay, and e2 is very fast decay. I use the latter for the baritone guitar samples.
  • v: velocity, which indicates how hard the key is pressed. It is used to choose a sample that matches the preferred pressure, if the sample set supports different samples for different loudness levels. This is especially important with piano samples. When a sample set doesn’t provide different loudness levels, velocity simply controls the loudness.
  • d: duration, which is how many time steps the note should be played.
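
To make the format concrete, here is a rough sketch of how such a string could be parsed. The defaults and the carry-over of unspecified features are my reading of the examples, not the actual code:

import re

# Parse a token string like the one above. Feature defaults and the
# carry-over behavior are assumptions inferred from the examples.
FEATURES = "nouevd"   # note, octave, up sample, envelope, velocity, duration

def parse_note_string(s):
    state = {"n": 0, "o": 4, "u": 0, "e": 1, "v": 64, "d": 4}   # assumed defaults
    notes = []
    for token in s.split():
        for letter, value in re.findall(r"([a-z])(-?\d+)", token):
            if letter in FEATURES:
                state[letter] = int(value)
        notes.append(dict(state))   # snapshot the current feature state
    return notes

notes = parse_note_string("n0o4u0e1v69d12 n3d4 n4 n5 n6o0 n7o4 d6n0")
print(notes[1])   # {'n': 3, 'o': 4, 'u': 0, 'e': 1, 'v': 69, 'd': 4}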

My latest enhancement was to add a new feature, the glissando. The value for g indicates how many time_steps should be combined into a continuous glissando between notes. See its use in the next example:

inputs_array_64 = "n0o4g0u0e1v69d12 n3d4g12 n4 n5 g0n6o0 n7o4 d6n0 g18n1 n3 n2 g0n4 e8v78d1n6 n7"

In this example, we start off with g0, which means no slide. On the second note, we have a glissando over 12 time steps; in this case that includes the notes n3, n4, and n5. I then create a function table that transits those notes over 12 time steps. On note five, we reset to g0 for no glissando. Note 8 starts another glissando over 18 time steps, so I build a slide that transits n1, n3, n2, since each of those notes has a duration of 6 time steps. The result is that it’s relatively simple to slide wherever I want in the scale. My code calculates the closest note automatically, changing octaves when doing so would result in a smaller slide.
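
For the curious, here is a minimal sketch of laying out a slide across several notes as a pitch contour. The real code writes a Csound function table; the linear interpolation and the pitch units below are stand-ins:

import numpy as np

# Build a piecewise-linear pitch contour across the notes of a glissando.
def gliss_contour(pitches, durations, points_per_step=10):
    """pitches: the pitch value at the start of each note in the slide;
    durations: how many time steps each of those notes occupies."""
    starts = np.cumsum([0] + list(durations))[:-1]      # when each note begins
    total_steps = sum(durations)
    t = np.linspace(0, total_steps, total_steps * points_per_step)
    return np.interp(t, starts, pitches)                # piecewise-linear slide

# the second glissando above: n1, n3, n2, each lasting 6 time steps
contour = gliss_contour(pitches=[1, 3, 2], durations=[6, 6, 6])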

One consequence of glissandi on samples that already have vibrato is that a slide up by a whole step speeds up the vibrato. This can get annoying if the distance traveled is several steps. My code automatically compensates for this “munchkinization” effect by changing the up sample feature in the background, so rising or falling glissandi don’t have mistimed vibrato. If I want, I can still deliberately set the up sample feature to higher or lower samples. In the case of this piece, the bass flute part is played by four different flutes, with four different up sample values: -1, 0, +1, +2 from the calculated sample.
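
Here is a rough sketch of the compensation idea. The six-samples-per-octave figure comes from above; the threshold and rounding are assumptions rather than the actual implementation:

# If a slide ends far from the sample's native pitch, move the up sample
# feature so the playback-rate change (and with it the vibrato speed)
# stays small.
SAMPLES_PER_OCTAVE = 6

def compensating_up_sample(semitone_shift, threshold=1.0):
    step = 12 / SAMPLES_PER_OCTAVE          # semitones between adjacent samples
    if abs(semitone_shift) <= threshold:
        return 0
    return round(semitone_shift / step)

print(compensating_up_sample(4))   # a slide 4 semitones up -> use the sample 2 positions higher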

Glissandi can have many other controls, including how long to spend on the slide compared to the destination note, different durations to stay at different notes, different velocities for different notes. I’ve not yet implemented these controls in the text input file, but I have them buried in the code for later exploitation.
Listen here:

Balloon Drum Music #12

Today’s version is for an ensemble of finger pianos, balloon drums, baritone guitar, bass flutes, clarinets, oboes, French horn, and bassoons, along with their bass versions. It’s tuned to the 15-limit of the Partch Tonality Diamond.

I wrote it using a python notebook and code that’s available on GitHub here. It has everything you need to duplicate the results. Or actually, since the piece is probabilistic, you can create one that’s not exactly like any other.

The structure is in the form of a vamp and a bridge, each around a minute and a half in length. The vamp features the woodwinds pretending to be a horn section, with the bass flutes playing slides and trills. Everyone plays tetrachords (four note chords) based on the 4,5,6,7/8 to 9,11,13,15/8 overtones. The bridge has the woodwinds playing one long sliding chord through a set of changes; I assume circular breathing. The bass flute plays the melody, such as it is. Throughout, the bass line and percussion play the same tetrachords on finger pianos, balloon drums, and a baritone guitar. During the bridge the chords go through some changes that I’ve used in the past.

You can listen to the results here:

Balloon Drum Music Take 2

I’ve been working on some python code to explore the tonality diamond to the 31-limit. This piece is one of the first to result in some “music”. It’s derived from a set of chord changes based on the 15-limit diamond. It starts out with a vamp on the otonality of 16/9, then proceeds to a bridge made up of nine chords:

import numpy as np

# mode, root, rank, inversion for each chord in the bridge
bridge_keys = np.array([["oton", "16/9",  "A", 1],
                        ["oton", "8/7",   "A", 3],
                        ["uton", "9/8",   "A", 3],
                        ["oton", "16/15", "A", 4],
                        ["uton", "1/1",   "A", 2],
                        ["oton", "1/1",   "A", 1],
                        ["uton", "7/4",   "A", 4],
                        ["uton", "15/8",  "A", 4],
                        ["oton", "16/9",  "A", 3]])

I divide the 31-limit diamond up into what I call “ranks”. Rank “A” in the otonality is 8,10,12,14/8. Rank “B” is 9,11,13,15/8. I’ll get to the other ranks after I am more comfortable with the tools I’m using these days. Instead of using my old standby Pascal code to translate text into Csound, I’ve written a collection of python functions, dictionaries, and data structures. It’s a pretty steep learning curve, but I think there is potential here. Take a listen.
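
Before moving on, a small sketch of the rank idea in python. The identity sets for ranks “C” and “D” follow the tetrad tables further down this page, and the octave reduction is a simplified stand-in for my real code:

from fractions import Fraction

# Ranks as sets of identities; otonal ratios are identity over a power of 2,
# utonal ratios are the reciprocals, both reduced into one octave.
RANKS = {"A": [8, 10, 12, 14], "B": [9, 11, 13, 15],
         "C": [17, 21, 25, 29], "D": [19, 23, 27, 31]}

def rank_ratios(rank, mode="oton"):
    ratios = []
    for identity in RANKS[rank]:
        r = Fraction(identity, 1) if mode == "oton" else Fraction(1, identity)
        while r >= 2:                       # octave-reduce into [1, 2)
            r /= 2
        while r < 1:
            r *= 2
        ratios.append(r)
    return ratios

print([str(r) for r in rank_ratios("A")])           # ['1', '5/4', '3/2', '7/4']
print([str(r) for r in rank_ratios("B", "uton")])   # ['16/9', '16/11', '16/13', '16/15']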

Balloon Drum Music for Small Ensemble #17

This piece is made up of 17 short pieces based on a journey through the tonality diamond. They each consist of a vamp and a bridge. The vamp is in the same key for several measures, and the bridge follows a set of rapid chord changes as shown in the list below.


vamp:
otonality on 16:9 - B♭

bridge:
otonality on 8:7 - D+
utonality on 3:2 - G♮
otonality on 16:15 - D♭
utonality on 4:3 - F♮
otonality on 1:1 - C♮
utonality on 7:6 - E♭
otonality on 16:9 - B♭
utonality on 5:4 - E♮

I’ve split the 16 notes in the tonality diamond scale into four tetrads (four note chords).


Tetrad otonal ratios
A 1:1 5:4 3:2 7:4
B 9:8 11:8 13:8 15:8
C 17:16 21:16 25:16 29:16
D 19:16 23:16 27:16 31:16

Tetrad utonal ratios
A 8:7 4:3 8:5 1:1
B 16:15 16:13 16:11 16:9
C 32:29 32:25 32:21 32:17
D 32:31 32:27 32:23 32:19

The piece is based on the vamp and bridge for each of the four tetrads, one at a time. The first one is based on the four notes of the ‘A’ tetrad, 1:1, 5:4, 3:2, 7:4 relative to the root key of otonality on 16:9 – B♭. The bridge is then played on the 8 keys in the list above. The next time, I play the vamp on the ‘B’ tetrad, 9:8 11:8 13:8 15:8, and the bridge is based on the same chord changes listed above, but on the ‘B’ tetrads of those o/u tonalities. Then we move to ‘C’ and ‘D’, and back through the same series, for a total of 17 short pieces across the cycle of tetrads. Each piece might start with a short set of chords to introduce the tonality and end with another to close the piece out, or it might just start right up at a faster pace. When it has a short intro, the intro sets the mood for that piece, based on chords in the chosen tetrad. There is never a time when more than four notes are played at once, except for doublings in different octaves. Some of the pieces are as short as 17 seconds; others are as long as 90 seconds or so. They all have different rhythmic structures based on masking and arpeggiations.
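
In pseudocode-ish python, the overall plan looks something like this, with print statements standing in for the actual rendering code:

# 17 short pieces cycling through the four tetrads, each a vamp on 16:9
# followed by the bridge changes listed above.
bridge = [("otonality", "8:7"), ("utonality", "3:2"), ("otonality", "16:15"),
          ("utonality", "4:3"), ("otonality", "1:1"), ("utonality", "7:6"),
          ("otonality", "16:9"), ("utonality", "5:4")]

for piece in range(17):
    tetrad = "ABCD"[piece % 4]               # A, B, C, D, A, B, ...
    print(f"piece {piece + 1}: vamp on tetrad {tetrad}, otonality on 16:9")
    for mode, root in bridge:
        print(f"  bridge: tetrad {tetrad}, {mode} on {root}")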

The piece is scored for different collections of instruments. Each of the 17 pieces can include a woodwind quintet, finger pianos, and balloon drums in different combinations.

Five Dances based on TonicNet Chorales #46

Today’s work results from trying to isolate the variables that create the most interesting set of five dances based on the TonicNet Chorales. In this version the keys are F# minor, B minor, E major, A major and D major, tuned in Kellner’s Well Temperament. As before, I chose them because they have many segments where the notes are not in the root key of the chorale. There are dozens of variables in the creation of the dances from the original synthetic chorale material, but the main control points can be found by scanning the logs using grep:

egrep "mask.shape|factors|repeat_count|mask_zeros|midi_file" open_samples3-t46.log

This produces the following output, which includes the key values for important variables chosen by the algorithm from a set of ranges and probabilities:

[Values: the grep output listing]

The dances are scored for finger piano, balloon drum, and Ernie Ball Super Slinky Guitar strings on an old Gibson humbucking pickup.

Five Dances based on TonicNet Chorales #25

This one is based on chorales in a circle of fifths: they start out in F# minor, then move through B minor, E major, and A major, with the final one in D major. It has a nice bouncy feel. I added some Balloon Drums.

Five more Preludes based on TonicNet Chorales #22

I’ve been using Pandas dataframes to analyze the synthetic chorales I’ve created. I started by generating 500 of them. Then I use lots of python code to learn more about the chorales. Each chorale produced by the TonicNet GRU model consists of a variable number of time steps, each containing Soprano, Alto, Tenor, & Bass voices expressed as MIDI note numbers. Some of those time steps have notes that are not in the dominant root key of the chorale. These are often leading tones, suspensions, diminished or augmented chords, and so forth. Bach used these chords to produce tension that was always resolved in a cadence of some sort. The TonicNet model was trained on hundreds of real Bach chorales, further augmented by transposing the existing chorales to all twelve keys, so that 1,968 chorales were used as input to the model. What is amazing to me is that the final model is only 4.9 MB in size. The Coconet model ended up at 1.6 GB.

I like those interesting sections where the chorale uses notes not in the root key for many time steps. I wrote some python code that builds a list of the number of voices that are not in the root key of the chorale, one value for every time step:

zero_one_q = np.array([not_in_key(time_step, root, mode) for time_step in chorale_tr])

It basically calls another python function that looks at each time step and reports the number of voices not in the root key of the chorale. Once I have that array, I can build a list of sections that have notes not in the root key. I run every chorale through that routine so that I have information about each chorale that can be used to select for certain characteristics, such as lots of steps in a row not in the root key, or many sections of notes that are not in the root key.
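
The helper itself is along these lines; this body is a reconstruction with assumed scale definitions, not the exact function:

# Count the SATB voices in one time step whose pitch class falls outside
# the diatonic scale of the chorale's root key. Scale sets and the
# treatment of rests are assumptions.
MAJOR = {0, 2, 4, 5, 7, 9, 11}
MINOR = {0, 2, 3, 5, 7, 8, 10}    # natural minor

def not_in_key(time_step, root, mode):
    scale = MAJOR if mode == "major" else MINOR
    return sum((note - root) % 12 not in scale for note in time_step if note > 0)

# a D major chord inside a C major chorale has one out-of-key voice (the F#)
print(not_in_key([62, 66, 69, 50], root=60, mode="major"))   # 1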

Today’s results used a Pandas data frame to find chorales that met these characteristics:

[pandas: dataframe of selection criteria]
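
Roughly, the selection is a dataframe query like this; the column names and thresholds below are placeholders, not the actual criteria:

import pandas as pd

# Toy dataframe of per-chorale statistics, then a filter for chorales with
# many out-of-key sections and long out-of-key runs.
df = pd.DataFrame({
    "chorale_id":          [12, 47, 203, 311],
    "root":                ["F#", "B", "E", "A"],
    "mode":                ["minor", "minor", "major", "major"],
    "out_of_key_sections": [3, 9, 6, 2],
    "longest_run":         [4, 22, 11, 3],   # consecutive out-of-key time steps
})

selected = df[(df["out_of_key_sections"] >= 5) & (df["longest_run"] >= 8)]
print(selected[["chorale_id", "root", "mode"]])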

I did the same for A minor. The value of these measures is that they produce a final result of five preludes lasting about 15 minutes in total. That is in contrast with an earlier version that had many more challenging steps and went on for an hour and ten minutes. I used Kellner’s Well Temperament.

A Very Long Version of Five Preludes from TonicNet chorales for Finger Piano #21

The algorithm I use for most of my pieces is that first I find all the time_steps that contain notes not in the key of the chorale. Then I go about making those sections longer using a variety of elongation techniques. The code looks like this:

probability = [0.2, 0.1, 0.6, 0.1, 0.15, 0.04, 0.05, 0.1, 0.1]
if type == 'any':
    type = rng.choice(['tile', 'repeat', 'tile_and_reverse', 'reverse_and_repeat',
                       'shuffle_and_tile', 'tile_and_roll', 'tile_and_repeat',
                       'repeat_and_tile', 'tile_reverse_odd'], p = probability)

I let the system choose which type of elongation to use, but over time it will choose ‘tile_and_reverse’ about 60% of the time. That elongation is accomplished with this line of code:

clseg = np.flip(np.tile(clseg, factor), axis = 1)

This basically repeats a section of the piece, consisting of a certain number of voices performing over time_steps, then reverses the repeated sections. Retrograde.
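
A tiny demonstration on a toy segment of 2 voices by 3 time_steps shows the effect:

import numpy as np

# a toy "segment" of 2 voices x 3 time_steps (arbitrary MIDI note numbers)
clseg = np.array([[60, 62, 64],
                  [48, 50, 52]])
factor = 2
print(np.flip(np.tile(clseg, factor), axis=1))
# [[64 62 60 64 62 60]
#  [52 50 48 52 50 48]]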

All that is to say that the music is predetermined probabilistically, but not explicitly.

But if a chorale is dominated by ranges that include notes not in the root key, then there are a lot of repetitions. For this piece I chose five chorales that have that condition, so the extensions go on for a long time. In fact, in this piece, each prelude lasts about 20 minutes, and there are five of them. You can do the math. I wouldn’t recommend this unless you like repetitive sounding music. Maybe skip around.

Tuned in Kellner’s Well Temperament.