Not the Star Spangled Banner #15

I used to play in a woodwind quintet in college, and it was a lot of fun. Sometimes a professor would sit in if someone wasn’t available, and we could really get cooking then. My instrument was the clarinet at the time. I wrote some music for the group, but it wasn’t very good. Some of the ugliest music I’d ever heard. It made sense on paper, but when it was played, you could tell that that was the first time it had actually been heard. That’s one reason I really like playing with samples and a laptop. I can instantly hear how terrible my music is sounding at the time I think of it.

Original Score

Today’s music started out as a MIDI file of the Star Spangled Banner, scored for four voices. The music for what is now the U.S. national anthem was written by John Stafford Smith, who wrote it for a musical social club of which he was a member. Later, the brother-in-law of Francis Scott Key noticed that the poem Key had just finished, originally titled “Defence of Fort M’Henry”, fit the rhythm of Smith’s tune. Amateur musician meets amateur poet, and the rest is history. “The Star-Spangled Banner” is notoriously difficult to sing, but that hasn’t stopped many people from trying.

I pulled a MIDI file of the song, and quickly discovered that it was written in 3/4 time. All inputs to the deep neural network Coconet must be in 4/4 time, and 32 1/16th notes long. So I set about doubling the duration of the first beat of each measure. There’s some precedent for this: Whitney Houston performed it in 4/4 at the 1991 Super Bowl, and it’s charming in a very relaxed way. I had to make the conversion to continue my technique of feeding existing music into Coconet, and then having the deep learning model generate its own harmonizations.
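The meter change can be sketched like this, assuming the song lives in a (voices, time_steps) NumPy piano roll at 1/16th-note resolution; the function name is made up for illustration, not my exact code:

```python
import numpy as np

def measures_3_4_to_4_4(roll, steps_per_beat=4):
    """Double the first beat of each 3/4 measure so it fills a 4/4 bar.

    `roll` is a (voices, time_steps) piano roll where each 3/4 measure
    occupies 3 * steps_per_beat columns; the output measures occupy
    4 * steps_per_beat columns each.
    """
    steps_per_measure = 3 * steps_per_beat
    measures = []
    for start in range(0, roll.shape[1], steps_per_measure):
        m = roll[:, start:start + steps_per_measure]
        first_beat = m[:, :steps_per_beat]
        # Prepend a copy of the first beat's time steps: 12 steps become 16.
        measures.append(np.concatenate([first_beat, m], axis=1))
    return np.concatenate(measures, axis=1)
```

With one 3/4 measure of 12 sixteenth-note steps, this produces a 16-step 4/4 measure whose first beat sounds twice as long.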

After generating around 100 synthetic Banners, I selected a few to run through an algorithm that extends the duration of any time step containing notes outside the root key of the song. This process stretches out the interesting parts and rushes through the conventional cadences, unless a cadence lands on a chord whose notes fall outside C major. All these alterations create something quite unlike the original tune.

I scored it for nine instruments: flute, oboe, clarinet, French horn, bassoon, piccolo, English horn, Bach trumpet, and contrabassoon.

Sacred Head #115

I’ve been trying out lots of modifications to the Sacred Head Fantasia on a Synthetic Chorale. Today’s post is number 115. It’s denser than before: it starts with 24 voices, then selectively trims some voices in each of nine sections.

Oh God, Look Down from Heaven #38

I increased the potential number of voices, and added Balloon Drums, Long Strings, and a few more finger pianos. Now it sounds like an orchestra of zithers. Big ones, and giant bass kalimbas. These are all samples from instruments I’ve built over the years. There are moments that remind me of Hawaiian slack-key guitar. I adjusted the tuning to Victorian Rational Well Temperament on A♭, since that has a nice B♭ major, and this piece is in D minor, until the final chord with a Picardy third.

What I really like is that this version sounds less like Bach than any of the others.

Oh God Look Down

Oh God, Look Down from Heaven (BWV 2.6, K 7, R 262) Ach Gott, vom Himmel sieh darein #15

This is based on the Coconet model transforming another Bach chorale, Ach Gott, vom Himmel sieh darein (Oh God, Look Down From Heaven). This chorale uses a lot of notes outside the primary key of D minor, and Coconet did its best to harmonize it. I scored it for samples that I made myself, so there are no licensing issues. The instruments are two different finger pianos, one full-range and another just for bass notes, plus some Ernie Ball guitar strings and other assorted string sounds.

I used the same basic manipulation techniques on this one: stretch out the interesting parts, and repeat them in different ways. This version includes code to randomly choose among several different manipulations:

# rng is a NumPy random Generator, e.g. rng = np.random.default_rng()
if final_length > 10:
    probability = [0.1, 0.05, 0.05, 0.1, 0.1, 0.05, 0.15, 0.05, 0.15, 0.2]
else:
    probability = [0.2, 0.1, 0.06, 0.1, 0.1, 0.15, 0.04, 0.05, 0.1, 0.1]

if type == 'any':
    type = rng.choice(['tile', 'repeat', 'reverse', 'tile_and_reverse',
                       'reverse_and_repeat', 'shuffle_and_tile', 'tile_and_roll',
                       'tile_and_repeat', 'repeat_and_tile', 'tile_reverse_odd'],
                      p=probability)
print(f'time_steps: {clseg.shape[1] = }, {factor = }, {type = }')

So I control the likelihood of picking different techniques for longer repetitions, favoring 'tile_and_roll' and 'repeat_and_tile'. The former tiles the section, basically repeating it note for note, but each time it rolls the notes, so each copy starts at a different point in the array. 'repeat_and_tile' tiles half the voices, and simply stretches the other half so their notes last longer. It all works out in the end.
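A rough sketch of those two favorites, assuming (voices, time_steps) NumPy arrays; the function names match the choice strings above, but the bodies are an illustrative reconstruction, not my exact code:

```python
import numpy as np

def tile_and_roll(segment, factor, shift=4):
    """Repeat a (voices, steps) segment `factor` times, rolling each copy
    a little further along the time axis so it starts at a new point."""
    copies = [np.roll(segment, -i * shift, axis=1) for i in range(factor)]
    return np.concatenate(copies, axis=1)

def repeat_and_tile(segment, factor):
    """Tile half the voices note for note; stretch the other half with
    np.repeat so its notes simply get longer."""
    half = segment.shape[0] // 2
    tiled = np.tile(segment[:half], (1, factor))
    stretched = np.repeat(segment[half:], factor, axis=1)
    return np.concatenate([tiled, stretched], axis=0)
```

Both return a segment `factor` times longer than the input, so they can be swapped freely inside the chooser above.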

Oh God Look Down

Sacred Head in the Wolf’s Lair

This one was an experiment with a tuning that works well in the primary keys of the chorale, but goes into strange territory with notes outside those triads. It’s designed to sound good with the following triads: D major, A major, G major, B minor, F# major, and E minor. But there is more to the chorale than some triads. The Scala file is here:

! adams_on_D.scl
!
Tuning for O Sacred Head Now Wounded: works well on D maj, A maj 7, B min, E min, G major, F# maj
12
!
15/14
8/7
32/27
9/7
4/3
10/7
32/21
11/7
12/7
25/14
40/21
2/1
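Scala files list each degree as a ratio above the tonic (D here). Converting each ratio to cents, 1200 × log₂(ratio), is a quick way to see how far this tuning strays from 12-tone equal temperament; this little checker is just standard Scala math, not part of my pipeline:

```python
import math

# Ratios copied from the adams_on_D.scl file above.
ratios = ['15/14', '8/7', '32/27', '9/7', '4/3', '10/7',
          '32/21', '11/7', '12/7', '25/14', '40/21', '2/1']

def to_cents(ratio):
    """Convert a just-intonation ratio string like '4/3' to cents."""
    num, den = map(int, ratio.split('/'))
    return 1200 * math.log2(num / den)

for r in ratios:
    # Equal temperament would give multiples of exactly 100 cents.
    print(f'{r:>6}  {to_cents(r):8.1f} cents')
```

The octave (2/1) comes out to exactly 1200 cents, while the fourth (4/3) sits near 498 cents, a couple of cents flat of equal temperament's 500.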

And the result is here:

Sacred Head #44

I built a machine that cranks these out by the dozen. This one is interesting. I also implemented a few new routines, one that flips sections horizontally, and another that tiles sections. The former reverses the direction of a short segment, the latter repeats a section over and over. With all the masking going on, the results are sometimes subtle.

Fantasia on an Artificial Chorale that sounds a lot like “O Sacred Head Now Wounded”

This is another fantasia on the output of the Coconet deep neural network model, scored for a primarily percussion ensemble. I fed a real chorale into the model with one of its four voices removed, and it predicted the missing voice. I repeated the process until I had four chorales made up entirely by Coconet. I then manipulated the chorales using Python functions.

An important factor in the way Coconet processes the input data is that it starts with a MIDI file but immediately translates it into a piano roll structure. The piano roll consists of 32 time steps for each of four voices, covering 32 1/16th-note intervals (two measures of 4/4). If a note lasts longer than a 1/16th note, its pitch appears again in the next time step. The note ends when a different pitch or a zero appears in a subsequent time step.

For example, here are the first four notes of Bach’s chorale, “O Haupt voll Blut und Wunden”. This is used in several sections of his St. Matthew Passion. I know it in English as “O Sacred Head Now Wounded”.

Each time step is equal to a 1/16th note. The first is the representation in MIDI note numbers, the second is translated into note names.


MIDI:
[66 66 66 66 71 71 71 71 69 69 69 69 67 67 67 67]
[62 62 62 62 62 62 62 62 62 62 62 62 62 62 64 64]
[57 57 57 57 62 62 62 62 57 57 57 57 59 59 57 57]
[50 50 50 50 55 55 55 55 54 54 54 54 47 47 49 49]
Note Names:
['F♯' 'F♯' 'F♯' 'F♯' 'B♮' 'B♮' 'B♮' 'B♮' 'A♮' 'A♮' 'A♮' 'A♮' 'G♮' 'G♮' 'G♮' 'G♮']
['D♮' 'D♮' 'D♮' 'D♮' 'D♮' 'D♮' 'D♮' 'D♮' 'D♮' 'D♮' 'D♮' 'D♮' 'D♮' 'D♮' 'E♮' 'E♮']
['A♮' 'A♮' 'A♮' 'A♮' 'D♮' 'D♮' 'D♮' 'D♮' 'A♮' 'A♮' 'A♮' 'A♮' 'B♮' 'B♮' 'A♮' 'A♮']
['D♮' 'D♮' 'D♮' 'D♮' 'G♮' 'G♮' 'G♮' 'G♮' 'F♯' 'F♯' 'F♯' 'F♯' 'B♮' 'B♮' 'C♯' 'C♯']
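The note-name translation is just a pitch-class lookup on the MIDI numbers (60 is middle C, so 66 mod 12 gives F♯). A minimal sketch; the spellings for pitch classes outside this example, like E♭ versus D♯, are my assumptions:

```python
import numpy as np

# Pitch-class spellings matching the table above:
# sharps for F♯/C♯, naturals marked with ♮.
NAMES = ['C♮', 'C♯', 'D♮', 'E♭', 'E♮', 'F♮',
         'F♯', 'G♮', 'A♭', 'A♮', 'B♭', 'B♮']

def note_names(midi_row):
    """Map a row of MIDI note numbers to note-name strings."""
    return np.array([NAMES[n % 12] for n in midi_row])

soprano = [66, 66, 66, 66, 71, 71, 71, 71,
           69, 69, 69, 69, 67, 67, 67, 67]
print(note_names(soprano))
```

Running this on each of the four MIDI rows reproduces the note-name rows shown above.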

So even though a pitch appears in a time step, it may not be struck there. Instead, the machine knows that if the note in a voice doesn’t change, it holds the note instead of striking it again. The convention in piano rolls is that a zero in a voice’s time step ends the note. If I apply a mask of zeros at selected locations in the piano roll, held notes turn into arpeggios: each zero cuts a note off, and the repeated pitch after it gets struck again. I discovered that by mistake as I was playing with the data, but I soon began exploring which masks could produce the most interesting results.
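Here’s a minimal sketch of the zero-mask trick, assuming a (voices, time_steps) NumPy piano roll; the function name and the every-fourth-step spacing are made up for illustration:

```python
import numpy as np

def arpeggiate(roll, every=4):
    """Zero out every `every`-th time step so held notes are cut off
    and restruck, turning sustained tones into repeated attacks."""
    masked = roll.copy()
    # Zero means note-off in this piano-roll convention, so the same
    # pitch reappearing after a zero gets struck again.
    masked[:, every - 1::every] = 0
    return masked
```

A voice holding middle C (MIDI 60) for eight steps becomes two shorter struck notes separated by rests.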

As before, I searched for those time steps that contained notes not in the root scale of D major, and then extended their duration to 5, 10, or 15 times their original length, so that they were heard much longer than the time steps containing only notes in the root scale. This had the interesting effect of lingering on the passages in the chorale that contained a different key, suspensions, or diminished chords.
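That stretching step can be sketched with NumPy’s per-column repeat counts; the function name and details are illustrative, not my exact code:

```python
import numpy as np

# Pitch classes of the D major scale: D E F♯ G A B C♯.
D_MAJOR = {2, 4, 6, 7, 9, 11, 1}

def stretch_chromatic_steps(roll, factor=5):
    """Repeat any time step containing a note outside D major `factor`
    times, so chromatic passages linger while diatonic ones pass by."""
    repeats = np.ones(roll.shape[1], dtype=int)
    for t in range(roll.shape[1]):
        notes = roll[:, t]
        if any(n > 0 and (n % 12) not in D_MAJOR for n in notes):
            repeats[t] = factor
    # np.repeat with an array of counts stretches each column individually.
    return np.repeat(roll, repeats, axis=1)
```

A step containing D♯ (outside the scale) gets stretched, while the D and E around it pass at their original speed.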

This version uses some of my favorite instruments from the music I was making five years ago: harp, classical guitar, marimba, double bass martelé, xylophone, piano, vibes, and bass finger piano. These eight instruments all start playing together, then over time they form groups of four, five, or six instruments, until they all come back together at the end.

The tuning is Victorian Rational Well Temperament in C, which seems to work well with the D major in which this chorale is written.

Fantasia on an Artificial Chorale #31

I fed another Bach chorale through the coconet model. This one is Wachet doch, erwacht, ihr Schläfer (BWV 78.7, K 188, R 297), which is the same as BWV 353 Jesu, der du meine Seele. I call it “Wake Up, Wake Up, you Sleepers”.

I removed one of the SATB voices, and had the deep learning model in Coconet recreate that voice. Then I removed a different voice, over and over, until I had 16 voices all created by the model. I then ran those artificial chorales through some conventional algorithmic music functions that I’ve built in Python. The piece divides into four parts: the first uses the first four synthesized voices, the next uses the next four, and so on until all 16 have been revealed. I played around with a lot of different modifications. More here.