Blog

Fantasy on an Artificial Chorale #10

This is a piece that originated as Bach’s BWV 180, Schmücke dich, o liebe Seele, reprocessed by the coconet deep learning model. I split the chorale into segments of 32 1/16th notes, which is what the model was built to handle. I zeroed out one of the voices and had the model remake that voice, creating a four-voice chorale similar to the Bach. Then I did it again, masking a different voice each time.
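
The data going into the model is just a numpy pianoroll, one row per voice and one column per 1/16th-note slot. A minimal sketch of the slicing and voice-zeroing step (the function name and array layout here are my own shorthand, not the actual coconet code):

import numpy as np

def split_and_mask(chorale, masked_voice=2):
    """Cut a (4, T) pitch array into 32-slot segments and zero out one voice
    in each, ready for the model to fill that voice back in. 0 means silence."""
    segments = []
    for start in range(0, chorale.shape[1] - 31, 32):
        segment = chorale[:, start:start + 32].copy()
        segment[masked_voice, :] = 0      # blank the voice the model will regenerate
        segments.append(segment)
    return segments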

First, I searched the generated chorales for the ones with the highest pitch entropy, using the muspy framework. Muspy is a library that reads a variety of musical formats and includes many functions for converting music files from one format to another. It also includes a set of metrics that measure various qualities of the music. In my case, I searched the 100 16-voice chorales for those with the most pitch entropy, which turned out to be the 12th through 15th voices of chorale #90.
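
The ranking step looked roughly like this (the directory and file names are placeholders; muspy’s read_midi and pitch_entropy do the real work):

import glob
import muspy

# Rank generated chorales by muspy's pitch entropy metric, highest first.
scores = []
for path in sorted(glob.glob('generated_chorales/*.mid')):   # placeholder directory
    music = muspy.read_midi(path)
    scores.append((muspy.pitch_entropy(music), path))

for entropy, path in sorted(scores, reverse=True)[:5]:
    print(f'{entropy:.3f}  {path}')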

Then I took that four-voice chorale and stretched it out to three times its length. Next, I looked for the parts of the longer chorale that had interesting segments. This was done by counting pitches outside the root F major scale, but over small subsets of the chorale. I stretched the most interesting segments by up to 8 times their original length, in order to linger on the leading tones, suspensions, and other Bach tricks that add suspense and interest.
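
The “interest” measure is nothing fancy: count how many sounding notes in a window fall outside F major, then time-stretch the windows that score highest by repeating their columns. A sketch, assuming the same (voices, slots) pitch array as above:

import numpy as np

F_MAJOR = np.array([5, 7, 9, 10, 0, 2, 4])    # pitch classes of F, G, A, Bb, C, D, E

def out_of_scale_count(window):
    """Count sounding notes in a (voices, slots) window that fall outside F major."""
    pitches = window[window > 0]
    return int(np.count_nonzero(~np.isin(pitches % 12, F_MAJOR)))

def stretch(window, factor):
    """Stretch a window in time by repeating each 1/16th-note column `factor` times."""
    return np.repeat(window, factor, axis=1)

# Example: score consecutive 32-slot windows of a chorale and stretch the spiciest one 8x.
# windows = [chorale[:, s:s + 32] for s in range(0, chorale.shape[1] - 31, 32)]
# stretched = stretch(max(windows, key=out_of_scale_count), 8)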

Next, I arpeggiated the entire chorale using a mask, not unlike the deep learning technique of convolution, except that in my case the mask just silences parts of notes, creating an arpeggio effect.

Finally, I rendered the chorale using Csound and my microtonal slide Bosendorfer in George Secor’s Victorian Rational Well Temperament. I added convolution with an impulse response of the Teatro Alcorcon in Madrid, from Angelo Farina. I think it sounds sweet, kind of like a finger-picked guitar.

It’s not Bach. But then, neither am I.

A new approach to artificial chorales – #8

This one was done a different way from the previous versions. Previously, I had created 16-voice chorales by preserving just the bass part and letting the model figure out the other notes. What I created were four different harmonizations of a bass line. Each knew nothing of what the others had created, which led to chaos. I had to go back afterwards and blot out the notes that were in foreign keys, which was a rather crude way to compensate. Anyway, I thought of a different way.

In this version, I went through the process in a more methodical way. I found a way to mask one voice and keep the others, letting the model use its own judgment about what that new voice should be. I then went through all the voices in the four-part chorale until they had all been replaced, and I kept doing that until I had a total of 16 voices. Each new voice is usually pretty close to the original, matching about 80% of the original notes.

Imagine a chorale with voices S A T B. The first time through, S becomes S’, which creates a chorale with S’ A T B. Then it makes S’ A’ T B, followed by S’ A’ T’ B, finally making S’ A’ T’ B’. It saves that generated chorale, shape (4, 32), in a slot of a (4, 4, 32) array. Then it does it again, gradually shifting from the original chorale to one that includes some odd notes: S’ A’ T’ B’ becomes S” A’ T’ B’, then S” A” T’ B’, and so on. Each pass is stored in the (4, 4, 32) array. At the end, it reshapes that into a (16, 32) array and returns it to the calling program.
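
In code, the loop is mostly shape bookkeeping. A sketch of it, where fill_masked_voice stands in for the call into the coconet model (it is not a real function in the repository):

import numpy as np

def sixteen_voices(chorale, fill_masked_voice):
    """chorale: (4, 32) pitch array. fill_masked_voice(chorale, voice) is a
    placeholder for the coconet call that regenerates one zeroed-out voice."""
    out = np.zeros((4, 4, 32), dtype=chorale.dtype)
    current = chorale.copy()
    for pass_number in range(4):           # four passes over S, A, T, B
        for voice in range(4):             # replace one voice at a time
            current[voice, :] = 0
            current = fill_masked_voice(current, voice)
        out[pass_number] = current         # save S' A' T' B', then S'' A'' T'' B'', ...
    return out.reshape(16, 32)             # 16-voice chorale for rendering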

I end up with a 16-voice chorale. The first four voices are very close to the original chorale (Schmucke by Bach); they sit at the far left of the stereo field. The voices toward the right are mutations of that, until at the far right it has gone into strange areas. The ear can’t really separate them out, so you end up with a strange mess of notes, including many that don’t belong. Maybe if I slowed it down it would make more sense.

I should mention that I’ve been using George Secor’s Victorian rational well-temperament (based on Ellis #2) in F for these realizations. It does a pretty good job in that key.


secor_vrwt.scl
!
George Secor's Victorian rational well-temperament (based on Ellis #2) on F
12
!
19/18
598/535
1088/917
1179/941
4/3
545/387
626/419
421/266
1510/903
185/104
325/173
2/1
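
For reference, here is a quick way to see how far those steps sit from 12-tone equal temperament: convert each ratio to cents. The ratio strings are copied from the scale file above.

from fractions import Fraction
from math import log2

ratios = ['19/18', '598/535', '1088/917', '1179/941', '4/3', '545/387',
          '626/419', '421/266', '1510/903', '185/104', '325/173', '2/1']

for step, r in enumerate(ratios, start=1):
    cents = 1200 * log2(float(Fraction(r)))    # equal temperament would be 100 * step
    print(f'{step:2d}  {r:>9s}  {cents:8.2f} cents')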

Yet another artificial chorale #8

This set of variations is the result of taking four different chorale renditions and mashing them together to make a 16-part chorale. The problem is that chorale #1 has no knowledge of what chorales #2, #3, or #4 are up to. That means each may find a different path to its solution and end up clashing with the others. I wrote some code to remove the notes that were particularly out of whack, but it is very primitive. I will work on a better solution next time. This one is charming in its own way.
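
That cleanup code isn’t in this post, but the idea was roughly the following: treat anything outside F major as out of whack and silence it. A sketch, assuming the same 16-row pitch array I use elsewhere:

import numpy as np

F_MAJOR_PITCH_CLASSES = np.array([5, 7, 9, 10, 0, 2, 4])

def blot_out_foreign_notes(chorale):
    """Silence notes whose pitch class falls outside F major.
    chorale: (16, slots) array of MIDI pitches, 0 = silence."""
    cleaned = chorale.copy()
    foreign = (cleaned > 0) & ~np.isin(cleaned % 12, F_MAJOR_PITCH_CLASSES)
    cleaned[foreign] = 0
    return cleaned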

Another Artificial Chorale – with arpeggios – #7

Today’s addition includes some arpeggios based on the output of the coconet chorale-building model, as modified by me. I made this by building a set of masks that silence some of the notes in the arrays. My current data structure consists of 16 voice lines. Each line contains 264 slots, each a 1/16th note in length. If a slot contains a non-zero number, that note will play; if the same note was played in the previous slot, it is held over. If I mask a line, some of the positive values become zero. If I silence voice lines in sequence, such as the soprano, then the alto, then the tenor, they don’t play. By carefully timing the masks, I create some arpeggios.

I also spread the pianos out in the stereo field, so that you can hear each one more clearly.

I created a bunch of chorales and ranked them by various metrics from a Python library called muspy. The one called saved_chorale329 had the highest scale consistency score.


Here is the arpeggiation code in Python.

import numpy as np

def arpeggiate(chorale, mask):
    # Apply the mask to every third 8-slot segment; the other segments play untouched.
    # 8 == mask.shape[1], the width of the mask.
    for i in range(0, chorale.shape[1] // mask.shape[1], 3):
        start = i * 8
        end = (i + 1) * 8
        chorale[:, start:end] = mask * chorale[:, start:end]
    return chorale

numpy_file = 'numpy_chorales/saved_chorale329.npy'
chorale = np.load(numpy_file)

mask = np.zeros((16,8))

# 1st part
mask[0,] = [0,0,0,1,1,0,1,1]
mask[1,] = [0,0,1,1,0,1,1,1]
mask[2,] = [0,1,1,0,1,1,1,0]
mask[3,] = [1,1,1,1,1,1,0,1]
# 2nd part
mask[4,] = [0,1,1,1,0,1,1,1]
mask[5,] = [0,0,1,1,0,0,1,1]
mask[6,] = [0,0,0,1,0,0,0,1]
mask[7,] = [1,1,1,1,1,0,1,0]
# 3rd part
mask[8,] = [0,0,1,1,0,1,1,1]
mask[9,] = [0,1,1,1,0,0,0,1]
mask[10,] = [0,0,0,1,0,0,1,1]
mask[11,] = [1,1,1,0,1,0,1,0]
# 4th part
mask[12,] = [0,0,0,1,1,0,1,1]
mask[13,] = [0,0,1,1,0,1,1,1]
mask[14,] = [0,1,1,0,1,1,1,0]
mask[15,] = [1,1,1,0,1,1,0,1]
np.save('arpeggio7.npy',arpeggiate(chorale,mask))

Artificial Bach – Schmücke dich, o liebe Seele #4

This is another in the series of Artificial Bach Chorales created by the coconet deep learning neural network. I take the output of the neural network and combine four predictions on top of each other. Imagine four piano players in the four corners of the room, each told to improvise a chorale based on the bass line from the Schmucke chorale (I discarded the top three lines and kept only the bass part). They all faithfully execute a reasonable Bach chorale, but each has his own take on what Bach would have done. One goes off into Eb major at one point, while another moves to A minor. They all come together at key points in the phrasing, especially at the end. I had to chop the chorale up into segments that fit the model’s expectation of a phrase of two 4:4 measures. Bach, in the case of Schmucke, used 2 1/2 measure phrases for the first four, then two 2-measure phrases, and a final F major chord held for two measures. The prediction still tried to create a chorale out of that single chord.

Schmücke dich, o liebe Seele BWV 180 by J.S. Bach transformed

I’ve been working with deep learning algorithms lately. One that I’ve found interesting is coconet, by the Magenta group at Google. It was originally made with TensorFlow, but I struggled to make it work with the current versions of that framework. So I worked on a version using PyTorch here. With a few fixes, I was able to make that work.

Coconet is described in this paper. The basic idea is that they take lots of Bach four-part chorales, break them up into 4-measure segments, and use them as input to a deep learning neural network. The key insight of the paper, and there are many, is their decision to drop out some notes from each segment and reward the network when it figures out how to add back notes that match those chosen by Bach. In this way, they train the network to restore missing notes. If the chosen notes don’t match Bach’s choices, that creates a “loss.” Neural networks work by altering their weights until the loss is as low as it can be. The result is a model that can intelligently re-harmonize a Bach chorale that might be missing many notes, or perhaps missing all of them.
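
In rough PyTorch terms, that training idea boils down to a masked reconstruction loss: score the network’s guesses only at the positions that were blanked out. This is just an illustration of the principle, with shapes and names of my own choosing, not the actual coconet code:

import torch
import torch.nn.functional as F

def masked_reconstruction_loss(logits, targets, mask):
    """logits:  (batch, num_pitches, voices, slots) network predictions
    targets: (batch, voices, slots) pitch indices Bach actually chose
    mask:    (batch, voices, slots) True where notes were dropped out"""
    per_slot = F.cross_entropy(logits, targets, reduction='none')  # loss at every slot
    return (per_slot * mask).sum() / mask.sum()                    # keep only the hidden slots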

It took around 30 hours to train the model on my humble HP800 x86 Linux box. Once I had it trained, I could use it to harmonize from an existing chorale, from some random notes, or from nothing at all. The model did its best, and many times created a reasonable chorale.

Once I had the model trained, I tried different ways to use it to make music. In the case of today’s performance, I started with the bass line from BWV 180, Schmücke dich, o liebe Seele, pictured above, and threw out the soprano, alto, and tenor lines.

The model only works with two-measure segments in 4:4 time, 32 1/16th notes per segment, so I had to split my input chorale into segments of that length. This chorale has 2 1/2-measure phrases of 40 1/16th notes, which were rejected by the model.

I then compressed these 40-slot phrases down to 32 by squeezing the last 16 time slots into 8, resulting in 32-slot segments. I divided Schmucke into six 32-slot segments, representing the 5 phrases and the one final chord of the original chorale.
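
Concretely, the squeeze keeps the first 24 slots as they are and halves the tail. Taking every other slot of the last 16 is the simplest way to do that; a sketch of the step:

import numpy as np

def compress_40_to_32(phrase):
    """phrase: (voices, 40) pitch array. Keep the first 24 slots and take
    every other slot of the last 16, giving a (voices, 32) segment."""
    return np.concatenate([phrase[:, :24], phrase[:, 24::2]], axis=1)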

I then passed the bass line into the coconet model and created a four-part chorale. I did that four times and ended up with a 16-voice piano part. To match the timing of the Schmucke chorale, I expanded the 32-slot phrases back to 40 slots so the timing would match the original.
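
Stacking the four harmonizations and restoring the original phrase length is again just array bookkeeping. A sketch, with harmonize_from_bass standing in for the coconet call (not a real function name):

import numpy as np

def sixteen_voice_part(bass_segment, harmonize_from_bass):
    """bass_segment: (32,) bass pitches. harmonize_from_bass is a placeholder
    for the coconet call that returns a (4, 32) chorale over that bass line."""
    layers = [harmonize_from_bass(bass_segment) for _ in range(4)]
    chorale16 = np.concatenate(layers, axis=0)                 # (16, 32)
    # Undo the earlier squeeze: repeat each of the last 8 slots twice.
    return np.concatenate([chorale16[:, :24],
                           np.repeat(chorale16[:, 24:], 2, axis=1)], axis=1)   # (16, 40)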

I fed segments into the model, which dutifully harmonized them as the model thought Bach might. The result is sort of musical, but not too different from Bach.

Along the way I’ve made many discoveries that I hope to exploit more as I go on. One was that some of the most interesting segments came from the final chord. It’s just an F major chord held for 32 1/16th notes, but the model created many interesting variations on it. I hope to be able to use the model to harmonize each quarter note of the original chorale as a separate 32-time-slot segment. That’s next on the agenda.

or download here:
Schmucke #1

Machine 7: William Schuman – Three-Score Set – Theme & Variations #42

This is a complete performance of Schuman’s Three-Score Set, with each set played straight and then followed by several variations. The variations transform the themes in a variety of ways. The tuning uses scales derived from otonal scales in the tonality diamond up to the 31-limit, and the specific scales change frequently. In the scores shown below, you can see, in the middle, the otonal scale for each measure segment.



or download here:
Machine7 – Three Score Set – with variations – #42

Machine 7: William Schuman – Three-Score Set – variations on set 3 – #37

This is the theme and a set of variations on set III from Schuman’s Three-Score Set. The theme is played straight. For the variations, I take each segment of a measure and play it, then make many alterations to it quickly and comprehensively, then move on to the next measure segment. For example, I might play the first half of measure 2, then make changes to the tempo, rhythm, notes, and other characteristics, before moving to the second half of measure 2. And so on. So it keeps coming back to the theme, but intersperses variations as it goes.

In the graphic of the score below, I’ve added the otonal scales for each segment. In measure 0, you can see the C#. That’s an otonality based on 16:15 above C. Then, in the third part of measure 2, I switch to the 1:1 otonality. Measure 4 uses notes from the G# otonality, with G# meaning 8:5 above C, followed by D+, which is 8:7 above C.
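
For anyone curious how an otonality turns into actual pitches, the sketch below builds one the straightforward way: odd harmonics up to 31 over the given root, folded back into a single octave. That construction is my assumption about how to read the diamond; the roots 16:15, 1:1, 8:5, and 8:7 are the ones named above.

from fractions import Fraction

def otonal_scale(root_ratio, limit=31):
    """Octave-reduced odd harmonics 1..limit above root_ratio (relative to C)."""
    pitches = set()
    for harmonic in range(1, limit + 1, 2):
        ratio = root_ratio * harmonic
        while ratio >= 2:              # fold back into one octave
            ratio /= 2
        pitches.add(ratio)
    return sorted(pitches)

# The otonality on C#, i.e. 16:15 above C, as in measure 0:
print(otonal_scale(Fraction(16, 15)))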

There are many slides, usually from one note to the next in a sequence. Other slides are within a chord, as at the end of measure 2, repeated at measures 19 & 20.

or download here:
Machine 7: William Schuman – Three-Score Set – variations on set 3 – #37