I want to talk about the piece that I created for Jin’s animation. For me it is a departure from the types of work that I have done previously, and I like the direction it has taken me in.
The initial piece was composed of a low pad sound, a vocal choir sample, and a piano. The piano part is an improvisation over the top of two chords played on the pad. I didn’t adjust the timing or force quantisation, because I wanted to capture the organic texture of the animation and embody the cycle of life and death (again, very organic) that it depicts. Quantising the notes in this style of music wouldn’t match the feeling I was aiming for.
After further development and feedback from Jin, I detuned the G in the piano part to break away from Equal Temperament towards something a little closer to the harmonic overtone series (Figure 1). The setting used on the G is -29 cents. This made the piece feel more organic and ‘live’.
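The size of that shift can be made concrete: a detune in cents maps to a frequency ratio of 2^(cents/1200). A minimal Python sketch of the arithmetic (the A4 = 440 Hz reference and the note frequency are my assumptions, not settings from the project):

```python
# Equal temperament detune: a shift of c cents multiplies frequency by 2 ** (c / 1200).
def detune(freq_hz, cents):
    """Return freq_hz shifted by the given number of cents."""
    return freq_hz * 2 ** (cents / 1200)

g4 = 392.00  # G4 in equal temperament, assuming A4 = 440 Hz
print(round(detune(g4, -29), 2))  # 385.49 -- about 6.5 Hz flat of the tempered G4
```

A drop of 29 cents is under a third of a semitone, which is why the effect reads as a change in feel rather than a wrong note.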
Figure 1: Top: the original piano part; middle: a doubled piano with the G notes removed; bottom: a piano with just the detuned G notes.
I’m fascinated by how such a small detail has such a dramatic effect on the feel of the final piece. In combination with the panning of the piano and the reverb, the melody line in this piece really comes alive.
I made minimal changes to the original sketch. Some levels were adjusted in order to hear the dialogue better.
The additional part that I worked on is where Dawn walks down to Joyce. I created a variation of the main theme that reverses the timing of the notes, so Dawn has her own theme. I also added music for when the vampire wakes up. Here, I wanted to create a feeling of dread and suspense. I utilised Linear Chromaticism in the strings to create tension (Figure 1) and added a bassoon for timbral texture and variety. It is subtle, but you notice when it isn’t there.
Figure 1: Using Linear Chromaticism in the strings.
Because this clip was already edited, I used the existing cuts to mark changes in the music. When the vampire smiles, I move to more dissonant notes (Figure 2), and I do this again when the vampire steps forward. This punctuates what we see on screen and makes the overall feeling more sinister. By matching the action, the emotional tone of the music is heightened, making the whole greater than the sum of its parts.
Figure 2: Change in notes when the vampire smiles to sync with the action and increase the feeling of dread.
I really enjoyed creating this score; it has challenged me to think outside of diatonic-centric music and focus more on the feeling that I’m trying to evoke. I think this will be a focus for my creative practice going forward, as I can see both creative development and growth from these projects, with room for even further exploration.
I decided to learn a technology that we may use for the installation. I’m inexperienced with Pure Data (Pd), so I decided to start with the basics.
I’m going to create a timer to get time divisions:
FIGURE 1: Creating the grid for the piece.
This gives me a metronome at 60 BPM. I send its output to the multiplier, which then divides the beat to give me whole, half, quarter, and 16th notes.
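The arithmetic behind this grid is simple: at 60 BPM one beat (a quarter note) lasts 1,000 ms, and the other divisions follow by multiplying or dividing. A sketch of the same calculation (the function name is mine, not an object in the patch):

```python
def note_lengths_ms(bpm):
    """Durations in milliseconds of common note divisions at a given tempo (assuming 4/4)."""
    quarter = 60_000 / bpm  # one beat: 60,000 ms per minute divided by beats per minute
    return {
        "whole": quarter * 4,
        "half": quarter * 2,
        "quarter": quarter,
        "sixteenth": quarter / 4,
    }

print(note_lengths_ms(60))  # quarter = 1000 ms at 60 BPM, so 4000 / 2000 / 1000 / 250 ms
```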
I want the bass to move between the tonic (I) and the subdominant (IV). I’ll keep this simple and do two measures of each. To do this I utilise the modulo (%) operator.
FIGURE 2: Making two measures of the tonic notes and two measures of the subdominant.
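Outside of Pd, the modulo trick can be sketched like this: count beats, integer-divide by the number of beats in two measures (eight, assuming 4/4), and take the result mod 2 to alternate between chord 0 (tonic) and chord 1 (subdominant):

```python
def chord_index(beat, beats_per_block=8):
    """0 for the tonic, 1 for the subdominant, alternating every beats_per_block beats."""
    return (beat // beats_per_block) % 2

# Beats 0-7 select the tonic, beats 8-15 the subdominant, then the cycle repeats.
print([chord_index(b) for b in range(20)])
```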
This sketch doesn’t make sound yet, so we need a sound object. For the bass I’ll use a [phasor~].
FIGURE 3: Getting the bass note and creating sound for output (not yet connected to the DAC).
The “Chord” value is sent as an output of 0 or 1; this selects the frequency that I’m looking to play and sends it into the [phasor~]. The right-hand side of the sketch uses a [vline~] to shape the Attack, Decay, Sustain, and Release (ADSR). At the end I send this to the DAC for output.
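[vline~] builds its envelope from straight-line ramps, and the same piecewise-linear idea can be sketched in Python (the segment values here are illustrative, not the ones in my patch):

```python
def envelope(t_ms, segments):
    """Piecewise-linear envelope, like [vline~]: segments are (target, ramp_ms) pairs."""
    level, elapsed = 0.0, 0.0
    for target, ramp in segments:
        if t_ms < elapsed + ramp:
            # We are inside this ramp: interpolate linearly towards the target.
            frac = (t_ms - elapsed) / ramp
            return level + (target - level) * frac
        level, elapsed = target, elapsed + ramp
    return level  # past the final segment, hold its target

# Illustrative ADSR: 10 ms attack to 1.0, 50 ms decay to 0.6, 200 ms sustain, 100 ms release.
adsr = [(1.0, 10), (0.6, 50), (0.6, 200), (0.0, 100)]
print(envelope(5, adsr))  # 0.5 -- halfway through the attack ramp
```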
I wanted to add a melody and randomise its notes in a sub-patch, so I used the [inlet] and [outlet] objects. The idea is to define the notes of a scale and cycle through them randomly.
FIGURE 4: The Chord_Scales sub-patch to generate random notes in a scale.
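The behaviour of the sub-patch, picking the next note at random from a scale, can be sketched like this (the MIDI numbers for C natural minor are an illustrative assumption, not necessarily the scale in the patch):

```python
import random

C_MINOR = [60, 62, 63, 65, 67, 68, 70]  # assumed scale: C natural minor from C4, as MIDI notes

def next_note(scale=C_MINOR):
    """Pick the next melody note at random from the scale, like the Chord_Scales sub-patch."""
    return random.choice(scale)

print([next_note() for _ in range(8)])  # eight random notes, all drawn from the scale
```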
The next two patches look similar but sound different, as I swapped the [phasor~] for an [osc~] and made the bleeps three octaves higher.
FIGURE 5: The Pad and Bleeps sounds.
This then leads to the final output.
FIGURE 6: Final output to the [dac~] object.
The instruments use reverb (freeverb), with the levels mixed to taste. It’s taken hours to do something I could do in a DAW in minutes. However, I can’t easily use a DAW with sensors.
The final output sounds like:
Is this great? Not compared to using a DAW. The sounds are basic, and the progression limited. However, it’s the beginning of a journey and opens the possibilities of using physical computing and sensors.
Bibliography:
Floss Manuals (1991) Pure Data. Available at: https://archive.flossmanuals.net/pure-data/ (Accessed: 24 November 2023).
Puckette, M. (2011) Miller Puckette MUS171 Videos. Available at: http://pd-la.info/pd-media/miller-puckette-mus171-videos/ (Accessed: 25 November 2023).
Really Useful Plugins (2020) Pure Data Tutorials – Rich Synthesis. Available at: https://www.youtube.com/playlist?list=PLqJgTfn3kSMW3AAAl2liJRKd-7DhZwLlq (Accessed: 25 November 2023).
After sitting with Olly I’ve come up with a list of music cues that should work. I may need to revise this after I’ve had a chance to see it in situ, and may also need to extend some of the “blocks” to make them less repetitive. I have a feeling that the Maze Area Theme loop will be a bit too boring.
To make the cue list I’ve copied the list template from Composing Music for Games – The Art, Technology and Business of Video Game Scoring (Thomas, 2016). As I’m also the “client” for this project, not all of the columns are needed, but it will be good practice to have them and fill them out as required.
Cue List

Music Cue | Date WIP Sent | Client Feedback | Date Revisions Sent | Date Approved
Starting Room Theme (30 sec loop) | | | |
Room 2/3 Theme (30 sec loop) | | | |
Ramp Loop (10 sec loop) | | | |
Boss Theme (15 sec loop) | | | |
Maze Area Theme (30 sec loop) | | | |
Altar Loop (10 sec loop) | | | |
Guitar One Shot (Triggered) | | | |
Siren One Shot (Triggered) | | | |
I’ll also go through the instructions to the developer (thanks Olly) for each of the cues and how I want them to trigger.
Instructions to the developer:
Starting Room: A loop of the Starting Room Theme (File name: StartRoom-v1.aif)
Rooms 2 and 3: A loop of the same theme (File name: Room2_and_3.aif). If possible, overlay the guitar one shot (File name: GuitarOneShot-v1.aif) when first entering the room.
Ramp to Boss Level: Fade out the Room 2/3 music over 2 seconds and have the Ramp Loop run (File name: Ramp-v1.aif).
Boss Level: This needs to be spatial audio. The sphere needs to completely cover the boss room and half of the ramp. Use a linear falloff so that as you come up to the boss area you hear the boss level music fade in (File name: BossFight-v1.aif). As you go into the Boss Room you can stop the Ramp loop.
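For clarity, a linear falloff means the gain drops in a straight line from full volume at the source to silence at the sphere’s edge. A sketch of the curve I have in mind (the 20-unit radius is illustrative, not a value from the game):

```python
def linear_falloff_gain(distance, radius):
    """Gain of 1.0 at the source, falling in a straight line to 0.0 at the sphere's edge."""
    if radius <= 0:
        return 0.0
    return max(0.0, 1.0 - distance / radius)

# With an illustrative 20-unit sphere, the boss theme sits at half volume halfway up the ramp.
print(linear_falloff_gain(10, 20))  # 0.5
```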
Maze Area: Loop the same theme (File name: MazeArea-v1.aif). Trigger the siren, if possible, on entry (File name: Siren-v1.aif).
Altar: This needs to be spatial audio. It should fade in over the top of the Maze Area Theme, not replace it. It should loop (File name: AltarLoop-v1.aif) and have a linear falloff that allows it to fade in as the user goes up the stairs.
Final thoughts after playing the game with sounds added:
I think that the music works, but I’ll want to swap the Rooms 2/3 and the Maze Area cues so that the build-up is better. I may also try to get a little more differentiation between the cues in terms of build-up, so they are more distinct from each other. I think the triggered one-shots will help here too.
It was interesting playing the game. As a player, it’s difficult to listen to the music rather than simply hear it. In a way I think that heightens the experience: you hear the music and respond to its emotional overtones, rather than listening to it as you might if you were a passive audience. I may need to listen to, rather than play, a few of my games to better understand how their music has impacted the gaming experience.
References:
Thomas, C. (2016) Composing Music for Games: The Art, Technology and Business of Video Game Scoring. London, UK: Taylor & Francis Group.
After looking at the three games we had for class I decided to score the FPS game. I felt that scoring the RPG would be a little too close to the more traditional orchestrations that I have done for the first two parts of the assignment.
For a genre I chose electronica/hardcore. My first step was to look for a few guitar samples. I was after something heavy and distorted with a good metal riff: not quite djent, but not 90s metal either. I couldn’t find it, so I picked up my guitar, plugged it into my effects unit, dialled in a tone, and came up with a quick riff.
I had in mind that I needed a cue for the main boss and at least two levels of different energy. One for the start room and one for the rest of the rooms. I wanted to have a break between the main rooms and the boss room.
I also wanted to include some glitchy, more electronic sounds to make it feel game-like. Think 80s video games on a Commodore 64. There are several 8-bit-like effects when the music is going full-on. While these are stylistically at odds with the heavy guitars, the full-spectrum synths support the guitars’ limited frequency range, so it works from a timbre perspective.
I used a 909 for the kick as I wanted this to be EDM-like. It didn’t feel aggressive enough, so I added a very distorted drum sample. It may sound like overkill, but using both kicks gave a better attack along with the aggressive distortion that I was after.
I’m going to revisit this in a few days’ time to make sure that it’s right, but overall, I’m very happy with it.
This week I decided to extend the in-class learning with generative music in Ableton Live.
Here is the finished project:
I started with the techniques from class, like the arpeggiator set on random. I wanted to see how far I could push creating an identity for a piece that had no set length while being non-linear.
I wanted to keep the bass going almost the entire time. I used a C minor triad with an additional C above. I selected the notes and changed the “Chance” value to 29%, which means that every time the clip loops, each note has a 29% chance of playing.
View of the “Chance” parameter for the bass instrument.
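Live’s “Chance” value is an independent per-note probability applied on every pass of the loop, which can be sketched as a coin flip per note (the note names are illustrative):

```python
import random

def play_loop(notes, chance=0.29, rng=random):
    """Each pass of the loop, every note independently plays with probability `chance`."""
    return [n for n in notes if rng.random() < chance]

bass = ["C2", "Eb2", "G2", "C3"]  # C minor triad plus the C above
for _ in range(4):
    print(play_loop(bass))  # a different subset each loop; sometimes nothing plays at all
```

Because the draws are independent, the loop occasionally plays every note and occasionally plays none, which is exactly the variation I was after.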
To give variety I set up a Drum Rack that played samples randomly. Using the same idea as the bass part I set all the drum pads to a 12% chance to play every loop.
View of the Drum Rack with all of the notes set to a 12% chance of playing.
For more variety, I made a few blank clips in the Session View utilising “interior pathing” between blocks within a track:
Session View with all of the clips, note that the white clips are blank clips with Launch actions to randomly choose another clip after they play.
The main “melody” is interesting. I decided to create a theme and variations by manipulating the “Chance” value to allow notes to play or not.
Modifying the Chance values for the main melody line to create more variation. Notes that I wanted to have more chance of being heard were increased.
I used another feature to generate interest: a patch that uses physics to generate keyboard presses. Adding this created movement outside of the main melody line.
I used randomness in the filtering and effects to create tension:
The LFO (set to random) controlled the feedback of the echo to create glitchy effects.
While I’ve used a lot of randomisation, it operates within a few defined rules, so each iteration is familiar yet different. I can see value in creating versions with different start values.
I feel that this type of composition would be suited to RPGs because it can create an infinite track where you never hear the same thing twice. That is good for a genre of game that can have 80+ hours of gameplay.
I’ve been thinking about clichés and wanted to explore them, because the piece that I created for Buffy (Buffy the Vampire Slayer – The Body, 2001) is clichéd, but, I’d argue, to good effect.
For example:
Figure 1: Example of what an “Emotional Drama / Tearjerker / Tender / Tragedy” Theme could be.
In An Introduction to Writing Music for Television, Kruk (2019) lists a set of “Palettes” that help composers create feelings. Each is a collection of melody, accompaniment, and rhythmic elements that combine. They are clichés, but ones used to get to a score that works. For Figure 1, you can hear that my piece sticks to this palette. While writing I didn’t use Kruk’s work; I chose instrumentation for what sounded right to me. They just happen to be close.
Let’s separate the three elements Kruk describes and look at how they apply to my score.
Melody: A different instrument is used for the melody line. Here I’ve moved away from Kruk’s suggestion by using the B-flat clarinet. Later, when I use the melody for Dawn, I use a violin. The clarinet sounded more plaintive than anything I could get with a violin.
Accompaniment: I’ve stuck with Kruk’s suggestion. I used the piano as a full-spectrum instrument, partly because I wanted to create a point in time where it plays solo.
Rhythm: What I have written remains true to Kruk’s suggestion: there is no complicated rhythm in the score. The idea of using a synth pad with violins and piano for tragedy, though, would depend heavily on the timbre of the synth pad, and it isn’t something that I would usually do.
All three add up to a score that does hit the “Emotional Drama/Tearjerker/Tender/Tragedy” theme, even though I move away from Kruk’s suggestion for the melody.
References
Buffy the Vampire Slayer – The Body (2001) The WB Television Network, 27 February.
Kruk, M. (2019) ‘Chapter One: Creating a Palette’, in An introduction to writing music for television: The Art and Technique of TV music writing with contributions from Emmy Award winning composers. London, UK: Fundamental Changes, pp. 7–17.
I chose “The Body” (Buffy the Vampire Slayer – The Body, 2001). As this is a long clip, I chose to score from the start to where Dawn sneaks in to see Joyce. I wanted to highlight the humanity and fragility in the scene.
I decided to score it as if it had been scored by Thomas Wanker, who was the Buffy composer from 2000–2002 (IMDb, 2023). He used specific techniques in other episodes, including solo violins playing high notes to create tension, and cellos to mark changes in feeling or scene. I’ve used both, and the effect feels right for the visuals.
I did this in a minor key. I kept the pace slow, starting with sparse instrumentation: strings, and a piano with a lot of reverb to invoke a reflective feel. I added a leitmotif with a B-flat clarinet playing variations; the theme is played by a flute when Dawn walks towards Joyce. I stripped back the instrumentation until just the piano is playing, to heighten the feeling of isolation in the scene.
At the start of the leitmotif the B-flat clarinet uses staccato before swapping to legato. I did this because it suited the timbre of the instrument and the feeling I was evoking. At the beginning of the piece there is a single note played on the violin. Played legato, it didn’t have the impact I was looking for, so I swapped to tremolo, where the timbre of the instrument fits the emotional tone.
When the vampire wakes up, I use Linear Chromaticism to build even more tension before the scene cuts.
One other thing I wanted to highlight is that there are two main tempos used with a gradual shift between the two parts (Figure 1):
FIGURE 1: The tempo shift between the first scene and Dawn walking towards Joyce.
Overall, I’m very happy with how this turned out and believe that this is a good homage to Thomas Wanker.
References
Buffy the Vampire Slayer – The Body (2001) The WB Television Network, 27 February.
IMDb (2023) Thomas Wanker. Available at: https://www.imdb.com/name/nm0911173/?ref_=ttfc_fc_cr17 (Accessed: 23 October 2023).
For this post I wanted to concentrate on articulation, dynamics, and tempo, and how important they are to the overall feeling generated. As the underscore that I was creating for the Howl’s Moving Castle (Howl’s Moving Castle, 2004) clip was an orchestral piece, I needed to make it sound as realistic as possible.
“If you’re not using articulations (a lot!), then your orchestral samples (not just your strings) are not going to sound anywhere near as realistic as they could.”
(Kruk, 2019)
I added staccato to the initial melody line for both the piccolo and the trumpets (Figure 1), until the last note, where I switched the articulation to legato so that the last note would be held.
FIGURE 1: Highlighted notes with Staccato articulation selected for the Trumpet.
For me the use of staccato at the beginning created a bouncier melody that I felt fitted better than using all legato.
Strings in Logic have a lot of articulations. These really breathe life into the score. From pizzicato just before Calcifer speaks, to whole note trills to build up a sense of tension at the end.
When Sophie crashes into the castle, I used tremolo rather than trills as this sounded better. I also slowed down the tempo as we get to the crash (Figure 2). While it felt counterintuitive to slow down the score, it heightened the tension in the scene.
FIGURE 2: Slowing down the tempo to increase the tension in the music.
I used quite a lot of articulations across all the instruments. When combined with changes in tempo and dynamics, you get a sense of realism. To get this a step closer to sounding realistic I’d probably record real strings and horns to layer over the track.
The idea of generating a realistic orchestral performance is twofold. One, I want my work to sound as good as it can: real players add nuance to a performance that sampled instruments lack. Two, without articulations the computer-generated performance can sound sterile and flat, and this happens even more with modelled instruments.
References
Howl’s Moving Castle (2004) Directed by Hayao Miyazaki [Feature film]. Tokyo, Japan: Studio Ghibli.
Kruk, M. (2019) ‘Chapter Eight: Articulations are a must’, in An introduction to writing music for television: The Art and Technique of TV music writing with contributions from Emmy Award winning composers. London, UK: Fundamental Changes, pp. 92–93.
I chose “Howl’s Moving Castle” (Howl’s Moving Castle, 2004). It had dialogue, which would be a challenge, and I thought it would be interesting to do an orchestral piece.
Two sections stood out to me: when Sophie was about to crash into the castle, and when everyone is sleeping. I spotted these in Logic to set the tempo of the piece. Spotting is the process of breaking the piece up into different emotional states.
I chose to create a leitmotif for the main characters. I did this for contrast between when the main characters are interacting and when we have a different emotional cue.
For the crash, I wanted it to sound hopeful at the beginning, and then create a sense of impending doom as she realises that she cannot land. Using Pantriadic Chromaticism (Lehman, 2018) and shortening note lengths as we begin to crash, I created drama by moving outside of the diatonic. This resolves into a variation of the diatonic theme to let the audience know that no one was hurt.
Final version of the music.
There were a few challenges while I was working. One was handling the tempo changes created when spotting; it became an issue because I had created a few too many points, which produced periods of pronounced change in the tempo. To smooth these, I wrote so that a single instrument could play over the end of one tempo and into the next, making the changes less obvious.
I found that Pantriadic Chromaticism has helped me break away from the structures I have relied upon and concentrate on the feeling that I’m trying to elicit. By not relying so much on what key I’m in and the chords in that scale, I can concentrate on the emotional tone that I’m after.
References:
Howl’s Moving Castle (2004) Directed by Hayao Miyazaki [Feature film]. Tokyo, Japan: Studio Ghibli.
Lehman, F. (2018) ‘Pantriadic Chromaticism’, in Hollywood harmony: Musical wonder and the sound of Cinema. New York, NY, NY: Oxford University Press, pp. 66–69.