There are many different interpretations of how music should be composed and implemented in a video game. After years of composing, studying, and teaching video game composition, this is my personal reflection on the goals that every video game composer should be thinking about when composing for games.
For a video game composer, the challenge is to present the player with a musical underscore that complements the emotion of the moment and reacts to changes in the game state. These state changes evolve dynamically over the gameplay experience, driven by the player. In video games, a composer generally can't predict when these state changes will happen.
Linear Vs. Interactive
Unlike linear media, the player controls the pace of the storytelling in a video game. The player can experience a wide range of states throughout that experience, from simple states such as winning, losing, or solving a puzzle, to much more complex game states like emotional moments, character interaction, and dramatic storytelling. In linear composition, the composer can write the music so that it evolves and shapes itself into a new emotional context based on the movie he's working with. The linear medium is static, so the picture arrives at the same scene at the same time on every viewing.
In video games, on the other hand, the structure is never fixed in time; its timing is always tied to player action. Because of this dynamic gameplay, the music needs to transition from one state to another on demand, based on those player actions.
Player Control and State Changes
There is an incredible array of state changes happening all the time in video games. Even at the level of user interface and game structure, a player may move from a menu screen to a cut-scene, then into gameplay, then to a map screen, then to a character interaction, or to a help screen. That list doesn't even include transitions that happen as part of the story or a puzzle. How does the composer navigate these state changes and still bring a seamless musical experience to the player?
If the music changes throughout the game are seamless, the player stays inside a single experience and is never pulled out of it by distraction: the music does not shift abruptly from one section to another. Moving from one musical cue to another requires a musical transition, and the best video game composers move smoothly from state to state by writing those transitions carefully.
Because the composer can never be completely in control of the music, he has to make sure that when one music cue transitions into another, the gap between the two cues is bridged seamlessly. This is why I believe the best video game composers are the ones who take the time to write transitions, immersing the player in the experience without distraction.
In this article I use the word transition for the method by which a video game advances from one musical cue to another. There are many ways to transition between cues, from fading out one cue while fading in a new one, to writing a custom piece of music to bridge two pieces. It is my belief that the secret to great video game composition is how the composer navigates player-instigated state changes within a video game.
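The simplest transition mentioned above, fading out one cue while fading in another, can be sketched as an equal-power crossfade. This is a minimal illustration, not code from any game or middleware; the function name is hypothetical:

```python
import math

def crossfade_gains(progress: float) -> tuple[float, float]:
    """Equal-power crossfade: return (outgoing_gain, incoming_gain)
    for a transition progress between 0.0 (start) and 1.0 (done)."""
    progress = min(max(progress, 0.0), 1.0)
    # cos/sin curves keep the combined power roughly constant,
    # avoiding the volume dip a linear crossfade produces mid-fade.
    out_gain = math.cos(progress * math.pi / 2)
    in_gain = math.sin(progress * math.pi / 2)
    return out_gain, in_gain
```

At the midpoint both gains sit near 0.707, so the summed power of the two cues stays close to unity instead of dipping, which is why audio engines typically prefer equal-power curves over linear ones for musical crossfades.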
The composer's ability to keep the player completely immersed in the experience through its many state changes is of utmost importance. This requires seamless transitions that let the music flow with the player's decisions and the pacing the player sets for the game.
Below, I’ve included two excellent examples of modern transitions used in video games. In both clips, the composers often use crescendos and fades on specific instruments to great advantage to help transition between musical cues.
In the first example, Ori and the Blind Forest (2015), the composer Gareth Coker primarily uses crescendos to move from one musical cue to another. The clip has text annotations describing the music to help you analyze what is going on. In addition, the sfx and dialog have been faded during gameplay to better illustrate how the music is working within the game.
One of the things to really watch for in this example is when the player is in control of a character. When the player is in control, he can choose to wait in a specific place for as long as he would like. This means the underscore during these moments of player control needs to loop indefinitely until the player reaches the next area or cut-scene.
The transitions occur when the player arrives at the next area or cut-scene from gameplay. During cut-scenes, the player is not in control and the score can sync just like linear video. But when the player has control, the music has to be able to lengthen or shorten based on the player's progress.
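The loop-until-transition behavior described above can be sketched as a tiny state machine: the current cue repeats until the game requests a new state, and the swap happens at the next loop point. All class and method names here are illustrative assumptions, not the API of any real engine:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MusicState:
    cue: str
    looping: bool  # loops while the player retains control

class Underscore:
    """Minimal sketch: the current cue loops indefinitely until the game
    requests a transition (e.g. the player reaches a new area)."""
    def __init__(self, state: MusicState):
        self.state = state
        self.pending: Optional[MusicState] = None

    def request_transition(self, new_state: MusicState) -> None:
        # Game code calls this when the player reaches a new area or cut-scene.
        self.pending = new_state

    def on_cue_loop_point(self) -> str:
        # Called by the audio engine each time the current cue reaches its end.
        if self.pending is not None:
            self.state, self.pending = self.pending, None
        # With no pending transition, a looping cue simply repeats.
        return self.state.cue
```

Deferring the swap to the loop point is one simple way to keep the join musical; real middleware offers finer-grained options (next beat, next bar, custom exit markers).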
Our second example is Batman: Arkham Knight (2015), with audio direction by Nick Arundel, who co-composed the score with David Buckley. The music here is more complex and is driven by the Wwise middleware engine. Wwise is aware of the tempo and where the downbeats will fall, so it can synchronize transitions and stingers to land in musically appropriate places. This allows the composer to write more sophisticated transitions. This clip is also annotated so that you can follow along with what is happening.
NOTE: The music in the following clip does not begin until 42 seconds in. This first clip from Batman is from the end of the game, where there is less underscore between melee combat scenarios, which makes it easier to show how the combat music system works. The music starts when you enter a combat scenario and finishes when the last enemy is taken out. Note how the ending stingers are synchronized to land on the downbeats.
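The downbeat synchronization described above boils down to simple arithmetic: given the tempo and the current playback position, compute the time of the next bar line and schedule the stinger there. This is a sketch of the general idea, not Wwise's actual implementation:

```python
import math

def next_downbeat(position_s: float, tempo_bpm: float,
                  beats_per_bar: int = 4) -> float:
    """Return the time (in seconds from the cue's start) of the next bar
    downbeat at or after position_s -- the quantization point a
    tempo-aware engine uses to land a stinger musically."""
    bar_len = beats_per_bar * 60.0 / tempo_bpm  # seconds per bar
    bars_elapsed = position_s / bar_len
    # Round up to the next bar line; if we are exactly on a downbeat,
    # the stinger can fire immediately.
    return math.ceil(bars_elapsed) * bar_len
```

At 120 BPM in 4/4, a bar lasts two seconds, so a kill confirmed at 3.1 seconds into the cue would hold its stinger until the 4.0-second bar line rather than firing at an arbitrary moment.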
In the second example from Batman: Arkham Knight (2015), we look at the beginning of the game, which has melee combat as well as cut-scenes. This scene and score are more complex, with underscore running throughout. When Batman engages in a battle, the underscore is heightened with a percussive stinger, then the battle music plays out until the last enemy is taken out. A stinger landing on the downbeat emphasizes the end of combat, and the underscore continues.
The transition to the cut-scene where Batman meets with Commissioner Gordon (4:44) is also a point of study. The underscore is built primarily of ambient swells, allowing rhythmic elements to be layered easily on top, making it a beautiful transition point into the cut-scene. In addition, the use of negative space in the score when Batman leaves the Commissioner is incredibly effective before the underscore returns.
In addition to the use of crescendos in the examples, another reason the transitions work so well is that the music cues contrast one another (ambient versus rhythmic). Transitions work much better between two contrasting pieces of music. You can create a far more seamless transition moving from an ambient, non-rhythmic piece to a rhythmic cue, or vice versa.
When transitioning between similar music cues, you're more likely to run into problems at the transition point. If you transition from an ambient piece (e.g. legato strings or pads) to another ambient piece, the harmonies may clash at the join. Likewise when moving from one rhythmic piece to another: the tempos and meters may not line up, and you'll hear the rhythms flamming against each other. Paying attention to the sequence of music cues leads to better transitions between them.
Many modern game developers are paying close attention to player immersion by minimizing loading screens and transitioning seamlessly between cut-scenes and gameplay. These improvements have also strengthened composers' ability to leverage music and its power to immerse the player through game state transitions.
There are many different options in the composer’s toolbox for shaping the overall mood or bridging cues within games. In future articles, we hope to explore many different adaptive music techniques that are used within games.
For more insight into Ori and the Blind Forest (2015), composer Gareth Coker speaks about his approach to the video game in the following links:
For additional information into the Batman: Arkham Knight (2015) score, audio director Nick Arundel discusses the musical approach for the game in the following articles: