In this article, we explore the different types of data the game engine can send to the music engine. Video games must keep track of all kinds of data. As composers, we help choose which parameters and data from the video game affect and change our interactive music scores. Some examples of video game data may include:
- The locations of the player and enemy within the game
- The progress that has been made on a particular puzzle
- Whether the player is paused on a menu screen, or actually playing the game
- The season or time of day depicted in the game world
- The player’s health in the game
This data can be used to control and change the music in different ways. For instance, if the player's health is low, we may use DSP (digital signal processing) to filter out the high end by applying a low-pass filter. Or, if the player moves from one location to another, the music may switch to a different theme. In this article, we refer to the parameters that control the music as control-inputs. Control-inputs have two basic properties:
- What is the unique parameter which is going to change the music?
- How is the music going to react based on changes to this parameter?
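The low-pass-filter example above can be sketched as a simple mapping function. The function name, health range, and cutoff frequencies are my own illustrative choices, not taken from any particular engine:

```python
# Illustrative sketch of a control-input: player health drives the
# cutoff of a low-pass filter on the music mix. All names and ranges
# here are hypothetical.
def cutoff_for_health(health, min_hz=400.0, max_hz=18000.0):
    """Map player health (0-100) linearly to a low-pass cutoff in Hz.

    Full health leaves the filter wide open; low health muffles the mix.
    """
    health = max(0.0, min(100.0, health))  # clamp out-of-range values
    return min_hz + (max_hz - min_hz) * (health / 100.0)
```

At full health the cutoff sits at 18 kHz and the filter is effectively inaudible; as health falls toward zero, the cutoff slides down to 400 Hz, producing the muffled, darkened mix described above.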
Control-inputs are not unique to video games. In fact, music has used many different types of control-inputs to alter and change music in real time for centuries. For example, Mozart used dice games (Musikalisches Würfelspiel) to determine the order of phrases in a piece of music. Earle Brown used the movements of a Calder mobile to determine the notes played by an ensemble. Circus bands would provide accents based on the movements of the performers. Orchestral conductors control the tempo of the orchestra, amongst many other things. James Brown decided when it was important for his band to ‘take it to the bridge’.
Similarly, the game itself has many control-inputs which control various aspects of the game. Keyboards, mice, game controllers, cameras, and microphones are examples of control-inputs where players are able to send data to the game engine. That data is then interpreted by the game itself to react in some way, e.g. move the player avatar, or open an object that has been clicked on.
An interactive score for a video game can change in reaction to video game controllers or data too. Composers, together with the programmers and game design team, work out which data should change the music, how the music should change, and how to implement it into the final product.
Types of Video Game Data
Although there are many different types of video game data, we’ll focus on the three primary types that composers deal with: state changes, momentary switches, and ranged data. It is up to the game development team and the composer to decide which of these data types should be used to change and control the music.
– State Changes
State changes represent the condition of the player, the game world, or something within the game. For instance, the player may have many states within a game, such as exploring, solving a puzzle, or attacking. The player may also be in several states at the same time, such as running and attacking. Similarly, the game itself may have states, such as when the player is at the main menu screen or actively involved in gameplay. Although each entity in the game may track multiple state variables at once, each individual state variable has a list of possible values, and the entity can hold only one of those values at a time.
A state is similar to a key-switch for a virtual instrument. A violin plays a single articulation at any given moment, but it can change articulations, or states: the violin stays in a pizzicato state until the player switches to legato. Additional examples of states within games include player location, enemy location, enemy awareness, type of character interaction (exploring, battling, conversing), NPC perception of the player (friendly, suspicious, angry), player actions (flying versus running), environmental factors such as weather, current vehicle, and many more.
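The one-value-at-a-time behavior of a state variable can be sketched as a tiny state machine. The state names and the crossfade cue string below are hypothetical, as is reacting to every state change with a crossfade:

```python
from enum import Enum

class PlayerState(Enum):
    """One state variable with a fixed list of possible values."""
    EXPLORING = "exploring"
    PUZZLE = "puzzle"
    COMBAT = "combat"

class MusicState:
    """Hold exactly one state at a time and react only to real changes."""
    def __init__(self):
        self.state = PlayerState.EXPLORING

    def set_state(self, new_state):
        # Like a key-switch: re-sending the current state does nothing.
        if new_state is self.state:
            return None
        cue = f"crossfade:{self.state.value}->{new_state.value}"
        self.state = new_state
        return cue
```

In the World of Warcraft example below, the player's location plays the role of the state variable, and the reaction is a crossfade to the new location's theme.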
In the following example of simple state changes in World of Warcraft, the control-input to the music engine is the player’s location. The player moves between these different locations and the music reacts by crossfading to a new location theme.
In this second example, from Bioshock Infinite, there are several state changes which affect the music. At the beginning of the clip we’re in an explore state, where the character Elizabeth whistles a tune that is then picked up by a violin within the scene. The main player-character, Booker, then begins a hold-up, which changes the overall game state to battle and cues a new music track. This is followed by a stinger when the battle is over. It’s also interesting to note that Bioshock Infinite contains many examples of interesting interplay between the diegetic music (source) and the extra-diegetic music (underscore).
– Momentary Switches
Momentary switches are singular events that happen within a game. These switches trigger an event in the game's music engine, most commonly a stinger. Examples of momentary switches within games include opening a treasure chest, entering a room, solving a puzzle, finishing a level, killing an enemy, placing or moving an element on the screen, picking up an object, and many more.
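A minimal sketch of momentary switches, assuming a simple lookup table of hypothetical event and stinger names (a real project would route these through its middleware's event system instead):

```python
# Hypothetical event and stinger names for illustration only.
STINGERS = {
    "chest_opened": "stinger_treasure",
    "puzzle_solved": "stinger_success",
    "enemy_killed": "stinger_kill",
}

def on_game_event(event, cue_queue):
    """A momentary switch: one game event fires one short musical stinger,
    layered over whatever music is already playing. Unknown events are
    simply ignored by the music engine."""
    stinger = STINGERS.get(event)
    if stinger is not None:
        cue_queue.append(stinger)
```

Unlike a state change, the switch carries no ongoing value: the event fires once, the stinger plays, and the underlying music continues unchanged.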
In the following example from Russian Squares, momentary switches are used to trigger musical stingers when a line of blocks is removed from the game.
In the second example, from Middle-earth: Shadow of Mordor, the player can battle war chiefs, which are similar to bosses in other games. During these battles, each regular attack on the war chief uses a momentary switch to trigger a corresponding musical stinger in the score. The war-chief battle begins at approximately 1:56. The clip also demonstrates state changes in the music score from exploration, to enemy alerted (or suspense), to active combat.
– Ranged Data
For the composer, ranged data refers to a variable with a dynamic (typically linear) range of values, for example a number between 1 and 100. Within games, examples of this include player and enemy health, proximity to enemies, percentage of a level or task completed, number of enemies, speed in a racing game, linear time of day, and many more.
Frequently the composer uses fuzzy logic to interpret this data within the interactive music system. For instance, if the composer wants different music for different times of day, they may have to interpret a linear 24-hour clock by parsing the data down to a simpler state change – morning, afternoon, and night.
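The 24-hour-clock example can be sketched as a small quantizing function that collapses ranged data into a state change. The boundary hours below are illustrative choices, not a standard:

```python
def time_of_day_state(hour):
    """Collapse a continuous 24h clock into three musical states.

    The boundary hours are illustrative; a composer would tune them
    (and could add dusk/dawn transition states) to fit the score.
    """
    hour = hour % 24  # wrap values past midnight
    if 5 <= hour < 12:
        return "morning"
    if 12 <= hour < 19:
        return "afternoon"
    return "night"
```

The music engine then treats the returned value like any other state change, crossfading between the morning, afternoon, and night cues instead of tracking the raw clock.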
In the following example from SSX, there are many different control-inputs that are used to change and affect the music. Ranged data is used when the player carves back and forth across the track (1:54 in the video) to pan the music left and right.
Mapping the Data To Music Changes
If a composer is going to use this data, they’ll need to know how to interpret it. The composer, along with the game development team, decides how the music will react based on the data it receives from the game. Composers have many choices for adapting the music in real time based on this data. Below is a list of possible reactions to the changes the music engine receives. This is the composer’s creative toolset when choosing interactive music techniques.
- Play a musical stinger (a stinger is a short phrase of musical notes)
- Change the musical form by branching to a different section (sometimes called horizontal re-sequencing)
- Modulate to a different musical key or mode, or change the overall harmonic context (e.g. major to minor)
- Change the tempo or underlying rhythm framework
- Alter the mix by adding or removing layers of music (sometimes called vertical re-mixing)
- Alter the melody to create variation
- Use DSP (digital signal processing / effects) to alter parts of the mix
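Putting the three data types together with the outcomes above, the mapping step can be sketched as a small dispatcher that routes each control-input to one musical reaction. All control-input names and cue strings here are hypothetical:

```python
def react(control_input, value):
    """Route one control-input change to a musical outcome (sketch)."""
    if control_input == "location":        # state change -> branch the form
        return f"branch_to:{value}_theme"
    if control_input == "chest_opened":    # momentary switch -> stinger
        return "play_stinger:treasure"
    if control_input == "player_health":   # ranged data -> DSP amount
        # Lower health means a heavier low-pass filter on the mix.
        return f"lowpass_amount:{1.0 - value / 100.0:.2f}"
    return None  # control-inputs this score ignores
```

In a shipping game this routing usually lives in middleware (states, events, and game parameters in tools like Wwise or FMOD) rather than in hand-written code, but the underlying mapping is the same.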
Working with the Programming and Design Teams
Developing an interactive music system for a game involves a lot of moving parts. Interactive music within games takes additional time, platform resources, and money to implement. The creative team needs to support the idea, the production team needs to find money to fund it, and the programming team needs room in its schedule to implement it. Adaptive music often requires more hardware resources (memory and CPU) as well, since you need multiple variations of the music. In many cases, a game development team may not be familiar with the more sophisticated types of interactive music. The composer or audio director may need to build a case to present to the creative and production leads to persuade them to invest the time and money required to build such a system. Frequently, composers and audio teams will build demonstrations of how the music engine can work to help win support from the other members of the team.
As a quick side note, when I first started working with teams, I realized it would be a great advantage to create a set of simple tools to demonstrate interactive techniques. I built many of these tools in MaxMSP, and they have since become teaching tools that I use at Berklee College of Music. Today, many of those techniques can be demonstrated to game development teams using middleware solutions such as Wwise, FMOD, Elias, or Fabric.
About the Author
Michael Sweet leads the development of the game scoring curriculum at Berklee College of Music. Over the past two decades, Michael has been an accomplished video game composer and the audio director for more than 100 award-winning video games. His work can be heard on the Xbox 360 logo and on award-winning games from Cartoon Network, Sesame Workshop, PlayFirst, iWin, Gamelab, Shockwave, RealArcade, Pogo, Microsoft, Lego, AOL, and MTV, among others. He has won the Best Audio Award at the Independent Games Festival and the BDA Promax Gold Award for Best Sound Design, and has been nominated for five Game Audio Network Guild (GANG) awards.