This entry was created by a student in Stanford’s Rhetoric of Gaming class.
We are all familiar with the rapid development of gaming technology in both interactivity and performance. Modern games on the Xbox 360 or PS3 have graphics so realistic that it’s getting a bit scary, and the Nintendo Wii has introduced a new level of immersive interactivity with its acceleration-sensitive controllers. My intention is to look at how the implementation of music in video games is also changing and developing as technology allows more and more freedom.
Scoring a game is a much different process from scoring, say, a movie. In a movie, the sequence of events is fixed, so each second of music can be tailored to the visuals. Since a game’s motion is determined by the player, one fixed score would not meld seamlessly with the gameplay. Most games in the past two decades have worked around this reasonably well by having different pieces or musical elements trigger during different scenes: the intro gets some epic orchestration, while the boss fight has heart-stopping drum breaks with adrenaline-fueled synth lines. While this doesn’t disrupt the flow of the game much, it does draw attention to repetitive play, since each similar scene is accompanied by a very recognizable theme.
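The scene-triggered approach described above can be sketched as a simple lookup from game state to track. This is only an illustration with hypothetical file names, not any actual engine’s API; the point is that each state always replays the same recognizable theme:

```python
# Hypothetical scene-to-track mapping: each game state has one fixed piece,
# so repeated scenes always get the same, very recognizable theme.
SCENE_TRACKS = {
    "intro": "epic_orchestration.ogg",
    "exploration": "ambient_theme.ogg",
    "boss_fight": "drum_breaks_and_synths.ogg",
}

def track_for_scene(scene: str) -> str:
    """Return the track for a scene, falling back to the exploration theme."""
    return SCENE_TRACKS.get(scene, SCENE_TRACKS["exploration"])
```

Because the mapping is one-to-one, the player hears the seam every time a scene repeats, which is exactly the limitation adaptive music tries to overcome.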
For my initial source, I looked at an interview that Create Digital Music (CDM), an online periodical, conducted with professor and game composer Troels Brun Folmann. Among other things, Folmann (referred to from here on as TBF, as in the interview) wrote the music for Tomb Raider Legend, which was highly praised for its innovation in game music. He is interested in advancing the concept of adaptive music in games. The idea, similar to the original strategy of scene-triggered music sequences, is to break the music down even further so that every player action results in aural feedback. He briefly explains the historical obstacles facing development in this field:
“I would not say that adaptive music is a new concept, but the problem is that it’s never really been working. One of the main limitations is the fact that current generation of consoles like the PS2 and Xbox have very little RAM allocated for audio. Typically, sound designers have a mighty 2MB to play around with. True adaptive music needs be generated in real-time, and even the next-generation consoles like Xbox 360 and PS3 will not have resources enough to do this on a larger scale.
However, there are ways to work around the limitations. This usually involves the creation of custom technologies. I invented a methodology known as “micro-scoring”. It’s basically the idea of chopping your score down to very small components and triggering them in a way that compliments the game experience.”
Micro-scoring is really an extension of the original gameplay-triggered paradigm, taken far enough that, in a way, the player becomes a composer. By creating musical themes and ostinatos that work together no matter how they are combined (by composing them in compatible keys and rhythmic schemes), each player action can add another layer of musical complexity, even if only for a second.
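One way to picture micro-scoring is as a pool of short loops that all share a key and tempo, with player actions switching layers on. The sketch below is purely illustrative (the class and layer names are my own invention, not TBF’s implementation), but it shows how any combination of such components can stack without clashing:

```python
# Illustrative sketch of micro-scoring: short loops composed in one shared
# key and tempo, so whatever subset is active still sounds coherent.
from dataclasses import dataclass, field

@dataclass
class MicroLayer:
    name: str
    bars: int  # loop length in bars; all layers share the score's grid

@dataclass
class AdaptiveScore:
    key: str
    bpm: int
    active: list = field(default_factory=list)

    def on_player_action(self, layer: MicroLayer) -> None:
        """A player action adds a short layer (without duplicating it)."""
        if layer not in self.active:
            self.active.append(layer)

    def mix(self) -> list:
        """Names of the layers currently sounding together."""
        return [layer.name for layer in self.active]

# Example: two actions stack two compatible components.
score = AdaptiveScore(key="D minor", bpm=120)
score.on_player_action(MicroLayer("footstep_ostinato", bars=1))
score.on_player_action(MicroLayer("combat_strings", bars=2))
```

The musical constraint (shared key and rhythmic grid) lives in the composition, not the code; the code only decides which pre-composed components are audible at any moment.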
In theory, such tangible feedback to player actions should deepen a game’s immersion and emotional impact on its player. It also allows more intricate musical systems to be implemented. Though these systems are black boxes in the sense that most people don’t care how music is made so long as it sounds good, new technology combined with paradigm shifts has a way of producing ground-breaking new sounds that eventually seep into the mainstream perception of “good music”. For the most part, the interview deals with overcoming the difficulties of adaptive composition, but I would like to extend these ideas with other research to examine the notion of the player as composer.
Kevin Dade, Stanford