I had the chance to catch up with Stephan Schütze and his wife, Anna Irwin-Schütze, co-authors of this pioneering text, at GDC this year. I also spoke with Chanel Summers, Vice President of Creative Development at VRstudios, who wrote an informative section of the book about location-based VR audio and storytelling with audio. In this post, both Stephan and Chanel answer a wide range of detailed questions about the nature of VR audio and some novel approaches to this young and exciting field.
The book is an extremely comprehensive and also highly approachable volume on the dark arts of XR spatial audio. It covers the basics, but also does a very deep dive into the technical aspects of creating, recording and implementing audio in real 3D spaces that are brought to us in XR.
XR is, of course, shorthand for the three new types of reality – AR, VR and MR. I like to think of VR as a 100% digital experience, whereas AR and MR are some mixture of digital and objective reality, roughly 50% of each.
Perhaps the ultimate goal of XR is to create a reality indistinguishable from our normal, objective reality. In order to accomplish this, we need to have an audio experience that would be indistinguishable as well. This leaves stereo and even surround sound in the dust; we need to extend sound to individual objects in the 3D space. Birds in the trees, frogs in the water, lightning in the distance should all be coming from their sources. Imagine an orchestra where each individual instrument has sound coming directly from it, obeying the laws of physics each instrument is constrained by. We are a long way from this, but the technology is appearing that approximates it, and I would say that we are closer to modelling real 3D acoustics than we are to modelling real 3D graphics in VR.
Also at GDC, I had a chance to give a short presentation at the Google booth about a VR experience created by Runaway Play called Flutter VR which is set in the Amazon Rain Forest. This game is designed to run on Daydream, therefore on mobile devices, and I was able to demonstrate that we could have more than 50 point source audio emitters and only use roughly 30% of the audio CPU. Theoretically we could have had more than 100 point sources without affecting the game’s frame rate thanks to the power of Google’s Resonance Audio. The codecs for playing back 3D audio are getting better, but are our techniques to create sound and music in those environments keeping up?
This book features a who’s who of audio professionals in addition to Chanel Summers of VRstudios: Martin Dufour, CTO of Audiokinetic (makers of Wwise); Simon Goodwin of DTS; Sally-anne Kellaway of Microsoft; Viktor Phoenix of The Sound Lab; Jay Sheen of Criterion Games; and independent sound designers Robert Rice and Garry Taylor.
In order to level up your abilities in this brave new world of spatial audio, I recommend this book highly – in fact I would go so far as to say it is a must read for all composers and sound designers who are working in XR or who will be working in XR, and if things keep going the way I think they will, that will be all of us!
Designing Music NOW (DMN): Why is 3D audio important for XR games and experiences?
SS: Audio helps convince an audience that they are surrounded by a living forest, that they are experiencing the wind and weather all around them, that they are moving from a small room to a huge cavern. Visuals may depict all of these things, but it is the audio that reaches our subconscious and convinces us that we are really experiencing these locations. So audio is a key and critical element in creating convincing and engaging environments, entities and situations across all of the new reality formats.
DMN: What is the primary difference for the listener between stereo, surround and 3D audio?
SS: Stereo provides us with a sound field that represents objects as they were captured when recorded. Musical instruments are panned to the left or right to make us feel like we have instruments on either side of us. But stereo is a fixed sound field. It is a snapshot of how a band was arranged when recorded: a sound panned to our left-hand side will always come from the left headphone, for example. If we turn our head, the sound on the left stays exactly where it was.
Surround sound takes this one step further. We still have a fixed snapshot of how our sounds are set out, but that snapshot can include more positions: as with stereo we have left and right, but surround adds back and perhaps side positions as well. Those positions are fixed. Even if we pan an object from left to right or front to back in a video, it is a set and locked experience; the sounds are “trapped” within the predefined paths we create for them.
3D sound refers to objects in a 3D game space or virtual environment with sounds attached to those objects. So a 3D model of a ticking clock will have the clock sound emanate from the location of the clock model. If we move towards or away from the model, the sound should react as it would in the real world, getting louder as we get closer; if we rotate the camera through which we are viewing the 3D world, the perspective of the sound will shift as we turn back and forth. So the sound exists in the 3D world, and we can adjust our position relative to it.
When you experience a 3D world through a VR or AR device, the audience can rotate their head independently of the avatar they may be controlling. This is one of the principal aspects that makes new reality content so immersive: as I walk my avatar down a virtual city street, I can turn my head left and right and my perspective of the world shifts just as it would in the real world. This can make virtual content much more immersive than a traditional screen experience.
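The behaviour described here (a sound getting louder as you approach its source, and shifting around you as you turn your head) can be sketched in a few lines. This is a purely illustrative model with a simple inverse-distance rolloff, not any particular engine's implementation:

```python
import math

def perceived(source_xy, listener_xy, listener_yaw, ref_dist=1.0):
    """Gain and head-relative direction of a point source.

    Inverse-distance attenuation plus an azimuth that shifts with
    listener_yaw (radians). Values are illustrative, not an engine's.
    """
    dx = source_xy[0] - listener_xy[0]
    dy = source_xy[1] - listener_xy[1]
    dist = math.hypot(dx, dy)
    gain = min(1.0, ref_dist / max(dist, 1e-6))   # louder as we get closer
    world_angle = math.atan2(dy, dx)
    azimuth = world_angle - listener_yaw          # shifts as the head turns
    # wrap to [-pi, pi) so left/right is unambiguous
    azimuth = (azimuth + math.pi) % (2 * math.pi) - math.pi
    return gain, azimuth
```

A ticking clock one metre ahead gives full gain at azimuth 0; step back to two metres and the gain halves, while turning your head 90° to the left moves the clock to your right.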
DMN: Are there important differences between AR and VR 3D audio? What is an example particular to each?
SS: As a general rule:
VR is when we place an audience into a virtual world
AR is when we place virtual objects into the real world
So for VR we have to create an entire world that is convincing and satisfying for the audience, with AR we have to try and make the virtual object sound like it really does exist in our world.
So for AR the trick is trying to match the behaviour of sound in the space the audience is in. Is it reverberant? Is there a lot of background noise? Are there other elements affecting real-world sounds in that space right now? Then we must try to simulate those real-world characteristics onto the virtual objects, otherwise they will stand out as not “fitting” as we expect. This is a huge challenge, and there are still lots of very smart people trying to work out how we can do this.
In VR as I have previously mentioned we need to convince the audience that they really have been placed into a virtual space, so now the space needs to behave as we would expect if we really were in that world.
DMN: In your book, you define AR as “Alternate Reality”. I have usually heard it referred to as “Augmented Reality.” Can you please help to clear up the confusion?
SS: Either or both are suitable as far as I am concerned. If we create a virtual robot in the room in which we are sitting and it starts to tell us funny stories, it is both an augmentation of our real reality and an alternate version of it.
DMN: What are the biggest challenges to bringing immersive, 3D audio to XR?
SS: I think the biggest challenge is still educating developers that the quality and design of audio for XR is not something to be treated lightly.
We often advocate for good quality audio across various media, but as I have said, for many XR experiences it is not optional. I have tested some fairly good VR and AR experiences that fell down very badly because the audio was not up to standard. Instead of just making the experience “less good”, poor audio can completely undermine it, break any sense of realism and drag the audience out of the experience in a big way.
The second issue is the technology. It is still a long way from being a magic bullet for creating good quality experiences. There is some outstanding tech out there, but it still requires very careful planning and a creative process that takes advantage of every possible trick we have to create content that will grab our audience and transport them to somewhere magical. To create a convincing spatial audio environment you may often want to combine mono, stereo and ambisonic content, utilise HRTF plugins, and still use traditional interactive audio techniques, all in combination, to achieve your end goal.
DMN: What are some of the best examples of 3D audio today?
SS: In general, because of the work Sony has done, most PSVR games have pretty good spatial audio, as the Sony tools take normal 3D sounds and spatialize them pretty well.
Specifically, the X-Wing VR mission that is a free download as part of Star Wars Battlefront is excellent; the team at Criterion did an outstanding job of really working within the space and supporting the narrative of that experience with good audio.
The team at Skywalker Sound also did a good job with the Star Wars Droid Repair Bay experience that is available on Steam, and also on Steam the Valve Steam Labs content is good.
In general, the experiences created by the VR hardware teams are all well produced. This is possibly not surprising, but it is a good thing.
Resident Evil 7: Biohazard won best VR Audio at this year’s GANG awards and it has some noteworthy spatial audio techniques in it, so that one is also worth checking out.
And finally, putting my money where my mouth is, I can suggest people take a listen to VR Regatta, which is available for Vive on Steam and may come to other VR platforms.
This is a simple sailing game, but I put considerable time and effort into creating a real sense of being in an open and dynamic world: you can tell which direction the wind is blowing by turning your head and listening, and all the water sounds respond dynamically to boat speed, boat tilt angles and your position within the boat. I think that creating a convincing environment is the first part of creating any effective virtual experience, so this is an example of how I approach things.
DMN: You have an entire chapter devoted to music in XR. What are some approaches that composers have taken to make their music stand out in XR?
SS: We have had surround capability for game music for many years and yet not a lot of projects make use of it. I think we can implement music into a spatial environment, but we need to consider carefully WHY we might want to do that and what exactly we hope to convey to the audience.
I have created tests where I rendered out an orchestral piece of music into about 20 different parts and then positioned those parts almost spherically around the listener. I then experimented further with how we might lay out an ensemble in spatial audio space. Our current layout for a real orchestra is based on the best established techniques for facilitating both communication between players and the presentation of a wall of sound to an audience in an auditorium. But VR allows us to place the audience right in the middle of an orchestra, as though they were sitting in the lap of the lead clarinet player. This is a very different experience to the audience’s wall of sound. We can even take it further and suspend the violins in the rafters above us and place the basses below our feet. Having done this, I can say that the experience is quite interesting, but is it useful to VR experiences? Sometimes, maybe.
One important aspect to keep in mind is human expectation. Through thousands of years of exposure to natural sounds we have developed several expectations. Birds, which make high frequency sounds, we expect to be higher up, overhead; rumbles of the earth or industry tend to come to us as vibrations through the ground. As a result, humans expect high frequency sounds to sit above eye level and lower frequencies to be low down. So we must keep that in mind before we decide to place the first violins underfoot, as this can confuse the listener by breaking expectations.
I would love to work on a project where there WAS reason to open the music up completely across the entire 360 spherical space, but I think projects like that are going to be rare. As a general rule, most times the music will probably be stereo and locked to the headphones. However, I have found there is benefit in positioning the music just slightly to the rear space as this opens up the front hemisphere (our peripheral vision area) for more sound content, and keeps the music nicely spaced behind that.
DMN: In general, for games and VR experiences, should non-diegetic music be head-locked stereo? Should it be a mix of positional and head-locked?
SS: As with the previous question, I think it depends greatly on the project. I think the only real answer is to try out different ideas early in a project and discover what works best for the specific project you are working on. Like all things relating to VR and AR, we are all still very new to this and there are no hard and fast rules. Be brave and try out things that may seem really crazy; they may work out brilliantly.
DMN: What are the best tools for composers when creating a spatial mix?
SS: This one depends on whether you are creating content for a game or for a linear 360 video.
For linear I would just use the DAW I normally use and add the Facebook 360 Spatial Workstation plugins. These tools work with Nuendo, Pro Tools and Reaper and allow you to pan spatially and automate positioning, so you can just create a normal session and then arrange the positioning as you would any mix.
There are other tools and in some ways this is the problem. There are so many tools and plugins and there are new ones coming out almost every week. And frankly it does not matter how good they are because you could waste a great deal of time just testing and trying to evaluate which ones might be good, or which might be slightly better than what you are using. Over the next few years the dust will settle and we will have access to some good tools, but for now I think our creative approach to our work is as important if not more so than exactly which plugin we use. So do a quick search on which tools work with your workflow and then which might be in the top 5 and just select one and start using it. Do not get bogged down evaluating all of them because you will never get anything done.
For interactive content it depends on what tools you are using for the project. Unreal Engine has its own audio engine and tool set, and it is getting quite powerful in regards to spatial audio. Wwise has its own inbuilt spatial audio plugin and can utilize others if you licence them, and other middleware solutions allow their own sets of spatial audio plugins. So when using these it is part of the implementation stage, and as such, if I wanted to arrange music in a spatial mix I would need to render out the individual tracks, position them in the game engine, and use a spatial plugin to calculate the HRTF information.
The real challenge with interactive new reality projects is that the final mix occurs when the audience is actually viewing the experience, so you need to build and implement and then test over and over to refine your mix to try and achieve the best results for your audience. It can be very complex, but I am finding that refining my spatial audio environments is one of the most enjoyable processes I have undertaken in years. I am always looking for tiny and subtle tweaks that I can make to add just a bit of extra realism or effect to the audience’s experience.
Just before writing this answer, I spent three hours adjusting values to achieve a better result for the sound of water moving past the side of a sailboat, so that when you lean your head down closer to the water it gets louder than when you sit upright. This is one layer among dozens that combine to create the overall effect, but it is this attention to detail that really adds significantly to a VR experience. This applies to sound, music, VO and any other audio element that contributes to your audience’s experience.
DMN: What considerations do composers need to take into account if they want to record 3D music? What are some examples?
SS: Ooh, this is a can of worms, and I am likely to upset some people here.
Again, it depends if you are creating content for interactive or linear projects.
For linear, where your audience’s perspective is static, I think it can work well to record an ensemble with ambisonic equipment and create a nice spatial snapshot of the space. So an orchestra, choir or live ensemble in a nice acoustic space could be nicely recorded in a spatial manner with good ambisonic microphones.
However, for anything where the audience’s position changes, or if the project is interactive, I would personally always record the players individually (or with individual microphones per instrument or section) and then arrange and position each sound source manually. I would also likely combine mono, stereo, ambisonic and binaural content however needed to get the best result. As I have said previously, there is no single magic bullet that will provide a great result every time. So it really does depend on exactly what kind of end result I want to achieve, and also on the makeup of the instruments I have.
A single ambisonic mic might capture a nice spherical image, but it limits you to using that one mic. It is possible to use an ambisonic recording as the basis of your mix and then layer other elements on top, but if you are doing that then why not record all the instruments individually and position them manually?
So again, I have to say, each project will require its own solutions and tiny tweaks to get just right, in all things trust your ears over any tech or plugin or device. There is no right way to do this, but there are many wrong ways.
DMN: What is the best platform to experience XR today?
SS: Vive and Oculus are both very strong platforms, but the downside is you need a pretty powerful PC to run some of the VR projects.
Also, Vive can require a dedicated space, which can be tricky to set up and maintain, but which then gives you more choices as far as movement within the virtual space.
PSVR is nice in that it plugs into a PlayStation, so if you have that console you do not need anything else (apart from the camera). As someone who wears glasses, I find the PSVR headset very comfortable; it is far easier to deal with and just sits on my head nicely. It does have lower resolution than the PC-based headsets, but then it is cheaper and can be used in the comfort of your lounge room, where most people have their consoles set up. So there is much more to it than just which device has the most pixels. In all cases, good headphones make a huge difference.
Gear VR is interesting as a mobile platform, but is of course limited in what it can output. I consider the Gear VR to be more of a device for demonstrating content to people on the go than a device I would want to spend any real time using. As the technology improves I suspect the mobile devices will become better in their output quality and so more viable for extended viewing. The Gear VR is quite comfortable, but I find the entire process of getting the phone into the device with headphones plugged in a little fiddly.
As far as AR is concerned, I think that format is really waiting for a generation of new devices to arrive. Microsoft HoloLens is quite remarkable but seriously lacks content at this time. Both Apple and Magic Leap have AR devices due very soon, and I have seen what these devices are capable of; I think it is going to be groundbreaking, but we are not quite there yet.
So playing Pokemon Go on your smartphone is still mostly where AR is at right now.
DMN: What does the future hold for 3D audio experiences?
SS: This is the big one, and this is what has me really excited.
I think trying to guess what the future holds for XR experiences and 3D audio is like asking someone in the years when moving pictures had only just been invented what they thought an IMAX experience would be like. We have no point of reference for what the future brings, because I honestly think there is almost no one alive today who has yet really dug down into what these experiences can be. Some of the VR experiences I have had so far have redefined my idea of digital entertainment and made it really hard to go back to normal screen games on PC and console. And it is often not the things you would expect that really impress you in a virtual experience. Humans have very unusual methods of perception, and it is quite easy to trick our brains into thinking and experiencing all kinds of things. I think XR has such incredible potential that it is going to be a wondrous format to work in, and I am already enjoying creating spatial audio environments so much that I think the future is going to be full of fun and surprises.
I do think that the technology will continue to get stronger and more useful, but I also think that our creative skills and processes will never diminish in importance and that it is how and what we create that will define so much of what the audiences of the future are going to enjoy.
Working on the bleeding edge of any technology can be hectic and intimidating, but it can also be so incredibly satisfying as we get to decide and define processes that future generations will get to work with. I guess we better consider carefully how we approach all of this so our future selves and others can have a good foundation on which to work.
DMN: What are the differences between location-based VR experiences and at-home consumer ones?
Chanel Summers: Location-based entertainment (LBE) VR experiences are typically more social than at-home consumer content and can provide much more elaborate and immersive experiences not possible in an average consumer’s home. They can also be a powerful partner to in-home gaming VR and AR experiences and even possibly accelerate the adoption of VR by consumers by exposing consumers to VR content, perhaps for the first time, and acclimating them to the new technology. These LBE VR or out-of-home experiences utilize many of the same audio techniques as consumer VR software, but there are also significant differences to consider when designing and implementing audio for LBE VR. I delve deeply into these in my chapter entitled, “Creating Immersive & Aesthetic Auditory Spaces for Location-Based VR Experiences”.
DMN: How important is audio to the overall VR experience when compared with other game elements?
CS: It has been my experience that those elements that make a VR environment so immersive and compelling are created more in the audio space than in the visual space, for several reasons.
First, audio can represent all of space rather than just what the viewer is seeing, including sounds that emanate from behind the user, not in their field of view, or ones where sources have not been graphically rendered.
Second, the perceptual complexity the human body allows for in audio reception is greater than what the eyes allow for visual reception. The auditory system can simultaneously process multiple frequencies at a variety of amplitudes, whereas visually each bit in an image corresponds to a specific color and cannot represent multiple colors simultaneously.
Audio can be received vibrationally, and therefore physiologically, by several parts of the body simultaneously in addition to the ears. Thus, audio can affect us on a subconscious, psychological, and physiological level.
Very early in the design process, we need to consider how we approach the creation of the spaces we are building and throughout the process we need to create spaces that are coherent, consistent, and cohesive within the story and game space.
The narrative of the game and the desired gameplay set the foundation for the visuals of each setting within the game, but adding the audio brings them alive, making the experience effectively real in the player’s mind.
DMN: Why do you think spatial audio is important in VR and in gameplay? Should everything be spatialized?
CS: As you are inside the experience and not detached from it, you need some things to be spatialized in order to give the world depth, with individual ambient world sounds holding their positions in the world as you rotate your head, so that you always feel a sense of direction and depth within the world. Also, if you are creating a VR game, it is crucial to have spatialized audio cues in order to play the game effectively. The effectiveness of the gameplay is greatly reduced if the players don’t look where they need to look at any point during play.
But I want to add that not all audio in a VR environment needs to be spatialized. I believe that in a VR experience, audio must take kind of a hybrid approach, where some audio is spatialized while others can be in simple stereo. For instance, there may be sounds that are static or head relative. If you are going to have user interface sounds, those most likely should be 2D. Same goes for a musical score; this would probably be best in 2D unless there is a physical source of the music in the game world. And with low-frequency sounds, it’s harder to tell where they are emanating from. Low end is good for the feeling of a sound and affecting physiology and great for giving an object weight, presence, and size! Sounds that are primarily low frequency like an energy pulse or a rumble are well suited to being stereo sounds.
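The hybrid routing Summers describes can be expressed as a simple decision rule. The sketch below is purely illustrative; the category flags, threshold, and bus names are invented for the example, not taken from any shipping mixer:

```python
def choose_bus(sound):
    """Pick a mixing path for a sound in a hypothetical VR mixer.

    Head-relative UI, non-diegetic music, and predominantly
    low-frequency sounds stay in plain 2D stereo; everything else
    (world-anchored sources) gets spatialized.
    """
    if sound.get("ui") or sound.get("head_relative"):
        return "stereo_2d"
    if sound.get("music") and not sound.get("diegetic"):
        return "stereo_2d"
    if sound.get("dominant_hz", 1000) < 80:  # mostly low end: hard to localize
        return "stereo_2d"
    return "spatialized_3d"
```

So a menu click, a non-diegetic score, and a sub-bass rumble would all route to stereo, while a bird call or footstep would be spatialized.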
One example of a hybrid approach occurred on “VR Showdown in Ghost Town”, an LBE experience that I worked on for Knott’s Berry Farm. We mixed in 2D looped stereo ambiances (such as general ambiance that we did not feel required spatialization) with individual 3D mono spatialized sounds which worked quite well and felt very natural. The wind and general ambiance were not designed as a quad-array with emitters placed around the listener in all directions, as we didn’t find that necessary with this experience.
DMN: What are a few tips and tricks that you use when designing the audio and music to help tell the story in LBE VR?
CS: First off, audio needs to be an integral design element from the start, conveying elements of narrative, characterization or gameplay by itself and in concert with other game elements. Audio must be more than a list of assets to be compiled and assembled like the items on a shopping list. Rather than just coupling each of the visual elements of a game with a corresponding functional sound element, audio should always further the goals of story, characterization, and the creation of a holistic ecosystem. Well-executed sounds and a brilliantly composed soundtrack have minimal value when accompanied by nothing more than surface meaning.
You need to consider the choreography of the audio, creating a cohesive, holistic unit of all the elements as well as a “rhythm” within a game experience. If you are creating a fictional world, you will need to make this really immersive environment where people feel like they are there, in the story. For instance, in the Knott’s Berry Farm project, players are transported to a future to defend a western town called Calico. Very early in the design process, we needed to consider how we would approach the creation of the space we were building so that throughout the process we could create a space that would be coherent, consistent, and cohesive within the story and game space. We needed to be able to use audio to create a truly immersive environment and make the players believe they were actually transported to this future western town. But we also had to supply players with very strong audio cues so that they could play the game in this super sound-rich environment. Therefore, we had to strike a balance between creating effective audio that satisfied basic gameplay requirements and building a soundscape that worked well and was also cohesive within the world. We needed to supply the players with auditory cues so they could play the game while also making them feel like they’re truly in some futuristic world that was somehow transported from the Wild West.
A guiding philosophy that the team discussed was this idea of dramatic divides and departures, in which it would be essential to aurally capture the melding of a world of futuristic technology with the dusty grind of the Wild West. Perhaps somewhat similar to HBO’s Westworld, there would be two different “worlds” in existence here between players starting in the futuristic Lobby scene and then teleporting to the futuristic version of Calico: the Lobby world being similar to the fictional Westworld laboratory, with clean, light, minimalistic ambiences, and Calico being similar to the fictional Westworld theme park: gritty, grimy, and very sound-rich. This philosophy even applied to the music, in which the team from Cedar Fair and Knott’s wanted us to think about having traditional western tropes meet hi-tech electro stylings. But with all of this in mind, it was absolutely imperative that the audio give the “vibe” and familiarity of Knott’s Berry Farm’s actual, real-world Ghost Town area.
DMN: What are your thoughts on the use of non-diegetic and adaptive music in VR experiences?
CS: In VR, the use of underscore is still being debated, with many disagreeing how this should be treated in VR. Some argue that non-diegetic music breaks immersion while others put forth that it actually helps to create immersion by guiding the players’ emotional states and aiding in the interpretation of the actions and events they see unfolding before them. I strongly believe that how you approach music in VR will be based on the scenario you are creating and what your objectives are for your project.
My team has incorporated non-diegetic and dynamic music into several LBE VR experiences. These scores became very important aspects of these games as they ended up being major contributors to the soundscapes and environments — heightening emotional impact, setting the mood, tone, and pacing of the environment, and as a game design mechanic with transitioning intensity layers as indicators of player success.
In the Knott’s experience, as there would be no physical source of the music in the game world, the music would be non-diegetic underscore and sit as a 2D stereo send that enveloped the world. This left room for the “appropriate” environmental audio to be spatialized. Also, the team from Cedar Fair and Knott’s preferred the music not to emanate from an in-game source like the saloon, as they didn’t want distracting and potentially irritating volume attenuations based on player movement.
On another project, “Barking Irons,” the developer wanted to experiment with the idea of having a fader dial attached to the players’ heads which would cause the music to change based on ducking activity. The audio team built a real-time “Head Volume Fader” RTPC (player_ducking_musicvolume) in Wwise, but we ended up not using it, as we deemed it could sound “wrong” and disruptive to the experience if we kept changing the music based on players bobbing up and down to dodge enemy bullets.
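For illustration, the head-height fader idea can be sketched engine-agnostically as a mapping from tracked head height to a music volume multiplier, the kind of value that would then drive a game parameter such as the RTPC named above. All heights and ranges here are hypothetical:

```python
def head_volume_fader(head_y, duck_y=1.0, stand_y=1.7, min_vol=0.4):
    """Map tracked head height (metres) to a music volume multiplier.

    A fully ducked head gets min_vol, an upright head gets 1.0, with a
    linear ramp between. All numbers are hypothetical examples, not
    values from the actual project.
    """
    t = (head_y - duck_y) / (stand_y - duck_y)
    t = max(0.0, min(1.0, t))   # clamp heights outside the ramp
    return min_vol + t * (1.0 - min_vol)
```

The team's reservation is visible even in this toy version: every dodge sweeps the multiplier across the whole ramp, so the music would pump constantly during a firefight.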
DMN: I just read that China is opening up a full VR theme park. Also, there is a company who is adding VR experiences to rollercoasters. Do you think VR is going to be a predominant attraction for US based theme parks?
CS: That’s just the tip of the iceberg! Most, if not all, US-based theme parks are either looking at adding VR or have already done so. Look to see this expand over time, particularly with the introduction of high-quality augmented and mixed reality solutions.
About Stephan Schütze
Stephan Schütze has been an audio creator in the game industry for close to 20 years. In that time he has written music for everything from chip tunes to live orchestral scores, and he has created a collection of sound effects libraries used by studios around the world such as Disney, EA, Warner Brothers and Skywalker Sound. He has created sound content on nearly every game platform over the last two decades and is now heavily involved with new reality design and production. Having worked for Magic Leap, the Facebook Spatial Audio team, Oculus and many smaller VR developers made Stephan perfectly suited to write the first book on audio concepts and production for new reality media. He continues to be both an advocate and practitioner for spatial audio and is incredibly excited about the potential of these new formats. He is always on the lookout for new challenges in audio production so he can continue to do what he loves most.
About Chanel Summers
Chanel Summers joined VRstudios in 2017 as its first Vice President of Creative Development, in which capacity she is responsible for delivering the kind of breakthrough content experiences the industry has come to expect from the leading provider of VR-enabled attractions for location-based entertainment operators. A pioneer in the field of interactive audio, Chanel has been a respected game producer and designer, Microsoft’s first audio technical evangelist, and a member of the original Xbox team, having helped to design and support the audio system for that groundbreaking console and creating the first ever support team for content creators.
Prior to joining VRstudios, Chanel was an accomplished touring drummer and founder of the highly regarded audio production and design company Syndicate 17, specializing in sound design, music production, and audio implementation for location-based attractions and virtual, augmented, and mixed reality products.
Some of Syndicate 17’s recent work includes audio for MediaMation’s REACTIVR, first shown at IAAPA 2015; Intel Labs/5D Global/USC WbML’s Leviathan, which was featured as an Official Selection at the Sundance New Frontiers Festival 2016; VRstudios/VRcade’s Barking Irons, which debuted at CVR 2016, and planktOs: Crystal Guardians, which was showcased at the Immerse Technology Summit 2016; the VR Experience (“The Repository”) for Universal Studios Orlando’s 2016 Halloween Horror Nights; the large-scale VR installation “VR Showdown in Ghost Town” for Knott’s Berry Farm, the first permanent free-roaming VR experience at a U.S. theme park; and Terminal 17, an intense, multiplayer (up to 8 players) adventure game specifically designed for the VRcade Arena.
Chanel has consulted for a number of organizations and innovative technology companies, and lectured and educated around the world on subjects as diverse as the aesthetics of video game audio, world-building, and secondary-level STEM education for young women. Chanel is also a lecturer and director of the Experimental Audio Design Lab at the University of Southern California School of Cinematic Arts, was recently artistic director at Forest Ridge School of the Sacred Heart and primary faculty advisor at the Sony/USC Summer Associate Virtual Reality Innovation Program, and serves as a member of the World Building Institute, an associate member of the BAFTA VR Advisory Group, a member of the AudioVR Board of Advisors, and a member of the Virtuosity Entertainment consortium.
In 2016, Chanel contributed “Making the Most of Audio in Characterization, Narrative Structure, and Level Design” to the CRC Press book Level Design: Processes and Experiences. In 2018, Chanel contributed “Creating Immersive and Aesthetic Auditory Spaces for Location-Based VR Experiences” to another CRC Press book, New Realities in Audio: A Practical Guide for VR, AR, MR, and 360 Video.