Anthymn is a game that requires a large number of dynamic and interactive audio assets during gameplay. The most effective way to implement such audio is with a middleware audio tool such as Wwise or FMOD.

These tools give the sound designer full control of audio playback in-game, including all real-time parameter controls (RTPCs), states of objects and ambiences, bus effects, and more. For this prototype, we chose to use Wwise with a license allowing us to use more than 200 assets, the SoundSeed AIR plugins, and all Wwise dynamic and convolution reverb plugins, courtesy of Audiokinetic.
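An RTPC is essentially a curve that maps a game-side parameter (speed, health, distance) onto an audio property (volume, pitch, filter cutoff) in real time. As a minimal sketch of the idea, here is a piecewise-linear curve evaluator in Python; the parameter name, ranges, and dB values are purely illustrative, not taken from our project:

```python
def rtpc(value, points):
    """Evaluate a piecewise-linear RTPC curve: interpolate an audio
    property (here, volume in dB) from a game parameter value."""
    points = sorted(points)
    if value <= points[0][0]:
        return points[0][1]
    if value >= points[-1][0]:
        return points[-1][1]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= value <= x1:
            t = (value - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)

# Hypothetical curve: player speed 0..10 raises wind volume from -24 dB to 0 dB
curve = [(0.0, -24.0), (10.0, 0.0)]
print(rtpc(5.0, curve))  # -12.0: halfway along the curve
```

In Wwise the same mapping is authored graphically on the RTPC tab and driven from the game engine each frame; the sketch above only models the curve evaluation itself.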

Our prototype pipeline for audio works in a very straightforward way:

The updated spreadsheet has more colour-coding to facilitate implementation by both freelancers and the CDM team.

I start by analyzing the current iteration of the game (and I re-analyze every time a change is made), and from the result I create a spreadsheet similar to the one from the last post. Here is a sample of an updated spreadsheet:

Music creation for the game’s core mechanic is handled by Jessie with my support, and all background music comes from Nick Morrison, the client-side composer.

While music is being created, I create the sound effects in Pro Tools using various sessions, one for each purpose. When an asset is done, it is marked on the spreadsheet. Then I grab the generated assets and import them into Wwise.
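A spreadsheet like this is easy to query programmatically once exported as CSV. As a small sketch of the tracking step, the column names and asset names below are hypothetical, not our actual sheet:

```python
import csv
import io

# Hypothetical CSV export of the asset spreadsheet (illustrative rows only)
sheet = """asset,status,implementer
sfx_footstep_grass,done,CDM
sfx_sword_swing,in progress,String Theory
mus_village_theme,done,CDM
"""

rows = list(csv.DictReader(io.StringIO(sheet)))

# Anything not marked "done" still needs work before the Wwise import pass
pending = [r["asset"] for r in rows if r["status"] != "done"]
print(pending)  # ['sfx_sword_swing']
```

A check like this makes it quick to see, before each Wwise import, which assets are still outstanding and who is responsible for them.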

In Wwise, I need to make sure every event plays properly and that every RTPC, state change, and bus effect works the way I intended; then I route them to the high dynamic range (HDR) audio bus.

HDR limits the dynamic range of a project to industry standards for each platform (in our case, Windows). HDR lets the sound designer prioritize which events play louder and which are ducked when many sounds play at the same time. This tool is essential since, unlike in film, we never know what the player is going to do and when, so I need to mix and re-mix every event to work in any, or at least the most likely, of the possible situations.
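Conceptually, HDR behaves like a sliding window that follows the loudest sound: everything is expressed relative to the top of the window, and sounds that fall too far below it are ducked out. A rough Python model of that behaviour; the window size and scene levels are illustrative, not our actual mix values:

```python
def hdr_mix(levels_db, window_db=24.0):
    """Rough model of HDR mixing: the loudest sound defines the top of the
    window; sounds within window_db of the top pass through (attenuated
    relative to the top), sounds below the window are ducked (None)."""
    top = max(levels_db.values())
    out = {}
    for name, level in levels_db.items():
        if top - level <= window_db:
            out[name] = level - top  # output level relative to window top
        else:
            out[name] = None  # ducked: falls below the HDR window
    return out

# Hypothetical scene: the explosion sets the window top, quiet ambience drops out
print(hdr_mix({"explosion": 0.0, "dialogue": -6.0, "ambience": -30.0}))
```

This is only a static snapshot; the real Wwise HDR bus also applies attack/release times so the window moves smoothly as loud sounds start and stop.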

Every iteration of the mix is exported in a header file to be plugged into Unity and shared on our SVN server. We then divide into two implementation teams: the CDM team featuring Palle and Jonathan, and the String Theory team of Dan Coburn and Mike Peredo. The CDM team implements the sounds tied to our vertical slice, while all other assets are implemented by the String Theory team. All the details of who needs to implement what, and how, are listed on the spreadsheet to assist with off-site communication.

Anthymn has its own blog, where you can find more details about our process, including paper prototyping, creating the pipeline for the company, and dealing with clients. To check out the full blog, please click here.