Anthymn is a game that requires a large number of dynamic and interactive audio assets during gameplay. The most effective way to implement such audio is with a middleware audio tool such as Wwise or FMOD.
These tools give the sound designer full control of audio playback in-game, including all real-time parameter controls (RTPCs), states of objects and ambiences, bus effects, and more. For this prototype, we chose to use Wwise with a license allowing us to use more than 200 assets, the SoundSeed Air plugins, and all Wwise dynamic and convolution reverb plugins, courtesy of Audiokinetic.
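To make the RTPC idea concrete, here is a minimal sketch in Python, not the Wwise API itself: an RTPC maps a game-side parameter onto an audio property through a designer-authored curve. The curve points and the engine-pitch example are hypothetical.

```python
# Illustrative model of an RTPC (not Wwise's implementation): a game
# parameter is mapped to an audio property via an interpolated curve.

def make_rtpc(points):
    """Build an RTPC curve from (input, output) points, linearly interpolated."""
    points = sorted(points)

    def curve(x):
        # Clamp values outside the authored range.
        if x <= points[0][0]:
            return points[0][1]
        if x >= points[-1][0]:
            return points[-1][1]
        # Linear interpolation between neighboring points.
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            if x0 <= x <= x1:
                t = (x - x0) / (x1 - x0)
                return y0 + t * (y1 - y0)

    return curve

# Hypothetical example: engine pitch rises with vehicle speed.
engine_pitch = make_rtpc([(0.0, 1.0), (50.0, 1.5), (100.0, 2.0)])
print(engine_pitch(25.0))  # halfway between 1.0 and 1.5 -> 1.25
```

In Wwise the same idea is authored visually: the sound designer draws the curve, and the game only pushes parameter values at runtime.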
Our prototype pipeline for audio works in a very straightforward way:
I start by analyzing the current iteration of the game (and I re-analyze every time a change is made), and from the result I create a spreadsheet similar to the one from the last post. Here is a sample of an updated spreadsheet:
Music creation for the game’s core mechanic is handled by Jessie with my support, and all background music comes from Nick Morrison, the client-side composer.
While the music is being created, I create the sound effects in Pro Tools across several sessions, one for each purpose. When an asset is done, it is marked on the spreadsheet; then I take the generated assets and import them into Wwise.
In Wwise, I make sure every event plays properly and that every RTPC, state change, and bus effect works the way I intended; then I route them to the high dynamic range (HDR) audio bus.
HDR limits the dynamic range of a project to industry standards for each platform (in our case, Windows). HDR lets the sound designer prioritize which events play louder and which events are ducked when many sounds play at the same time. This tool is essential because, unlike in film, we never know what the player is going to do or when. So I mix and re-mix every event so that it works in any of the possible situations, or at least the most likely ones.
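The prioritization above can be sketched with a simplified model of HDR mixing (this is an illustration of the idea, not Wwise's actual implementation): the loudest active sound sets the top of a sliding dynamic-range window, and anything that falls below the window is culled or ducked. The sound names and the window width are hypothetical.

```python
# Simplified model of HDR audio: the loudest sound defines the top of a
# moving dynamic-range window; sounds below the window bottom are culled.

def hdr_mix(sounds, window_db=24.0):
    """sounds: dict of name -> loudness in dB. Returns (audible, culled)."""
    if not sounds:
        return {}, []
    top = max(sounds.values())   # the window follows the loudest sound
    floor = top - window_db      # bottom of the dynamic-range window
    audible = {n: db for n, db in sounds.items() if db >= floor}
    culled = [n for n, db in sounds.items() if db < floor]
    return audible, culled

# Hypothetical moment in the mix: a loud explosion pushes quiet room tone
# out of the window, while footsteps are still loud enough to survive.
audible, culled = hdr_mix({"explosion": 0.0, "footsteps": -18.0, "room_tone": -30.0})
print(sorted(audible))  # ['explosion', 'footsteps']
print(culled)           # ['room_tone']
```

This is why priorities matter: whichever event is loudest at a given moment decides what everything else is measured against.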
Every iteration of the mix is exported in a header file to be plugged into Unity and shared on our SVN server. We then divide into two implementation teams: the CDM team of Palle and Jonathan, and the String Theory team of Dan Coburn and Mike Peredo. The CDM team implements the sounds tied to our vertical slice, while all other assets are implemented by the String Theory team. All the details of who needs to implement what, and how, are listed on the spreadsheet to assist with off-site communication.
Anthymn has its own blog, where you can check out more details about our process, including paper prototyping, creating the company's pipeline, and dealing with clients. To check out the full blog, please click here.