As this is my first post about Anthymn on my own blog, I should explain that we have an official blog and I am only posting the audio section here. Anthymn is an MMO/RPG in which the whole world is musical. To check out the full blog, please click here.
Let’s leave music behind for a bit and talk about the actual sound design for the game.
Some people confuse music with sound design, but in reality they are two completely different fields.
In this world, everything is music, so the hard part of the process is making a sound musical, magical, and realistic at the same time. Since our main character plays a piccolo, many of the sound elements are built from flutes and other woodwind instruments.
We are using the concept of the tritone to create some sound effects, even though a piccolo is monophonic and can only produce a tritone by playing the two notes one after the other.
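The post doesn't spell out the math behind the interval, but as a quick sketch: a tritone spans six semitones, and in 12-tone equal temperament each semitone multiplies frequency by the twelfth root of two, so the upper note sits at √2 times the lower one.

```python
# A tritone spans six semitones. In 12-tone equal temperament each
# semitone multiplies frequency by 2**(1/12), so a tritone above a
# note has 2**(6/12) = sqrt(2) ≈ 1.414 times its frequency.
def tritone_above(freq_hz: float) -> float:
    """Return the frequency a tritone (six semitones) above freq_hz."""
    return freq_hz * 2 ** (6 / 12)

# Example: a tritone above A4 (440 Hz) lands at roughly 622.3 Hz (D#5/Eb5).
print(round(tritone_above(440.0), 1))  # 622.3
```

That irrational √2 ratio is part of why the interval sounds so dissonant and "wrong", which is exactly what makes it useful for unsettling sound effects.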
The creative process has to follow a pipeline in order to be executed correctly.
Together with Palle, one of our programmers, I made a spreadsheet of all the effects needed for this iteration. We developed a naming convention for the files so we both understand what they are and how to use them.
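The actual convention we settled on isn't described here, but the point of any shared convention is that it can be checked mechanically before files reach the programmers. A minimal sketch, assuming a hypothetical category_source_description_variation pattern:

```python
import re

# Hypothetical pattern: category_source_description_twoDigitVariation.wav,
# e.g. "sfx_piccolo_overblow_01.wav". This is NOT the convention actually
# used on Anthymn; it just shows how a naming scheme agreed between the
# sound designer and programmer can be validated automatically.
NAME_PATTERN = re.compile(r"[a-z]+_[a-z]+_[a-z]+_\d{2}\.wav")

def is_valid_name(filename: str) -> bool:
    """Return True if filename follows the agreed naming convention."""
    return NAME_PATTERN.fullmatch(filename) is not None

print(is_valid_name("sfx_piccolo_overblow_01.wav"))  # True
print(is_valid_name("Final mix v2.wav"))             # False
```

Running a check like this over a delivery folder catches stray files before they get uploaded to the server and break the implementation side.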
Some sound effects, such as the sound of someone playing a wrong note on the piccolo, cannot easily be found in libraries. Technically, there is no wrong note without context, so to make something sound "bad" I decided to overblow a flute. The next image shows an example of how to play the instrument correctly (by Dylan) and how not to play the instrument (me):
After collecting all the necessary sounds for this iteration of the prototype, I used Pro Tools with a series of chains and plug-ins to re-record the mix of my audio layers. I edited every sound in the same session, which I find very practical, placing markers labelled with the file names to separate one SFX combination from another. This way I don't need to remember what I was trying to do in a particular layer chain; I just read the marker, then adjust the levels, fades, effects, and plug-in sends before re-recording.
After finalizing the audio, I exported it as WAV at 44.1 kHz/16-bit to save some space in Unity. I uploaded all the files to the server so that Palle could access them and implement them in-game.
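To put a number on the space savings: an uncompressed WAV stream costs sample rate × bytes per sample × channels every second. The session's working format isn't stated in the post, but compared to a common 48 kHz/24-bit studio format, a 44.1 kHz/16-bit delivery is nearly 40% smaller per second of audio:

```python
# Uncompressed WAV footprint: sample_rate * (bit_depth / 8) * channels
# bytes per second of audio. 48 kHz/24-bit is assumed here only as a
# typical studio working format for comparison.
def wav_bytes_per_second(sample_rate: int, bit_depth: int, channels: int) -> int:
    return sample_rate * (bit_depth // 8) * channels

stereo_441_16 = wav_bytes_per_second(44_100, 16, 2)  # 176,400 B/s
stereo_48_24 = wav_bytes_per_second(48_000, 24, 2)   # 288,000 B/s
print(stereo_441_16, stereo_48_24)
```

For a prototype with dozens of short SFX files, that difference adds up quickly in the Unity project and on the shared server.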
Implementing individual files directly into Unity is not my preferred approach. I like to use middleware such as Wwise, where I can monitor, mix, group files into containers, and add real-time parameter controls, among other options, to create a fully dynamic audio soundscape. But since most of the sound effects created today are the first iteration of a rapid prototype, using Wwise would be an inefficient use of time for both the programmers and me at this stage. We do plan to adopt Wwise further along, once the sound concept is on more solid ground.
Tomorrow we will show this version of the prototype to our clients to get feedback on the concept and the quality of the design, so we can make further changes and tweaks.