How To Process & Edit Audio For Games

Overview

In this section, you’ll learn some of the most important skills a sound designer needs. They are the fundamentals that we use on a daily basis, and I’m going to use my trusty cooking analogy to introduce them to you:

  • Audio Editing - this is the ‘preparing your ingredients’ stage, cutting and washing them, and removing any unwanted bits. In audio terms this means things such as changing the pitch and volume, cutting, pasting, merging and stretching the sounds.

  • Audio Processing - this is like seasoning, frying, baking and boiling - processing the ingredients so that they are ready for assembly into a dish. In the context of sound design, this means manipulating the audio signal through plugins such as EQ, reverb, compression, saturation, modulation and so on.

  • Audio Layering - this is where we combine and assemble our processed ingredients to create the final dish and plate it up. This means layering our edited and processed sounds together in our DAW to design our finished sound.

With that broad overview done, let’s dive deeper into each step individually.

Audio Editing

Audio editing is the most fundamental skill of any sound designer, dialogue editor, field recordist, foley editor and composer.

We did some editing already when we were creating our sound libraries - cleaning up our recordings by removing unwanted sounds and background noises - but editing is also a key part of the creative process.

The tools we use to edit sounds are usually the functions built into our DAW, rather than plugins. The most common tools that we use are:

  • Volume - balancing the relative levels of different sounds.

  • Cut, Copy and Paste - removing and isolating specific parts of a sound (e.g. removing the initial transient of a sound to isolate the resonance that comes after), or duplicating and re-arranging elements to create variations of a sound.

  • Time Stretching and Pitch Shifting - making a sound longer or shorter, or the pitch higher or lower.

  • Fades - short ramps in volume at the start and end of a sound that help to create smoother transitions between sounds and remove the clicks and pops caused by cutting and editing.

  • Synchronisation - positioning sounds along a timeline to synchronise with any visuals you might be working with, such as a cutscene.
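To make the fade idea concrete, here is a minimal sketch of a linear fade-in/fade-out applied to a mono buffer of float samples (the function name and default lengths are illustrative, and NumPy is assumed):

```python
import numpy as np

def apply_fades(samples, sample_rate, fade_in_ms=10, fade_out_ms=10):
    """Apply short linear fade ramps to the start and end of a mono buffer."""
    out = np.asarray(samples, dtype=np.float64).copy()
    n_in = int(sample_rate * fade_in_ms / 1000)
    n_out = int(sample_rate * fade_out_ms / 1000)
    if n_in > 0:
        # Ramp the gain from 0 up to 1 over the fade-in region.
        out[:n_in] *= np.linspace(0.0, 1.0, n_in)
    if n_out > 0:
        # Ramp the gain from 1 down to 0 over the fade-out region.
        out[-n_out:] *= np.linspace(1.0, 0.0, n_out)
    return out
```

Even a fade of a few milliseconds is usually enough to remove the click you get when a cut lands mid-waveform.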

I recommend watching this excellent tutorial from Kenny Gioia on how to edit audio in Reaper:
https://youtu.be/W2FduoFjYwk

Audio Processing

In a nutshell, audio processing means manipulating sounds with analog or digital audio effects.

This is where our plugins come into play - having edited and arranged our sounds to create the general impression of what we want to achieve, we can start transforming them into something new using compression, EQ, reverb, distortion, etc.
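As one example of this kind of processing, here is a sketch of a simple tanh waveshaper, a common way to approximate analog-style saturation (the drive parameter and the peak-rescaling choice are my own, not a standard):

```python
import numpy as np

def saturate(samples, drive=2.0):
    """Soft-clip a signal with a tanh waveshaper.

    Higher drive pushes more of the waveform into the curved part of
    the tanh function, adding harmonics; dividing by tanh(drive)
    rescales so that full-scale peaks stay near full scale.
    """
    x = np.asarray(samples, dtype=np.float64)
    return np.tanh(drive * x) / np.tanh(drive)
```

Quiet parts pass through almost linearly while loud peaks are squashed, which is why saturation also behaves a little like gentle compression.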

You can learn more about them in the Audio Effects (Plugins) section of the Learning Roadmap, which can be found in the Sound Design Basics tab.

Additionally, if Reaper is your DAW of choice then I recommend watching this video on how to effectively use plugins in Reaper: https://youtu.be/lFDv75U0nO0

How Do I Know What Plugin To Use?

Knowing which plugins to use, and when, usually comes with practice and a bit of educated guesswork, but there are things you can listen for to guide you: a harsh or muddy frequency balance points towards EQ, an uneven dynamic range towards compression, a dry or lifeless sound towards reverb, and a clean but boring sound towards saturation or modulation.

Experiment with different plugins and processes in response to what you hear, and over time you’ll build up an understanding of how each one works and when to use it.

Modulation & Automation

Modern DAWs let us control pretty much any parameter we like with automation and/or modulation.

Put simply, automation and modulation are ways of changing the parameters of an audio effect or a sound over time. They are similar in principle but subtly different in how they behave.

  • Automation specifies the absolute value of a parameter at a given time, allowing for precise and consistent changes that are the same every time the audio is played back. Automation usually has to be input manually and is displayed as a line graph next to or on top of the audio events it's affecting.

  • Modulation, meanwhile, is all about modifying the value of a parameter relative to its starting position, based on a control signal of some kind. This can be used to create rhythmic or regular changes that would be time-consuming to program by hand or unpredictable and chaotic sounds that you would never produce through manual control.
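The distinction can be sketched in a few lines of Python (hypothetical helpers, NumPy assumed): automation reads absolute values interpolated from hand-placed breakpoints, while modulation offsets a base value with a control signal.

```python
import numpy as np

def automated_value(points, t):
    """Automation: read an absolute parameter value at time t from a
    hand-drawn list of (time, value) breakpoints."""
    times, values = zip(*points)
    return float(np.interp(t, times, values))

def modulated_value(base, depth, lfo_hz, t):
    """Modulation: offset the parameter relative to its base value,
    driven here by a sine control signal."""
    return base + depth * np.sin(2 * np.pi * lfo_hz * t)
```

Play the automation back twice and you get identical values every time; change the control signal feeding the modulation and the same patch behaves differently.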

Check out this video from Catalyze Music Academy that illustrates the difference between automation and modulation in Ableton Live: https://youtu.be/1rczwuoeEfw

Making effective use of automation and modulation in your audio processing is an essential part of sound design, allowing you to create moving and evolving sounds that are exciting to listen to.

Modulation effects like chorus and phaser plugins are self-contained ways to modulate parameters, with the control signal being built into the plugin itself. However, it is also possible to set up your own modulation routing, using tracks in your DAW as the control signal, which can be great for experimentation and happy accidents.

Check out this video by Kenny Gioia demonstrating how to set up Parameter Modulation in Reaper:
https://youtu.be/Dn5_GvsJIhI

Below you can find some of the various ways you can automate and modulate sounds.

Low Frequency Oscillators (LFOs)

LFOs are oscillators that output frequencies usually below 20 Hz. Rather than being used to generate a sound, the shape of the LFO waveform dictates the modulation applied to the target parameters. LFOs usually operate at a specified frequency or can be synced to a tempo, making them great for rhythmic and regular modulation.

Learn more about LFOs:
https://blog.landr.com/how-to-use-lfos/
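As a concrete example, here is a sketch of a sine LFO used as a tremolo, i.e. modulating the gain of a signal (the parameter names and depth convention are illustrative):

```python
import numpy as np

def tremolo(samples, sample_rate, lfo_hz=5.0, depth=0.5):
    """Modulate the gain of a signal with a sine LFO.

    depth=0 leaves the signal untouched; depth=1 swings the gain all
    the way down to silence at the bottom of each LFO cycle.
    """
    t = np.arange(len(samples)) / sample_rate
    # Unipolar sine in [0, 1], so the gain stays between (1 - depth) and 1.
    lfo = 0.5 * (1.0 + np.sin(2 * np.pi * lfo_hz * t))
    gain = 1.0 - depth * (1.0 - lfo)
    return np.asarray(samples, dtype=np.float64) * gain
```

Routing the same LFO to a filter cutoff instead of gain would give you an auto-wah-style effect - the principle is identical.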

Envelopes

Envelopes provide a ‘rise and fall’ motion that can be used to modulate a parameter before returning it to its original value. They are often triggered by a volume threshold, and so are a great way to create modulation that follows and responds to a track.
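A classic way to implement this is an envelope follower, sketched below with the usual one-pole attack/release smoothing (the time constants are illustrative defaults):

```python
import numpy as np

def envelope_follower(samples, sample_rate, attack_ms=5.0, release_ms=50.0):
    """Track the level of a signal with separate attack and release smoothing.

    The output is a slow-moving control signal that can be routed to
    another parameter (gain, filter cutoff, etc.) so the modulation
    follows the track.
    """
    attack = np.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    release = np.exp(-1.0 / (sample_rate * release_ms / 1000.0))
    env = np.zeros(len(samples))
    level = 0.0
    for i, x in enumerate(np.abs(np.asarray(samples, dtype=np.float64))):
        # Rise quickly when the input is louder than the current level,
        # fall back slowly when it is quieter.
        coeff = attack if x > level else release
        level = coeff * level + (1.0 - coeff) * x
        env[i] = level
    return env
```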

Macros

Macros allow us to link any number of parameters to a single knob and then automate or modulate them together. They’re fantastic for creating lots of movement with a single control.

MIDI

We can link parameters to MIDI controllers and then record the automation or modulation live, perfect for people who like to work hands-on.

If you’re a Reaper user then check out this great tutorial on how to add automation to parameters:
https://youtu.be/PyRpS_BTlzE

Audio Layering

What Is Layering?

Layering is when you combine multiple sounds to create a single new sound or a sequence of new sounds. This is especially useful when designing complex, multi-stage sounds, as it allows you to break them down into sections that can be designed individually, or to create variations by changing the layers within a sound.

This video from MarshallMcGee provides a great overview of the topic:
https://youtu.be/0biAgn2ct0A

There are a couple of different techniques for layering which are demonstrated in this great video from David Dumais: https://youtu.be/8xAjLS7oMsU
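At its core, layering is just summing buffers with a gain per layer - the same thing your DAW's mixer does. A rough sketch (the helper name and the clip-only normalisation policy are my own choices):

```python
import numpy as np

def mix_layers(layers, gains):
    """Sum variable-length mono layers into one buffer, with a gain per
    layer for balancing."""
    length = max(len(layer) for layer in layers)
    out = np.zeros(length)
    for layer, gain in zip(layers, gains):
        buf = np.asarray(layer, dtype=np.float64)
        out[:len(buf)] += gain * buf
    # Scale down only if the summed signal would clip.
    peak = np.max(np.abs(out))
    if peak > 1.0:
        out /= peak
    return out
```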

Tips For Better Layering

When layering sounds the two biggest things to look out for are frequency content and cadence.

Frequency Content

As you might expect, frequency content refers to what frequencies are present in the sound. An effective approach when layering is to combine sounds that have different frequency content so that they don’t clash and mask each other.

Here is a short video from Accusonus explaining the concept:
https://youtu.be/201N6OT4LM4

A simplified way of thinking about this when doing sound design is to split the frequency spectrum roughly into 3 bands:

  • Low (20 Hz-200 Hz)

  • Mid (200 Hz-2,000 Hz)

  • High (2,000 Hz-20,000 Hz)

When you combine sounds, make sure that you don’t have too many competing elements in each band, using subtractive EQ to remove unwanted content and create space if you need to. For example, if you have 2 sounds with a lot of low-frequency content, you might use an EQ to remove the low end from one of them so that they blend together better.

Just beware that you don’t go overboard with your EQ, as you can end up with very thin and lifeless sounds if you cut too much.
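To hear these three bands in isolation, you can split a signal with crossover filters. A sketch using SciPy Butterworth filters at the band edges above (the filter order is an arbitrary choice):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def three_band_split(samples, sample_rate, low_cut=200.0, high_cut=2000.0):
    """Split a mono signal into low / mid / high bands."""
    low = butter(4, low_cut, btype="lowpass", fs=sample_rate, output="sos")
    mid = butter(4, [low_cut, high_cut], btype="bandpass",
                 fs=sample_rate, output="sos")
    high = butter(4, high_cut, btype="highpass", fs=sample_rate, output="sos")
    return (sosfilt(low, samples), sosfilt(mid, samples), sosfilt(high, samples))
```

Soloing each band of a layered sound quickly shows where two layers are fighting - the same listening trick a multiband EQ gives you.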

Cadence

Cadence refers to the ‘pronunciation’ of a sound - its rhythm and emphasis, and how it changes over its length.

When layering sounds it is important to be mindful of how the layers interact, and how this affects the cadence of the overall sound. We can have a huge impact on the character of the final sound by changing the timing and alignment of the layers, especially when designing sounds with heavy transients like gunshots, explosions and impacts.
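One simple way to experiment with this is to nudge a layer in time by padding silence in front of it (a hypothetical helper, NumPy assumed):

```python
import numpy as np

def offset_layer(layer, sample_rate, offset_ms):
    """Delay a layer by padding silence in front of it, shifting where
    its transient lands relative to the other layers."""
    pad = int(sample_rate * offset_ms / 1000.0)
    buf = np.asarray(layer, dtype=np.float64)
    return np.concatenate([np.zeros(pad), buf])
```

Sliding the sharp ‘crack’ layer of a gunshot a few milliseconds against the low ‘boom’ layer noticeably changes the punch of the whole sound.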

Here is a clip out of a MarshallMcGee video demonstrating the process:
https://youtu.be/DWIrBcM_Bxo?t=288

Putting It All Together

Before you continue, a reminder that all of this theory is most effective when it gets put into practice - that’s why I highly recommend doing as much sound design in your free time as you can, especially when you’re first starting out.

Take 3-10 second gameplay or trailer clips and replace the audio - focus on specific parts like a weapon, a door or an engine and put lots of detail into them. Doing this consistently is the fastest way to improve.

And remember: at the end of the day, if it sounds good that’s all that matters.

Workflow Tips For Reaper Users

You can speed up your workflow massively in Reaper by learning the keyboard shortcuts of the actions you use the most.

This is a list of all of the keyboard shortcuts in Reaper:
https://user.cockos.com/~glazfolk/ReaperKeyboardShortcuts.pdf

However, I recommend creating your own custom keyboard shortcuts and custom actions.

Custom actions allow you to chain multiple actions together and then trigger them with a single button press.

For example, if you’re editing and exporting lots of footstep sounds, you could chain the following actions together into a custom action:

  1. Split sounds at transient - this cuts the sound at each transient to separate them

  2. Strip silence - this removes any silent parts

  3. Create a region for each item - this will make a region for each individual audio file and name it automatically

All of the footsteps are now edited and arranged into separate regions, ready to export with the batch render tool, in far less time than it would take to edit them all by hand.

This is just one example, so get creative and optimise Reaper to suit your workflow.
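For a feel of what a chain like this does under the hood, here is a rough offline approximation of the split-and-strip-silence steps in Python (the threshold and gap values are arbitrary; Reaper's own actions are far more configurable):

```python
import numpy as np

def strip_silence(samples, sample_rate, threshold=0.01, min_gap_ms=100.0):
    """Split a mono buffer into its non-silent chunks.

    A sample counts as 'loud' above the threshold; a chunk ends once
    at least min_gap_ms of consecutive quiet samples follow it.
    """
    samples = np.asarray(samples, dtype=np.float64)
    min_gap = int(sample_rate * min_gap_ms / 1000.0)
    loud = np.abs(samples) > threshold
    chunks, start, quiet_run = [], None, 0
    for i, is_loud in enumerate(loud):
        if is_loud:
            if start is None:
                start = i          # a new chunk begins here
            quiet_run = 0
        elif start is not None:
            quiet_run += 1
            if quiet_run >= min_gap:
                # Close the chunk at the last loud sample.
                chunks.append(samples[start:i - quiet_run + 1])
                start, quiet_run = None, 0
    if start is not None:
        chunks.append(samples[start:])
    return chunks
```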

If you want to learn more about actions in Reaper then I recommend watching this tutorial:
https://youtu.be/CZ1IliW_0p4
