Audio Processing In Graphical Terms

The managing, mangling, massaging, and massacring of audio can be strangely abstract. I have a theory about why. My notion is that we are much more culturally ready to be analytical about visual information than we are about sonic information. Internet culture only reinforces this, because our primary Internet access point (the Web) is all about displaying things on screens. It’s primarily a visual medium.


One of the realities that has started to really blow my mind as an audio-human is how so many concepts are cross-disciplinary. For instance, there was a moment when I realized that a monitor loudspeaker and a microphone form an acoustical resonant circuit that’s basically the same as an electronic resonant circuit. When that moment occurred, it was like being able to see the fabric of the universe. In much the same way, there comes a time when you realize that sound and light can often be thought of in the same terms.

When we consider light as a wave, different light colors have different frequencies. For sound, different pitches have different frequencies. Interestingly, both technical and non-technical discussions of sound make use of this as an analogy. For instance, we refer to lower frequency sonic material as being “warm,” in much the same way that lower frequency colors of light (reds, oranges, and some yellows) are called “warm colors.” Different kinds of audio noise with different power distributions are also referred to by color – noise with a power distribution that favors high frequencies is called “blue” or “azure,” whereas noise with equal power per frequency is called “white.” Sound pressure waves have intensity, and so do light waves. As such, both sound and visual information can be discussed in terms of dynamic range.
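The color-to-spectrum mapping can even be sketched numerically. As an illustration (the function name and the 0.5 exponent for the blue-leaning case are my own choices, not a standard), noise colors come from weighting a flat spectrum by a power of frequency:

```python
import numpy as np

def shaped_noise(n, slope, seed=0):
    """Generate noise whose amplitude spectrum is weighted by f**slope.
    slope=0.0 leaves the spectrum flat (white noise: equal power per
    frequency). slope=0.5 makes power rise with frequency, since power
    is amplitude squared -- a "blue"-leaning tilt."""
    rng = np.random.default_rng(seed)
    bins = n // 2 + 1
    flat = rng.standard_normal(bins) + 1j * rng.standard_normal(bins)
    weights = np.where(np.arange(bins) > 0, np.fft.rfftfreq(n), 1.0) ** slope
    weights[0] = 0.0  # zero the DC bin: no constant offset
    return np.fft.irfft(flat * weights, n)

white = shaped_noise(4096, 0.0)
blue = shaped_noise(4096, 0.5)
```

Measuring the two outputs shows the tilt: the blue-leaning noise carries a much larger share of its power in the top half of the spectrum.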

Bearing in mind these strongly analogous characteristics between visual and auditory input, it becomes possible to represent audio processing in a graphical way. This can be very helpful in understanding what’s going on with a band’s arrangements and tonal choices on stage, as well as getting a feel for what studio and live audio folks are doing with all those knobs and switches.

Compression – Dynamic Range Reduction

Here’s a picture of a mixing console that has a rather pronounced dynamic range. That is, there are deep darks, and brightly lit areas…and not a whole lot that’s in between. It’s much the same as an instrument or singer who’s either really quiet or VERY LOUD. (In the case of this image, as in the case of certain kinds of music, highly pronounced “dynamic swing” is dramatically appropriate.)

Dynamic range compression (or just “compression”) reduces the range of that dynamic swing. This can help to reveal details of a performance that might otherwise be swamped by some other sound, or help to prevent one instrument’s extra-loud passage from obliterating everything else. In terms of a band’s arrangement and playing style, sometimes the reduction of dynamics is necessary for certain situations. In a noisy bar, for instance, it’s a good idea to avoid being so quiet that the audience drowns the delicate passages. However, you also don’t want to be so loud that the audience is driven away.
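For the electronically inclined, the basic math can be sketched in a few lines. This is a simplified model of my own (hard knee, no attack or release smoothing), not any particular unit’s algorithm:

```python
import numpy as np

def compress(signal, threshold_db=-20.0, ratio=4.0):
    """Hard-knee compressor sketch: for every `ratio` dB the input
    rises above the threshold, the output rises only 1 dB. Real
    compressors smooth the gain change with attack/release times;
    this version reacts instantly, sample by sample."""
    level_db = 20.0 * np.log10(np.abs(signal) + 1e-12)
    over_db = np.maximum(level_db - threshold_db, 0.0)
    gain_db = over_db / ratio - over_db   # negative above threshold
    return signal * 10.0 ** (gain_db / 20.0)

loud, quiet = compress(np.array([1.0, 0.05]))
```

A full-scale sample (0 dB, which is 20 dB over the threshold) comes out 15 dB quieter, while a sample below the threshold passes through untouched: the dynamic swing between them has shrunk.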

Here’s the same picture with reduced dynamic range – compression – applied. The parts that were once a very deep grey are now rather lighter, and some of the details that were lost in shadow have been partially revealed.

In this particular case, the appropriateness of this is debatable. As I said earlier, the original intent of the picture was to have a large dynamic swing, details in shadow be danged! This reveals an important issue that surrounds both electronic dynamic range compression and playing with limited dynamics: You can make your “sonic picture” unpleasant in a hurry. This is not to say that compression is bad at all times, but rather to say that compression is not always helpful or desirable.

Here’s the same picture again, now having been “smashed flat” in terms of dynamic range.

Yuck. The picture has lost a tremendous amount of definition, because the contrast between the “quiet” and “loud” parts has been wrecked. The picture is approaching a point where it’s all one level, and that’s neither visually nor aurally appealing. If this picture were sound, it would have been a victim of an engineer crushing it to death with a limiter, or a casualty of a band that just plays everything as loud as possible at all times.

Another revealing thing about this picture is that the details that were in deep shadow have NOT been recovered. Instead, all you have is patches of noise. The same thing happens in audio: Sonic information that actually gets drowned in the acoustical or electronic noise floor is lost forever. Further, overcompression followed by the necessary, compensating gain boost can make the noise floor much more audible than it ought to be.
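To put some hypothetical numbers on that (the figures below are mine, purely for illustration):

```python
# Hypothetical figures, for illustration only.
hiss_db = -60.0           # room/preamp noise floor, in dBFS
peak_reduction_db = 15.0  # heavy compression knocks this off the peaks
makeup_db = 15.0          # gain added afterward to restore the peaks

# The hiss sat below the compressor's threshold, so it was never
# turned down -- but the makeup gain raises everything, noise included.
new_hiss_db = hiss_db + makeup_db
print(new_hiss_db)  # -45.0: the noise floor is now 15 dB more audible
```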

Expansion And Gating – Dynamic Range Augmentation

This picture also has a pretty decent amount of dynamic swing, although the range isn’t as pronounced as the console picture I used for the compression example.

Dynamic range expansion allows us to isolate the “loud enough” parts of a signal from lower-level material. This can be used both for artistic effect and for limited amounts of problem solving. For instance, imagine that the light parts of the picture are a drum hit, and the darker parts are unnecessary bleed from cymbals, monitor speakers, amplifiers, and whatever else you can imagine. Apply a bit of dynamic range expansion, and…

…the parts of the sonic (or visual) signal that were already emphasized are now MORE emphasized. This reveals a limitation of expansion and gating – namely, that dynamic range expansion is only effective on signals that already have a sufficiently large amount of dynamic swing. Large dynamic swings allow the processor to discriminate between what you want to keep and what you want to ignore. On the flipside, the processor can’t help you with signals where the unwanted material is at a similar level to the desired signal.

This picture is an example of “full” gating, which differs from expansion in that any part of the signal that doesn’t cross the “loud enough” threshold is pushed all the way into silence. Visual silence is blackness; the absence of light.

This is pretty extreme, although the subject of the picture is still probably identifiable. It’s the same way for sound. Full gating can be tough to get right, and it’s pretty easy to kill off parts of the sound that you actually want. At the same time, though, the tradeoff might be worth it for one reason or another.
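Both behaviors can be sketched numerically. The threshold and ratio below are arbitrary values of mine, and real units smooth their decisions over time rather than acting per sample:

```python
import numpy as np

def expand(signal, threshold=0.1, ratio=2.0):
    """Downward expander sketch: below-threshold material has its level
    pushed further down (raised to the `ratio` power, measured relative
    to the threshold); louder material passes through untouched."""
    mag = np.abs(signal)
    new_mag = np.where(mag < threshold,
                       threshold * (mag / threshold) ** ratio,
                       mag)
    return np.sign(signal) * new_mag

def gate(signal, threshold=0.1):
    """Full gate sketch: anything under the threshold becomes silence."""
    return np.where(np.abs(signal) >= threshold, signal, 0.0)

hit_and_bleed = np.array([0.8, 0.05])  # a loud drum hit, then quiet bleed
expanded = expand(hit_and_bleed)       # bleed pushed down, not removed
gated = gate(hit_and_bleed)            # bleed silenced outright
```

In both cases the loud hit is untouched; the difference is whether the quiet bleed is merely made quieter or removed entirely.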

EQ (And Arrangement)

As I mentioned before, both audio and light waves have frequencies. Equalization is altering the gain of specific frequency ranges, whether for stylistic reasons or to correct an issue. To get started, here are two pictures of a mini mixer. One picture is relatively neutral, while the other is fairly blue. The blue picture emphasizes high-frequency material. If it were an audio signal, it might have a somewhat excessive amount of hiss or “snap.”


Using EQ, we can selectively dial back the excessive highs, so that the second picture has a balance that’s closer to the first.
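In code, a crude version of “dial back the highs” can be sketched by scaling frequency bins directly. The brick-wall split below is a simplification of mine; real equalizers use smooth filter curves:

```python
import numpy as np

def cut_highs(signal, sample_rate, cutoff_hz, gain_db):
    """EQ sketch: scale every frequency bin above `cutoff_hz` by
    `gain_db`. A real equalizer applies a smooth filter shape rather
    than this brick-wall split, but the idea -- gain applied to a
    chosen frequency range -- is the same."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sample_rate)
    spectrum[freqs > cutoff_hz] *= 10.0 ** (gain_db / 20.0)
    return np.fft.irfft(spectrum, len(signal))

# A "too blue" signal: equal parts 50 Hz (low) and 200 Hz (high).
t = np.arange(1000) / 1000.0
sig = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 200 * t)
balanced = cut_highs(sig, 1000, 100.0, -12.0)  # pull the highs down 12 dB
```

Afterward the 200 Hz component sits 12 dB below the 50 Hz component, where before they were equally strong.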


Now, that’s all fine and good.

But, do you notice that the two pictures are now a little bit harder to distinguish? In a subtle way, they sort of “run into” each other. This exposes a problem that can crop up in both EQ and arrangement. If everything is equalized to emphasize the same frequency range, or if different instruments don’t play in different tonal ranges, different parts become less distinct. Sounds all start to compete for the same piece of sonic real-estate, and you can quickly get into a situation where raw volume makes one part dominate all the others. On the other hand, if one part carries the low end (red) material, and the other part carries the midrange (yellow/green), you get something like this.


Although two different frequency ranges have been emphasized, neither picture looks wholly unnatural. There’s enough of a difference to easily pick them apart from each other, even though they’re both about equally “loud.”

Just like other forms of processing, EQ can definitely be abused. For instance, some folks love to make the “smiley face” curve on graphic equalizers. Here’s what that looks like in a visual context:


The result is all high and low frequency content (reds and blues) with nothing in the middle. Boom, thump, and sizzle are lots of fun, but they aren’t actually where most of the music is. Scooped EQ is certainly exciting and attention getting, but it’s so unnatural that it can quickly get annoying. Beyond that, anything with mid-frequency content, whether a desirable sound or an undesirable sound, will very easily dominate the sonic picture. (For guitarists, this is a very important consideration. Scooped mids give distorted guitar a very aggressive tone, but going too far means that the actual notes are annihilated by the rest of the band.)
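As a tiny numeric illustration (the band centers and dB values below are hypothetical, not taken from any real device):

```python
import numpy as np

# A hypothetical 9-band graphic EQ set to the classic "smiley face"
# curve: boosted extremes, scooped middle (values in dB).
bands_hz = np.array([63, 125, 250, 500, 1000, 2000, 4000, 8000, 16000])
smiley_db = np.array([6.0, 3.0, 0.0, -6.0, -9.0, -6.0, 0.0, 3.0, 6.0])

# Relative to the boosted extremes, the 1 kHz band -- where much of
# the actual musical information lives -- ends up 15 dB down.
midrange_deficit_db = smiley_db.max() - smiley_db[bands_hz == 1000][0]
print(midrange_deficit_db)  # 15.0
```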


Reverb – Sonic Motion Blur

Reverberation, whether electronic or acoustic in nature, is basically the audio equivalent of motion blur. If it were Gaussian blur, the sound would “smear” both forwards and backwards in time, but that’s not what happens. The wash of reflections extends after the sonic event only, because the sound has to actually…you know…happen before any reverb can be generated.
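That forward-only behavior shows up even in a toy model. Below is a single feedback delay line (the delay and feedback values are arbitrary picks of mine), which is about the simplest building block of artificial reverb:

```python
import numpy as np

def feedback_comb(dry, delay, feedback):
    """Toy reverb sketch: one feedback delay line. Every `delay`
    samples, a copy of the output is fed back in at reduced level, so
    the decaying 'blur' extends only forward in time -- reverb cannot
    smear a sound earlier than the moment it happens."""
    wet = np.zeros(len(dry))
    for n in range(len(dry)):
        wet[n] = dry[n] + (feedback * wet[n - delay] if n >= delay else 0.0)
    return wet

impulse = np.zeros(100)
impulse[10] = 1.0                      # a single "clap" at sample 10
tail = feedback_comb(impulse, 20, 0.5)
```

The clap produces a train of echoes at samples 30, 50, 70… each half as loud as the last, and nothing at all before sample 10.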

Anyway, here’s a picture that’s “dry.” It has no artificial blur added.


“Adding some verb” means that a certain amount of sonic motion blur is blended with the non-blurred, direct signal. At a certain point, an effect that’s highly noticeable can be produced. Even so, the different objects in the image retain some intelligibility. They’re all still identifiable and distinct from each other, even with the blurring of the “visual reverb.”


Even with this somewhat restrained picture, a problematic reality of reverberation begins to reveal itself. All reverb, whether from a processor or from the room, reduces musical definition. Individual notes and individual musical parts become less distinct from each other, because their sonic presence is extended and “smudged” in time. In a certain way, reverb acts as a kind of time-variant dynamic compression. Whether visually or sonically, “reverberant blur” smooths sharp volume-level transitions into softer versions of themselves. This is not necessarily bad. Properly applied reverb can be beautiful, but go too far and…


…everything loses its shape.

This picture also reveals why acoustical problems are hard to fix via electronic processing. Would making the picture brighter fix the blurring? No – the blur would just be brighter. Would pulling the yellow colors out of the image help? No – not fully, anyway. In audio terms, this means that making the PA system louder just makes the reverb louder. Trying to EQ around the reverb has limited usefulness (although it can sometimes help a bit in the right situations, if you’re careful).

In audio, just as in visual art, a canvas that imposes its will upon the painting is tough to fix with brushes and paint – no matter how expensive they are.

But fully getting into that analogy is probably best saved for a different time.