I got into this business thinking I would be an engineer in a studio. That’s not how it worked out. Live sound got hold of me, and that was pretty much it. Even so, I do some occasional studio-style mixing, and I think I’m starting to get the hang of it.
One of the major problems with recordings is that you don’t control the playback system. One person might play your music on a rig that’s built to reproduce the entire audible range of sound with a laser-flat response curve. Another person might be listening on barely-working earbuds. Someone else might be one of those incredibly annoying people who listen to music by pumping it through their phone’s speaker. CAN WE STOP THAT PLEASE?
Even before the age of smartphones, “translation” was a big issue for folks making records. The question that was constantly asked was, “How do I make this tune sound good everywhere?”
In my mind, that’s the wrong question.
The real question is, “Does this mix continue to make sense, even if the playback system has major limitations?”
I realize that this is an appeal to absurdity, but I see it as counterproductive to try to make a song sound “good” on a half-dead clock radio. A mix played through a small, damaged speaker should sound like a mix played through a small, damaged speaker. Spending hours and hours trying to fool people into thinking they’re listening to a better playback device isn’t worth it for most folks, especially because the mix will probably then sound strange on more decent systems.
But spending some time on making sure that your recording basically works in a variety of situations IS worth it.
It’s actually pretty easy. If you already have a digital audio workstation of some kind (Pro Tools, Logic, Cubase, Reaper, GarageBand, etc, etc), you won’t need any additional equipment. Studios have long kept small, limited-bandwidth speakers that they could route mixes through. But that was before you could get another equalizer, basically for free, simply by running another instance of a plugin.
And that’s what I recommend doing.
Put an extra EQ plugin across your main mix. Set that plugin to kill off both the low end and the high end of your tune. A high-pass filter set at about 200 Hz and a low-pass filter set at about 5000 Hz should be a good start. Collapse the mix to mono if you can. Your mix should now sound like it’s being played through a phone speaker (gah!), or pretty mediocre earbuds.
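If you’d rather hear the effect outside your DAW, the same “crappy speaker” idea can be sketched in a few lines of Python. This is a minimal illustration, not a substitute for a real EQ plugin: it assumes you already have your mix as a stereo NumPy array, and it uses SciPy Butterworth filters (with an arbitrary 4th-order slope) as stand-ins for the high-pass and low-pass filters described above.

```python
# Rough "crappy speaker" simulator: band-limit to ~200 Hz - 5 kHz
# and collapse to mono, mimicking a phone speaker or cheap earbuds.
import numpy as np
from scipy.signal import butter, sosfilt

def crappy_speaker(stereo, sample_rate=44100):
    """Sum a stereo mix to mono, then high-pass at 200 Hz and
    low-pass at 5 kHz. Filter order (4) is an arbitrary choice."""
    mono = stereo.mean(axis=1)  # collapse L/R to mono
    hp = butter(4, 200, btype="highpass", fs=sample_rate, output="sos")
    lp = butter(4, 5000, btype="lowpass", fs=sample_rate, output="sos")
    return sosfilt(lp, sosfilt(hp, mono))

# Example: one second of stereo noise standing in for a real mix
mix = np.random.randn(44100, 2)
limited = crappy_speaker(mix)
```

Run your actual mix through something like this (or just use the bypassable EQ in your DAW, which is simpler) and listen back on your regular monitors.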
Does the mix still make sense? Can you still hear all the instruments that you feel are crucial? Are the vocals still intelligible? If not, start making changes. Get to a place that you like, and then pop the “crappy speaker” EQ into bypass. Restore the stereo field, if you were working in mono before.
With all the high and low end restored, does the mix still make sense? Are the bass and kick overwhelming in the bottom end? Is there too much traffic way up high? If so, make changes in just those areas – the areas that were cut out by the EQ. Try not to touch the midrange much, though, because that’s the range you just finished dialing in.
Do some back-and-forth checking as you work. You’ll know that you’re done when the mix still works in both scenarios. The mix without the “sucky playback system” EQ should sound “good,” assuming that you think your regular playback monitors sound good. The mix with the EQ should work, and be basically listenable. Your tune will now have a much better chance of “translating” in multiple scenarios.
And, as a final opinion, I would say this: If your mix absolutely must be mind-blowing on a specific format, make a special mix just for that format. If, for example, you know that a huge chunk of your fanbase is definitely going to be listening on AirPods, create (and clearly label) a mix that’s designed to be stunning just for them.
But, if you’re not really sure what people will be listening on, basic attention to translation should go a pretty long way.