Mastering loudness, dynamics, and EQ in today's music

How loud should you master? 

If only the answer to this ultimate question were as simple as Hitchhiker’s Guide: 42. Alas, the answer is complicated and subjective. 

 
I can hear this in Alan Rickman’s voice


 

A picture is worth a thousand words, so let’s start at the end. 

Here are three tracks, each mastered at a different loudness level, listed from loudest to softest. 

  • Dangerous (Big Data): -5.5 LUFS* Integrated (loudest).

  • Lethal Skies (Hooverphonic): -8.1 LUFS Integrated.

  • Congruence (Phutureprimitive): -11.6 LUFS Integrated (softest).

*LUFS = loudness units relative to full scale. The closer to zero, the louder the song. This is the currently agreed-upon standard for assessing loudness. More here…

Songs at original volume


Now here’s what happens when these songs are normalized by Spotify. 

Songs normalized to equal loudness levels


Wow! What has happened here? Why does the previously loudest song now have the shortest waveform? 

These songs are now approximately the same loudness. Each song has been assigned a “loudness penalty,” reducing its volume by a specific amount. This feature is employed by Spotify, Apple Music, YouTube, and most other services so that you can stream music without constantly adjusting the volume of your playback system. These days normalization is turned on by default and is reportedly used by 85% of listeners. 

In Spotify, normalization reduces (or increases) the volume of each track to about -14 LUFS (we won't go into the gory details of ReplayGain vs. LUFS). The louder the song, the greater the loudness penalty, and you can see this effect in the waveforms. 
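To make the arithmetic concrete, here is a minimal sketch of the penalty calculation: the gain change is roughly the target level minus the track's integrated loudness. Real services add wrinkles (album mode, limits on boosting quiet tracks), so treat this as an approximation rather than Spotify's actual algorithm.

```python
# Approximate loudness penalty: target LUFS minus the track's integrated LUFS.
# A negative result means the track gets turned down.
TARGET_LUFS = -14.0  # Spotify's default reference level

tracks = {
    "Dangerous (Big Data)": -5.5,
    "Lethal Skies (Hooverphonic)": -8.1,
    "Congruence (Phutureprimitive)": -11.6,
}

for name, lufs in tracks.items():
    penalty = TARGET_LUFS - lufs
    print(f"{name}: {penalty:+.1f} dB")

# Dangerous (Big Data): -8.5 dB
# Lethal Skies (Hooverphonic): -5.9 dB
# Congruence (Phutureprimitive): -2.4 dB
```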

So what? Well, not only can you see how the waveform peaks are higher in the third song, you can hear the difference as well. The greater dynamics provide more punch and depth, as one would expect. 

Are dynamics the new loudness? 

If this is the end result of normalization, then it moves the “loudness wars” in a very different direction. Rather than squashing the dynamics to push the LUFS level up higher, which is (in many cases) not aesthetically desirable, artists can master using dynamics that fit the song, yet still compete on loudness. 

Mastering engineer Ian Shepherd has written extensively about the loudness penalty and helped to create the Loudness Penalty website and plug-in, where you can check out the effect of the loudness normalization performed by various streaming services. (This is how I determined how much to reduce the audio levels in the waveforms above. Also, “Dynamics are the new loud” should be attributed to Ian.) 

More on the normalization settings of various music services here.

The Test 

 
Simulation. Do not attempt.


 

As the artist Edge Of The Universe (shameless plug), I was starting the mastering stage for my next album and wanted to get a reference point. What are artists doing these days? 

What followed was a rabbit hole of data collection and comparison, resulting in the chart below. I chose some current hits, some songs I like, some I was curious about, and some reference songs.


*All songs above were purchased, or data was analyzed by using Rogue Amoeba Loopback to stream into the plug-ins, with Normalization or Sound Check turned off.

Example of loudness tracked across the song in Insight 2

Example of loudness penalty results on Loudnesspenalty.com

Monitoring dynamics in ADPTR Metric AB

Not having access to the uncompressed masters, I used MP3 and M4A files, as well as streaming audio with Normalization or Sound Check turned off. While this could certainly affect True Peak due to the nature of lossy encoding, LUFS or PLR would be affected only marginally, if at all.

As you can see, in at least this small sample of songs, the LUFS Integrated levels land in a range from -5.4 to -11.6. In general, the louder the master, the lower the dynamics, as indicated by PLR. PLR (Peak to Loudness Ratio) and PSR (Peak to Short-term Loudness Ratio) are metrics used to assess a song's dynamics: a PLR of 12 is considered dynamic, and a PLR of 5 squashed. More on PLR and PSR here.
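If you want to estimate PLR on your own files, here is a minimal sketch assuming the pyloudnorm and soundfile Python packages are installed; the file name is a hypothetical placeholder. Note that it uses sample peak rather than oversampled True Peak, so it slightly underestimates the real ratio.

```python
# Estimate PLR: peak level (dBFS) minus integrated loudness (LUFS).
import numpy as np
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("my_master.wav")  # hypothetical file name

peak_db = 20 * np.log10(np.max(np.abs(data)))          # sample peak, dBFS
loudness = pyln.Meter(rate).integrated_loudness(data)  # integrated LUFS

print(f"peak {peak_db:.1f} dBFS, {loudness:.1f} LUFS, PLR {peak_db - loudness:.1f}")
```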



Who is right? 

Alrighty, if… 

A) Spotify recommends providing a master at -14 LUFS 

B) Most streaming services have normalization on by default 

C) Normalization is the trend and will likely continue to expand 

Then why are so many songs still mastered at high LUFS levels? 



Possibilities: 

  1. Aesthetic choice. This iZotope article sums up nicely how -14 should not necessarily be a target, but rather a reference. Master in a way that best suits the music.

  2. They are concerned about loudness when normalization is off or unavailable. The Spotify web player and Spotify apps integrated into third-party devices do not normalize as of yet, and some services, such as Bandcamp, do not normalize at all.

  3. Habit / reference tracks / typical for that genre



We must go deeper...

I spoke with Brian Lee, an engineer at Gateway Mastering, about this. Brian indicated that loudness is usually dictated by the client’s mix engineer by way of reference tracks, and their masters land anywhere from -6 to -12 LUFS. They also provide Apple Digital Masters, which is cool. Per Brian:

“With music streaming services being the way most people hear music, many musicians (not all) are bringing down their mastering levels and letting the mixes breathe a little more, which I say is a nice thing."

I also posed the question to Ian Shepherd, who had this to say:

“I suspect it’s a mixture of reasons, and I’m not sure it’s all down to mastering engineers, either. Many mixers are pushing levels higher themselves these days, either because their clients are comparing without normalisation, because they want to be ‘in charge’ of the process rather than the mastering engineer, for artistic reasons or just bragging rights. And the problem is that if the client is already accustomed to a super-loud mix, it’s very hard for a mastering engineer to pull back from that. Other reasons include ‘that’s the way I’ve always done it and my clients are happy’, ‘it will still sound louder online even when normalised’ (not reliably true, in my experience, although it can happen), etc.”

Ian also reiterated the fact that -14 is a reference, not a target. Which brings us to… 


The Bottom Line

Songs are mastered to all different levels. Quite possibly many are mastered hotter than they need to be, and others mastered to -14 when they shouldn’t be. And there are a multitude of reasons why songs end up at different levels.  

I think Ian put it quite well: 

“I think people are really noticing that the sweet spot is somewhere in between, and depends on the material - and I hope they’ll start just doing what’s right for the music, and just check that it still sounds good when normalised too. 

The positive thing about normalisation is that it’s not really possible to “game” the loudness effectively, so it becomes a much less important factor. Make the music sound great, and know you won’t be blown completely out of the water by other super-loud stuff.” 

Personally, I will be attempting to use the following as guidelines, but letting the album and material determine where things land.

  • -11 to -13 LUFS Integrated

  • -9 to -10 LUFS Short Term max

  • -1 to -2 dBTP True Peak (all services recommend this to avoid clipping in the encoding process)

  • 10-12 PLR, typically staying above 8 PSR
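As a quick sanity check during mastering, the ranges above can be codified into a small script. This is a sketch only: the measurement values are hypothetical placeholders you would read from your metering plug-in (or from the PLR script earlier).

```python
# Compare one master's measurements against the guideline ranges above.
GUIDELINES = {
    "integrated_lufs":     (-13.0, -11.0),
    "short_term_max_lufs": (-10.0,  -9.0),
    "true_peak_dbtp":      ( -2.0,  -1.0),
    "plr":                 ( 10.0,  12.0),
}

measurements = {  # hypothetical values for one candidate master
    "integrated_lufs": -11.8,
    "short_term_max_lufs": -9.4,
    "true_peak_dbtp": -1.2,
    "plr": 10.6,
}

for key, (low, high) in GUIDELINES.items():
    value = measurements[key]
    status = "ok" if low <= value <= high else "check"
    print(f"{key:20s} {value:6.1f}  [{low} .. {high}]  {status}")
```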


Other notes 

In looking at the song loudness history, it was common for the integrated LUFS level to build up over the course of the song. Translation: you don’t need to hit the loudness peak right out of the gate (unless you want to). The final third of the song is typically when everything is firing, the mix is dense, and levels are topping out. If you’re flatlining the whole way through, there’s no loudness climax, as it were. It’s a subjective decision, obviously.
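To see this build-up on your own material, here is a minimal sketch (again assuming pyloudnorm and soundfile; the file name is a hypothetical placeholder) that measures loudness in consecutive 3-second windows. It only approximates true short-term LUFS, which uses a sliding window, but it is enough to reveal the trend.

```python
# Track loudness across a song in consecutive 3-second windows.
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("my_master.wav")  # hypothetical file name
meter = pyln.Meter(rate)

window = 3 * rate  # 3 seconds of samples, as in short-term LUFS
for start in range(0, len(data) - window, window):
    loudness = meter.integrated_loudness(data[start:start + window])
    print(f"{start / rate:6.1f}s  {loudness:6.1f} LUFS")
```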



Tonal Balance

OK that’s loudness and dynamics, but how about EQ? How did these songs stack up? 

iZotope offers a very useful tool called Tonal Balance, which averages the song’s EQ spectrum over a short window (i.e., the last 10 seconds). 

They’ve done the hard work of profiling thousands of songs to provide an average EQ range for different genres. Sometimes songs lined up perfectly; other times, not so much (after all, it is an average). There was quite a bit of coloring outside the lines, but the songs sound great, so EQ balance is clearly used creatively and to the advantage of the material. 

Even though these are short term snapshots, they tended to be representative of the song.

Some songs were squarely in the range and sounded that way.

Tycho shaving off the highs for that lofi effect

Being light on other instrumentation, the bass in Bad Guy by Billie Eilish takes up all the energy

Asura dropping the low mids and boosting the highs. The song sounds amazing, so it works!

Conversely, Carbon Based Lifeforms pushing up the low mids for a full dreamy sound

 



DIY Time

The normalization test is easy to perform with your own work. Try this:

  1. Master your mix at two different levels. One loud, one at a level that feels right.

  2. Drop them into the Loudness Penalty tool and get the numbers.

  3. In your DAW, reduce the gain of each master by the penalty reported for the service you are referencing (a code sketch of this step follows below).

  4. Listen.

Does the louder master still sound better than the dynamic master once both are normalized? A pretty straightforward test. 
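For step 3, the gain change is just the penalty reported by the tool, applied as a plain volume offset. Here is a minimal sketch assuming the soundfile Python package is installed; the file names and penalty values are hypothetical examples.

```python
# Simulate streaming normalization by applying the reported loudness penalty.
import soundfile as sf

def apply_penalty(in_path, out_path, penalty_db):
    """Turn the track down (or up) by the service's loudness penalty."""
    data, rate = sf.read(in_path)
    gain = 10 ** (penalty_db / 20)  # convert dB to a linear multiplier
    sf.write(out_path, data * gain, rate)

apply_penalty("loud_master.wav", "loud_spotify.wav", -8.5)        # hot master
apply_penalty("dynamic_master.wav", "dynamic_spotify.wav", -2.4)  # dynamic master
```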

Even better, Meterplugs has some great tools that make this a seamless part of your process. 



EQ, True Peak, and Bandcamp

Artist and Producer @fusedofficial had these excellent additional points to make (paraphrased):

  1. EQ can significantly impact the normalized result: extreme lows and highs contribute to the integrated LUFS measurement but aren’t really perceived as loudness by human ears. Those frequency ranges will therefore inflate your loudness penalty, making your song quieter after normalization. Keep them in check and get the muddiness out of the mix to maximize perceived loudness after normalization (see the sketch after this list).

  2. Bandcamp does not normalize, so he creates a separate master at about -9.2 LUFS integrated (same as CD). This could be especially important when your song is arranged with others on a compilation.

  3. If you master at something like -8 LUFS for streaming, once normalized your True Peak will sit far below -1: a -8 LUFS master with -1 dBTP peaks gets turned down 6 dB, leaving its peaks near -7 dBTP. So you’re not getting the dynamic impact of a song that was mastered at -14 LUFS and still hits a True Peak of -1 after normalization. (See songs #1 and #2 at the start of this article for a visual representation of this.)
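Point 1 is easy to test on your own mix: measure integrated LUFS before and after a steep high-pass filter to see how much the sub-bass region alone contributes to the reading. A minimal sketch, assuming scipy, soundfile, and pyloudnorm are installed; the file name and the 30 Hz cutoff are hypothetical choices.

```python
# Compare integrated LUFS with and without the extreme lows.
import soundfile as sf
import pyloudnorm as pyln
from scipy.signal import butter, sosfiltfilt

data, rate = sf.read("my_master.wav")  # hypothetical file name
meter = pyln.Meter(rate)

sos = butter(4, 30, btype="highpass", fs=rate, output="sos")
filtered = sosfiltfilt(sos, data, axis=0)  # zero-phase high-pass at 30 Hz

print(f"original:    {meter.integrated_loudness(data):6.1f} LUFS")
print(f"high-passed: {meter.integrated_loudness(filtered):6.1f} LUFS")
```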


Thank you

Thanks for reading. I hope you found it useful or interesting. Comments welcome. Big thank you to Ian Shepherd, Brian Lee, and Fused for their thoughts, as well as Lux Elliott for some peer review on the data. 

Please join our mailing list for updates on our products as well as more articles. Good luck out there!

 