
The Audio Blog

Tips, tricks and fun for the recording musician

The Audio Blog is a set of thoughts, techniques, knowledge bits and the occasional rant about the wonderful world of audio and music recording. Follow me on the path to great sounding music, never a boring moment!

Cristiano Sadun

Average LUFS are irrelevant!

Updated: Nov 11, 2022

Well, ok - they are relevant, but not in the sense you might think.


There are a lot of people (very few of whom are mastering engineers) who seem to agonize about the LUFS level of their mastered tracks.


The situation is not improved by much of the online material, which seems to push the idea that you, as a mixing engineer (or a home-mastering one), should target a certain LUFS level.


"Am I at -14?", "Gee I average at -5!", "How can I reduce the average LUFS of my track?" and so on and so on.


Now, loudness normalization is probably the best thing that happened to music after... well, after whatever you think is the best thing that happened to music.


But it's most definitely not something worth obsessing over.


So, if you are among those desperately seeking wisdom on how to reach that perfect -14 LUFS average... this post is for you.


And you can stop worrying.

 

Let's start with the conclusion: by virtue of existing, and having been adopted in some form by major streaming services and YouTube (a.k.a. the way music is listened to nowadays), loudness normalization is fulfilling its role, and you don't really have to worry too much about it at all. Actually, you may have to worry a little bit, but in the opposite direction of what people usually worry about: make sure you do not make your track too quiet.


If you are skeptical, carry on reading and find out why.

Loudness normalization, in case you don't know, is a process that:

  • looks at a stream of PCM samples (that's your song right there),

  • calculates the average loudness (often in loudness units, LU) relative to a given scale (full scale, FS),

  • and makes sure that this average is at a specific desired level (usually something in the -13 or -14 LUFS ballpark).

The whole process (how the average is computed and what the "reference" average level should be) is standardized, but there are plenty of outlets (streamers, radio stations) that do not necessarily follow the standard. The principles, however, are always the same, and the details make no difference for the purpose of this post: the loudness normalization process finds the average and then makes sure it sits at a desired level (decided not by you, but by the process).
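To make the steps above concrete, here is a minimal sketch of the measure-then-adjust idea, assuming the open-source pyloudnorm and soundfile Python packages and a hypothetical file name. It illustrates the principle only; it is not what any particular streaming service actually runs.

```python
# Minimal loudness-normalization sketch (assumes: pip install soundfile pyloudnorm)
import soundfile as sf
import pyloudnorm as pyln

TARGET_LUFS = -14.0  # the typical streaming ballpark mentioned above

# Load the stream of PCM samples ("that's your song right there")
data, rate = sf.read("my_master.wav")  # hypothetical file name

# Measure the integrated (average) loudness, ITU-R BS.1770 style
meter = pyln.Meter(rate)
measured_lufs = meter.integrated_loudness(data)

# Apply one static gain so the average lands on the desired level
normalized = pyln.normalize.loudness(data, measured_lufs, TARGET_LUFS)

print(f"Measured {measured_lufs:.1f} LUFS, "
      f"applied {TARGET_LUFS - measured_lufs:+.1f} dB of static gain")
```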

 

And how does that process make sure that the average loudness of your song stays around that level?


The general idea is super simple: if the average loudness of a track is too high, the entire track will be "turned down". Turned down as in turning down the volume knob.


The process is called loudness normalization, and the word is not chosen at random.


"Normalization" means that the entire track is transformed consistently, with the kind of uniform, linear amplitude reduction that you get when you move the volume knob counter-clockwise.


This is in contrast to compression/limiting.


The important difference is that compression/limiting changes the sound of a track; normalization doesn't.

That's because a compressor operates the volume fader while the track is playing.


It's as if someone had a hand on the fader, listening to the music and lowering it if the level goes over a given threshold (at a certain speed, the "attack" speed).


The imaginary hand keeps the fader there as long as the signal stays over the threshold ("holding") and raises it again when the music goes back below the threshold, at just the right speed to bring it back to its original position over a given "release time".


Doing all these fader movements while the track is playing tends to change the sound of the mix a lot - depending on attack speed, release time, where the threshold is and of course how much the fader is lowered (the "compression ratio").


By contrast, normalization is akin to setting the (imaginary) fader (or the volume knob) before playback and leaving it there: there's no attack, hold and release that depend on a threshold or anything. Nothing happens to the fader while the track is playing.


That means that the sound of your mix is absolutely unchanged by normalization.


When you're turning your volume knob on the hi-fi or the phone, you're effectively normalizing the track.
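If it helps to see the difference in code rather than in fader metaphors, here is a small, purely illustrative NumPy sketch (my own simplification, not anyone's production DSP): normalization applies one constant gain to the whole track, while even a toy compressor (stripped of attack, hold and release smoothing for brevity) recomputes its gain sample by sample while the track plays, which is exactly why the compressor changes the sound and the normalizer doesn't.

```python
import numpy as np

def normalize(samples: np.ndarray, gain_db: float) -> np.ndarray:
    """Loudness normalization: one constant gain for the entire track,
    like setting the volume knob before pressing play."""
    return np.asarray(samples, dtype=float) * 10.0 ** (gain_db / 20.0)

def compress(samples: np.ndarray, threshold_db: float, ratio: float) -> np.ndarray:
    """Toy compressor (no attack/hold/release smoothing): the gain is
    recomputed for every sample that exceeds the threshold, so the
    waveform, and hence the sound, is reshaped."""
    out = np.asarray(samples, dtype=float).copy()
    level_db = 20.0 * np.log10(np.maximum(np.abs(out), 1e-12))
    over = level_db > threshold_db
    # gain reduction grows with how far the level exceeds the threshold
    gain_db = (threshold_db - level_db[over]) * (1.0 - 1.0 / ratio)
    out[over] *= 10.0 ** (gain_db / 20.0)
    return out

# Quick comparison on a 440 Hz test tone
t = np.linspace(0.0, 1.0, 48000, endpoint=False)
tone = 0.8 * np.sin(2.0 * np.pi * 440.0 * t)
turned_down = normalize(tone, -6.0)                       # same shape, just quieter
squashed = compress(tone, threshold_db=-12.0, ratio=4.0)  # peaks reshaped
```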

 

Alright then, now you know what normalization does.


Now say you've made your mix, maybe even mastered it yourself. You place your free Youlean loudness meter on the result, in nervous expectation... and it turns out your track averages at, say, -8 LUFS! Suddenly the world is crashing around you, and you feel yourself sliding into borderline clinical depression.


Well, there is no need.


Why?


Say you exceed the target LUFS by 6 loudness units (i.e. you are at -8 LUFS instead of -14).


What will happen when your track is played back by a streaming service is that the "normalizer" will turn its "virtual playback volume knob" down by 6 units. But as a listener you can easily defeat this by turning up your real volume knob by 6 units, and the mix will sound as loud as it sounded on the mastering monitors.


Because the sound of the mix has not changed at all.
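Spelled out with the numbers above (a hypothetical snippet, just arithmetic):

```python
measured_lufs = -8.0    # what your loudness meter reports
target_lufs = -14.0     # the service's playback target

gain_db = target_lufs - measured_lufs        # -6.0 dB: the normalizer's turn-down
linear_factor = 10.0 ** (gain_db / 20.0)     # ~0.50: every sample roughly halved

print(f"Turned down by {abs(gain_db):.0f} dB (samples scaled by {linear_factor:.2f})")
```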


In rough terms, loudness normalization gives control of the playback level back to the listener, as opposed to the mastering engineer. That's about it.


The implications for the mastering engineer are huge, though.


Because before loudness normalization, the level of the master was exactly the level that reached the listener (give or take radio processing, etc., which is the same for all songs and so doesn't matter). So it was important to make a master as loud as possible, otherwise a track would feel "small" when played right after another, louder mix. "As loud as possible" of course meant compression/limiting, which do change the sound of the mix.


As a producer you had to decide how much to sacrifice the sound to get to a "competitive" loudness.


By contrast, with loudness normalization all masters are brought down to the same average loudness (as long as they are loud enough, since usually they will not be brought up).


That means that now you don't need to sacrifice anything in order to be competitive.

 

The result of all this is that now you really don't need to worry about loudness (well, you still have to worry about being "loud enough", but that's usually not a big issue): just make the best mix you can.


"The best mix" means that the (mastered) mix must sound good at low volume level, sound good at medium level and sound good at high level.


Make it so and, if the LUFS are somewhat too high, fine: the track will be turned down a little... just enough to sit at the same average level as all the other material. Not lower, not worse - just the same average.


But since you made sure it sounded good at any reasonable level, it will still sound good.


So if it gets turned down, it is no big deal.


Obviously, in order to make sure that your mix sounds good at low levels, you generally end up mixing it at a low, conversational level... which is a well-known trick of the best mixing engineers out there.


The conclusion is that you don't really need to obsess about LUFS at all.

Just make the best, most dynamic, most good-sounding-at-all-levels mix you can, and have it mastered in a way that preserves that goodness (i.e. no hard limiting; the mastering engineer will raise the average level if it's too low), and you're all set.


Spotify here I come.


If the final LUFS are a little on the high side, it just means that you like to listen to music (and to mix) at a slightly higher level than what the broadcast community considers a reasonable average. But that's it.


If you turn down your volume knob you will hear the same level that streaming listeners will hear, and since you have checked your mix at all levels, nothing untoward will happen: your mix will still "pop", since you've made sure it sounds damn good at lower volume too.

 

The only thing you need to be careful about is not mastering too quiet. If in doubt, it's better to go a little over the recommended LUFS and let the normalizer set the volume knob a little lower for the listener.


Some services/broadcasters will jack up the gain (and add a little limiting as necessary) if the track is too quiet, but others will not.


Which services do what depends on when you're reading this, of course, as many of them add or update features all the time. As of this writing, YouTube, for example, does not increase the gain of a quiet mix, whereas Spotify does.

 

All this said, if you mix with your monitors at a properly calibrated reference level and with a proper gain structure, your mix should (give or take) turn out around the "standard" reference loudness, in the -14 to -13 LUFS ballpark. But if it doesn't, it simply means you like to mix or listen at a different level than the broadcast standard. Fair enough.


In conclusion - it's important to realize that loudness normalization basically removes the need for excessive limiting at the mastering stage, so there's one less thing to worry about, not one more.


Whatever LUFS you end up with, it's fine - as long as the mix sounds good at all volumes.

 

Before leaving: there is another recurring and interesting question about loudness normalization, and it goes like this: how come certain commercial tracks on Spotify/Tidal/YouTube/you-name-it still manage to sound louder than mine even if, theoretically, they have the same average loudness?


The answer is for another post.


For the moment, forget about LUFS and enjoy your mixing!


4 Comments


Steve Keith
Jul 13, 2021

I liked this a lot. I usually shoot for -12 with a ceiling of -1dB. It seems like if I do things right and then use these levels, then the adjustments made by the individual platforms (spotify etc) do not really hurt that much. Anybody found out different? (I'm steve (at) baselines.com. Let me know.

simonsifi
Jul 14, 2021
Replying to Steve Keith

Similar to you Steve, I tend to master to around -11LUFS with a ceiling of -1dB, taking into account my logic on the earlier post from me below about streaming DEVICES, I feel is a good compromise between retaining dynamic range, minimal normalisation, and loud enough for when playing outside the app (that does normalise) such as a streaming speaker which typically do not apply normalisation. Purely based on trial and error and my own testing.


simonsifi
Oct 12, 2020

Great post, I've been struggling with loudness levels for while. Mastering at around -14 and finding them really quiet in many situations. For example playing within the Spotify app they sound fine, when sending to a streaming speaker system they sound very low in volume compared to other tracks, it turns out that volume normalisation that is applied by default in the app, is not implemented to most streaming devices! So actually LUFS really is important but because of a different reason than expected, i.e. nothing to do with loss of dynamic range, but it needs to be LOUD enough to complete and not sound small alongside the rest of the music out there. This is contrary to the 'loudness…


Cristiano Sadun
Jul 16, 2021
Replying to simonsifi

Hi! I finally made it so you find it a little further on in the list 😊

