Normalization confuses a lot of people. Let's pull back the veil and clarify.

Normalization: What Is It?

Normalization is adjusting the level of a piece of audio to hit a target level. It simply turns the entire piece up or down until it reaches a specific value.

That doesn't sound too complex -- right? But here's where it gets a little tricky...

There are different ways of measuring the level of a piece of audio. And normalizing can cause problems, if you don't understand what you're doing. Let's clear it up.

Peak & LUFS Levels

You can divide the ways of measuring audio level into two broad categories: peak and average. Peak is concerned with absolute, instantaneous levels. There are a few average scales, including LUFS, which is designed to approximate how our ears work.

Your ears aren't linear in their response. In other words, if a sound is twice the level of another sound, it doesn't sound twice as loud. Doubling the amplitude of a signal adds 6 dB, but it takes roughly a 10 dB increase before most listeners perceive a sound as twice as loud.

The ear doesn't respond in a linear fashion to dynamics, either. A sound with a fast attack and a lot of transients won't sound the same loudness as a steady, even sound that reaches the same level.

And guess what? The ear isn't equally sensitive to all frequencies. Some frequencies sound louder to our ears than other frequencies -- even when played back at the same level.

So, we end up with various ways of measuring audio. Some are concerned with how loud something sounds to our ears. Others may be concerned with the accurate level in terms of the waveform, or voltage. In the digital realm, we typically don't want to go above 0dBFS, as digital distortion usually sounds icky.

My point is, we have two basic ways of measuring levels: how loud something sounds to our ears, and the absolute level of the signal.
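To make the peak-versus-average distinction concrete, here's a minimal sketch in Python (the function names and test signals are my own, for illustration). A full-scale square wave and a full-scale sine wave hit the exact same peak level, but the sine's average (RMS) level is about 3 dB lower -- which is why two clips with identical peaks can sound quite different in loudness.

```python
import math

def peak_dbfs(samples):
    """Absolute peak level in dBFS (0 dBFS = full scale, i.e. 1.0)."""
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak) if peak > 0 else float("-inf")

def rms_dbfs(samples):
    """RMS (average) level in dBFS -- closer to perceived loudness than peak."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

# Two test signals that peak at exactly the same level:
n = 1000
square = [1.0 if i % 2 else -1.0 for i in range(n)]            # peak 0 dBFS, RMS 0 dBFS
sine = [math.sin(2 * math.pi * i / 100) for i in range(n)]     # peak 0 dBFS, RMS about -3 dBFS
```

Note that real LUFS measurement (ITU-R BS.1770) adds frequency weighting and gating on top of averaging; plain RMS is just the simplest "average" meter.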

Normalization On An Individual Item, In Reaper

Enter LUFS & dBFS.

There are lots of ways to measure level, but dBFS (decibels relative to full scale, what peak meters read) and LUFS (loudness units relative to full scale) are probably the most useful for home recording. dBFS measures actual signal level, and LUFS is a pretty close estimation of how the ear hears.

In short, peak meters are used to make sure we're not distorting our digital recorders, and LUFS is used to measure how loud something will sound to us.

In Reaper, we can normalize to Peak, True Peak, RMS, or LUFS values.

Why Normalize?

I don't, usually. Normalization is necessary far less often than most people new to the audio recording game think. But there are a few cases where it can be useful. If you can think of others, leave me a comment.

  • If you're recording an audio book or other narration where there's a specific level target you're required to hit.
  • If you have various sections of a single track (let's say vocals or guitar) and they vary in level, you can normalize them individually to the same value. This happens to me sometimes if I do vocal takes on different days and don't set my recording level accurately. It can also happen because a vocal is louder in higher passages.
  • If you're mixing for other people, you might normalize all the tracks to a specific value as a starting point for your mix. I'm usually working on my own stuff and by the time I get to mixdown, the levels are reasonably well balanced, so I don't do this.
  • Honestly, that's about it. If you're trying to hit a specific LUFS target when rendering a song (which I don't think is a good idea anyway), there are better ways to achieve it.
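The second use case above -- takes recorded at different levels on different days -- boils down to normalizing each clip to the same average target. Here's a rough sketch (function names are mine; real LUFS normalization also applies frequency weighting and gating, so plain RMS is only a stand-in for illustration):

```python
import math

def rms_dbfs(samples):
    """RMS level of a clip in dBFS."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

def match_levels(clips, target_dbfs=-18.0):
    """Scale each clip so they all land on the same average level.
    A rough RMS stand-in for LUFS-based normalization."""
    target = 10 ** (target_dbfs / 20)
    matched = []
    for clip in clips:
        current = math.sqrt(sum(s * s for s in clip) / len(clip))
        gain = target / current          # linear gain to reach the target
        matched.append([s * gain for s in clip])
    return matched

# Two "takes" of the same part, recorded at different levels:
quiet_take = [0.05 * math.sin(2 * math.pi * i / 50) for i in range(500)]
loud_take = [0.4 * math.sin(2 * math.pi * i / 50) for i in range(500)]
matched = match_levels([quiet_take, loud_take], target_dbfs=-18.0)
```

After matching, both takes sit at the same average level, which is exactly what Reaper's per-item LUFS normalization does for you (with proper loudness weighting).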

How Normalizing Can Cause Problems & What To Do About It

If you try to normalize your mix to a specific LUFS level (let's say -9 LUFS) and you haven't controlled your peaks, your mix will distort.

The way most professionals do it instead is to control dynamic peaks with a combination of clipping, limiting, and saturation. They also often use general compression to reduce the difference between the loudest and quietest parts of a song or other piece of audio.

And usually, a limiter is the last thing in the signal chain when a pro renders. This final limiter is set to ensure that nothing goes over 0dBFS. Then, if it sounds good, they may look at a LUFS measurement to see if it's in the ballpark with reference mixes of the same genre/energy level.

Normalization dialog in the render window, in Reaper.

Should We Normalize To LUFS or dBFS?

Like I said, I don't usually normalize. I can go weeks without using it. We didn't have it back in the days of tape, at all.

But if you're going to normalize, do it this way. If you want to maximize level, normalize to peak, and set the value to somewhere just below 0dBFS (maybe -0.7 dBFS). If you want several pieces of audio to sound about the same loudness, normalize to a LUFS level. Anywhere from -24 LUFS to -14 LUFS might work, depending on the source material.

Should We Normalize Our Mixes?

I don't.

I see people in various forums talking about normalizing their mixes to get maximum level. That's not how I'd do it. To get a high level on your mix, tame the peaks with a limiter. Bring the threshold down until you hear the most prominent element in the mix start to go dull. Then back off the threshold to taste.

Then set the output of the limiter to a value just below 0dBFS. I often use -0.7 dBFS; some people use -1 or -0.3. It's good to leave a bit of room, as sample rate conversion might create inter-sample peaks.

Should We Normalize To -14 LUFS-I For Spotify?

No. Spotify famously states their playback level is -14 LUFS, and if your mix is hotter than that, they'll turn it down. So what? Their playback target is not your mixing/mastering target. They don't compress it or degrade the quality of your song when they turn it down. They just match the level of the other music.

Most of the mixes on Spotify are much hotter than -14 LUFS.

Why is this? Well, for one thing, Spotify does not normalize for all playback situations. If someone listens to Spotify on a computer or TV, normalization is off. Some people also turn off normalization in the app. Your -14 LUFS mix is gonna be weak sauce compared to all the -9s.

About the author

Keith Livingston

Keith Livingston started recording his own music in the late '70s, on a 4-track. He worked his way into live sound and studio work as an engineer -- mixing in arenas, working on projects in many major studios as a producer/engineer, and working in conjunction with an independent label.

He taught audio engineering at the Art Institute of Seattle from 1990 to 1993, and in '96 contributed to authoring several college-level courses there.

He was General Manager of Радио один (Radio 1) in St. Petersburg, Russia.

Now he spends his time recording his own songs wherever he roams, and teaching others to do the same.
