
Do streaming platforms apply loudness normalization?

Blake Klondike

I was just watching a video on the history of mastering, and the host said that the loudness wars don't make any sense in the streaming era because streaming platforms apply loudness normalization. Is this true?
 
It's almost true. AFAIK they apply LEVEL normalization based on a loudness metric. "Loudness" could imply a change in frequency balance, but I think they just adjust levels so that the Dynamic Range measurement comes out the same across the board. That said, I haven't looked into exactly how they do it. I do know that Spotify and other platforms have guidelines on what the dynamic range measurement needs to be for a new upload.
 
the host said that the loudness wars don't make any sense in the streaming era because streaming platforms apply loudness normalization. Is this true?
What "this" refers to, the first statement or the second? Both are true, but there are nuances. For instance, Spotify allows the consumer to select from three different target levels or to disable normalization completely. Other major platforms have their own targets and settings, so there is still some temptation possible for labels to engage in loudness wars.

P.S. Here is Spotify's take on the matter: https://support.spotify.com/us/artists/article/loudness-normalization/
 
It's almost true. AFAIK they apply LEVEL normalization based on a loudness metric. "Loudness" could imply a change in frequency balance, but I think they just adjust levels so that the Dynamic Range measurement comes out the same across the board. That said, I haven't looked into exactly how they do it. I do know that Spotify and other platforms have guidelines on what the dynamic range measurement needs to be for a new upload.
In my recording experience, normalizing a file degraded the sound quality. Any thoughts on whether this would apply here?

So it sounds like the relative levels are all kept the same, rather than having all the dynamics smashed out of a cut, and they are just trying to make the transition from a Miles Davis track to a Juice Newton track seamless in terms of loudness?

What does this mean for a record like "Rid of Me" by PJ Harvey that was mastered quietly on purpose?
 
What "this" refers to, the first statement or the second? Both are true, but there are nuances. For instance, Spotify allows the consumer to select from three different target levels or to disable normalization completely. Other major platforms have their own targets and settings, so there is still some temptation possible for labels to engage in loudness wars.
I was thinking of normalization being applied across a record so that, for instance, "Wond'ring Aloud" and "Locomotive Breath" from Aqualung would both end up at the same volume; basically, I'm concerned about the streaming process interfering with sound quality. Sometimes the desire just to listen to the damn album the way it is supposed to sound seems like an endless battle.
 
It's not typically the streaming services that compress and "normalize" recordings. It's the labels.

When I listen to chamber music on Idagio and Qobuz, it all sounds great with an enormous dynamic range, on both platforms. When I (very rarely) listen to hard rock on Qobuz, the range is very narrow.

SiriusXM does compress classical music considerably. This makes it unusable for critical listening, but excellent for automobiles and background music.
 
the host said that the loudness wars don't make any sense in the streaming era because streaming platforms apply loudness normalization.
Yes, IMO it doesn't make sense but AFAICT they still do it. Which doesn't make sense to me :-)

In my recording experience, normalizing a file degraded the sound quality.
Normalization is just a volume change. If that degraded the quality, I'd start looking at what is broken in my setup.
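To put it concretely, here is a rough sketch (plain numpy on an assumed float buffer; real tools differ only in how they pick the gain) of what peak normalization does to the samples. It is nothing more than one constant multiplier, so the waveform's shape and relative dynamics are untouched; any audible damage would have to come from a later step such as re-quantizing without dither.

Code:
import numpy as np

def peak_normalize(samples: np.ndarray, target_dbfs: float = -1.0) -> np.ndarray:
    """Scale a float audio buffer so its peak lands at target_dbfs.
    A pure gain change: every sample is multiplied by the same constant."""
    peak = np.max(np.abs(samples))
    if peak == 0.0:
        return samples                        # silence, nothing to do
    target_linear = 10 ** (target_dbfs / 20)  # dBFS -> linear amplitude
    return samples * (target_linear / peak)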

So it sounds like the relative levels are all kept the same
Spotify has both album and track normalization. Album normalization retains the relative levels between the tracks on an album, and it is used when you listen in album view. Track normalization is used in playlists, where each track is normalized to the same level (which IMO is a bad choice by Spotify).

Tidal, AFAIK, has only album normalization, which is used both when listening to albums and playlists.

YouTube doesn't have a notion of an album, so there's only track normalization. I don't know about YouTube Music.
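To make the album-vs-track difference concrete, here is a toy sketch (the -14 LUFS target and every number in it are made up for illustration): track normalization computes one gain per track so everything lands on the target, while album normalization derives a single gain from the album's overall loudness and applies it to every track, which is what preserves the relative levels within the album.

Code:
# Hypothetical integrated-loudness measurements (LUFS) for a three-track album.
tracks = {"quiet ballad": -19.0, "mid-tempo cut": -15.0, "loud closer": -9.0}
album_loudness = -12.0   # integrated loudness of the album as a whole (assumed)
TARGET = -14.0           # a common streaming target level

# Track normalization: each track gets its own gain, so all land at -14 LUFS.
track_gains = {name: TARGET - lufs for name, lufs in tracks.items()}
# -> {'quiet ballad': +5.0, 'mid-tempo cut': +1.0, 'loud closer': -5.0} dB

# Album normalization: one gain for the whole album; relative levels preserved.
album_gain = TARGET - album_loudness          # -2.0 dB applied to every track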

from a Miles Davis track to a Juice Newton track
What does this mean for a record like "Rid of Me" by PJ Harvey that was mastered quietly on purpose?
"Wond'ring Aloud" and "Locomotive Breath" from Aqualung would both end up at the same volume
If you have such specific examples, can't you just listen to them and check for yourself?
 
Yes, they use variations of EBU R128 with the LUFS target moved from -24 up to -13, -15 or -17, and there you have it again (loudness wars). Even though EBU R128 is certainly good enough, with a range of 24 dB that covers even THX video, and it preserves DR rather than increasing or changing it. Their excuse is that on most recipient devices the output stage is too weak to support a reduction of 11~12 dB. But that's stupid, because you can simply compensate for it with the digital volume and bring the level back to, or close to, the maximum output the device supports. The point of the loudness wars was that you could push programme loudness higher, and since the material is compressed to a DR of 7~8 or less, the peaks won't go much higher. The loudness wars were always stupid and produced a lot of bad, broken and therefore forever-lost material, but that's how it is.
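The "preserves DR" part is just arithmetic: a normalization gain shifts the loudness and the peaks by the same number of dB, so the spread between them (and any DR figure derived from level differences) is unchanged. A toy check with made-up numbers:

Code:
# Made-up measurements: a loud master at -9 LUFS with -0.5 dBTP peaks.
loudness, true_peak = -9.0, -0.5
gain = -14.0 - loudness             # normalize down to a -14 LUFS target: -5 dB

after_loudness = loudness + gain    # -14.0
after_peak = true_peak + gain       # -5.5
# The peak-to-loudness spread is identical before and after the gain:
assert (true_peak - loudness) == (after_peak - after_loudness)   # 8.5 dB both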
 
I was told, on GearSpace I think, that EBU R128 is a loudness standard for broadcasters. The participants there don't care about loudness anyway; their attitude is "serve the song" and let users adjust the volume, or apply their own normalization apps.

Still, I'm all for standardized loudness across the board, since digital audio itself is distortion free up until the FS clipping point.
 
I was told, on GearSpace I think, that EBU R128 is a loudness standard for broadcasters. The participants there don't care about loudness anyway; their attitude is "serve the song" and let users adjust the volume, or apply their own normalization apps.

Still, I'm all for standardized loudness across the board, since digital audio itself is distortion free up until the FS clipping point.
Think about that the next time you change channels on the TV and have to stay trigger-happy with the volume control.
Yes, it was made purposely for broadcasting, but it pretty much works for everything; the catch is that you need the metadata attached, or encoded into the stream before it goes out. Exactly what we aren't getting.
 
With Apple Music you can enable or disable volume normalization. The legacy implementation was not great; with lossless enabled it's a better one, album-based AFAIK (the legacy one was annoying on albums like The Wall).
 
What "this" refers to, the first statement or the second? Both are true, but there are nuances. For instance, Spotify allows the consumer to select from three different target levels or to disable normalization completely. Other major platforms have their own targets and settings, so there is still some temptation possible for labels to engage in loudness wars.

P.S. Here is Spotify's take on the matter: https://support.spotify.com/us/artists/article/loudness-normalization/


Now I am confused.......
I use Spotify Premium, and with albums that I am positive are the same master (rock/pop stuff), doing A/B comparisons, everything comes out identical as far as I can hear.

Some older classical also comes out at a VERY low overall volume on Spotify, just as the originals were.

But a few newer rock/pop remasters that are "known" for being ungodly bad for loudness and DR compression sounded a good bit more listenable, and at fairly normal average volumes, making me wonder if Spotify uses some type of limiter on really loud, annoying stuff??

The average normal stuff matches exactly, but it seems to be only a few really loud offenders that now come off as fairly listenable overall.

FYI, I used NO normalization at all, obviously.
 
I was thinking of normalization being applied across a record so that, for instance, "Wond'ring Aloud" and "Locomotive Breath" from Aqualung would both end up at the same volume; basically, I'm concerned about the streaming process interfering with sound quality. Sometimes the desire just to listen to the damn album the way it is supposed to sound seems like an endless battle.
Tracks across an album ARE at their intended dynamic difference.

The album in its entirety can be adjusted up or down several decibels to better match ALL other albums, is my understanding of it.

Although I am not so sure about classical, as it seems not to be affected by their "leveling" adjustments; I am not positive.


Most pop/rock remasters now do not have that overly loud sound we got on CD; they are still squished dynamically, just not at the aggressive loudness level.
In my experience they still sound a bit louder than the original OLD masters for sure, but instead of, say, an 8-10 dB increase, it appears closer to maybe 3-4 dB louder on the really loud albums.
 
Then your observations are consistent with that fact.
I "think" so........

To me, it appears that really LOUD remasters of rock/pop are simply turned down so they do not stand out as annoyingly loud, but they keep the same built-in squished dynamics.

Two that I know are the same masters, Suzanne Vega's "Solitude Standing" and Genesis's "A Trick of the Tail", sound identical in all ways to their CD counterparts as far as dynamics, tonality, loudness peaks and average levels.

When I A/B those two albums, I cannot tell which is which at all.
 
@beagleman, no matter how well EBU R128 is designed, it isn't magic; what's done is done. Material that has been ruined with heavy-handed compression and squashed down to a DR of 4~5 can't really be fixed just by not playing it as loud. EBU R128 doesn't change DR, doesn't do gating/compression/expansion, and it isn't a peak-based limiter. It measures in segments, relative to themselves, to each other and to the absolute scale as output (per channel and between channels). Spotify took it apart and set its own LUFS targets at three fixed levels the user picks, instead of doing it automatically and independently per segment as EBU R128 does, obviously because that needs much less metadata attached. So did many others, including Apple. Their arguments vary: that they don't need -23 dB as an output level, that for music that's too low, or that mobile devices don't have enough output power to play back that quietly. They are all true, but it doesn't fix the problem, and you still end up with substantial output-level differences between them, other streaming services and broadcasting, so you are back at the beginning of the problem. Of course you can shift LUFS a lot when you do it in floating-point precision and convert back to integer without introducing noise, so up to -23 dB (in practice more like -11~-12) can be compensated back and still stay in line with everything else (absolute levels).
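That last floating-point point can be sketched in a few lines: apply the gain to float samples, then round back to integer PCM, so the only error introduced is at the final quantization step (a real tool would also add TPDF dither before rounding). Assuming 16-bit source material:

Code:
import numpy as np

def apply_gain_int16(pcm: np.ndarray, gain_db: float) -> np.ndarray:
    """Apply a gain to int16 PCM via float64 and requantize to int16."""
    x = pcm.astype(np.float64) / 32768.0       # int16 -> float in [-1, 1)
    x *= 10 ** (gain_db / 20)                  # gain applied in the float domain
    x = np.clip(x, -1.0, 32767 / 32768.0)      # guard against clipping on the way back
    return np.round(x * 32768.0).astype(np.int16)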

When they lower the LUFS target, material that falls outside its range (with a higher DR than the target leaves room for) doesn't get processed, since it's out of reach, which is what you found out yourself with very high DR (16~17) classical music; non-adapted THX cinema movies can go up to 24 DR.
Edit: the Wiki article is actually decent, even though it doesn't go into the details.
 
It's not typically the streaming services that compress and "normalize" recordings. It's the labels.

When I listen to chamber music on Idagio and Qobuz, it all sounds great with an enormous dynamic range, on both platforms. When I (very rarely) listen to hard rock on Qobuz, the range is very narrow.

SiriusXM does compress classical music considerably. This makes it unusable for critical listening, but excellent for automobiles and background music.
Ditto about Qobuz; it pretty much matches the CD file on my NAS. I appreciate that. Volume is always slightly lower on the NAS, but when equalized it sounds identical on most. I suspect any differences come from different masters.
 
Although I am not so sure about classical, as it seems not to be affected by their "leveling" adjustments; I am not positive.
Because it's not that easy to find an album that qualifies for any change, assuming we are talking about the "Normal" level. You either need an album with integrated LUFS above -14, or an album with integrated LUFS below -14 and enough unused headroom left. AFAICT most classical albums are below -14 but there's no unused headroom, so Spotify won't make them louder because that would clip them. But you can try Rachel Podger / Guardian Angel. At "Normal" level it gets about 6 dB louder, so quite noticeable.
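That logic (only boost a quiet album as far as the unused headroom allows) can be sketched like this; the -14 LUFS target and the -1 dB peak ceiling roughly follow what Spotify documents for the "Normal" setting, but the function and all the example numbers are only an illustration, not their actual implementation.

Code:
def normalization_gain(integrated_lufs: float, true_peak_db: float,
                       target_lufs: float = -14.0, peak_ceiling_db: float = -1.0) -> float:
    """Gain in dB toward the target; positive gain is capped by the headroom."""
    gain = target_lufs - integrated_lufs
    if gain > 0.0:   # quiet album: boost only as far as the peak ceiling allows
        headroom = peak_ceiling_db - true_peak_db
        gain = min(gain, max(headroom, 0.0))
    return gain

# A quiet album whose peaks already sit near full scale is left alone:
print(normalization_gain(-20.0, -0.3))   # 0.0 dB
# A quiet album with plenty of headroom (made-up numbers) gets the full boost:
print(normalization_gain(-20.0, -7.0))   # 6.0 dB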
 
I have playlists, both local and streamed that mix classical, jazz, and various eras of pop (from Cole Porter to recent).

I can go from something like hard rock to Lark Ascending. The level change is sometimes disconcerting, but I like it.
 