
Loudness compression, loudness wars... What exactly is it, and why is it happening?

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,522
Likes
37,050
@Blumlein 88 , thanks for your comment on the Deruty-Pachet (2015)!

Yes, you are right that there may be hidden biases in the selection of empirical data. That is a researcher's worst nightmare: that their data are faulty and full of biases.

Let me now try to comment on your remarks, keeping other ASR readers in mind as well.

The authors discuss their finding that year matters more than genre (section 4.3), and here they write that "This may bring the suspicion that dynamics are only dependent on the trends followed by the most represented genres, such as the subgenres of rock represented in Figure 3, but independent from the trends followed by most other genres, in which case our conclusion would not stand" (my underlining).

They try to show that their results are year-related when controlling for genre. That analysis may implicitly give us an insight into what would have happened to their results if they had controlled for total album releases per year. The 5-95 percentile bars in Figure 1 may also indicate how such a control would have fared.
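For readers who want to see what "year-related, controlling for genre" amounts to in practice, here is a minimal sketch in Python. It is not the authors' code; the column names ("loudness", "year", "genre") are my guesses, so check them against the spreadsheet linked further down.

```python
# Minimal sketch, not the authors' analysis. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_excel("Values.xlsx")  # the Deruty-Pachet corpus (link below)

# C(genre) adds one dummy per genre, so the "year" coefficient is the
# loudness trend that remains after genre differences are absorbed.
model = smf.ols("loudness ~ year + C(genre)", data=df).fit()
print(model.params["year"], model.pvalues["year"])
```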

See also this ISMIR poster that accompanies their 2015 article: http://www.emmanuelderuty.com/pages/publications/2015_ISMIR_poster.pdf

FWIW, the entire dataset of Deruty-Pachet (2015) can be found here:

http://emmanuelderuty.com/pages/dynamics/Corpus7200/Values.xlsx

But you are right: it would have been great to have a complete dataset of all recordings made since the 1950s and 1960s.

One interesting finding in their paper is their note on micro vs. macro dynamics:

"A notable exception lies in macrodynamics as measured by the EBU3342 Loudness Range, which are more independent from both genre and year of release. In other words, dynamic range in the musical sense (pianissimo tofortissimo) is only marginally dependent on either mainstream genre or trend (...) As an exception, macrodynamics, which have not been significantly influenced by the loudness war, appear to increase since the loudness war’s peak, and are currently reaching very high values".

In other words, the loudness debate lacks nuance if it does not distinguish between micro and macro dynamics.

Please note that the paper was presented at International Society for Music Information Retrieval, ISMIR:

http://www.ismir.net/society.php

This perspective, big data in audio, is highly interesting because it replaces opinion and anecdote with fact. It may also draw more computer people into audio science. Needless to say, habit and convention predict that old-school audio people will be a bit skeptical towards this new breed of audio scientists, who are more pattern-oriented than case- and anecdote-oriented.

Many people, including people at ASR, think loudness is a lost battle, so they keep on fighting as guerrilla fighters. However, other people who are held in high regard are of a totally different opinion. Bob Katz declared (2013) that the war is over, due to normalization features in distributors like iTunes (Katz's original blog post is no longer available):

https://www.soundonsound.com/techniques/end-loudness-war

Lastly, just a couple of words on the authors, Deruty and Pachet. Deruty is a frequent publisher of scientific audio articles:

http://emmanuelderuty.com

Pachet is the better-known name of the two:

https://en.wikipedia.org/wiki/François_Pachet

He is, among other things, a fellow of the European Association for Artificial Intelligence, which may be an indication of his ability to deal with datasets.

Well, I had high hopes when downloading the Excel file with the data: sort by year, put in the number of albums sold per year, keep the percentage used the same, and see what the results were. The file had everything I wanted except one key thing: within each year the tracks are listed alphabetically, without their rank for that year. I would need to look up the rank of 7,200 tracks in order to check it. Putting in numbers for 47 years isn't too bad, then setting up a percentage function; searching out the rank of 7,200 titles is a bit much for idle Sunday-afternoon curiosity.

Then I noticed that they used all the tracks on the top-rated albums. So most years have 15 to 17 albums, which would reduce the searching from 7,200 tracks to a few hundred. Keep in mind that most albums are popular because of one or two songs. While most albums have a reasonably consistent loudness profile across the whole album, that too could turn into a confounding factor: maybe only the loudest songs give an album an edge in selling.

It's easy to complain about what someone left out. On reconsideration, I now think the data are probably okay. I didn't realize they were including all tracks on an album; that means not really the top 150 tracks per year, but the top 15-17 albums per year. In the years with the fewest album releases that is roughly the top 3%, and in the years with the most it is roughly the top 0.5%. Either way, that is fairly far out on the tail of the distribution.
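For anyone who wants to repeat the check described above, here is a rough pandas sketch. The column names are guesses at the corpus layout, and the total-releases-per-year figures would have to be typed in by hand for all 47 years; the two numbers below are placeholders, not real counts.

```python
# Rough sketch of the per-year share check -- not verified against the file.
# "year" and "album" are guessed column names; total_releases holds
# illustrative placeholders, not real release counts.
import pandas as pd

df = pd.read_excel("Values.xlsx")
albums_per_year = df.groupby("year")["album"].nunique()  # roughly 15-17

# Hand-entered totals for each of the 47 years would go here.
total_releases = pd.Series({1967: 500, 2000: 3500})  # placeholders only

share_pct = albums_per_year / total_releases * 100  # e.g. 16 of 500 = 3.2%
print(share_pct.dropna().round(1))
```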
 

TBone

Major Contributor
Joined
Mar 16, 2016
Messages
1,191
Likes
348
... Either way, that is fairly far out on the tail of the distribution.

Exactly.

When I pay for music, I expect a certain level of dynamic range. If it's compressed to the hilt, it's money wasted.

So ... buyer beware ... online tools now exist that aid in choosing purchases with wider DR. No such research alters that "fact".
 

Wombat

Master Contributor
Joined
Nov 5, 2017
Messages
6,722
Likes
6,459
Location
Australia
Not if that change of opinion is due to changed conditions or new data.


Opinions can be formed from ignorance, from self-interest, from credible consideration of available knowledge, or from any point along the way. Scepticism before acceptance is wise.
Of course, the degree of scepticism warranted is inversely proportional to the subject-matter credibility of the view holder.
 
Last edited:

TBone

Major Contributor
Joined
Mar 16, 2016
Messages
1,191
Likes
348
Opinions can be formed from ignorance, from self-interest, from credible consideration of available knowledge, or from any point along the way.

It's beyond me why anyone would post "WAR OVER" and back it up with limited "facts", devoid of information that could help any reader or consumer purchase dynamically capable music. It's pot luck when they keep choosing and paying for music, perhaps loaded with compression, or maybe not, based on nothing more than a trend.

Scepticism before acceptance is wise. Of course, the degree of scepticism warranted is inversely proportional to the subject-matter credibility of the view holder.

Certainly, expectations ≈ experience. Some talk, some do ...
 

Guermantes

Senior Member
Joined
Feb 19, 2018
Messages
484
Likes
561
Location
Brisbane, Australia
I remember in the '90s a friend came over to a creative music session with a new toy: a TC Electronic Finalizer. It was basically a mastering studio in a box (minus the monitoring, of course) with parametric EQ, dynamic multiband compression and limiting. We were mesmerised as we fed some favourite CDs through it and heard how they magically emerged "remastered". Bass was given more punch, high frequencies enhanced and everything seemed to go up a notch, especially loudness. But even then we could see there was a point at which it actually became ridiculous if pushed too far -- and it really was easy to get there.

There was a particular production trend emerging at the time influenced by electronic dance music where everything had to sound big and modern, and this was the type of box to feed that. 20 years or so later I can see that some genres thrived on this but I think others were trashed by it. I can understand EDM or J-/K-Pop where everything is synthetic, but why does a piano ballad by Adele have to be squashed and brickwall-limited into distortion? Even worse when the same is done to older recordings in an attempt to "improve" them.

If this trend is running out of steam, I'm glad. Leave the over-loud compression to the genres where it belongs.

I think the introduction of loudness meters and standards relating to broadcast levels has helped educate many. See the page at TC Electronic: https://www.tcelectronic.com/brand/tcelectronic/loudness-explained

Here's one I use from Waves (the Range value seems somewhat equivalent to the DR value in the TT Dynamic Range Meter):
[Attached image: Waves WLM Plus loudness meter]


But having a loudness target doesn't necessarily fix DR problems; it just keeps things at an acceptable level for broadcast.
 

Guermantes

Senior Member
Joined
Feb 19, 2018
Messages
484
Likes
561
Location
Brisbane, Australia
Apparently Spotify has set its loudness target to -14 LUFS:
https://ask.audio/articles/spotify-...-to-14-lufs-what-does-this-mean-for-producers

And more on loudness targets for streaming services:
https://www.masteringthemix.com/blo...dio-for-soundcloud-itunes-spotify-and-youtube

The problem I see is that all of these are louder than the ITU/EBU (-23 LUFS) and ATSC (-24 LUFS) standards, so anything already mastered for those will be turned up and therefore peak-limited. So if I have carefully crafted my levels for a recording of the Berliner Philharmoniker (simply as an example) for European broadcast, then Spotify, TIDAL, et al. may further compress and volume normalise it -- just like the good old days of FM radio.
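A back-of-envelope way to see the squeeze (my arithmetic, not any service's published processing): a -23 LUFS broadcast master played on a -14 LUFS service needs +9 dB of gain, so any true peak above -9 dBTP has to be limited.

```python
# Back-of-envelope arithmetic only -- not any service's published algorithm.
master_lufs = -23.0     # EBU R128 broadcast loudness
target_lufs = -14.0     # Spotify's reported streaming target
true_peak_dbtp = -1.0   # a common broadcast true-peak ceiling

gain_db = target_lufs - master_lufs          # +9 dB of make-up gain
peak_after_gain = true_peak_dbtp + gain_db   # +8 dBTP: 8 dB must be limited
print(f"gain {gain_db:+.1f} dB, peak after gain {peak_after_gain:+.1f} dBTP")
```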
 
Last edited:
OP
Krunok

Major Contributor
Joined
Mar 25, 2018
Messages
4,600
Likes
3,065
Location
Zg, Cro
So if I have carefully crafted my levels for a recording of the Berliner Philharmoniker (simply as an example) for European broadcast, then Spotify, TIDAL, et al. may further compress and volume normalise it -- just like the good old days of FM radio.

To me that doesn't look like the war is over..
 

Guermantes

Senior Member
Joined
Feb 19, 2018
Messages
484
Likes
561
Location
Brisbane, Australia
I think it just demonstrates that streaming services are carving out their own broadcast model and aren't really a substitute for reference recordings whether via download or hard media.
 
OP
Krunok

Major Contributor
Joined
Mar 25, 2018
Messages
4,600
Likes
3,065
Location
Zg, Cro
I think it just demonstrates that streaming services are carving out their own broadcast model and aren't really a substitute for reference recordings whether via download or hard media.
Certainly looks like that. Hopefully we will have two masters produced - one for streaming services and the other of "reference" quality. :)
 

TBone

Major Contributor
Joined
Mar 16, 2016
Messages
1,191
Likes
348
I think it just demonstrates that streaming services are carving out their own broadcast model and aren't really a substitute for reference recordings whether via download or hard media.

Bingo ... but they won't tell you that.

Same story with the higher-rez alternatives, which often turn out to be compressed, with dynamic values far lower than the original CD. Yet they preach that simply releasing a "higher-rez" mix suddenly makes it reference quality. Misguided nonsense and all ...
 

TBone

Major Contributor
Joined
Mar 16, 2016
Messages
1,191
Likes
348
... but why does a piano ballad by Adele have to be squashed and brickwall-limited into distortion?

Why indeed.

Can't remember which Adele recording, but the CD version is slammed, while the LP version contains twice the dynamic-range values. Perhaps they figure all CD buyers play music in loud vehicles, and/or that vinyl listeners appreciate greater dynamic values?

Along the same lines ... after witnessing Hayley Westenra ...
[Attached image]


... I ran out to buy her music. It being a Decca CD/recording, I initially had high hopes; those were quickly dashed upon listening. You'd think Decca would know better ...
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,368
Likes
234,385
Location
Seattle Area
Bob Katz declared (2013) that the war is over, due to normalization features in distributors like iTunes (Katz's original blog post is no longer available):
He miscalculated. He made a prediction there, but in reality that factor did not change the equation: since there is no guarantee of loudness normalization in playback (say, on radio, or by a user in a playlist), content creators have not changed their strategy because of it.
 

svart-hvitt

Major Contributor
Joined
Aug 31, 2017
Messages
2,375
Likes
1,253
He miscalculated. He made a prediction there, but in reality that factor did not change the equation: since there is no guarantee of loudness normalization in playback (say, on radio, or by a user in a playlist), content creators have not changed their strategy because of it.

You may be right. The turnaround in loudness (i.e. the peak in loudness) came around 2007-2008. Katz may have used something he knew a little about to explain a change that was already well underway. It's quite normal for people to use whatever is available to them to explain a phenomenon.

https://en.m.wikipedia.org/wiki/Availability_heuristic
 
OP
Krunok

Major Contributor
Joined
Mar 25, 2018
Messages
4,600
Likes
3,065
Location
Zg, Cro
Recently someone mentioned in another topic that you need quite a strong amplifier to handle music peaks even at "normal" listening levels. Now, which peaks were we talking about, if everything has been so dynamically compressed? :D
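For scale, a rough worked example (illustrative numbers, not a measurement of any real recording): every 10 dB of crest factor is a 10x power ratio, so genuinely dynamic material gets expensive quickly.

```python
# Illustrative numbers only: how crest factor turns into amplifier power.
average_power_w = 2.0    # power needed for the average listening level
crest_factor_db = 15.0   # peaks above average; squashed pop may be ~6 dB

peak_power_w = average_power_w * 10 ** (crest_factor_db / 10)
print(f"{peak_power_w:.0f} W needed for clean peaks")  # ~63 W; ~8 W at 6 dB
```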
 

dc655321

Major Contributor
Joined
Mar 4, 2018
Messages
1,597
Likes
2,235
For those who may be interested in how the "dynamic range" (DR) is calculated at the Dynamic Range website, the algorithm is here.
Note: the banner at the site intimates that this algorithm may have been retired.

The algorithm divides a track into 3-second buckets, calculates RMS and peak values for each bucket, then takes the top (loudest) 20% of the buckets' RMS values into account for its final output value.
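A minimal sketch of that measurement as I read the description (not the official DR-meter code; details such as peak selection and scaling may differ):

```python
# Minimal sketch of the described measurement -- not the official DR code.
import numpy as np

def dr_estimate(x: np.ndarray, fs: int) -> float:
    """Rough DR-style figure for a mono signal x sampled at fs Hz."""
    block = 3 * fs                                   # 3-second buckets
    n_blocks = len(x) // block
    rms = np.array([np.sqrt(np.mean(x[i * block:(i + 1) * block] ** 2))
                    for i in range(n_blocks)])
    peak = np.max(np.abs(x))
    loudest = np.sort(rms)[-max(1, n_blocks // 5):]  # loudest 20% of buckets
    # Peak over the loud buckets' RMS, in dB -- essentially a crest factor.
    return 20 * np.log10(peak / np.sqrt(np.mean(loudest ** 2)))
```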

It has been discussed elsewhere (maybe here too?) that this algorithm is more of a crest-factor indicator than a dynamic-range indicator. It's my understanding that, as such, it is more of a worst-case figure.

Still, it is at least some figure of merit. As in, better than nothing.
And if it has had any influence in producers releasing less squashed tracks, IMO that's a very good thing.
 

TBone

Major Contributor
Joined
Mar 16, 2016
Messages
1,191
Likes
348
I read somewhere else, a while back, about the use of a very different algorithm, but I can't recall the site in order to post it here. Oh well, as you stated, as a loose "guide" it has its place. Unfortunately, I've not noticed any change in musical compression levels since the DR database became available, so I doubt the producers are even listening (literally) ...
 
Last edited:

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,368
Likes
234,385
Location
Seattle Area
You may be right. The turnaround in loudness (i.e. the peak in loudness) came around 2007-2008. Katz may have used something he knew a little about to explain a change that was already well underway.
As noted, this technology wasn't anything new (we had implemented it in Media Player way back when). The thing Bob was relying on was it being on by default in iTunes, etc., and that causing content to no longer be loudness-compressed. It was wishful thinking. He should have kept fighting instead of putting down the sword because of it.

If I were him, I would have given up on mass-market releases and worked to make sure high-res/lossless CD releases were free of it. He could make a difference there. But he had no shot or power over mass-market releases.
 