
Why is there a difference in frequency shape (bass & lows) between music genres?

“Compression remains a fundamental tool in modern audio engineering, used for a wide range of purposes, including:
- Controlling Dynamics: Reducing the difference between the loudest and quietest parts of an audio signal to create a more consistent and balanced sound.
- Enhancing Clarity: Making vocals, instruments, and other elements more intelligible and present in a mix.
- Adding Punch and Impact: Creating a more impactful and exciting sound by strategically using compression on drums, bass, and other instruments.
- Achieving Specific Effects: Using compression creatively to achieve unique sonic textures and sounds.”
Note: this includes fine-tuning timbre as an alternative to EQ.
To @kyuu :

The text I quoted above appears to be from the same mastering/recording-engineering textbook I mentioned earlier. You might be able to find the source of the quoted material via an LLM prompt asking for a bibliography/source.

And like everything, you can misuse it.
I should mention that, listening on today's hi-fi gear with its higher performance than past decades', all I can tell is that compression has been misused by virtually all mixing and mastering engineers in popular music genres (mostly the mastering side) since 1991, when the multiband compressor plugins I mentioned above first appeared.

The really bad thing is that this culture of overactive compression even appears to be infiltrating traditional recording genres that initially limited its use to an average crest factor of ~12 dB (i.e., as much as 10 dB of overall compression for jazz and classical guitar recordings, etc.). Nowadays you see average crest factors of 8 dB or lower. If there is anything that sucks the life out of music in these genres, it's any compression that reduces the crest factor below ~17 dB (IME).
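For anyone who wants to check crest factors on their own files: it's just the peak-to-RMS ratio of the signal, in dB. A minimal numpy sketch (mine, not from the textbook quoted above); in practice you'd load a decoded WAV into the array:

```python
import numpy as np

def crest_factor_db(x):
    """Crest factor: ratio of peak to RMS level, in dB."""
    peak = np.max(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    return 20 * np.log10(peak / rms)

# Sanity check: a pure sine has a crest factor of 20*log10(sqrt(2)) ~= 3.01 dB.
t = np.arange(48000) / 48000
sine = np.sin(2 * np.pi * 440 * t)
print(round(crest_factor_db(sine), 2))  # 3.01
```

Well-preserved acoustic recordings land well above the ~12 dB figure mentioned above; heavily limited modern masters can sit under 8 dB.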

Chris
 
No, different artists, different tracks.
The point here is to underline the trend, the overall sound.

Newer tracks have much lower output, even within the same genre.
The charts are made from three tracks by each artist, so as to get some sort of average slope.
I think both the low notes of the instruments and engineers accustomed to vinyl have an influence. High bass levels on vinyl shorten a side's playing time considerably, due to the dynamic pitch applied when cutting.
 
I recently stumbled upon this study and I think it gives a new perspective and possibilities about this issue in EQ variance in music:
https://www.researchgate.net/public...d_its_Relation_to_the_Level_of_the_Percussion

There are many key points that I would like to cite:
- "A clear relationship was found; tracks with more percussion have a relatively higher LTAS in the bass and high frequencies. We show how this relationship can be used to improve targets in automatic equalization. Furthermore, we assert that variations in LTAS between genres is mainly a side-effect of percussive prominence.·"

In my main post I talked about how I had to change my EQ constantly in order to enjoy different styles and eras of music when listening to them at the same average volume. Here the study acknowledges the frequency differences and how an automatic EQ could fix the problem of constantly and manually changing EQs.

- "One type of application that could benefit from LTAS analysis of big datasets is automatic equalization in mixing and mastering. The field is fairly new, with researchers trying out different techniques and focusing on different aspects."

Another problem I mentioned in my original post: being a music producer who mixes and masters his own music, I am concerned about how my music will be perceived relative to the rest of music, within the same or other genres, and about which EQ shape to pursue. If studies like this help in finding a universal, automatic EQ for mixing and mastering, that would also be very good, since it eliminates the dilemma of same-genre EQ vs. flat EQ. Of course, this automation/standardisation would only guide the overall EQ shape; it shouldn't dictate the dynamics of the music, and from all we have discussed in the thread so far it's clear that we want our music to be as dynamic as we want, not following a loudness standard of overcompressed mastering.

- "In a previous study of LTAS in commercial recordings, differences between genres were analyzed [8]. One of the main findings was that genres such as hiphop, rock, pop and electronic music had louder low frequencies (up to 150 Hz) than genres such as jazz and folk music. The same relationship was evidentalso for the high frequencies (5 kHz and above), with hip-hop, rock, pop and electronic music being the loudest. The differences in the mean LTAS were clear: some genres have a (relatively) louder low-end and high-end of the spectrum, whereas other genres such as jazz and folk music generally have a (rela- tively) higher sound level in the mid frequencies.
Why is this? Although certain genres have somewhat stylistic preferences with regards to the LTAS (a prime example being the heavy bass in reggae), mas- tering engineers generally try to keep the “symphonic tonal balance” as a basic reference for most pop, rock, jazz and folk music [14]. Could there then be something else than the general genre that give rise to the differences in LTAS? Given that genres that were found to have a higher sound level in the low end and high end of the spectrum have more emphasis on rhythm, the relative level of the rhythm instruments seems to be a relevant factor. In this study, we will explore the relationship between the sound level of the percussive instruments in a musical mixture and the LTAS
."

So the study also discusses the EQ differences between music styles and genres, and the previous explanations for them, but in this case they analyse the data and show that the level of percussive sounds is the main cause of the variance in EQ, even across songs of the same genre. I think it's worth reading.
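For those curious how LTAS plots like the study's are made: LTAS is essentially short-time power spectra averaged over the whole track. A numpy-only sketch (my own simplification; the study's exact window length, overlap and normalisation may differ):

```python
import numpy as np

def ltas_db(x, fs, nfft=4096):
    """Long-term average spectrum: power spectra of successive
    Hann-windowed segments, averaged over the whole track, in dB
    (arbitrary reference)."""
    win = np.hanning(nfft)
    n_seg = len(x) // nfft
    segs = x[:n_seg * nfft].reshape(n_seg, nfft) * win
    power = np.abs(np.fft.rfft(segs, axis=1)) ** 2
    freqs = np.fft.rfftfreq(nfft, 1 / fs)
    return freqs, 10 * np.log10(power.mean(axis=0) + 1e-20)

# White noise has a statistically flat LTAS, so the low band and
# high band should average out to roughly the same level.
fs = 44100
noise = np.random.default_rng(0).standard_normal(fs * 10)  # 10 s of noise
freqs, spec = ltas_db(noise, fs)
low = spec[(freqs > 20) & (freqs < 150)].mean()
high = spec[freqs > 5000].mean()
print(round(low - high, 1))  # ~0 dB for white noise
```

Running this over the bands the study compares (below 150 Hz vs. 5 kHz and up) on your own tracks gives exactly the kind of low-end/high-end balance comparison discussed above.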


Hopefully more studies and tools will help with both making and listening to music, no matter the style or genre, at the same average volume and without feeling a lack (or excess) in some part of the frequency range.
 
The problem that I immediately see with this study is that it is examining recorded music tracks that have already been through mixing/mastering, downstream of mastering EQ. I've found this is a problem. I believe it's better to look at the actual recorded spectra of the ensemble or instruments/voices before mastering EQ is applied. YMMV.

Chris
 
And my last observation: is the lack of bass a result, or a secondary effect if you will, of pushing the loudness limits to such high levels? I noticed that the most compressed and loudest masters were often the ones lacking bass energy, so I am guessing that, because the transients of the mix were being overcompressed, the percussive elements of the song are the ones suffering the greatest drop in level, which translates into weaker kick bass too.

This certainly is part of the problem, but in a different way than you imagine. The lows are not suppressed by compression. Instead, the lows are deliberately held back in the mix to allow the total mix to reach a higher loudness level (see the Fletcher-Munson equal-loudness curves), and to ensure the mix also works on speakers with poor low-end performance. Melodic music doesn't rely on low-end frequencies, so for such genres (the majority of music until 30 years ago) that tradeoff works for most people.

When you compare a rock music recording with the sound of a live rock concert (using a capable PA system), you'll notice that the lows are much stronger at the concert. It's not uncommon to feel the pressure of the kick drum and bass guitar through your whole body. This confirms the deliberate tradeoff made in the recording process.
 
In my main post I talked about how I had to change my EQ constantly in order to enjoy different styles and eras of music when listening to them at the same average volume. Here the study acknowledges the frequency differences and how an automatic EQ could fix the problem of constantly and manually changing EQs.

Two issues I see. First, the results of the study confirm that rock music needs less bass, so it's not going to solve the OP's complaint about how the lows sound in Rock/Metal music. What it maybe can do is identify outliers within a specific genre, which brings us to the second issue.

The study mainly suggests using its spectral database as a tool during mixing and mastering, as it still requires an engineer or producer to assess potential issues: ”For example, when the spectrum of a track contains outliers in relation to the percentile matrix, it would be wise to focus attention to the deviating frequencies. In this case, it is important to analyze the reason for the deviations. Can the spectral outliers be motivated by the musical arrangement, e.g. are the outliers unavoidable to be able to maintain a desirable spectrum for the instruments in the mix? If not, the outliers may be alleviated by adjusting the frequency spectra of the most relevant instruments in the mix, or by adjusting the equalization curve of the master track directly. In the latter case, the spectrum should just be adjusted partly towards the target, and only so for the frequencies that deviates significantly. If the outliers however are unavoidable given the arrangement, attention may instead be directed to the instrumentation; is it possible to add or remove any instruments in the arrangement?”.
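The "adjust the spectrum only partly towards the target, and only for the significantly deviating frequencies" advice in that quote is easy to sketch. A hypothetical per-band version (alpha and threshold_db are my own illustrative parameters, not values from the study):

```python
import numpy as np

def partial_eq_correction(measured_db, target_db, alpha=0.5, threshold_db=3.0):
    """Per-band EQ gains that pull the measured spectrum only part of
    the way (alpha) towards the target, and only in bands deviating
    by more than threshold_db."""
    deviation = np.asarray(target_db) - np.asarray(measured_db)
    return np.where(np.abs(deviation) > threshold_db, alpha * deviation, 0.0)

# Hypothetical band levels in dB: only the third band deviates enough
# (8 dB), so it alone gets a partial correction of 0.5 * 8 = 4 dB.
measured = [-10.0, -11.0, -20.0, -11.0]
target = [-10.0, -12.0, -12.0, -11.0]
print(partial_eq_correction(measured, target))  # [0. 0. 4. 0.]
```

Leaving the small deviations alone and correcting only a fraction of the large ones is what keeps this from flattening every track toward the same curve.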
 
I think both the low notes of the instruments and engineers accustomed to vinyl have an influence. High bass levels on vinyl shorten a side's playing time considerably, due to the dynamic pitch applied when cutting.
That's true, but we have a lot of examples of deep lows even from the '50s (classical).
Probably trade-offs depending on skill.
 
That's true, but we have a lot of examples of deep lows even from the '50s (classical).
Probably trade-offs depending on skill.
I was thinking about rock and electronic music, where bass levels can be very high. Old recordings made to be issued on LP may well have had less high-level bass simply because they were mixed to be cut on a record lathe; today this is rarely a consideration.
Certainly low frequencies exist on lots of classical LPs but typically not at high levels (and sometimes at frequencies a seismic sensor is physically unable to reproduce accurately).
 
...Melodic music doesn't rely on low end frequencies, so for such genres (the majority of music until 30 years ago) that tradeoff works for most people...
I guess you'll have to tune up your definition of "melodic music" to make that statement true.

Go listen to any J.S. Bach prelude & fugue (BWV 531-582) to test your theory. I think you'll have to amend it or sharpen up your definition.

[There are many other music examples.]

Chris
 
When you compare a rock music recording with a the sound of a live rock concert (using a capable PA system) you’ll notice that the lows are much stronger at the concert. It’s not uncommon to feel the pressure of the kick drum and bass guitar throughout your all body. This confirms the deliberate tradeoff made in the recording process.
The "deliberate tradeoff" is an arbitrary one nowadays, as most reasonable quality multichannel setups have had the capability to more or less convincingly reproduce accurate facsimiles of "the real thing" since 20-25 years ago. [I think that Toole, et al. have previewed that discussion in the run-up to their 4th Ed. book.]

The point here is that the choices made by hi-fi loudspeaker buyers have necessarily changed over the decades, rendering obsolete many of the choices that used to be made tacitly, without discussion. This includes choices about music compression and clipping, which no longer make any sense for real hi-fi.

Chris
 
I guess you'll have to tune up your definition of "melodic music" to make that statement true. Go listen to any J.S. Bach prelude & fugue (BWV 531-582) to test your theory. I think you'll have to amend it or sharpen up your definition.

I’m talking about the music genres mentioned by the OP; ”Electronic and Rock/Metal music”. And I don't want to derail this thread with a discussion about the definition of melodic. If you want to make a point about that, you can dismiss the whole study the OP referenced.
 
I choose to let that comment pass, but since you bring it up again; there is no ”actual recorded spectra of the ensemble” in the music genres mentioned by the OP, and for these genres the spectra of ”recorded instruments/voices” are processed (eq-ed and often compressed) while mixing.
I suppose I should say "then maybe you should take it a little more seriously".

I'm actually pretty careful about which subjects I weigh-in on in these forums. For instance, this one that I've already posted a link to, above:

https://community.klipsch.com/topic...taves-audacity-remastering-to-restore-tracks/

This subject is one that I've worked on pretty extensively for >10 years, spending (probably) a lot more time on it than the casual commenters in this thread have. There are sources for "actual recorded spectra of the ensemble": https://tech.ebu.ch/publications/sqamcd

...among others.

Chris
 
as most reasonable quality multichannel setups have had the capability to more or less convincingly reproduce accurate facsimiles of "the real thing" since 20-25 years ago.
In my multichannel system it is impossible to place the speakers precisely as advised, in both angle and distance, so in my case, no.
Special effects on video sort of work, but probably not in the way the recording engineer expected, and I imagine all spatial information will simply be wrong here.
 