
Why is there a difference in frequency shape (bass & lows) between music genres?

I have my studio monitors with a subwoofer calibrated for a flat frequency response, plus a little bit of personal taste.
I have been listening to different genres of music, jumping between electronic and rock/metal. While listening to electronic music I haven't felt the need to change anything on my equalizer; the whole spectrum sounds balanced (lows, mids, highs).
However, when I listen to rock/metal music, the lows sound weak. I know an acoustic kick drum is not the same as an electronic kick (same for the bass lines), but they still sound weak to me (at both low and high volumes). So I have to adjust the EQ to really enjoy the music; in these cases I often have to boost the lows by +3 dB to bring them into balance with the rest of the mix.

To confirm what my ears were perceiving, I have been monitoring the frequency spectrum of every song with a spectrum visualizer (4.5 dB/octave slope), and indeed there is a difference in the overall EQ shapes. Below I drew the shapes I see for each genre:

Electronic Music:
[attached screenshot: drawn spectrum shape]


Rock/Metal Music:
[attached screenshot: drawn spectrum shape]


And this got me thinking: why do music genres follow different EQ shapes? Wouldn't it be more logical to produce and listen to music in such a way that we aren't constantly changing parameters on our audio systems? Aren't the people who produce/mix/master music supposed to make it on a calibrated, flat-frequency-response system so that the bass, mids, and highs are perceived the same by everybody?

Which creates another problem for music makers:
If I am making music of a certain genre, should I mix on a flat calibrated system, or should I copy the EQ shape that all the songs of that genre use? And if I copy the shape of other songs in the genre, wouldn't that make the track sound weak to people who listen to other genres?

And my last observation: is the lack of bass a result, or a side effect if you will, of pushing loudness to such high levels? I noticed that the most compressed and loudest masters were often the ones lacking bass energy, so I am guessing that, because the transients of the mix are over-compressed, the percussive elements of the song are the main ones suffering reduced level, which translates into weaker kicks too.

I would like to know what you think about all this.
 
So I have to adjust the EQ to really enjoy the music; in these cases I often have to boost the lows by +3 dB to bring them into balance with the rest of the mix.

That does not sound surprising to me: rock and metal recordings often employ a lot of dynamic compression to make the mix louder and denser, both on the full mix and on the kick-drum/bass-guitar signals. Meanwhile, distorted electric guitars are pretty heavy in the lower mids (open-E-string riffs and power chords), and drums as well as electric bass guitars are not as heavy in the lowest bands. So quite likely the upper bass is either under-represented or gets subjectively masked.

EDM, on the other hand, tends toward more low-bass-heavy mixes, with not as much going on in the lower-mid region.

So I have to adjust the EQ to really enjoy the music; in these cases I often have to boost the lows by +3 dB to bring them into balance with the rest of the mix.

If you like both genres, I wonder why you have to adjust the EQ every time. While there are differences in the average spectral balance between genres, everything should sound the way it was intended on a linear system, with no need to equalize.

If you have this urge constantly, it might indicate a problem with your system, such as dominant lower mids in the room reverberation or a narrow cancellation band in the upper bass.

The easiest way to verify this is to check whether you have the same urge when listening on tonally balanced headphones.
 
Also not surprised. Despite what people may think, most rock/metal has little content below the open E bass-guitar string at 41 Hz, and that note will have significant harmonic content even before playing style and distortion increase the harmonics further. Then there's the typical EQ applied to bass guitars (like this example) to skew it further; note the notch to make the kick drum more distinct, and the higher-frequency boosts. That notch shows the kick drum sits higher in frequency than many would think. The style also tends to put the focus on vocals and/or lead guitar, so they tend to sit higher in the mix than the bass. In contrast, electronic music doesn't have these limitations of instruments and style.
 
There are big differences between styles because different music styles simply sound very different. Most people here are used to classic Western pop and rock, but many styles have a very different sound, with more or less bass and treble. Those styles are not as well known in the Western world and are often ignored by the mainstream industry.

I come from a subculture where low bass is key (reggae, dub, and their descendants), where a speaker that is perfect for pop or rock falls short. That's why we build a lot of custom systems for our music, both for home and for events. The big "dub sound" stacks are tuned much lower than the classic PA, and on a classic PA the music lacks something. It was more of an issue a few decades ago than it is now, but many club and PA systems still don't deliver the bass that music requires (flat to 30 Hz at high volume). The rave culture (at least here in Europe) requires similar systems, as it took the heavy bass from dub, and that has trickled down to more mainstream dance over the last decades. That is why specialised builders (like Danley Labs or F1) now make huge subwoofers tuned below 30 Hz. The old 40 Hz standard is not enough for many kinds of modern music.

But that is only one example; other styles have other needs and a specific sound, and sometimes you need to tune your setup differently, or get a different type of speaker, to play it like it should be played, certainly at high volume. And no, those systems are not the most neutral sounding, but neutral does not always do the trick; for dub and reggae in particular, the coloured (bass-heavy) sound is essential.
 
If you like both genres, I wonder why you have to adjust the EQ every time. While there are differences in the average spectral balance between genres, everything should sound the way it was intended on a linear system, with no need to equalize.

If you have this urge constantly, it might indicate a problem with your system, such as dominant lower mids in the room reverberation or a narrow cancellation band in the upper bass.

The easiest way to verify this is to check whether you have the same urge when listening on tonally balanced headphones.

I think there may be a lack of energy in the upper-bass/lower-midrange region in @Auditory Cortex's system, and raising the deeper bass may compensate for that missing energy higher up in the range. Electronic music usually has a higher level in the lowest bass region, so the compensation is already in place and may hide the actual problem; but when he listens to rock/metal music, it becomes obvious that "something" is missing.

I never feel the need to change the bass level for different genres on my system. I expect electronic music, in general, to contain deeper low bass than rock music, but that has never led me to think something is lacking in rock productions in general.
 
To be fair, a lot of rock music is simply mixed with very little bass impact. When your ears get used to heavy bass tracks, most old rock records feel absolutely gutless, at least in my experience. You can fix it to some degree with EQ, but then when you go back to bass-heavy music it sounds too boomy and you have to dial it back. DSP presets are a really nice tool for this.
 
And this got me thinking: why do music genres follow different EQ shapes?
No two songs have identical spectral content, and of course different instruments, and different combinations of instruments, differ even more.

Wouldn't it be more logical to produce and listen to music in such a way that we aren't constantly changing parameters on our audio systems? Aren't the people who produce/mix/master music supposed to make it on a calibrated, flat-frequency-response system so that the bass, mids, and highs are perceived the same by everybody?
Recordings should be produced to sound like (or somewhat like) real, natural, live music. A string quartet or vocal group won't have as much bass as a rock band.

And my last observation: is the lack of bass a result, or a side effect if you will, of pushing loudness to such high levels? I noticed that the most compressed and loudest masters were often the ones lacking bass energy,
I don't know how this all "washes out" in the real, practical world... Mixing and mastering engineers adjust EQ and compression/limiting together. But our ears are most sensitive in the mid frequencies and much less sensitive to low frequencies. So if you want to boost the (relative) bass, you have to give up some loudness in the midrange, and it won't be as "loud". Of course, if you've got enough analog gain you can still go as loud as you like, but the mastering engineer can't win the loudness war that way!

Another factor that applies to every genre is that mixing/mastering engineers often monitor LOUD. The equal-loudness curves mean that when we don't listen equally loud, we perceive it as the bass having been turned down.

so I am guessing that, because the transients of the mix are over-compressed, the percussive elements of the song are the main ones suffering reduced level, which translates into weaker kicks too.
That's true... With "everything loud", the kick drum can't stand out. But of course that applies to a snare hit, a cymbal crash, a trumpet blast, or a scream from the singer too, not just the bass.
 
And this got me thinking: why do music genres follow different EQ shapes?
See the plots in this excerpt from the 1996 JAES article (enclosed) by Chapman (B&O). The legend for the enclosed plots follows:

1 Symphonic (large orchestra)
2 Chamber (small orchestra including quartets etc.)
3 Opera (including choir with and without accompaniment)
4 Pop (including soft rock and electronic pop)
5 Heavy (hard rock, heavy metal and thrash)
6 Hip-Hop (including techno and dance)
7 Jazz
8 Blues
9 Folk (including easy listening and country and western)
10 Speech

2. Conclusion

The primary aim of the project was to calculate an average power spectrum for programme material available on CD and to compare the results to the existing IEC Simulated Programme Signal. The results show that the newly determined spectrum is similar to the IEC spectrum (figures 13 and 14). This result is virtually independent of the time weighting of the different groups of programme material; therefore an extensive study into more accurate weighting is not required.

For a long-term average, in terms of frequency, the existing IEC signal stresses the loudspeaker under test more than would occur with programme material, particularly at very low frequencies (below 40 Hz).

For the purpose of the shorter-duration tests (Short and Long Term Maximum Power), these should represent a worst-case scenario rather than the long-term average, i.e. use a signal that represents the worst-case programme material. The worst-case groups are the heavy and hip-hop groups. They contain large amounts of low- and high-frequency energy, which stresses loudspeakers most, and experience has shown that it is this type of material that most frequently causes breakdowns. A more suitable signal for the shorter tests is one that contains more low- and high-frequency energy. Based on the results of this analysis, a new power spectrum has been suggested for use in the Short and Long Term Maximum Power tests. This spectrum is tabulated in section 16 and is compared to the existing IEC spectrum in figure 15. It contains significantly less power below 40 Hz but an increase of 3 dB in the 50 and 63 Hz bands (power doubling). Between 1 kHz and 6.3 kHz there is 1-2 dB less power, and in the 10 kHz and 12.5 kHz bands there is 2 dB more power.

The peak-to-r.m.s. ratios for the programme material groups are much higher than the ratio used in the existing power tests. Even the more compressed hip-hop material has a ratio of 8.2 dB, which compared to the 3 dB currently used is an increase of 3.3 times. Hence, using the 3 dB peak-to-r.m.s. ratio does not represent programme material.
From the same article (plotted here):

[plot: average dynamic range by genre]


Chris
 

Wouldn't it be more logical to produce and listen to music in such a way that we aren't constantly changing parameters on our audio systems? Aren't the people who produce/mix/master music supposed to make it on a calibrated, flat-frequency-response system so that the bass, mids, and highs are perceived the same by everybody?
You're seeing one direct result of the "circle of confusion" (Toole). You're also seeing the genres that are disproportionately affected by "the loudness war".

Which creates another problem for music makers:
If I am making music of a certain genre, should I mix on a flat calibrated system, or should I copy the EQ shape that all the songs of that genre use? And if I copy the shape of other songs in the genre, wouldn't that make the track sound weak to people who listen to other genres?
To some degree, the spectra are a result of the instrumentation of each genre, but they owe much more to the styles and practices of the mastering engineers.

And my last observation: is the lack of bass a result, or a side effect if you will, of pushing loudness to such high levels? I noticed that the most compressed and loudest masters were often the ones lacking bass energy, so I am guessing that, because the transients of the mix are over-compressed, the percussive elements of the song are the main ones suffering reduced level, which translates into weaker kicks too.
See "Loudness War" and the Dynamic Range (DR) Database - some observations and The Missing Octave(s) - Audacity Remastering to Restore Tracks

Chris
 
OK, I am glad to see all the answers; they help me understand more of the reasons why this is happening (or at least why I perceive it). It could be due to how things are done by other people (speaker manufacturers, artists, bands, audio engineers, etc.), or to the way I listen in my own environment, or a combination of both.
You're seeing one direct result of the "circle of confusion" (Toole). You're also seeing the genres that are disproportionately affected by "the loudness war".


To some degree, the spectra are a result of the instrumentation of each genre, but they owe much more to the styles and practices of the mastering engineers.


See "Loudness War" and the Dynamic Range (DR) Database - some observations and The Missing Octave(s) - Audacity Remastering to Restore Tracks

Chris
Interesting, because I also happen to listen to different eras of music within these genres, and indeed, depending on when the music was produced, there are even more differences in EQ, adding more confusion for me. It's true that when I listen to older music I also tend to boost the lows, but as I pointed out in my first post, the same happens to me with modern rock/metal.
 
Interesting, because I also happen to listen to different eras of music within these genres, and indeed, depending on when the music was produced, there are even more differences in EQ, adding more confusion for me.
Somewhere in one of the two threads I posted above there is a discussion of the breakpoint year of 1991, when multiband compressors (digital apps and plug-ins) first became available. Before that year, the only way to make a mix sound louder was to attenuate the bass below ~100 Hz by varying degrees. This is why, with older recordings, you see the tendency of listeners to turn the "bass" tone control up (or boost the digital EQ with low-shelf filters) to compensate.

After 1991, you see more and more compression being applied to the low frequencies, the part of the spectrum with the highest amplitudes (the 1/f curve you see with pink noise). Additionally, you unfortunately see the willingness of the mastering engineers to simply clip the output (they euphemistically call it "limiting"). So they no longer needed to attenuate the bass as much to make the mix even louder, and as time goes on after 1991, bass-deficient tracks become rarer to hear. But what you've got instead is no dynamics and plenty of clipping. It sounds both strident (the clipping) and like mush (the compression).

The point that is easy to miss is that no customers are asking them to continually make the mixes louder; that's something wrapped up with the corporate cultures of the record companies alone. Many people are trying to find the highest-dynamic-range versions available, and that has led to the resurgence of vinyl (the most limited of all the recording formats), since the compression and clipping used on digital media can't be cut into the grooves of a vinyl record, or the needle will jump out of the groove. They can't compress or clip the vinyl versions any further because of the format's own limitations.

Chris
 
The point that is easy to miss is that no customers are asking them to continually make the mixes louder; that's something wrapped up with the corporate cultures of the record companies alone. Many people are trying to find the highest-dynamic-range versions available, and that has led to the resurgence of vinyl (the most limited of all the recording formats), since the compression and clipping used on digital media can't be cut into the grooves of a vinyl record, or the needle will jump out of the groove. They can't compress or clip the vinyl versions any further because of the format's own limitations.
There's someone involved in the music-production industry on ASR who has in fact tried to explain (read: gaslight) to us that we do, in fact, want compression. Basically, the mixing/mastering "engineers" think they know better than everyone else. There's a lot of that rather arrogant attitude among them, at least from what I've seen of their contributions on ASR.
 
It all became very clear to me what the "audio engineering" profession thinks its "prime directive" is (really the mixing and mastering operations, but also the recording side, to varying degrees). I recently began reading a class textbook on audio engineering, and lo and behold, in the first paragraph of chapter 1, compression was identified as the "start of modern audio engineering".

The light bulb went on at that point. The mantra is there at the very beginning: reduce the dynamic range. This is a cultural problem, a holdover from the days of analog recording, mixing, and mastering, with at most ~70 dB of usable dynamic range (without saturating the output stages of the recording medium).

Never mind that the CD format and the digital recorders of that era (16-bit) suddenly introduced almost 100 dB of dynamic range (roughly 6 dB per bit), and that higher-bit-depth recordings exceed 120 dB, well past the ~100 dB of usable dynamic range of the human hearing system (without damaging it almost immediately). This is not to mention that the playback environment in almost any room is limited to about 60-70 dB of truly usable dynamic range before the signal drops below the room's noise floor at most frequencies.

Similarly, preamp gain is no longer a problem, so virtually any low-level digital music/audio file can be boosted to a useful listening level without issues of noise floor or gain limits arising.

In short, there is room to master hi-fi audio files much quieter than the -20 to -8 dBFS average levels seen today (with peaks at 0 dBFS, those averages leave only 20 to 8 dB of crest factor). I've found, subjectively, that when the dynamic range (crest factor) of a recording is 17 dB or higher, the realism of the music just comes to life.

YMMV.

Chris
 
It all became very clear to me what the "audio engineering" profession thinks its "prime directive" is (really the mixing and mastering operations, but also the recording side, to varying degrees). I recently began reading a class textbook on audio engineering, and lo and behold, in the first paragraph of chapter 1, compression was identified as the "start of modern audio engineering".
Do you happen to have a link to that particular textbook? I'd be interested in giving it a look if possible.
 
Sorry--I apparently lost the link. If I come across it again, I'll let you know.

Chris
 
I did a small comparison of actual true output (mic at the MLP, with all the sins of room and gear present) between some tracks of the same genre, only with some decades of difference.

So, Evanescence (green) vs DIO (red), 2021 vs 1983:

[spectrum comparison plot]


The difference down low is staggering.
 
Same track, right? (You wrote "some".) It is quite common for new remasters to have emphasized deep bass compared to the original releases.
 
Same track, right? (You wrote "some".) It is quite common for new remasters to have emphasized deep bass compared to the original releases.
No, different artists, different tracks.
The point here is to underline the trend, the overall sound.

Newer tracks have tons more low-end output, even within the same genre.
The charts are made from 3 tracks by each artist, so as to get some sort of average slope.
 
I recently began reading a class textbook on audio engineering, and lo and behold, in the first paragraph of chapter 1, compression was identified as the "start of modern audio engineering". The light bulb went on at that point. The mantra is there at the very beginning: reduce the dynamic range.

Google AI got further than chapter one:

“Compression is often attributed to the introduction of the first studio-specific compression devices in the 1960s. Prior to this, compression was primarily used in telephone and broadcast systems. The development of dedicated studio compressors like the Universal Audio 175B and 176 by Bill Putnam, Sr. marked a significant shift toward the use of compression as a creative tool for shaping and controlling audio within recordings.”

“Compression remains a fundamental tool in modern audio engineering, used for a wide range of purposes, including:
- Controlling Dynamics: Reducing the difference between the loudest and quietest parts of an audio signal to create a more consistent and balanced sound.
- Enhancing Clarity: Making vocals, instruments, and other elements more intelligible and present in a mix.
- Adding Punch and Impact: Creating a more impactful and exciting sound by strategically using compression on drums, bass, and other instruments.
- Achieving Specific Effects: Using compression creatively to achieve unique sonic textures and sounds”.
Note: this includes fine-tuning timbre as an alternative to EQ.

And like everything, it can be misused.
 