Hi friends, I've been doing some reading about critical bands, which, as I understand it, describe the bandwidth around a particular sound within which a second sound will mask the perception of the first. This has me wondering about models of loudness that take critical bands into account. To me, the existence of critical bands seems to imply that the loudest sound within a critical band will essentially dominate the perceived loudness in that band, and that a second sound within the same band won't increase overall perceived loudness very much, despite adding more energy to the signal. With a little experimentation, I can kind of prove this to myself: a pure 1 kHz sine tone peaking at 0 dB sounds roughly as loud to me as a surprisingly wide band of noise around 1 kHz whose spectrum also peaks at 0 dB, even though the noise carries far more total energy.
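Here's a minimal sketch (Python/NumPy) of the energy side of that comparison, just to put numbers on it. The 400 Hz band width, the 48 kHz sample rate, and the scaling convention are all just assumptions for illustration; the point is that matching per-bin spectral peaks leaves the noise with many times the sine's total power:

```python
import numpy as np

fs = 48_000          # sample rate (Hz) -- arbitrary choice
dur = 1.0            # seconds
n = int(fs * dur)
t = np.arange(n) / fs

# Pure 1 kHz sine, peaking at 0 dBFS
sine = np.sin(2 * np.pi * 1000 * t)

# Band-limited noise around 1 kHz (a hypothetical 400 Hz-wide band),
# built in the frequency domain: unit magnitude, random phase per bin
freqs = np.fft.rfftfreq(n, 1 / fs)
band = (freqs >= 800) & (freqs <= 1200)
rng = np.random.default_rng(0)
spec = np.zeros(n // 2 + 1, dtype=complex)
spec[band] = np.exp(1j * rng.uniform(0, 2 * np.pi, band.sum()))
noise = np.fft.irfft(spec, n)

# Scale the noise so each of its spectral bins matches the sine's peak bin
sine_peak = np.abs(np.fft.rfft(sine)).max()
noise *= sine_peak / np.abs(np.fft.rfft(noise)).max()

# Total power (mean square) of each signal
print(f"sine power:  {np.mean(sine**2):.2f}")   # ~0.5
print(f"noise power: {np.mean(noise**2):.2f}")  # hundreds of times larger
```

The noise ends up with roughly one sine's worth of power per bin in the band, so its total power scales with the number of bins, yet (to my ears) it doesn't sound hundreds of times louder.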
Another way I recently saw this come up is when comparing the spectra of a lead guitar with a rhythm guitar, shown below. I perceive the two tracks as about the same loudness across the spectrum, but the visuals paint a different picture. The lead is missing large scoops of low end around 200 Hz and below 100 Hz that are present in the rhythm, and yet the two sound the same to me in this range. It seems like the large fundamental in the lead guitar at around 120 Hz is largely compensating for the lack of other frequencies near it.
I'm curious whether there are any well-known concepts or research that model loudness in a way that accounts for this? The equal-loudness contours don't seem to capture it. I'm imagining a study where, instead of asking listeners how they perceive sound A specifically when sound B is moved into A's critical band, the question is "how much louder do you find A+B than A alone (if at all) when A and B fall within each other's critical bands?" Ultimately, it would be really cool to generate a "perceptual" spectrogram of a sound, where the two guitars above end up with much more similar spectra in the low end.
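For what it's worth, here's a very crude sketch of that "perceptual" spectrogram idea in Python: group STFT bins into Bark bands (using Traunmüller's approximation of the Bark scale) and sum the power within each band, so a single strong partial and a noise band carrying the same within-band power come out looking similar. This is only a toy I put together, not any established method; a real loudness model such as Zwicker's (ISO 532-1) or Moore-Glasberg (ISO 532-2) also includes outer/middle-ear filtering, excitation spreading between bands, and compression into sones:

```python
import numpy as np
from scipy.signal import stft

def bark(f):
    """Traunmüller's approximation of the Bark scale."""
    return 26.81 * f / (1960 + f) - 0.53

def critical_band_spectrogram(x, fs, n_fft=4096):
    """Crude 'perceptual' spectrogram: total STFT power per Bark band.

    A lone partial and a noise band with the same within-band power
    map to similar values, which is the effect described above. No
    spreading or loudness compression is modeled here.
    """
    f, t, Z = stft(x, fs, nperseg=n_fft)
    power = np.abs(Z) ** 2
    # Bark band index for each FFT bin (clip so DC lands in band 0)
    bands = np.clip(np.floor(bark(f)).astype(int), 0, None)
    n_bands = bands.max() + 1
    out = np.zeros((n_bands, len(t)))
    for b in range(n_bands):
        out[b] = power[bands == b].sum(axis=0)
    return out, t
```

Running both guitar tracks through this and plotting `out` in dB should, if the intuition is right, make the low end of the two look much more alike than their raw spectrograms do.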