
dBA vs. dBZ

Anton S

As many know but some don’t, OSHA safety guidelines for noise levels are based on dBA to account for the change in human hearing sensitivity at different frequencies. I believe dBA was a poor choice, because it is unrepresentative of the actual sound intensity in the room. (The intensity at 40 Hz must be roughly 35 dB higher than at 1 kHz in order to register at the same level with dBA weighting, and about 50 dB higher at 20 Hz.) Consequently, I consider dBZ (no frequency weighting) to be a much more accurate representation of the overall sound intensity reaching one’s tympanic membrane and, therefore, the level of stress (or trauma) to the auditory system, regardless of frequency and how “loud” it is perceived to be.
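For anyone who wants to check those offsets, here is a minimal sketch of the standard IEC 61672 A-weighting response (Python; the frequencies printed are just examples):

```python
import math

def a_weighting_db(f_hz: float) -> float:
    """IEC 61672 A-weighting relative response in dB at frequency f_hz."""
    f2 = f_hz * f_hz
    ra = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20.0 * math.log10(ra) + 2.00  # +2.00 dB normalizes the curve to 0 dB at 1 kHz

for freq in (20, 40, 100, 1000, 3000):
    print(f"{freq:5d} Hz: {a_weighting_db(freq):6.1f} dB")
# 20 Hz is attenuated by ~50 dB and 40 Hz by ~35 dB relative to 1 kHz.
```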

I have always used dBZ when measuring the sound intensity in my own space, but the other day while listening at what felt like a pretty good clip, I decided to compare the two different weightings with the same selection. As expected, the average sound level according to dBA was about 15 dB lower than without frequency-based weighting (dBZ). What was unexpected was that the reported dynamic range was also significantly reduced with dBA weighting. Compare the Lmax – Lmin differences in the two readouts below. Same selection, same actual volume, same settings on all of my equipment, yet the reported dynamic range is 10.7 dB less with A-weighting.

[Image: dBA loudness readout]


[Image: dBZ loudness readout]
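One plausible mechanism (an assumption on my part, not something these readouts prove) is that the loudest passages are bass-heavy while the quietest passages are mid-dominated, so A-weighting pulls Lmax down further than Lmin. A toy power-summation sketch with made-up two-band levels:

```python
import math

def sum_db(levels_db):
    """Power-sum a collection of band levels given in dB SPL."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels_db))

A_WEIGHT = {40: -34.6, 1000: 0.0}      # IEC 61672 A-weighting offsets, dB

loud  = {40: 100.0, 1000: 85.0}        # hypothetical bass-heavy loud passage
quiet = {40: 60.0, 1000: 65.0}         # hypothetical mid-dominated quiet passage

for name, bands in (("loud", loud), ("quiet", quiet)):
    z = sum_db(bands.values())
    a = sum_db(level + A_WEIGHT[f] for f, level in bands.items())
    print(f"{name:>5}: {z:5.1f} dBZ  {a:5.1f} dBA")
# Lmax - Lmin comes out ~34 dB unweighted but only ~20 dB A-weighted with these numbers.
```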



This brings up an interesting question. Presumably, when people cite the dynamic range of certain music releases, they are basing their numbers on unweighted measurements, which I consider to be correct, but I am unsure. They generally don't specify, and some may not even realize that there is a significant difference. Are they using frequency weighting for those numbers? Perhaps some are and some aren't.
 
...to be a much more accurate representation of the overall sound intensity reaching one’s tympanic membrane and, therefore, the level of stress (or trauma) to the auditory system, regardless of frequency and how “loud” it is perceived to be.
I'm not 100% sure but I believe mid-frequencies are the most dangerous.

But as far as I know, A-weighting is mostly used because it better approximates (and it only approximates) perceived loudness.

This brings up an interesting question. Presumably, when people cite the dynamic range of certain music releases, they are basing their numbers on unweighted measurements, which I consider to be correct, but I am unsure.
There are many ways to measure "dynamic range". LUFS Loudness Range DOES take frequency into account. There is a simpler method that measures the Crest Factor without regard to the frequency content.
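For illustration, a minimal crest-factor sketch (frequency-blind, and not any particular DR meter's exact algorithm):

```python
import numpy as np

def crest_factor_db(x: np.ndarray) -> float:
    """Crest factor: peak sample level relative to RMS level, in dB."""
    peak = np.max(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    return 20 * np.log10(peak / rms)

# A pure sine has a crest factor of ~3 dB; heavily limited masters sit only a
# few dB above that, while dynamic material can exceed 15-20 dB.
t = np.linspace(0, 1, 48000, endpoint=False)
print(crest_factor_db(np.sin(2 * np.pi * 440 * t)))   # ~3.0 dB
```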

I prefer the term "dynamic contrast" when talking about program material, and there is no one simple number to describe it. A recording with lots of loud drum hits is different from a recording that starts out quiet and ends loud, etc.

And I like "dynamic range" to describe a storage format, equipment, or transmission channel. A-weighted noise and dynamic range measurements are more meaningful there... and manufacturers like them because they show a "better number".
 
I agree - dBZ makes more sense unless you are measuring a machine shop (occupational noise hazard).
 
130dB SPL at 30Hz will not damage your hearing the way 130dB SPL at 3kHz will.
 
OSHA’s mission is workplace safety. Outside of perhaps a car stereo shop, I’m sure it’s unusual to find a job site generating enough energy at low frequencies to damage workers’ hearing. Even if such a place exists, it's the employer's responsibility to provide workers with hearing protection and regular hearing tests.

Regards,
Wayne A. Pflughaupt
 
130dB SPL at 30Hz will not damage your hearing the way 130dB SPL at 3kHz will.
This is what conventional wisdom says, but it doesn't make sense to me. Doesn't 130 dBZ, regardless of frequency, produce the same impact on the tympanic membrane, the same level of turbulence in the endolymph, and the same level of stress on the cochlear cilia?
 
In a post on AVSForum, Dr Toole attached a brief summary he wrote on the risk of hearing damage and loudness.

[Attachment: Dr. Toole's summary on loudness and the risk of hearing damage]

Below is a figure from his summary, showing that the highest risk is at around 3 ± 2 kHz and that low frequencies are not a problem.

[Image: hearing damage risk vs. frequency, from Toole's summary]
 
This is what conventional wisdom says, but it doesn't make sense to me. Doesn't 130 dBZ, regardless of frequency, produce the same impact on the tympanic membrane, the same level of turbulence in the endolymph, and the same level of stress on the cochlear cilia?
If you ever do a manual sine wave sweep, or a band-pass filtered sweep of pink noise, you will instantly know how nasty the 3 kHz region can be to our hearing. It has to do with the ear canal gain and the mechanics of the eardrum and the little bones that transfer the vibrations to the cochlea. They work most efficiently in that frequency band, so it takes little energy to excite the system there.
 
Amir's reference bass track (Fading Sun), actual playback, exact same level:

[Image: loudness readout, A-weighted]
(A) weighting

[Image: loudness readout, C-weighted]
(C) weighting

(Z) is reported only for peaks.

The difference is significant.
 
I believe dBA was a poor choice, because it is unrepresentative of the actual sound intensity in the room.
I think you're trying to ignore the fact that the human hearing system doesn't respond to all frequencies equally, the way the flat dB(Z) scale implies. The problem with the "A" scale is that it is based on the threshold of human hearing, not on how the same system responds at critically high sound pressure levels: A-weighting

[Image: acoustic weighting curves (A, B, C, D, Z)]


A better scale is dB(C), but there are also several other scales defined much more recently, including the ITU-R 468 weighting for noise in audio systems.

I personally use the "C" scale when measuring loudness levels in-room with my handheld SPL meter. It still reflects the equal-loudness behavior of the human hearing system, but at the much higher SPLs where the equal-loudness curves flatten out:

[Image: sound level meter frequency weightings (A, C, Z)]


[Image: ISO 226:2003 equal-loudness contours]
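For comparison with the A-weighting numbers earlier in the thread, here is a minimal sketch of the IEC 61672 C-weighting response, which stays much flatter at low frequencies:

```python
import math

def c_weighting_db(f_hz: float) -> float:
    """IEC 61672 C-weighting relative response in dB at frequency f_hz."""
    f2 = f_hz * f_hz
    rc = (12194.0 ** 2 * f2) / ((f2 + 20.6 ** 2) * (f2 + 12194.0 ** 2))
    return 20.0 * math.log10(rc) + 0.06  # +0.06 dB normalizes the curve to 0 dB at 1 kHz

# Roughly -6 dB at 20 Hz and -3 dB at 31.5 Hz, versus about -50 dB and -39 dB for A-weighting.
for freq in (20, 31.5, 100, 1000, 8000):
    print(f"{freq:6} Hz: {c_weighting_db(freq):5.1f} dB")
```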


Chris
 
This brings up an interesting question. Presumably, when people cite the dynamic range of certain music releases, they are basing their numbers on unweighted measurements, which I consider to be correct, but I am unsure. They generally don't specify, and some may not even realize that there is a significant difference. Are they using frequency weighting for those numbers? Perhaps some are and some aren't.
The crest factor scale for dynamic range (not to be confused with a loudness scale) does not take the equal-loudness contours into account. In fact, when you read the dynamic range database ratings for albums, what you are really reading is largely the dynamic range of the bass, not the perceived dynamic range. This becomes much more obvious when re-EQing music tracks that have been subjected to mastering EQ that attenuates bass below ~100 Hz (popular music) or ~450 Hz (classical). When you re-EQ, the dynamic range ratings barely change until you boost the bass frequencies enough to offset the mastering EQ and then re-normalize the entire track downward to avoid clipping the output.
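As a sketch of that re-EQ-then-renormalize step (the 100 Hz corner, +6 dB boost, and white-noise stand-in are placeholder assumptions, and the RBJ cookbook low shelf is just one way to do it):

```python
import numpy as np
from scipy.signal import lfilter

def low_shelf_biquad(fs, f0, gain_db, slope=1.0):
    """RBJ Audio EQ Cookbook low-shelf biquad; returns normalized (b, a)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / 2 * np.sqrt((A + 1 / A) * (1 / slope - 1) + 2)
    cosw, sqA = np.cos(w0), np.sqrt(A)
    b = np.array([A * ((A + 1) - (A - 1) * cosw + 2 * sqA * alpha),
                  2 * A * ((A - 1) - (A + 1) * cosw),
                  A * ((A + 1) - (A - 1) * cosw - 2 * sqA * alpha)])
    a = np.array([(A + 1) + (A - 1) * cosw + 2 * sqA * alpha,
                  -2 * ((A - 1) + (A + 1) * cosw),
                  (A + 1) + (A - 1) * cosw - 2 * sqA * alpha])
    return b / a[0], a / a[0]

fs = 48000
x = np.random.randn(fs)                      # stand-in for a decoded music track
b, a = low_shelf_biquad(fs, 100.0, +6.0)     # boost bass to offset a hypothetical mastering cut
y = lfilter(b, a, x)
y *= np.max(np.abs(x)) / np.max(np.abs(y))   # re-normalize the whole track downward to avoid clipping
```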

In earlier years (before 1991 and the advent of digital multiband compressors), the typical way that mastering guys tried to make each music track louder included attenuating bass below 100 Hz in order to create headroom for higher frequencies. Since then, the mastering guys just apply huge amounts of compression (i.e., non-linear amplitude scales) and clipping (euphemistically called "limiting") to achieve even higher perceived loudness levels--something I might point out that the listeners didn't really ask for.

Chris
 
Since then, the mastering guys just apply huge amounts of compression (i.e., non-linear amplitude scales) and clipping (euphemistically called "limiting") to achieve even higher perceived loudness levels--something I might point out that the listeners didn't really ask for.
There are actual proper limiters and separate clippers in the mastering engineer's arsenal. A common processing chain might include both along with full range and multiband compressors, plus various types of EQ and spatial processors.
 
I have always used dBZ when measuring the sound intensity in my own space, but the other day while listening at what felt like a pretty good clip, I decided to compare the two different weightings with the same selection. As expected, the average sound level according to dBA was about 15 dB lower than without frequency-based weighting (dBZ). What was unexpected was that the reported dynamic range was also significantly reduced with dBA weighting. Compare the Lmax – Lmin differences in the two readouts below. Same selection, same actual volume, same settings on all of my equipment, yet the reported dynamic range is 10.7 dB less with A-weighting.
As appears to have been noted previously, you are likely neglecting the physiological gain from the ear canal and the outer ear (pinna), at the very least. There can be other effects in the inner ear, but it is well known that noise in the 1-6 kHz region damages hearing faster than noise elsewhere. It probably looks something like this:

[Image: outer ear / ear canal gain vs. frequency]


This is also probably why the equal-loudness contours look the way they do. We seem to have a lot of system gain centered somewhere around 3-4 kHz. The sound wavefront's incidence angle also matters for a plot like this; this one appears to be computed or modeled for a 45-degree incidence angle.
 
In a post on AVSForum, Dr Toole attached a brief summary he wrote on the risk of hearing damage and loudness.

Below is a figure from his summary, showing that the highest risk is at around 3 ± 2 kHz and that low frequencies are not a problem.
Thanks for that link. I genuinely appreciated it, because I respect both the hard data and Dr. Toole's opinion. I can now abandon the notion that low frequency high intensity sound is a significant factor with respect to potential hearing damage.
 
This is what conventional wisdom says, but it doesn't make sense to me. Doesn't 130 dBZ, regardless of frequency, produce the same impact on the tympanic membrane, the same level of turbulence in the endolymph, and the same level of stress on the cochlear cilia?
I don't think anyone answered this question directly.

Each cochlear bundle is tuned to fire once the stimulating sound reaches a certain threshold. These tuning curves or characteristic frequencies have been measured in the past. Below is a well-known example of a few.

[Image: example auditory nerve fiber tuning curves]


Hearing damage occurs, it is thought, because with enough time the firing hair cells eventually saturate, leading to temporary threshold shift, where more stimulation is needed for a response. At some point, consistent stimulation leads to permanent threshold shift. Hearing damage itself is measured in dB of hearing loss (dB HL) relative to the normal threshold-of-hearing curve.

The other sign of hearing damage is the broadening of the tuning curve for that hair cell. See below where the damaged hair cell has a tuning curve that is less precisely oriented around a center frequency.
[Image: normal vs. damaged hair cell tuning curves]


In his book Toole has comments about both OSHA standards and hearing loss.

Hearing loss takes away the ability to discern as well as detect sounds. This is corroborated by the audiometric science and by the listening studies Toole has done.
 
I don't think anyone answered this question directly.

Hearing damage occurs, it is thought, because with enough time the firing hair cells eventually saturate, leading to temporary threshold shift, where more stimulation is needed for a response...

In his book Toole has comments about both OSHA standards and hearing loss.
Thanks for the additional details. My primary concern was the appropriateness of using a frequency-weighted loudness scale, rather than an unweighted scale, with respect to gauging one's risk of hearing damage. If I'm understanding THIS correctly, Dr. Toole's position in this regard is that A-weighting is a reasonable choice. Do you concur?

I have a 3rd Edition copy of Dr. Toole's book, and I found this passage from Section 17.1 to be rather disturbing:

I will begin by getting something off my chest.

Occupational hearing conservation programs are almost totally irrelevant to audio professionals and serious audiophiles.

It is a bold assertion, but it is totally supportable. Figure 4.3 contains the OSHA occupational noise exposure limits. They suggest that a sound level of 90 dBA is acceptable for an eight-hour work day. But we know that anything above about 75 dB can have progressively greater effects on hearing. What is going on here?

It is essential to know that in existing national and international standards, the only criterion being considered is the preservation of the ability to understand speech. The OSHA, NIOSH and similar occupational noise exposure criteria were created for manufacturing and other industrial workers, and the goal was not to prevent hearing loss, it was to preserve enough that at the end of a working life, conversational speech at 1 m distance was possible. Permanent damage of an important kind is inevitable. Hi-fi hearing, critical listening ability is not preserved.

Distilled to its essence, it is considered acceptable for hearing loss to accumulate up to 25 dB in both ears at the 1 kHz, 2 kHz and 3 kHz audiometric frequencies. In practical terms this translates to a loss of about 10% understanding of entire sentences, about 50% misunderstanding of monosyllabic “PB” words (words that are ambiguous because of similar sounding consonants) during conversation at normal voice levels, in the quiet, with persons one meter apart (Kryter, 1973). And this is considered to be an acceptable situation—“normal” hearing. Further losses from 25 to 40 dB are described as “slight.” Really? For whom?


Toole, Floyd. Sound Reproduction: The Acoustics and Psychoacoustics of Loudspeakers and Rooms, 3rd Edition (pp. 443-444). Taylor and Francis.

However, note that the vertical scale in Figure 4.3 (loudness) is specified as dB, not dBA. :oops:
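For reference, the OSHA permissible exposure times behind Figure 4.3 follow a simple rule with a 90 dBA criterion and a 5 dB exchange rate (NIOSH uses 85 dBA and a 3 dB exchange rate); a minimal sketch:

```python
def osha_permissible_hours(level_dba: float) -> float:
    """OSHA permissible daily exposure time: 8 h at 90 dBA, halving every +5 dB."""
    return 8.0 / 2 ** ((level_dba - 90.0) / 5.0)

for level in (85, 90, 95, 100, 105):
    print(f"{level} dBA: {osha_permissible_hours(level):.2f} h")
# 90 dBA -> 8 h, 95 dBA -> 4 h, 100 dBA -> 2 h, 105 dBA -> 1 h.
```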
 
If I'm understanding THIS correctly, Dr. Toole's position in this regard is that A-weighting is a reasonable choice. Do you concur?
Yes, it's reasonable, as many others have said.

The dBA scale was invented for intelligibility measurements for phone lines as an approximation of human hearing sensitivity and human speech production.

The more you look into audio, the more you'll see that many of the tools are quite dated and imprecise when it comes to perception. Hearing tests beyond the standard bandwidth-limited 500 Hz to 8 kHz screening at the doctor's office are rare, so forms of hearing damage other than permanent threshold shift are hard to detect.
 