Hi, these days I don't level-calibrate with noise, I look at curves instead, but I'm very curious where the 30-80 Hz vs 500-2000 Hz frequency ranges come from for the pink noise generator in REW and elsewhere.
For a theoretically perfect system measured linearly in-room, since 30-80 Hz is only about 1.4 octaves while 500-2000 Hz is 2 octaves, I would see a difference of about 1.5 dB on a theoretical un-weighted perfect SPL meter. If I used a C-weighted SPL meter, which rolls off the low bass, that difference would grow to roughly 3 dB. And if the sub signal passes through a low-pass filter at, say, 80 Hz, the error gets even larger, since the 500-2000 Hz range would be unaffected.
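To sanity-check those numbers, here's a quick back-of-the-envelope sketch in Python (my own, not anything from REW), assuming ideal pink noise with equal power per octave and the standard IEC 61672 C-weighting response:

```python
import math

def octaves(f_lo, f_hi):
    """Width of a band in octaves."""
    return math.log2(f_hi / f_lo)

def c_weight_amp(f):
    """IEC 61672 C-weighting amplitude response (linear, 0 dB at 1 kHz)."""
    num = (12194.0 ** 2) * f ** 2
    den = (f ** 2 + 20.6 ** 2) * (f ** 2 + 12194.0 ** 2)
    return 1.0072 * num / den  # 1.0072 normalizes the gain at 1 kHz to 1.0

def pink_band_power(f_lo, f_hi, weight=lambda f: 1.0, n=10000):
    """Weighted pink-noise power in a band. Pink noise has PSD ~ 1/f,
    so integrating over log-spaced steps d(ln f) handles the 1/f factor."""
    step = math.log(f_hi / f_lo) / n
    total = 0.0
    for i in range(n):
        f = f_lo * math.exp((i + 0.5) * step)
        total += (weight(f) ** 2) * step
    return total

lo, hi = (30.0, 80.0), (500.0, 2000.0)
flat = 10 * math.log10(pink_band_power(*hi) / pink_band_power(*lo))
cwtd = 10 * math.log10(pink_band_power(*hi, weight=c_weight_amp)
                       / pink_band_power(*lo, weight=c_weight_amp))
print(f"octaves: {octaves(*lo):.2f} vs {octaves(*hi):.2f}")  # 1.42 vs 2.00
print(f"unweighted difference: {flat:.2f} dB")               # ~1.5 dB
print(f"C-weighted difference: {cwtd:.2f} dB")               # ~2.9 dB
```

The C-weighted gap is bigger because C-weighting is already a couple of dB down across much of the 30-80 Hz band while leaving 500-2000 Hz essentially untouched.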
It seems to me that "subwoofer" test tones would make more sense spanning 30-120 Hz, played through the individual speaker channels, to get a true 2-octave measurement that can be fairly compared to a 500-2000 Hz measurement for that speaker. This would test more than just the subwoofer, since it covers the crossover region for a speaker set to small, and it would be closer to how program material is actually played?
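And a trivially small check of the proposed band, under the same ideal-pink-noise assumption:

```python
import math
# 30-120 Hz spans exactly two octaves, matching 500-2000 Hz,
# so ideal pink noise carries equal power in both bands.
print(math.log2(120 / 30))  # 2.0
```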