
Signal Strength Levels, Pro vs Home

Miguelón
Major Contributor · Forum Donor
Joined: Mar 12, 2024 · Location: Vigo (Galicia, Spain)
I recently read some articles about signal levels, but I was a little bit confused.

They state that pro signals are usually around +4 dBu, whereas home audio systems operate at -10 dBV (-8 dBu approx.).

What does that mean? My audio interface goes from 0 to 4 V (balanced), i.e. from -infinity to approx. +14 dBu.

My WiiM Ultra and other DACs give a signal between 0 and 2 volts, i.e. from -infinity to approx. +8 dBu.

Does that mean that home tracks are normalized to around -8 dBu and pro tracks should be aligned to +4 dBu?

Does it matter in practice when sending a signal to active monitors or an amp?

Another thing I don't understand is why the Genelec 8030C (a pro monitor) has a sensitivity range of 106-94 dB @ 1 V, whereas the home version, the Genelec G Three, goes from 96 dB down to 86 dB @ 1 V. It seems a little contradictory, because at +4 dBu the 8030C is already over its nominal long-term SPL (96 dB).
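For anyone checking these numbers: a quick sketch of the conversions (dBu is referenced to 0.7746 V, the voltage that gives 1 mW into 600 ohms; dBV is referenced to 1 V):

```python
import math

def v_to_dbu(v):
    # dBu is referenced to 0.7746 V (1 mW into 600 ohms)
    return 20 * math.log10(v / 0.7746)

def v_to_dbv(v):
    # dBV is referenced to 1 V
    return 20 * math.log10(v / 1.0)

def dbu_to_v(dbu):
    return 0.7746 * 10 ** (dbu / 20)

# Pro nominal level, +4 dBu, in volts:
print(round(dbu_to_v(4), 3))       # 1.228
# Consumer nominal level, -10 dBV, in volts:
print(round(10 ** (-10 / 20), 3))  # 0.316
# A 2 Vrms DAC output expressed in dBu:
print(round(v_to_dbu(2.0), 1))     # 8.2
```

That last line is where the "approx. +8 dBu" figure for a 2 V output comes from; a 4 V balanced output lands at approx. +14 dBu by the same formula.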
 
These may help a bit:




Jim
 
Thanks Jim!

I had also missed the difference between the capital V (dBV) and the lowercase v :)
 
Now I better understand the context in which the usual +12 dB should be taken into account.

But what confuses me even more is the high sensitivity of the 8030C, considering that most professional audio gear has 4 Vrms outputs: this forces the user to operate in the first 25% of the volume range to listen at 80 dB while avoiding bit loss in the DAC section.
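The arithmetic behind that, as a small sketch (assuming the 8030C is set to its maximum sensitivity of 106 dB SPL @ 1 V):

```python
import math

SENSITIVITY_DB_AT_1V = 106  # assumed 8030C maximum-sensitivity setting

def spl(v_rms):
    # SPL scales with 20*log10 of the input voltage relative to 1 V
    return SENSITIVITY_DB_AT_1V + 20 * math.log10(v_rms)

# SPL driven by a full 4 Vrms pro-level output:
print(round(spl(4.0), 1))      # 118.0 dB

# Voltage needed for a comfortable 80 dB SPL:
v_80 = 10 ** ((80 - SENSITIVITY_DB_AT_1V) / 20)
print(round(v_80 * 1000, 1))   # 50.1 mV -- a tiny fraction of the 4 Vrms range
```

So at that sensitivity, normal listening levels use only a sliver of the source's output range, which is exactly the complaint above.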

In another post it was suggested that Genelec prefers to avoid any complaints about a lack of volume, but they could still provide a -20 dB reduction in sensitivity…

I actually have the option of reducing the voltage output on my WiiM Ultra, and it's like I'm listening to my monitors for the first time!
 
They state that pro signals are usually around +4 dBu, whereas home audio systems operate at -10 dBV (-8 dBu approx.).

What does that mean?
According to Audio levels, dBu, dBV, and the gang: What you need to know:
0 dBVU was generally defined as the turning point for a piece of audio gear, beyond which distortion would start to increase.
(note: that's VU - Volume Unit) Then:
What do those “+4/-10” numbers mean? Simply, they refer to the voltage levels defined as 0 dBVU.
Unbalanced
The standard consumer “-10dBV” standard means that 0 dBVU equals -10 dBV, or ... 316.2 mV
...
Balanced
+4dBu, and hence 0 dBVU, is 1.228 V

Then there's wikipedia, with Line level:
A line level describes a line's nominal signal level ...
and Nominal level:
The nominal level is the level that these devices were designed to operate at, for best dynamic range and adequate headroom.
... the headroom as the difference between nominal and maximum output.

To be honest, I'm not sure how applicable it is nowadays. If we take the consumer -10 dBV = 0.3 V nominal level, it also has 2 Vrms max level. That's 16 dB difference between the level it was supposedly "designed to operate at" and the max. But take any contemporary pop song, its RMS is way above -16 dBFS, so it seems that the device will have to operate above the "designed level" most of the time.
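That 16 dB figure checks out against the two reference levels quoted above:

```python
import math

nominal = 0.316  # -10 dBV consumer nominal level, in volts
maximum = 2.0    # typical consumer DAC maximum output, Vrms

# Gap between the nominal "designed" level and the maximum output:
headroom_db = 20 * math.log10(maximum / nominal)
print(round(headroom_db, 1))  # 16.0 dB
```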
 
I also think that for most DACs, operating levels close to 90% of full scale are still excellent in terms of harmonic distortion.

As far as I know, alignment levels in pro gear are fixed at -20 dBFS, which corresponds to a +4 dBu output, leaving 20 dB of headroom (up to a +24 dBu maximum) to avoid clipping in mixing consoles.
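Under that convention (my understanding of the SMPTE-style alignment, where -20 dBFS maps to +4 dBu on the analog side), the full-scale output works out as:

```python
alignment_dbfs = -20  # digital alignment level
alignment_dbu = 4     # the analog level it maps to

# 0 dBFS expressed in dBu:
full_scale_dbu = alignment_dbu - alignment_dbfs
print(full_scale_dbu)  # 24 -> +24 dBu maximum output

# And +24 dBu in volts (dBu reference is 0.7746 V):
print(round(0.7746 * 10 ** (full_scale_dbu / 20), 2))  # 12.28 V rms
```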

But after mixing and editing, the volume is "normalized", which I suppose is a process to bring it down to home standards, lower than studio recordings. It's a little bit messy…

In my own practice, I keep the digital volume at 50-60% (I have no dBFS indicator, but given the log scale maybe -6 or -4 dBFS) and the preamp volume limited to 1 Vrms. This lets me operate my ultra-sensitive Genelecs at healthy volumes and reduces distortion.

The G Three comes with a -10 dB sensitivity switch, but honestly it is not very well implemented: keeping the default 96 dB @ 1 Vrms and reducing the preamp volume gives much better results.

Thank you for the articles!
 
In which way exactly?
-10 dB is too low (it goes down to 86 dB @ 1 V), so it demands all of the DAC's volume range for some tracks. It also doesn't sound very clean, as if the attenuator introduces noise or distortion.

Reducing the voltage output by a factor of two produces a similar effect to reducing the sensitivity by 6 dB, and it allows my DAC to operate in the correct range. Probably the attenuation at the WiiM output is better implemented than that at the Genelec input. Or maybe it's just those extra 4 dB that make the difference.
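The numbers behind that comparison:

```python
import math

# Halving the output voltage in dB terms:
halving_db = 20 * math.log10(0.5)
print(round(halving_db, 2))  # -6.02 dB

# The Genelec sensitivity switch attenuates a fixed -10 dB,
# so it cuts roughly 4 dB more than halving the voltage:
switch_db = -10
print(round(switch_db - halving_db, 1))  # -4.0
```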
 