
Alternative method for measuring distortion

xr100:
It looks like we have some confusion over terms. For you, if I understand correctly, a "signal" is some information that is useful to the recipient.

OK, I was a bit sloppy with terminology in that post.



Source: Recent Contributions to the Mathematical Theory of Communication (Warren Weaver).


Accordingly, an audio signal can be interpreted either as information useful to the recipient or merely as a carrier of information.

To quote from the above-linked document:

"The word information, in this theory, is used in a special sense that must not be confused with its ordinary usage. In particular, information must not be confused with meaning.

"In fact, two messages, one of which is heavily loaded with meaning and the other of which is pure nonsense, can be exactly equivalent, from the present viewpoint, as regards information. It is this, undoubtedly, that Shannon means when he says that 'the semantic aspects of communication are irrelevant to the engineering aspects.' But this does not mean that the engineering aspects are necessarily irrelevant to the semantic aspects."

Can we call white noise an audio signal? In my interpretation it is a valid audio signal. What information it carries is another question; that depends on the recipient and the context.

~1 kHz sine, slightly under 0 dBr, truncated to 16 bits in one file, TPDF-dithered to 16 bits in the other.

The original sine was then subtracted from the truncated/dithered output, and the resulting difference signal was scaled by +90 dB.

The "non-white noise" can be heard in the truncated file.

Download link.

(Link expires in one week.)
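
For anyone who wants to reproduce the comparison without the files, here is a minimal sketch in Python/NumPy. The 997 Hz frequency, 0.99 amplitude, length and sample rate are my own assumptions (the post only specifies ~1 kHz slightly under 0 dBr), so this is an illustration of the procedure, not the exact files behind the download link:

```python
import numpy as np

fs = 44100
t = np.arange(2 * fs) / fs
x = 0.99 * np.sin(2 * np.pi * 997 * t)        # ~1 kHz sine, slightly under full scale

y = x * 32767.0                               # scale to 16-bit steps
trunc = np.trunc(y) / 32767.0                 # plain truncation to 16 bits
tpdf = (np.random.uniform(-0.5, 0.5, y.size) +
        np.random.uniform(-0.5, 0.5, y.size)) # TPDF dither, 2 LSB peak-to-peak
dith = np.round(y + tpdf) / 32767.0           # dither, then quantize

gain = 10 ** (90 / 20)                        # +90 dB to make the residue audible
null_trunc = (trunc - x) * gain               # harmonically related "non-white" error
null_dith = (dith - x) * gain                 # essentially flat, signal-independent noise

for name, e in (("truncated", null_trunc), ("dithered", null_dith)):
    rms_db = 20 * np.log10(np.sqrt(np.mean(e ** 2)))
    print(f"{name}: null residue RMS after +90 dB gain = {rms_db:.1f} dBFS")
```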
 
Serge Smirnoff (OP):
Yes, the "engineering level" of a signal is the key. At this level an audio signal has no meaning; it is just one kind of time series. At this level, quantization of any signal, audio included, is done by rounding, because rounding is the only operation that gives the best fit of the processed signal to the original. And the only valid measure of goodness of fit at this level is a null test (or a correlation coefficient). At this level there is no point in linearizing or "de-linearizing" the quantization operation, because any such operation increases the error level and worsens the fit of the resulting signal, which can easily be demonstrated with error-level measurements. At this engineering level, mathematics can only tell us, for example, that rounding is better than discarding the fractional part, because the latter produces larger errors; it cannot tell us that any additional operation, such as linearizing the quantization, is necessary, required or helpful.
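
A quick numeric sketch of this "engineering level" comparison (Python/NumPy, with my own arbitrary data and step size), just to put numbers on "rounding is better than discarding the fractional part" and "dither increases the error":

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200_000)                     # arbitrary series, origin unknown
q = 1 / 2**15                                       # quantization step ("16-bit" grid)

def err_db(y):
    return 20 * np.log10(np.sqrt(np.mean((y - x) ** 2)) / q)

trunc = np.trunc(x / q) * q                         # discard the fractional part
rounded = np.round(x / q) * q                       # round to the nearest step
tpdf = rng.uniform(-0.5, 0.5, x.size) + rng.uniform(-0.5, 0.5, x.size)
dithered = np.round(x / q + tpdf) * q               # TPDF dither, then round

for name, y in (("truncate", trunc), ("round", rounded), ("dither+round", dithered)):
    print(f"{name:>12}: error RMS = {err_db(y):.2f} dB re 1 LSB")
# Expected: round ~ -10.8 dB, truncate ~ -4.8 dB, dither+round ~ -6.0 dB (re 1 LSB),
# i.e. plain rounding fits best, and dithering makes the pure fit error worse.
```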

The need for such an operation arises only when we start to take into account the meaning of an audio signal: it is a signal intended for perception through the human auditory system, which has very special characteristics. In particular, we have a huge dynamic range of hearing and an evolutionarily determined attentiveness to quiet sounds, so we pay special attention to the "tails" of fading sounds; they are important to us. Taking into account these aspects of the signal, its meaning, it is advisable to linearize the quantization operation in order to "pull" the below-LSB components of the signal through into the quantized result, even at the expense of "spoiling" the signal at the engineering level (a higher overall error level for the quantization operation). So we add dither before quantization.
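
A minimal sketch of the "below-LSB components" point. The -100 dBFS tone level, the 997 Hz frequency and the lock-in style detection are my own choices for illustration:

```python
import numpy as np

fs = 44100
t = np.arange(10 * fs) / fs
x = 1e-5 * np.sin(2 * np.pi * 997 * t)       # ~ -100 dBFS: below half an LSB at 16 bits

lsb = 1 / 2**15
plain = np.round(x / lsb) * lsb              # no dither: rounds to all zeros
tpdf = (np.random.uniform(-0.5, 0.5, x.size) +
        np.random.uniform(-0.5, 0.5, x.size))
dith = np.round(x / lsb + tpdf) * lsb        # TPDF dither, then quantize

ref = np.sin(2 * np.pi * 997 * t)            # lock-in detection at the tone frequency
for name, y in (("no dither", plain), ("TPDF dither", dith)):
    print(name, "recovered tone amplitude:", 2 * np.mean(y * ref))
# Without dither the tone is gone (exactly 0); with dither the estimate comes back
# close to 1e-5, i.e. the faint component survives quantization, buried in noise.
```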

This task (pulling the faint components through) can be solved not only by dithering but also, for example, by non-linear amplification of the faint parts of the signal, a kind of slight upward dynamic compression from the bottom. That also helps those components survive quantization. But dithering has a nice bonus: it also changes the structure of the quantization distortion for above-LSB signals, effectively "converting" new frequency components (which we are sensitive to) into broadband noise (which is less annoying to us and which can additionally be shaped for the purpose).

So, linearizing the quantization operation is not "recommended" by mathematics: at the engineering/formal/mathematical level it makes no sense and only degrades the quantized signal further. It is "recommended" by psychoacoustics (and cognitive psychology), because it improves the perception of the signal. That is why I consider dithering a psychoacoustic operation, not a mathematical one.
 
xr100:
At this level, quantization of any signal, audio included, is done by rounding

If by "quantization," you mean A/D conversion, if this is at 24-bit, then what is the input noise floor...? The addition of (analogue) dither prior to sampling has been used in A/D conversion for audio applications?

"Meaning" is in relation to the meaning of the "message" being transmitted/recovered by the receiver.

For example, if speech is received and that individual's CNS (the destination) does not "understand" the language, then it will have limited meaning to them, and hence the recovered message has carried very little information to them. (Perhaps the odd word, or the ability to discern from the tone of voice that the person speaking to them is angry, etc.)

So, linearizing the quantization operation is not "recommended" by mathematics: at the engineering/formal/mathematical level it makes no sense and only degrades the quantized signal further. It is "recommended" by psychoacoustics (and cognitive psychology), because it improves the perception of the signal. That is why I consider dithering a psychoacoustic operation, not a mathematical one.

Psychoacoustic...?

https://www.electronicdesign.com/te...led-data-system-performance-by-at-least-10-db
 
solderdude:
This task (pulling the faint components through) can be solved not only by dithering but also, for example, by non-linear amplification of the faint parts of the signal, a kind of slight upward dynamic compression from the bottom.

Are you suggesting using amps with compressor functionality or odd harmonics?
I thought it was your goal to push manufacturers to reduce every type of distortion as close to 0 as possible?

In particular, we have a huge dynamic range of hearing

The dynamic range of our hearing for listening to music is about 70 dB.
Even when we go to a loud pop concert or disco with 100 dB levels, our hearing dials back its sensitivity to the quietest sounds.

We need more from our systems if we want to use them both for quiet late-night listening sessions and for parties or listening at high SPL just because we like to. In both cases, though, our dynamic range on such occasions is still about 70 dB.

Why would trailing notes be an issue for electronics?
 
Serge Smirnoff (OP):
If by "quantization," you mean A/D conversion, if this is at 24-bit, then what is the input noise floor...? The addition of (analogue) dither prior to sampling has been used in A/D conversion for audio applications?
No, I mean a simpler case. A/D conversion is more complicated, as it includes at least sampling and deals with a signal in electrical form. I think the case we have already used, quantization of a 32-bit signal in digital form, is sufficient for discussing dithering.
"Meaning" is in relation to the meaning of the "message" being transmitted/recovered by the receiver.

For example, if speech is received and that individual's CNS (the destination) does not "understand" the language, then it will have limited meaning to them, and hence the recovered message has carried very little information to them. (Perhaps the odd word, or the ability to discern from the tone of voice that the person speaking to them is angry, etc.)
The "meaning" of an audio signal also can be divided into levels/layers. Prior to be "understood" the signal needs to get into the brain through the auditory system, which is very "special" by itself. On the way to the brain the signal (in acoustic form) is heavily processed by the hearing system. I consider this as the first level of the "meaning", the psychoacoustics' level. Comprehension/understanding is another (higher) level.

So, at the engineering level of a signal, where we don't know the nature of the signal (is it sound, temperature, tide level...?), dithering before quantization makes no sense. This is my point.
then what is the input noise floor...?
At the engineering level, the noise floor of a digital signal is determined by the number of quantization steps available (bits); the amount of noise in the signal itself cannot be estimated without knowing some "meaning" of the signal, because the noise can be an integral part of the signal (in music in particular).
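
For reference, the usual rule of thumb behind "the noise floor is determined by the number of bits": an ideal N-bit quantizer with a full-scale sine gives an SNR of roughly 6.02·N + 1.76 dB (before any dither):

```python
# Ideal quantization SNR for a full-scale sine, undithered: 6.02*N + 1.76 dB
for n_bits in (16, 24, 32):
    print(f"{n_bits} bits: {6.02 * n_bits + 1.76:.1f} dB")
# 16 bits: 98.1 dB, 24 bits: 146.2 dB, 32 bits: 194.4 dB
```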
 
Serge Smirnoff (OP):
Are you suggesting using amps with compressor functionality or odd harmonics?
No, I don't recommend it for audio signals, but for other cases, why not? I just wanted to show that below-LSB signals can survive quantization not only thanks to dithering.
 
Serge Smirnoff (OP):
Why would trailing notes be an issue for electronics?
No issue for electronics; the issue is with quantization. It spoils the tails of fading sounds in quiet passages, when hearing dials its sensitivity back up.
 

solderdude:
No issue for electronics; the issue is with quantization. It spoils the tails of fading sounds in quiet passages, when hearing dials its sensitivity back up.

It doesn't.
In fact you can get tails of fading sounds well below the noise floor and well below 1 LSB.
You will only hear the added dither when you play 16-bit files at SPL peaks above 110 dB, and even then the noise of the recording is much higher anyway.
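
Rough arithmetic behind that 110 dB figure (a sketch that ignores room and recording noise): undithered 16 bit gives about 98 dB SNR for a full-scale sine, and flat TPDF dither costs roughly 4.8 dB of that, leaving about 93 dB between digital full scale and the dither noise.

```python
# Back-of-the-envelope: where does the 16-bit TPDF dither noise land in SPL terms?
peak_spl = 110                      # playback level with digital full scale at 110 dB SPL
snr_16bit = 6.02 * 16 + 1.76        # ~98.1 dB, undithered full-scale sine
tpdf_penalty = 4.8                  # approx. extra noise from flat TPDF dither
noise_spl = peak_spl - (snr_16bit - tpdf_penalty)
print(f"dither noise ~ {noise_spl:.0f} dB SPL")   # ~17 dB SPL, below a quiet room
```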
 

solderdude:
No, I don't recommend it for audio signals, but for other cases, why not? I just wanted to show that below-LSB signals can survive quantization not only thanks to dithering.

Good to hear. I was glad to be rid of Dolby B, Dolby C, dbx, High Com and other companders which, to me, even with perfect bias, still left the side effect of 'pumping' sounds. (Dolby B and C weren't that bad in this regard.)
 
Serge Smirnoff (OP):
It doesn't.
In fact you can get tails of fading sounds well below the noise floor and well below 1 LSB.
You will only hear the added dither when you play 16-bit files at SPL peaks above 110 dB, and even then the noise of the recording is much higher anyway.
I agree with that. To be honest, I don't quite see the point of arguing here.
 

solderdude:
Compression brings about distortion.
Dither linearizes and thus reduces distortion but adds noise.
Now we are back on track.
 
Serge Smirnoff (OP):
Compression brings about distortion.
Dither linearizes and thus reduces distortion but adds noise.
And both degrade the signal at the engineering level, where there is no sense in linearizing quantization or reducing distortion at the expense of noise (because at this level we don't even know that it is an audio signal). Imagine you have a series of 32-bit values of unknown origin and you need to convert them to 16-bit values. That is the mathematical/engineering level of a signal.
 

solderdude:
Nope, you just dither the LSB of the 16-bit values and you get more than 16 bits of resolution again.
Dithering 32 bits is rather pointless, given that real-world noise levels are orders of magnitude larger.
Not so with 16 bits.
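
In code terms, a sketch of what "dither the LSB of the 16-bit values" means in practice; the function name and the float-input convention are my assumptions, the point being that the dither amplitude is set by the target 16-bit LSB regardless of the source word length:

```python
import numpy as np

def float_to_int16_tpdf(x):
    """x: float samples in [-1.0, 1.0) at any source precision (e.g. 32-bit float)."""
    tpdf = (np.random.uniform(-0.5, 0.5, x.size) +
            np.random.uniform(-0.5, 0.5, x.size))   # +/- 1 LSB at the 16-bit target
    y = np.round(x * 32767.0 + tpdf)                # dither, then quantize
    return np.clip(y, -32768, 32767).astype(np.int16)

# Example: any higher-precision source reduces to the same 16-bit target dither
samples = np.random.uniform(-0.01, 0.01, 48000).astype(np.float32)
pcm16 = float_to_int16_tpdf(samples)
```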
 

xr100:
Good to hear. I was glad to be rid of Dolby B, Dolby C, dbx, High Com and other companders which, to me, even with perfect bias, still left the side effect of 'pumping' sounds. (Dolby B and C weren't that bad in this regard.)

"Companding" systems are/have been used. NICAM (Near Instantaneous Companded Audio Multiplex) is a good example, since it's quite straightforward, and was widely used, certainly in Europe, for digital stereo audio on a subcarrier to terrestrial analogue TV channels. Completely different to analogue NR companding systems--the scaling is transmitted, not "guessed."
 

xr100:
And both degrade the signal at the engineering level, where there is no sense in linearizing quantization or reducing distortion at the expense of noise (because at this level we don't even know that it is an audio signal). Imagine you have a series of 32-bit values of unknown origin and you need to convert them to 16-bit values. That is the mathematical/engineering level of a signal.

The link in my previous post (https://www.electronicdesign.com/te...led-data-system-performance-by-at-least-10-db) is not engineering? Dither is not maths? Hmm.
 
Serge Smirnoff (OP):
Nope, you just dither the LSB of the 16-bit values and you get more than 16 bits of resolution again.
Considering that:
- the noise is added before quantization (to the 32-bit signal)
- dithering is used not only in combination with quantization (outside audio)
I prefer to say "dithering of the 32-bit signal", simply because that is the signal the noise is added to. I can see that this looks unusual, and I understand why you say "dithering of the 16-bit signal" in this case. Not important.
 
Serge Smirnoff (OP):
The link in my previous post (https://www.electronicdesign.com/te...led-data-system-performance-by-at-least-10-db) is not engineering? Dither is not maths? Hmm.
Everything is math and engineering; even psychoacoustics is full of math. But I'm talking here about different levels of information, the engineering level and the semantic level (you pointed to the relevant article by Warren Weaver above). To better understand the difference between those levels, I suggested the following example above: if you have a series of 32-bit values of unknown origin and you need to convert them to 16-bit values, will you apply noise before rounding? What does the math recommend in this case?
 

solderdude:
Considering that:
- the noise is added before quantization (to the 32-bit signal)
- dithering is used not only in combination with quantization (outside audio)
I prefer to say "dithering of the 32-bit signal", simply because that is the signal the noise is added to. I can see that this looks unusual, and I understand why you say "dithering of the 16-bit signal" in this case. Not important.

It looks like you don't understand dithering. And I don't give a crap where and how one uses dither in whatever shape in anything other than audio and for whatever reasons.
Here we talk about audio, and dither is added to linearize the response.
So the dither added, when the objective is to obtain a dithered 16-bit signal, is 1 LSB at 16-bit size. It doesn't matter at all whether the original file is 17, 18, 20, 24, 30, 32 or even 60 bit.
 
Serge Smirnoff (OP):
So the dither added, when the objective is to obtain a dithered 16-bit signal, is 1 LSB at 16-bit size. It doesn't matter at all whether the original file is 17, 18, 20, 24, 30, 32 or even 60 bit.
Agree. That really doesn't matter. That is why I added: "I can see that this looks unusual, and I understand why you say 'dithering of the 16-bit signal' in this case. Not important." Meaning that "dithering of the 16-bit signal" is also correct, and I explained why I said "dithering of the 32-bit signal". Yes, it may be a non-conventional use of the term, but I really think that is not important in this case, because we have exactly the same flow of operations in mind: initial signal --> adding noise --> rounding.
 