
When do we start looking at NCD?

GXAlan

Major Contributor
Forum Donor
Joined
Jan 15, 2020
Messages
1,363
Likes
2,156
I did a quick forum search and didn't see much discussion of non-coherent distortion (NCD) and thought I'd bring it up.

At the headphone level, it's been shown that NCD predicts preference better than THD, IMD, and multitone measurements.

The Correlation Between Distortion Audibility and Listener Preference in Headphones
Steve Temme, Sean E. Olive, et al. (AES 2014)

And NCD also works for predicting preference in in-vehicle audio.

Is there any easy way to measure differences in NCD for electronics? Or is it still only meaningful for speakers, since you can only reach those high levels of distortion with tube electronics?

@amirm, when this was discussed in 2018, you mentioned that the industry hadn't even achieved good THD/IMD yet, so it made sense to focus on those established metrics and get the easier engineering done before adding anything new. Back in 2018, between the test gear and the electronics, Topping was only playing in the low-100s dB SINAD range. While there is clearly opportunity for ongoing improvement from the mainstream, now that 120 dB SINAD DACs are common and value-priced, how hard would it be to start doing some NCD testing?

@pkane, is this something that can be easily done in a future version of your software to expand hobbyist assessments of NCD?
 

pkane

Major Contributor
Forum Donor
Joined
Aug 18, 2017
Messages
4,176
Likes
7,171
Location
North-East
I did a quick forum search and didn't see much discussion of non-coherent distortion (NCD) and thought I'd bring it up.

At the headphone level, it's been shown that NCD predicts preference better than THD, IMD, and multitone measurements.

The Correlation Between Distortion Audibility and Listener Preference in Headphones
Steve Temme, Sean E. Olive, et al. (AES 2014)

And NCD also works for predicting preference in in-vehicle audio.

Is there any easy way to measure differences in NCD for electronics? Or is it still only meaningful for speakers, since you can only reach those high levels of distortion with tube electronics?

@amirm, when this was discussed in 2018, you mentioned that the industry hadn't even achieved good THD/IMD yet, so it made sense to focus on those established metrics and get the easier engineering done before adding anything new. Back in 2018, between the test gear and the electronics, Topping was only playing in the low-100s dB SINAD range. While there is clearly opportunity for ongoing improvement from the mainstream, now that 120 dB SINAD DACs are common and value-priced, how hard would it be to start doing some NCD testing?

@pkane, is this something that can be easily done in a future version of your software to expand hobbyist assessments of NCD?

I don't have access to the paper behind the AES paywall, so I can't tell for sure, but at least as defined here (non-coherent-distortion), it would appear to be something DeltaWave already computes for an arbitrary musical test signal.
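As a rough illustration of that coherence-based definition (a sketch under stated assumptions, not DeltaWave's or Listen, Inc.'s actual implementation: here the non-coherent power is taken as (1 − coherence) × output power, with a synthetic 1 kHz stimulus and a toy device adding second-order distortion and noise):

```python
# Sketch of non-coherent distortion (NCD) from input/output captures, assuming
# the formulation: NCD power = portion of output power not coherent with the
# stimulus. Real NCD implementations differ in windowing, weighting, and stimulus.
import numpy as np
from scipy import signal

fs = 48_000
rng = np.random.default_rng(0)
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1000 * t)                       # stimulus
y = x + 0.01 * x**2 + 0.001 * rng.standard_normal(fs)  # output: distortion + noise

f, Cxy = signal.coherence(x, y, fs=fs, nperseg=4096)   # magnitude-squared coherence
_, Pyy = signal.welch(y, fs=fs, nperseg=4096)          # output power spectral density

df = f[1] - f[0]
ncd_power = np.sum((1.0 - Cxy) * Pyy) * df             # power not coherent with input
tot_power = np.sum(Pyy) * df
ncd_pct = 100.0 * np.sqrt(ncd_power / tot_power)
print(f"NCD ~ {ncd_pct:.2f}% of total output")
```

Note that a pure-coherence approach like this lumps noise in with distortion, which is one reason published NCD procedures apply additional processing.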
 
OP
G

GXAlan

Major Contributor
Forum Donor
Joined
Jan 15, 2020
Messages
1,363
Likes
2,156
With DeltaWave, the delta in spectra is reported in dB. Is there a way to reference that to the signal level at each frequency, to express it as % distortion relative to the signal?
 

pkane

Major Contributor
Forum Donor
Joined
Aug 18, 2017
Messages
4,176
Likes
7,171
Location
North-East
With DeltaWave, the delta in spectra is reported in dB. Is there a way to reference that to the signal level at each frequency, to express it as % distortion relative to the signal?

PKMetric is a better perceptually-weighted result, IMHO. It can be displayed in dBr relative to the signal level and incorporates frequency and time masking, as well as audibility-curve corrections. You can convert dB to % using a calculator ;)
 
OP
G

GXAlan

Major Contributor
Forum Donor
Joined
Jan 15, 2020
Messages
1,363
Likes
2,156
PKMetric is a better perceptually-weighted result, IMHO. It can be displayed in dBr relative to the signal level and incorporates frequency and time masking, as well as audibility-curve corrections. You can convert dB to % using a calculator ;)

It just hasn’t undergone the same validation :)

The calculator is OK, but you mentioned that the graphing library in your software is not the greatest, so do I have to use the shift-arrow feature to point to a specific frequency on both curves and then do the math?
 

pkane

Major Contributor
Forum Donor
Joined
Aug 18, 2017
Messages
4,176
Likes
7,171
Location
North-East
It just hasn’t undergone the same validation :)

The calculator is OK, but you mentioned that the graphing library in your software is not the greatest, so do I have to use the shift-arrow feature to point to a specific frequency on both curves and then do the math?
No, dBr is the ratio between the two curves, that’s the units on the chart so you can read it right off of it. The value represents the level of all distortion + noise adjusted for audibility relative to the original signal.
 

GaryH

Addicted to Fun and Learning
Joined
May 12, 2021
Messages
774
Likes
869
With transfer function and any broadband signal—including speech, music, or noise—APx users can now assess the complex frequency response, coherence, and impulse response of their device or system.
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
16,343
Likes
28,074
Here is an earlier paper by some of the same people, part of which describes NCD.

Here is a video on the headphone paper. NCD is briefly commented upon just after the 19 minute mark.

Results of listening tests are shown just after 31:20, where various distortion measurements and their correlation with the listening tests are summarized. NCD didn't appear to correlate better than THD, though surprisingly IMD correlated poorly and Multitone even worse. (Incidentally, all of these measures other than NCD could be done with Paul's Multitone software.)

Olive mentions that he believes a perceptually based model using music would correlate better with listener preferences. Since DeltaWave can work with music, and the PK metric is at a minimum a step in that direction, that is likely not a bad way to go.

I was a bit disappointed that the test didn't attempt to gauge at what levels distortion matters. Only one of the headphones was reliably distinguished from the others; listeners could not reliably tell the rest apart.
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
16,343
Likes
28,074
Not without controlled listening tests confirming its correlation with audibility (or preference), and it already fails at this in some blind tests.
That was caused more by inappropriate use of the software, with an overly short signal known to cause issues with the results.

While confirmation from listening tests would be optimum, it isn't a binary yes-or-no choice on something like this.

For instance, REW has various smoothing curves based on good knowledge of human hearing, ERB being one of them. Yet you don't need a listener test to know it is a step in the right direction, with more relevance than plain octave smoothing. Exact thresholds aren't known, but you can know it is better rather than worse at reflecting how audible something is. The adjustments made in the PK Metric are like that: based on solid information about how human hearing works, even though this exact algorithm hasn't been precisely confirmed by listening tests.
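To make the ERB point concrete, the standard Glasberg and Moore formula gives the auditory filter bandwidth as ERB(f) = 24.7 × (4.37·f/1000 + 1) Hz, which is much narrower relative to center frequency at low frequencies than at high ones (unlike fixed-fraction octave smoothing, which is a constant fraction everywhere):

```python
# Equivalent Rectangular Bandwidth (Glasberg & Moore, 1990):
# ERB(f) = 24.7 * (4.37 * f/1000 + 1), with f in Hz.
def erb_hz(f_hz: float) -> float:
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

for f in (100, 1000, 10000):
    print(f"{f:>6} Hz: ERB ~ {erb_hz(f):7.1f} Hz "
          f"({erb_hz(f) / f:.3f} of center frequency)")
```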
 

GaryH

Addicted to Fun and Learning
Joined
May 12, 2021
Messages
774
Likes
869
That was caused more by inappropriate use of the software, with an overly short signal known to cause issues with the results.
Then that's a failure of the algorithm, not of the tester or test files. The fact is non-coherent distortion has support from controlled scientific listening tests. PK metric doesn't.
 

pkane

Major Contributor
Forum Donor
Joined
Aug 18, 2017
Messages
4,176
Likes
7,171
Location
North-East
Then that's a failure of the algorithm, not of the tester or test files. The fact is non-coherent distortion has support from controlled scientific listening tests. PK metric doesn't.
Sorry, but incorrect use of the software doesn’t represent a failure of the algorithm. Or would you say that AP analyzers are broken because there are some individuals who don’t use them correctly?
 
OP
G

GXAlan

Major Contributor
Forum Donor
Joined
Jan 15, 2020
Messages
1,363
Likes
2,156
@pkane
When I take the source file and a recording of the source file, I get PK metrics in the -44 dB range even with a “transparent” chain: 110 dB SINAD sources and a 100 dB (at 5 W) SINAD amplifier (but at lower volume).

Is there a way to separate noise from distortion in PK Metric? Could the noise be giving me the -44 dB as opposed to distortion?

If someone has a 120 dB signal chain, what would the PK metric be when comparing the digital source to a recording?
 

GaryH

Addicted to Fun and Learning
Joined
May 12, 2021
Messages
774
Likes
869
Sorry, but incorrect use of the software doesn’t represent a failure of the algorithm. Or would you say that AP analyzers are broken because there are some individuals who don’t use them correctly?
The tester did nothing wrong:
That's the right workflow.

If there is a sharp burst of noise or other difference that falls off within 400 ms, it will get averaged out by PK Metric but may still be audible.
If the metric is not reflecting something that's audible, it's not doing its job properly.

And if a perceptual metric doesn't work properly for samples shorter than 30 s, that's also a failure of the metric to model human perception, because the ear doesn't need that long to detect differences in distortion/noise.
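The averaging concern raised above can be illustrated with a toy calculation (hypothetical numbers, and a plain long-window RMS rather than PK Metric's actual windowing): a short loud burst contributes little to an average taken over a long file, even though it may be clearly audible on its own.

```python
# A 400 ms burst at -20 dBFS RMS inside 30 s of otherwise-silent "difference"
# signal: the long-window RMS dilutes it by 10*log10(30/0.4) ~ 18.8 dB.
import numpy as np

fs = 48_000
total = np.zeros(30 * fs)                                         # 30 s of silence
burst = 0.1 * np.random.default_rng(1).standard_normal(int(0.4 * fs))
total[:len(burst)] = burst                                        # one 400 ms burst

def db_rms(x):
    return 20 * np.log10(np.sqrt(np.mean(x**2)) + 1e-12)

print(f"burst alone : {db_rms(burst):6.1f} dBFS")
print(f"30 s average: {db_rms(total):6.1f} dBFS")
```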
 

pkane

Major Contributor
Forum Donor
Joined
Aug 18, 2017
Messages
4,176
Likes
7,171
Location
North-East
The tester did nothing wrong:



If the metric is not reflecting something that's audible, it's not doing its job properly.

And if a perceptual metric doesn't work properly for samples shorter than 30 s, that's also a failure of the metric to model human perception, because the ear doesn't need that long to detect differences in distortion/noise.

Not sure why you're even talking about this. Did you run through the test yourself? Did you read through the thread, where it was mentioned that 14 seconds isn't enough for DeltaWave measurements? Did you include the context in which I said that the workflow is correct? That was a specific comment on the steps the original tester outlined, not on the whole test.

In any case, the audibility threshold is not set or determined by PK Metric. So I don't know why you are arguing about "If the metric is not reflecting something that's audible, it's not doing its job properly." This shows clearly that you don't understand the purpose or the goal of the metric.
 

pkane

Major Contributor
Forum Donor
Joined
Aug 18, 2017
Messages
4,176
Likes
7,171
Location
North-East
@pkane
When I take the source file and a recording of the source file, I get PK metrics in the -44 dB range even with a “transparent” chain: 110 dB SINAD sources and a 100 dB (at 5 W) SINAD amplifier (but at lower volume).

Is there a way to separate noise from distortion in PK Metric? Could the noise be giving me the -44 dB as opposed to distortion?

If someone has a 120 dB signal chain, what would the PK metric be when comparing the digital source to a recording?
PKMetric is computed on the difference between the original file and the recording. It doesn't distinguish between noise and distortion: everything is included and then weighted by some of the better-known perceptual weightings.
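The align-subtract-report pipeline described here can be sketched as follows (an illustrative toy, not DeltaWave's actual algorithm: the least-squares gain match and the omission of the perceptual weighting stage are simplifications, and the signals are synthetic):

```python
# Sketch of a difference-based metric: level-match the capture to the reference,
# subtract, and report the residual (noise + distortion together) in dBr
# relative to the signal. The perceptual weighting stage is omitted here.
import numpy as np

fs = 48_000
t = np.arange(2 * fs) / fs
ref = np.sin(2 * np.pi * 1000 * t)                      # original file
cap = 0.5 * (ref + 1e-4 * np.random.default_rng(0).standard_normal(len(ref)))

gain = np.dot(cap, ref) / np.dot(cap, cap)              # least-squares level match
resid = gain * cap - ref                                # everything that differs

dbr = 10 * np.log10(np.mean(resid**2) / np.mean(ref**2))
print(f"residual ~ {dbr:.1f} dBr")
```

Note how the residual contains the capture chain's noise and distortion mixed together, which is exactly why they can't be separated after the fact.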
 
OP
G

GXAlan

Major Contributor
Forum Donor
Joined
Jan 15, 2020
Messages
1,363
Likes
2,156
If the metric is not reflecting something that's audible, it's not doing its job properly.

And if a perceptual metric doesn't work properly for samples less than 30s long, that's also a failure of the metric to model human perception, because the ear doesn't need this long to detect differences in distortion/noise.

To be fair, @pkane is offering software for free that could easily be sold for thousands. Additionally, as we go beyond simple THD, IMD, and multitone, the more tools and options we have, the better…
 
OP
G

GXAlan

Major Contributor
Forum Donor
Joined
Jan 15, 2020
Messages
1,363
Likes
2,156
PKMetric is computed on the difference between the original file and the recording. It doesn't distinguish between noise and distortion: everything is included and then weighted by some of the better-known perceptual weightings.

What is tough is that the difference is so high. That is, if the original file versus the recording gives a -44 dB PK Metric, we can confidently say the reproduction chain is not transparent to the digital file, but the question is: what is the difference? To use a statistical analogy, you can run an ANOVA, but Tukey post-hoc tests help you home in on the details.

I’d get that sort of score with a UB9000 into a Topping PA5 into an E1DA Cosmos, which should be reasonable, except that my recording level is lower, where noise may in fact dominate.

Is that noise noticeable only if I were to raise the recording to a theoretical +0 dB?

That is, the source recording has no reference volume. Presumably, a digital file with reasonable dynamic range in a 24-bit container could have 120 dB of true dynamic range out of 144 dB; say it's mastered to the movie standard, so 124 dB average, 144 dB peaks, and a 24 dB noise floor. When compared to a recording that may reflect 70 dB average volume and 90 dB peaks, the noise in the system is going to dominate, and when PK Metric level-matches the recording to the digital source, the noise will really rise!

I guess question #2: for level matching, does DeltaWave match the comparison volume to the reference volume (in which case I should have the recording as reference and the digital source as comparison)? Or does DeltaWave raise the volume of the quieter file to the louder one?

It’s an easy test but I am admittedly away from my desktop right now…
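The noise concern in this post comes down to simple dB arithmetic (illustrative numbers, not measurements of any actual device): whatever gain is applied to match a quiet capture up to the reference level raises the capture chain's noise floor by the same amount.

```python
# Back-of-envelope: gain-matching a quiet capture up to the reference raises
# its noise floor dB-for-dB, so the noise can dominate the difference metric.
adc_noise_floor = -120.0  # dBFS, hypothetical capture-chain noise
record_level = -50.0      # dBFS, average level the recording was made at

gain_applied = 0.0 - record_level            # dB needed to match the reference
noise_after_match = adc_noise_floor + gain_applied
print(f"gain applied: +{gain_applied:.0f} dB")
print(f"effective noise floor vs signal: {noise_after_match:.0f} dB")
```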
 