
Audibility thresholds of amp and DAC measurements

solderdude

Grand Contributor
Joined
Jul 21, 2018
Messages
15,891
Likes
35,912
Location
The Netherlands
Do explore this further.
I recommend 'measuring' other devices as well and correlating the results with blind listening tests involving at least a few people and a statistically valid number of trials.
 

j_j

Major Contributor
Audio Luminary
Technical Expert
Joined
Oct 10, 2017
Messages
2,267
Likes
4,758
Location
My kitchen or my listening room.
Nobody will do this. If you have a DUT, you just use it, not tweak.
(#177).


Now that's pure deception on your part. You said nobody could reverse your allpass filter. This is about the reversibility of linear operations, not about whether somebody would actually bother to do them.

You're also wrong about who might do what, but that is a bit more exceptional, and not that many people could or would bother.
 

j_j

Major Contributor
Audio Luminary
Technical Expert
Joined
Oct 10, 2017
Messages
2,267
Likes
4,758
Location
My kitchen or my listening room.
My point is that below some low level of difference signal 'the spectrum of the error' doesn't matter. And "final acoustic system gain" along with noise floor of the listening environment define that low level (#177).

And that would be wrong, too. The ear also has a minimum sensitivity that varies with frequency, just as a room's noise floor varies with frequency. If you fail to account for SPECTRUM you will have to overdo your "number" by several orders of magnitude, which is ridiculous.
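To make the spectrum point concrete, here is a minimal Python sketch (the sample rate, frequencies, and amplitudes are illustrative choices of mine, not figures from the thread) of two error signals that produce the same single unweighted "number" while sitting in frequency regions with very different hearing thresholds:

```python
import numpy as np

fs = 48000                           # sample rate (illustrative)
t = np.arange(fs) / fs               # one second of time

# Two error signals with identical unweighted RMS...
e_rumble = 1e-4 * np.sin(2 * np.pi * 30 * t)    # 30 Hz, where the ear is insensitive
e_tone   = 1e-4 * np.sin(2 * np.pi * 3000 * t)  # 3 kHz, near peak ear sensitivity

rms = lambda x: np.sqrt(np.mean(x ** 2))
print(rms(e_rumble), rms(e_tone))    # the single "numbers" are identical

# ...yet ISO 226 puts the hearing threshold at 30 Hz tens of dB above the
# threshold at 3 kHz, so a spectrum-blind criterion must be "overdone" by
# roughly that margin to be safe at every frequency.
```

The same RMS figure can therefore describe one error that is comfortably inaudible and another that is not, which is the gap a spectral weighting closes.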

You persist in oversimplifying the most basic calculations here, and your responses make it clear you are just making fun of the people who are trying to help you through this.
 

j_j

Major Contributor
Audio Luminary
Technical Expert
Joined
Oct 10, 2017
Messages
2,267
Likes
4,758
Location
My kitchen or my listening room.
"Original reference" is the waveform produced by the author of the music piece, in the form of a file.

So, not the original acoustics. So, then, a signal modification that creates a listener sensation (in two channels) MORE LIKE THE ORIGINAL SOUNDFIELD would be a distortion, and you're opposed to it? Is that correct?
 

Serge Smirnoff

Active Member
Joined
Dec 7, 2019
Messages
240
Likes
136
Now that's pure deception on your part. You said nobody could reverse your allpass filter. This is about the reversability of linear operations, not about how somebody would actually bother to do them.
I think the meaning of my “irreversible” was very clear from the very first post where the issue was mentioned [#208]:

The result is lossy/irreversible in both cases because the corrupted waveform at the output of an audio device cannot be corrected in any way; it is on its way to your brain for perception and comprehension.

Linear distortion is irreversible not because it can't be reversed in principle but because nobody will do this in practice; for a listener it is final. As such it should be accounted for and its audibility should be researched. Its impact on perceived audio quality may be less significant, though; you might be a more competent specialist on that issue.

If you fail to account for SPECTRUM you will have to overdo your "number" by several orders of magnitude, which is ridiculous.
Yes, there will be some headroom in accuracy, and the R&D for such circuits will definitely cost a bit more. But the benefits will outweigh those R&D troubles:

- audio chips with “overdone numbers” (operating below the singularity level) can be installed in ALL audio equipment, both professional and consumer (including doorbells). In other words, they can be commoditized and manufactured in large quantities, which means they will be cheap.

- delivering perfect audio quality then becomes a standard service, and listeners finally stop discussing all that “warmth and depth” with manufacturers and start doing so with music authors.

There are only two disadvantages of the approach:

- absence of extra profits for manufacturers
- much lower necessity for researching the numerous aspects of audibility of distortions (including the fundamental distinction between linear and non-linear ones, sorry about that).

Now THAT I can agree with 100%. There is a lot of total BS out there.
The irony of the situation is that precisely your apologia for “inevitable inaccuracy” in audio is the basis for this BS. Your heroic defense of researching distortion instead of completely eliminating it creates the ground for dumbing down audio consumers.

Measure CODECS?

No. Just no.
Yes. Just yes. The key concept is the artifact signature of a DUT - https://www.audiosciencereview.com/...measuring-distortion.10282/page-3#post-284457

So, then, a signal modification that creates a listener sensation (in two channels) MORE LIKE THE ORIGINAL SOUNDFIELD would be a distortion, and you're opposed to it? Is that correct?
Yes, I'm opposed to all UNINTENTIONAL modifications of the reference signal. All such modifications are allowed only on top of a transparent channel. I also like creative listening.

You persist in oversimplifying the most basic calculations here, and your responses make it clear you are just making fun of the people who are trying to help you through this.
What you are actually trying to do in this discussion is to convince everybody that your overcomplicated/muddy approach is better than mine, which is simple and clear. So, in this sense you are not helping but hindering me in developing a new approach. On the other hand, your active opposition has helped me better understand and formulate some of my concepts/points, and I should say thank you for that.
 

j_j

Major Contributor
Audio Luminary
Technical Expert
Joined
Oct 10, 2017
Messages
2,267
Likes
4,758
Location
My kitchen or my listening room.
I think the meaning of my “irreversible” was very clear from the very post where the issue was mentioned for the first time [#208]:

The result is lossy/irreversible in both cases because the corrupted waveform at the output of an audio device cannot be corrected in any way; it is on its way to your brain for perception and comprehension.

If you're going to invent your own meanings for words, communication is impossible. You're using mathematical terms in ways that don't make sense. Use some other word, then.
 

makmeksam

Member
Joined
Nov 13, 2019
Messages
59
Likes
44
So, not the original acoustics. So, then, a signal modification that creates a listener sensation (in two channels) MORE LIKE THE ORIGINAL SOUNDFIELD would be a distortion, and you're opposed to it? Is that correct?
This sounds like something we see on the Super Best Audio Friends forum. They stick with subjective methods, claiming that it is not possible to establish a reference.
 

dshreter

Addicted to Fun and Learning
Joined
Dec 31, 2019
Messages
794
Likes
1,226
So, not the original acoustics. So, then, a signal modification that creates a listener sensation (in two channels) MORE LIKE THE ORIGINAL SOUNDFIELD would be a distortion, and you're opposed to it? Is that correct?
I’m going to assume you’re not intentionally trolling and provide an earnest response. When implementing a playback system, the recorded material is the only reasonable candidate for the reference. It’s the only element that is distributed identically to all systems that will play back that specific version of a song (or movie, whatever), and therefore CAN be referred to. What happened prior to that is virtually unknowable and certainly not reproducible, so it cannot be the “reference”.
 

solderdude

Grand Contributor
Joined
Jul 21, 2018
Messages
15,891
Likes
35,912
Location
The Netherlands
Linear distortion is irreversible not because it can't be reversed in principle but because nobody will do this in practice; for a listener it is final. As such it should be accounted for and its audibility should be researched. Its impact on perceived audio quality may be less significant, though; you might be a more competent specialist on that issue.

Linear distortion is reversible when one knows the transfer function (measurements for this are very easy): it is simply corrected before the signal goes into the DUT.

This would be a lot harder/impossible to do with non-linear distortions.

One has to realize that linear distortions at the frequency extremes of the audible range are quite detectable with nulling but might go fully unnoticed with music reproduced by transducers, especially when the discerning ears are getting aged. So nulling may correlate poorly with actual perceived SQ. When one knows the FR (and one does), it is possible to correct for FR/phase differences as well, do the correlation with a 'phase and amplitude corrected' measured result, and assess only the non-linear crap plus the linear-correction errors.
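The "correct the linear part first, then null" idea can be sketched as a single-block FFT deconvolution (a minimal Python illustration of my own; real tools such as DeltaWave use windowed, drift-aware estimation rather than this one-shot version):

```python
import numpy as np

def corrected_null(reference, output):
    """Estimate the DUT's linear transfer function (FR/phase) from
    reference/output, remove it from the output, and return the residual,
    i.e. the non-linear + noise part left after linear correction.

    A minimal sketch: one circular-FFT deconvolution with regularization.
    """
    n = len(reference)
    R = np.fft.rfft(reference, n)
    O = np.fft.rfft(output, n)
    eps = 1e-12                                   # guard near-zero reference bins
    H = O * np.conj(R) / (np.abs(R) ** 2 + eps)   # linear FR/phase estimate
    H_safe = np.where(np.abs(H) < 1e-9, 1.0, H)   # avoid division by ~0
    corrected = np.fft.irfft(O / H_safe, n)
    return corrected - reference                  # residual after linear correction

# Toy check: a purely linear "DUT" (gain + delay) nulls to ~zero after correction
rng = np.random.default_rng(0)
ref = rng.standard_normal(4096)
out = 0.5 * np.roll(ref, 3)                       # linear-only corruption
residual = corrected_null(ref, out)
print(np.max(np.abs(residual)))                   # essentially zero
```

A nonlinear DUT would leave a non-zero residual here, which is exactly the separation the post describes.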


A DUT decoding an encoded stream characterizes not only the codec but also the DUT. Of course, with crappy codecs at low bitrates you will mostly be measuring the codec and how it reacts to music. One would need to have the original file and encode that.


By the way, do you know who j_j is, what his field of expertise is, and how many years he has been at this?

Before you claim your method correlates well with how things sound (blind test panel results), I think in the interest of science it would be really helpful to characterize amplifiers that have been said to sound very musical yet measure badly (this needs actual loads, not resistor loads), as well as well-measuring amplifiers.
Amplifiers are much easier to null than a digital file that has been DA and AD converted at varying clock speeds.
Samples may not be the same (they have small deviations) because of minute timing differences.

Note: I am impressed with all the work you have put into this, but I think that unless you come up with measurements of more than just a couple of phones it will be very hard to prove correlation against more expensive measurements.
 

j_j

Major Contributor
Audio Luminary
Technical Expert
Joined
Oct 10, 2017
Messages
2,267
Likes
4,758
Location
My kitchen or my listening room.
I’m going to assume you’re not intentionally trolling and provide an earnest response. Implementing a playback system, the recorded material is the only reasonable candidate to be the reference. It’s the only element that is distributed identically to all systems that will play back that specific version of a song (or movie, whatever), and therefore CAN be referred to. What happened prior to that is virtually unknowable and certainly not reproducible so cannot be the “reference”.

In the course of discussion, Serge has rejected any number of conventional uses of terminology, has flat-out contradicted established mathematics, and has then tried to equivocate his way around some of it, while remaining attached to confusing, unusual terminology. So this discussion is a bit hard to sort out issue by issue. First, there IS language defined here, and it helps communication when people actually use the language as intended, rather than playing some ad populum game of "I want this to be distortion". The issues of linear vs. nonlinear vs. random processes are well established mathematically, and separating them out WHEN CALCULATING THE ERROR SIGNAL is actually kind of key to getting a meaningful measurement.

What's more, his methods have no way of determining whether a signal may rise above the masking level, or above absolute threshold (or the room noise floor), without consideration of spectrum. Yeah, in an average listening room, if all error is down 110 dB you're probably very safe, but that's not a very useful number; it still only applies to signals in the electrical domain, and only if carefully applied.

For one of the reasons that linear processing (which does not create new frequencies) and nonlinear processing (which does) are so very, very different TO HUMAN HEARING, please go here: http://www.aes-media.org/sections/pnw/pnwrecaps/2019/apr2019/ Distinctions between linear and nonlinear are key.

BUT

It's kind of novel for me to get accused of being a subjective troll. Maybe you should check into my position on audiophile BS sometime.

It is, generally (obviously not always) possible to provide better envelopment, etc, in most any live recording, or in many produced recordings, and create something more like an original sensation. Now, you can't ever do that with loudspeakers without 5 channels (and you still have to do the production right, which is kind of rare), and you're better off with 7, but you can do quite a bit in 2 channels, either over headphones or speakers, to add some good spatial sensation that was lacking. Now, you're creating a sensation, NOT an exact soundfield duplicate, but we'll go there in a minute. Of course, this presumes there was an original acoustic, which is often not what one has. For some things, "original" can only be "final mix", of course.

BUT when you argue "virtually unknowable", and appeal to ignorance, you're dead wrong. I can measure a lot more than you appear to imagine.

First, at any given spot in a space, 1 point is easily measured with all 4 soundfield variables (dx, dy, dz, p). Obviously if you have more such devices (soundfield microphones) you can monitor more than one spot in space (and you should, even though most people don't). So, now we have a STANDARD, an analytic standard, a testable, falsifiable STANDARD for what was going on at, oh, let's say, 2 spots in a room, separated by say, oh, 6" or so. (yes, that distance is not pulled out of my hat, measure your head).

So, when you deliver sound, do you deliver the same 8 variables to the spot in the room where the listener's ear canal openings are? Good question.

Um, no you don't, generally. Furthermore, generally you don't get anything like that sensation, either. Fortunately you can create a good sensation without all of that information, when you consider how human hearing actually works. Which is another reason that error signal alone does not tell you what you need to know. It can not separate out the indetectable (in human terms) from the rather blatantly awful.

You can improve the sensation with processing, and yes, you can MEASURE your degree of success. You can do it in any number of ways, by asking listeners (in blind tests, of course) to localize where things are. You can even measure the 4D soundfield outside their ears (that way they get to use their own HRTF's and such, which is the best way to go), but I will say that's a pain in the butt.

Yeah, it's a pain in the butt. Believe it. Every bit of equipment is fighting you when you do that.

This sounds like something we see on the Super Best Audio Friends forum. They stick with subjective methods, claiming that it is not possible to establish a reference.

You, as well, may note that I am proposing nothing "subjective". You will also note that I have pointed to one analytic reference just above.

I will point out to both of you that "subjective" is a very loose word to use as well. Perception is subjective, but it is not a completely mysterious process. For example, how to create envelopment, directional sensation, distance sensation, etc., is understood in the perceptual space (and can be shown in terms of analytic signals at the ear, although the audio industry is, as usual, years behind, even more so the "high end", which continues to flail about with untestable premises), and yes, it is possible to work with a recording and figure out some of what will be missing in the perceived signal. If you HAVE more of the information, you can do much, much better in terms of sensation as well as analytic measurement. More information is always useful in the modern case, where one can make wavefield measurements, etc.; however, the trick is to learn WHAT PART of all those measurements is the important data. Yeah, that's what I do for a living. No, it's not something I can summarize in a paragraph.

Which is my point to Serge, frankly. His choice of "better" is rather, shall we say, dogmatic. Yes, if we're talking about an amplifier, or electronics in general, in recording playback systems (not capture!), an error signal is a very reasonable standard. But when we're talking about microphones, speakers, or headphones, sorry, no: even defining the error signal is difficult. So when we take an electronic signal, MEASURE aspects of it, and modify it according to established understanding of human perception, by his measure that's "distortion". For a simple example, consider room EQ. If you use room EQ (or speaker EQ, a much smarter idea in general) you are, according to the difference signal, increasing the error, which Serge persists in calling distortion. (You're also doing this with linear processing, and the fact that linear processing, unlike nonlinear processing, loses no information is key here, too.)


But more is doable, and it's absolutely not trolling.

Serge is working on something that's fine for simple errors, be they linear processing, distortions, or noise mechanisms. BUT

1) Distortion refers to nonlinear processing
2) Distortion does not refer to linear processing (and linear processing is reversible down to the noise floor, despite his denial above, which he then excused by equivocating that 'nobody does that', even though ADCs, DACs, tape systems, and LP systems all do it automatically as part of their normal function).
3) Noise can take many forms, including signal-modulated or mediated noise. It's an interesting point, but it's NOISE.

All of those contribute to the error signal.

THAT is where this argument started, and yet Serge refuses to use standard terms, and would rather equivocate than communicate clearly.

And now you guys are calling me an audiophile. You probably don't even want to know how deeply, horribly professionally insulting you're being when you say that, but you are.
 

j_j

Major Contributor
Audio Luminary
Technical Expert
Joined
Oct 10, 2017
Messages
2,267
Likes
4,758
Location
My kitchen or my listening room.
Linear distortion is reversible when one knows (measurements for this are very easy) and is simply corrected before the signal goes into the DUT.

This would be a lot harder/impossible to do with non-linear distortions.
For sure. In any system that's bandwidth limited or noise limited, it's impossible. Zero delay, pure delta impulse response systems are your only hope for nonlinearities.
(lots of good stuff deleted for brevity)

Before you claim your method correlates well with how things sound (blind test panel results), I think in the interest of science it would be really helpful to characterize amplifiers that have been said to sound very musical yet measure badly (this needs actual loads, not resistor loads), as well as well-measuring amplifiers.

Here's a simple thing that will blow up such tests as far as preference testing goes (ABC/hr testing will require a known reference amplifier, as well as a matching process for the DUT that is simply frightening in its complexity). Just for example, let us talk about an amplifier that clips softly, rather than hard at the rails, and starts to show reduced gain at, say, 1/4 of the normal output voltage. No, that's not linear; no, that's not accurate, absolutely. But you may very well find that what you HEAR from that amplifier, run gently into soft clipping, sounds like it has a wider dynamic range than the reference, for simple reasons related to loudness perception: in terms of actual partial loudnesses (i.e. what your basilar membrane actually decodes from the pressure at your eardrum) it IS more dynamic. Not in analytic terms, where it is in fact less dynamic, but soft clipping spreads the spectrum of the sound wider. When you increase the bandwidth of a signal while keeping the energy constant (within the bounds of hearing, please!), the signal's loudness (again, loudness is a formal technical term in psychoacoustics) will increase with increasing bandwidth. Bandwidth spread can give the same sensation as increasing the level at the ear by as much as 10 dB for normal signals (for contrived signals, much more than that). A bit less energy but quite a bit more bandwidth means the loud parts sound louder.
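The bandwidth-spreading mechanism described above can be illustrated with a hedged Python sketch (tanh is my stand-in for a soft clipper; the drive level and test tone are arbitrary choices, not parameters from the thread):

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
x = 0.9 * np.sin(2 * np.pi * 1000 * t)        # 1 kHz tone, one second

def soft_clip(x, drive=3.0):
    """tanh soft clipper: gain falls off gradually instead of hard-limiting."""
    return np.tanh(drive * x) / np.tanh(drive)

y = soft_clip(x)

# Spectra (bin k = k Hz for a 1 s signal): the clipped tone grows odd
# harmonics at 3 kHz, 5 kHz, ..., i.e. the spectrum spreads wider while
# the total energy stays in the same ballpark.
X = np.abs(np.fft.rfft(x)) / len(x)
Y = np.abs(np.fft.rfft(y)) / len(x)
print(Y[3000] / (X[3000] + 1e-15))            # third harmonic appears only after clipping
```

Feeding such spectra into a loudness model (rather than an RMS difference) is what reveals the "sounds more dynamic" effect the post describes.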

The subject is much more complicated than least mean squares.

Note, in an ABC/hr test you WILL hear a difference. You may have subjects complaining that the reference is "impaired" (Yes, that's happened in real tests, wherein the 'distortion' was perceived as an improvement over the original. It's not at all unknown.).

This kind of testing is a pain in the behind, too.

Since I mentioned loudness, let me point to this set of slides.
http://www.aes-media.org/sections/pnw/pnwrecaps/2014/jj_jan2014/

Yes, that's real, not some high-ender's pipe dream.
 

Serge Smirnoff

Active Member
Joined
Dec 7, 2019
Messages
240
Likes
136
All of those contribute to the error signal.
And my point is to make it small enough that we need not deal with all that complicated psychoacoustic stuff, which we can't properly sort out and which results only in marketing speculation. Why do you insist on keeping the distortion, of whatever kind, if it can be eliminated completely?

Amplifiers are much easier to null than a digital file that has been DA and AD converted at varying clock speeds.
Samples may not be the same (they have small deviations) because of minute timing differences.
There is no such problem anymore; both DeltaWave and the Matlab code remove timing inaccuracy with ease. This is a deterministic operation and can be done to any predefined accuracy.
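A toy version of that deterministic alignment step (integer-sample only, via the cross-correlation peak; DeltaWave and the SoundExpert Matlab code also handle sub-sample offsets and clock drift, which this sketch deliberately omits):

```python
import numpy as np

def align(reference, recorded):
    """Remove a constant time offset between two captures by locating the
    cross-correlation peak. Returns the shifted signal and the found lag."""
    n = len(reference)
    corr = np.correlate(recorded, reference, mode="full")
    lag = int(np.argmax(corr)) - (n - 1)   # positive lag: recorded is late
    aligned = np.roll(recorded, -lag)      # undo the offset
    return aligned, lag

# Toy check: a simulated 7-sample capture delay is found and removed
rng = np.random.default_rng(1)
ref = rng.standard_normal(2000)
rec = np.roll(ref, 7)                      # "recorded" copy, 7 samples late
aligned, lag = align(ref, rec)
print(lag)                                 # 7
```

After this step the null (difference) can be formed sample-by-sample without timing error dominating the residual.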

Before you claim your method is correlating well with how things sound (blind test panel results) I think in the interest of science it would be really helpfull to characterize amplifiers that have been said to sound very musical yet measure bad (needs actual loads no resistor loads) and well measuring amplifiers.
On this page you can find measurements of various loopback configurations using a parameter very similar to Df, computed with the DiffMaker software. The page also provides links to the source recordings, so I computed Df levels for some of them. The test signal is 2 min of music. MinDf/MedianDf/MaxDf are indicated in the file name.

RME Babyface Pro XLRout --> XLRin

Babyface Pro XLR Out 1 2 - XLR In 1 2.wav_cut.wav(44)__ref_diffmaker.wav(44)__mono_400-43.5301-33.3813-22.3843

Forssell Technologies MDAC4 --> Focusrite Blue AD 245

Forssell DAC from AD245(master) bis -20 dbu.flac_cut.wav(44)__ref_diffmaker.flac(44)__mono_400-74.8529-69.9477-54.6983

Feel the difference.

But when we're talking about microphones, speakers, headphones, sorry, no, even defining the error signal is difficult. So when we take an electronic signal, MEASURE aspects of it, and modify it according to established understanding of human perception, to his measure, that's "distortion". For a simple example, consider room EQ. If you use room EQ (or speaker EQ, a much smarter idea in general) you are, according to the difference signal, increasing the error, which Serge persists in calling distortion.
I'm pretty sure Df levels for transducers will be within the 0 to -20 dB range.

Note, in an ABC/hr test you WILL hear a difference. You may have subjects complaining that the reference is "impaired" (Yes, that's happened in real tests, wherein the 'distortion' was perceived as an improvement over the original. It's not at all unknown.).
In the df-metric, psychoacoustic effects are also accounted for, but in a different way - by means of artifact signatures. DUTs with similar/close distortion signatures can be assessed by their Df levels (nulls). So in some cases Df measurements can't be used for predicting audio quality, and such cases are detectable by means of cluster analysis.

think that unless you come up with measurements of more than just a couple of phones it will be very hard to prove correlation with more expensive measurements.
Instead of arguing about the pros/cons of different measurement approaches, their predictive power and limitations, I suggest choosing a few well-known audio devices and measuring them according to the df-metric and perhaps some other metric of your choice. For sure, thought experiments are useful, but real ones are more helpful and interesting.

It is worth noting that the software for computing df levels is freely available and can be used for your own experiments:
- DeltaWave - https://deltaw.org/
- Matlab code - http://soundexpert.org/articles/-/blogs/visualization-of-distortion#part3
 

Serge Smirnoff

Active Member
Joined
Dec 7, 2019
Messages
240
Likes
136
So, when you deliver sound, do you deliver the same 8 variables to the spot in the room where the listener's ear canal openings are? Good question.

Um, no you don't, generally. Furthermore, generally you don't get anything like that sensation, either. Fortunately you can create a good sensation without all of that information, when you consider how human hearing actually works. Which is another reason that error signal alone does not tell you what you need to know. It can not separate out the indetectable (in human terms) from the rather blatantly awful.

You can improve the sensation with processing, and yes, you can MEASURE your degree of success. You can do it in any number of ways, by asking listeners (in blind tests, of course) to localize where things are. You can even measure the 4D soundfield outside their ears (that way they get to use their own HRTF's and such, which is the best way to go), but I will say that's a pain in the butt.
I think the aim of an audio reproduction system is to deliver to a listener a SENSATION similar to that of the people who created the particular mix. Nothing more. And this can be achieved by using the same studio monitors in the listening environment (with room correction), for those who care. Or other speakers with room correction, or without correction ... depending on the listener's demand for audio quality. Everything between the output of production (the resulting waveform) and the input of the listener's transducer is just a communication channel, aimed at delivering the audio signal with some predefined accuracy.
 

solderdude

Grand Contributor
Joined
Jul 21, 2018
Messages
15,891
Likes
35,912
Location
The Netherlands
A codec is the DUT in a codec research.

Only when done within the digital domain, not when run through actual devices with DA-AD loops in between.


Once you find a clear audible correlation between highly regarded audiophile amplifiers/DACs etc. and excellently measuring ones, and this has a high correlation with perceived SQ (level-matched blind; we all know sighted can never match), then you can call me impressed.

I am impressed with your work and efforts but not convinced of full correlation. I understand about the software and the short 'time windows', but I need to see more results and evidence compared with blind audibility tests.
As is pretty well known here, I am all for nulling using music and real loads, but I know that nulling also produces difference signals from inaudible phase and amplitude differences at the extremes of the audio bandwidth.
 

pkane

Master Contributor
Forum Donor
Joined
Aug 18, 2017
Messages
5,632
Likes
10,205
Location
North-East
As is pretty well known here, I am all for nulling using music and real loads, but I know that nulling also produces difference signals from inaudible phase and amplitude differences at the extremes of the audio bandwidth.

It's not hard to imagine a version of the df metric (or the DeltaWave RMS null) that does account for spectrum and audibility. In fact, DeltaWave already computes an A-weighted version of the null metric, which is exactly that: a perceptual, spectrum-based weighting curve applied to the error signal.

I'm going to add another measure to DeltaWave that computes the result using the ISO 226:2003 threshold curve. This should answer the question of whether the error signal is above or below audibility based on its spectrum. And yes, this is designed to measure the distortion produced by electronic components and digital algorithms, not by transducers.
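The general idea of a perceptually weighted null can be sketched as follows (my own illustrative Python, not DeltaWave's actual implementation; it uses the analytic IEC 61672 A-weighting curve rather than the tabulated ISO 226 thresholds):

```python
import numpy as np

def a_weight_db(f):
    """A-weighting curve in dB (analytic IEC 61672 form, ~0 dB at 1 kHz)."""
    f2 = np.asarray(f, dtype=float) ** 2
    ra = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * np.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20 * np.log10(np.maximum(ra, 1e-30)) + 2.0

def weighted_error_db(error, fs):
    """Perceptually weighted RMS level of an error signal, in dB:
    weight the error spectrum by the hearing-relevant curve before
    collapsing it to a single number."""
    spec = np.fft.rfft(error)
    freqs = np.fft.rfftfreq(len(error), 1 / fs)
    gain = 10 ** (a_weight_db(freqs) / 20)        # per-bin linear weighting
    weighted = np.fft.irfft(spec * gain, len(error))
    rms = np.sqrt(np.mean(weighted ** 2))
    return 20 * np.log10(rms + 1e-30)
```

With this, a 30 Hz error and a 3 kHz error of equal unweighted RMS produce very different weighted levels, which is the point of a spectrum-aware null metric.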
 

Serge Smirnoff

Active Member
Joined
Dec 7, 2019
Messages
240
Likes
136
need to see more results and evidence compared with blind audibility tests.
The beauty of the df-metric is that it can work even without a strong correlation to SQ. It predicts the existence of some point on the df scale where psychoacoustics stops working, and it provides a method to determine this point and a measurement procedure to verify a DUT's compliance with it. There is no need to analyze distortion below this point. For analysis of distortion/corruption of the signal I provided just a basic instrument - artifact signatures and distances between them. So, by design, the df-metric is aimed foremost at preventing distortion and only secondly at analyzing it. Other methods of distortion research can be used above the s-level if required. For various listening environments the s-level varies between -20 dB and -90 dB with the m-signal. Some current audio devices are already close to -90 dB. And you need to control only one parameter - the df level with the m-signal - which is very logical, as we are actually controlling the transparency of the communication channel for the m-signal. Pure technical stuff.
 

solderdude

Grand Contributor
Joined
Jul 21, 2018
Messages
15,891
Likes
35,912
Location
The Netherlands
Some changes do not degrade sound quality, such as slow phase changes and a gently sloping frequency response.
These will show up in nulling but may have no influence on perceived sound quality.

How do you avoid flagging those kinds of alterations to the signal, and ignore them?
One could qualify such an amp as having poor metrics (and of course it does), but that won't have a relation to perceived SQ.
 