
How good does a DAC need to be for ASR approval?

watchnerd

Grand Contributor
Joined
Dec 8, 2016
Messages
12,449
Likes
10,408
Location
Seattle Area, USA
The exercise I'm doing in examining the real outputs of various pro gear in simple DA/AD paths demonstrates misbehaviour only about 20dB down in the poorer cases - there is a large divergence of competence in the real world, and looking at what the waveform does in the "tricky areas" shows up the winners and losers.

I want to make sure I understand what you're saying.

Are you saying you have equipment that has noise and distortion artifacts at -20 dB? If that's the case, it's broken or just garbage.

Or are you saying you have equipment that "misbehaves" at -20 dBFS?
 

watchnerd

Grand Contributor
Joined
Dec 8, 2016
Messages
12,449
Likes
10,408
Location
Seattle Area, USA
And the size of the check that accompanies the device submitted for review.
At the very least the device should be donated to ASR.

I am more than happy to donate some of my gear to Audio Science Review in perpetuity in exchange for avoiding electronic waste disposal fees, a-holes on AudiogonCraigsBay, and some kind of forum sparkles.
 

fas42

Major Contributor
Joined
Mar 21, 2016
Messages
2,818
Likes
191
Location
Australia
I want to make sure I understand what you're saying.

Are you saying you have equipment that has noise and distortion artifacts at -20 dB? If that's the case, it's broken or just garbage.

Or are you saying you have equipment that "misbehaves" at -20 dBFS?
No, I don't have this equipment. But the members of Gearslutz have a great variety of components and boards designed for, and sold to, the professional audio community - and I recently discovered that they have been contributing to a thread which links to uploads of DA/AD loop passes, all processing a single reference music file. The tool DiffMaker was used to assess the accuracy of conversion, with a huge variety of results; I have little faith in that program, and decided to do my own analysis of this sizeable repository of data.

In my early examination it appeared that there were major glitches and inaccuracies at particular points in the result files - however, I'm now starting to question whether I'm misinterpreting what I'm seeing. So, I will be doing another round of examination to ensure that I get the story completely right before declaring anything more.

This assembly of audio data has the potential to be extremely useful for understanding where subtle glitches in audio processing may occur - the trick is working out the best way of extracting something meaningful from the collection. Which means it's a learning exercise - especially for me.
 
OP
Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,524
Likes
37,057
When one leaves the semi-fantasy world of measuring equipment in highly prescribed ways, and looks at what the gear does in the real world, then that 80dB looks very, very ambitious. The exercise I'm doing in examining the real outputs of various pro gear in simple DA/AD paths demonstrates misbehaviour only about 20dB down in the poorer cases - there is a large divergence of competence in the real world, and looking at what the waveform does in the "tricky areas" shows up the winners and losers.

I was hoping to post some pics by now, but the new version of Audacity is exploding on me - I'm "dithering" on whether to revert to an older version, or push for answers on what's happening.
Describe what you mean by "exploding". Prior versions are available, by the way.

As already asked: do you have devices that fail at -20 dB relative to the signal? That is broken gear. Or can you explain further?
 
OP
Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,524
Likes
37,057

I have little faith in this program

I agree. It's too flaky. Sometimes it gives results that make no sense and I can't figure out why. Sometimes it seems like magic.

I can tell Frank that part of the problem in that Gearslutz thread is frequency response. Though DiffMaker is supposedly using only 100 Hz-12 kHz in those tests, it gets upset by small differences at the frequency extremes. Something 1/4 dB down at 20 Hz will get a much lower null result. If a converter uses minimum-phase filters, which both droop the top of the response and introduce phase issues right at the edge of the 20 kHz band, that also seems to throw off DiffMaker. In a sense it should - but you may get very poor nulls from a device which is pretty good for the most part. Again, such things should give a lower null, but casual interpretation of that is misleading. It is a piece of the puzzle, of course.
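
To put a number on that FR point: if two otherwise-identical signals differ only in level, the achievable null is capped by the mismatch, since the residual after subtraction is 1 - 10^(-d/20) of the original. A minimal sketch in Python (plain arithmetic for illustration, not DiffMaker's actual algorithm):

```python
import numpy as np

# Best achievable null when two otherwise-identical signals differ
# in level by `mismatch_db` (illustration only, not DiffMaker code).
def null_depth_db(mismatch_db: float) -> float:
    residual = abs(1.0 - 10.0 ** (-mismatch_db / 20.0))
    return 20.0 * np.log10(residual)

for d in (0.1, 0.25, 0.5, 1.0):
    print(f"{d:5.2f} dB mismatch -> best null {null_depth_db(d):6.1f} dB")
```

A quarter-dB droop already caps the null near -31 dB at that frequency, regardless of how clean the rest of the band is.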
 
OP
Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,524
Likes
37,057
Do you think, for one second, that you will get any kind of sensible answer?

Well, in this case I sort of did. He was seeing surprisingly poor DiffMaker results, and investigating why, he found some oddities. :)
 

fas42

Major Contributor
Joined
Mar 21, 2016
Messages
2,818
Likes
191
Location
Australia
Describe exploding on you. Prior versions are available by the way.

As already asked, you have devices that fail at -20 db relative to the signal? That is broken gear. Or can you explain further.
Picturesque term for Audacity corrupting the work to date - work done is stored as a project, and on starting it up the following day, all tracks were silenced. The program could not recover any data - and, somehow, it had also managed to corrupt the previous project save. I may go back to a previous version - in the meantime, I'm rebuilding the tracks, checking more carefully as I go that corruption is not happening.

The gear is showing artifacts, but -20dB is definitely not correct. Note that I was looking at the very best performers, by DiffMaker standards, so far - it will be interesting to see how lesser units fare.
 

fas42

Major Contributor
Joined
Mar 21, 2016
Messages
2,818
Likes
191
Location
Australia
Well, in this case I sort of did. He was seeing surprisingly poor DiffMaker results, and investigating why, he found some oddities. :)
The method I'm pursuing is to isolate the part of the spectrum above 10kHz, since that is typically where problems arise; align the tracks; and synchronise as finely as possible by resampling and shifting samples as necessary - what DiffMaker does under the hood. Quite excellent visual matching can be achieved this way - which makes the odd sample that went wrong stand out like a sore thumb. Spectrum analysis will fail to pick up this sort of misbehaviour: a single glitch.
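
A minimal sketch of that pipeline in Python with SciPy - the file names, the 8th-order filter, and the 8x upsampling factor are my assumptions, not fas42's exact settings:

```python
import numpy as np
from scipy import signal
from scipy.io import wavfile

fs, ref = wavfile.read("original.wav")   # hypothetical reference capture (mono assumed)
_,  dut = wavfile.read("loopback.wav")   # hypothetical DA/AD loop capture
ref = ref.astype(np.float64)
dut = dut.astype(np.float64)

# Isolate the >10 kHz band with a steep high-pass.
sos = signal.butter(8, 10_000, btype="highpass", fs=fs, output="sos")
ref_hf = signal.sosfiltfilt(sos, ref)
dut_hf = signal.sosfiltfilt(sos, dut)

# Upsample 8x so alignment can be trimmed to a fraction of a sample.
up = 8
ref_up = signal.resample_poly(ref_hf, up, 1)
dut_up = signal.resample_poly(dut_hf, up, 1)

# Coarse alignment via cross-correlation, then difference.
n = min(len(ref_up), len(dut_up))
lag = np.argmax(signal.correlate(dut_up[:n], ref_up[:n], mode="full")) - (n - 1)
dut_al = np.roll(dut_up[:n], -lag)       # roll wraps around; trim edges in real use
residual = dut_al - ref_up[:n]
print(f"lag = {lag} samples at {up}x rate, residual RMS = "
      f"{20 * np.log10(np.std(residual) / np.std(ref_up[:n])):.1f} dB")
```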
 
OP
Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,524
Likes
37,057
The method I'm pursuing is to isolate the part of the spectrum above 10kHz, since that is typically where problems arise; align the tracks; and synchronise as finely as possible by resampling and shifting samples as necessary - what DiffMaker does under the hood. Quite excellent visual matching can be achieved this way - which makes the odd sample that went wrong stand out like a sore thumb. Spectrum analysis will fail to pick up this sort of misbehaviour: a single glitch.

Still, this approach will not give you great nulls. As a for-instance: take a 12 kHz sine at a 48 kHz rate. Resample to 384 kHz and move one copy by one sample. Invert one and add them together. You only get about a -12 dB null. Loopbacks also have some delay from travelling over the wire - yes, that is enough to corrupt your nulls somewhat. About 3 nanoseconds per meter. Assuming I understand what your methodology is.
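
That example is easy to check numerically; the closed form for the residual of a sine differenced against a delayed copy of itself is 2·sin(π·f·Δ). A quick sketch:

```python
import numpy as np
from scipy.signal import resample_poly

fs, f = 48_000, 12_000
t = np.arange(fs) / fs                       # one second of 12 kHz sine at 48 kHz
x = np.sin(2 * np.pi * f * t)
x_up = resample_poly(x, 384_000 // fs, 1)    # 8x upsample to 384 kHz

# Shift by one sample at the high rate, invert and add (i.e. subtract).
a, b = x_up[1:], x_up[:-1]
print(f"null depth: {20 * np.log10(np.std(a - b) / np.std(b)):.1f} dB")
# Closed form: 2*sin(pi*12000/384000) ~ 0.196, about -14 dB -
# the same shallow-null ballpark as quoted above.
```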
 

fas42

Major Contributor
Joined
Mar 21, 2016
Messages
2,818
Likes
191
Location
Australia
Still, this approach will not give you great nulls. As a for-instance: take a 12 kHz sine at a 48 kHz rate. Resample to 384 kHz and move one copy by one sample. Invert one and add them together. You only get about a -12 dB null. Loopbacks also have some delay from travelling over the wire - yes, that is enough to corrupt your nulls somewhat. About 3 nanoseconds per meter. Assuming I understand what your methodology is.
Yes, it is almost impossible to null completely, for the reasons you mention. What I am currently doing is looking at the waveform and seeing if there is excellent correlation at all points, allowing for the fact that there may be a slight time misalignment - if necessary, resampling into the MHz region to optimise the match. If there are any occasional sample points which are significantly in error, then there will be a distinct glitch in the difference waveform, visually and otherwise - I'm looking for those markers.
 

fas42

Major Contributor
Joined
Mar 21, 2016
Messages
2,818
Likes
191
Location
Australia
As a visual reference, you can take the original track, misalign a duplicate of it, and see what that gives you. I've just done that on an interesting part of the waveform, by resampling to 2.822MHz and shifting the copy by one sample - this shows me what a pure time shift, and nothing else, would produce.
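
That reference residual takes only a couple of lines to generate - a sketch, assuming a hypothetical 44.1 kHz "original.wav" (2.8224 MHz being 64 x 44.1 kHz):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import resample_poly

fs, x = wavfile.read("original.wav")               # hypothetical 44.1 kHz mono file
x_up = resample_poly(x.astype(np.float64), 64, 1)  # -> 2.8224 MHz

# Residual of a pure one-sample time shift at the high rate: anything
# in a real loopback residual that does NOT look like this trace is
# something other than simple time misalignment.
shift_residual = x_up[1:] - x_up[:-1]
```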
 
OP
Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,524
Likes
37,057
Time shifts cause a rising frequency response in the difference signal - it rises at 6 dB per octave. That might cause what appears to be a glitch when a high-frequency transient occurs.
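
The 6 dB/octave figure follows from the residual of a signal minus a delayed copy having magnitude |1 - e^(-j2πfΔ)| = 2|sin(πfΔ)|, which grows roughly in proportion to frequency while the delay Δ is small. A quick numerical check:

```python
import numpy as np

delta = 1 / 384_000                      # the one-sample shift from earlier
for f in (1_000, 2_000, 4_000, 8_000, 16_000):
    resid_db = 20 * np.log10(2 * np.sin(np.pi * f * delta))
    print(f"{f:6d} Hz -> residual {resid_db:6.1f} dB")
# Successive lines differ by ~6 dB: a time-shifted copy leaks most
# energy at the top of the band, right where HF transients live.
```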
 

fas42

Major Contributor
Joined
Mar 21, 2016
Messages
2,818
Likes
191
Location
Australia
Which is one of the reasons I'm concentrating solely on the 10k-20kHz band - everything below is EQ'd to at least 120dB down, and plays no part in the comparison. The piece used is the famous Can-Can number, played by a full orchestra - so there's plenty happening: nice big crescendos, and lots and lots of high-frequency action. Is the HF content perfectly preserved by the processing - during the crescendo moments, in the quieter sections, and in the fadeout at the end?

I definitely misread what I was seeing initially - I'm learning as I go, and the performance of the gear looks better and better as I refine my approach to the examination. But note that these are some of the best-performing combos, by the posted numbers, that I'm looking at right now - will this continue to be the case with the "lesser" units?
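
One way to realise that "at least 120dB down" isolation is a Kaiser-windowed FIR high-pass. A sketch with an assumed sample rate and transition width, since the post doesn't specify the actual EQ used:

```python
import numpy as np
from scipy import signal

fs = 44_100
atten_db = 120.0                 # required stopband rejection
width_hz = 1_000.0               # assumed transition width around 10 kHz
numtaps, beta = signal.kaiserord(atten_db, width_hz / (0.5 * fs))
numtaps |= 1                     # a high-pass FIR needs an odd tap count
taps = signal.firwin(numtaps, 10_000, window=("kaiser", beta),
                     pass_zero=False, fs=fs)

# Verify the stopband really sits ~120 dB down.
w, h = signal.freqz(taps, worN=8192, fs=fs)
stop = np.abs(h[w < 9_000])
print(f"{numtaps} taps, worst stopband leakage: "
      f"{20 * np.log10(stop.max()):.1f} dB")
```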
 
OP
Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,524
Likes
37,057
Which is one of the reasons I'm concentrating solely on the 10k-20kHz band - everything below is EQ'd to at least 120dB down, and plays no part in the comparison. The piece used is the famous Can-Can number, played by a full orchestra - so there's plenty happening: nice big crescendos, and lots and lots of high-frequency action. Is the HF content perfectly preserved by the processing - during the crescendo moments, in the quieter sections, and in the fadeout at the end?

I definitely misread what I was seeing initially - I'm learning as I go, and the performance of the gear looks better and better as I refine my approach to the examination. But note that these are some of the best-performing combos, by the posted numbers, that I'm looking at right now - will this continue to be the case with the "lesser" units?

One thing you can do is use Audacity's Change Speed on the recorded file. Speed it up slightly (0.001% is possible). At some point the files will come into time alignment, and for a fraction of a second somewhere they will null out quite well, giving you an idea of how deep the null would be if they were perfectly matched in time. Older versions of Audacity let you Change Speed by 0.0000003%.
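
The speed-change trick effectively sweeps the relative timing of the two tracks; the same search can be done directly by applying fractional-sample delays with an FFT phase ramp and keeping the deepest null. A sketch, where ref and dut are assumed to be equal-length mono float arrays:

```python
import numpy as np

def null_vs_delay(ref, dut, delays):
    """Null depth (dB) of dut against ref for each fractional-sample delay."""
    n = len(ref)
    freqs = np.fft.rfftfreq(n)             # cycles/sample
    DUT = np.fft.rfft(dut)
    depths = []
    for d in delays:                       # d in (fractional) samples
        shifted = np.fft.irfft(DUT * np.exp(2j * np.pi * freqs * d), n)
        depths.append(20 * np.log10(np.std(shifted - ref) / np.std(ref)))
    return np.array(depths)

# e.g. scan +/-2 samples in 0.01-sample steps:
# depths = null_vs_delay(ref, dut, np.arange(-2, 2, 0.01))
# print("deepest null:", depths.min(), "dB")
```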
 

fas42

Major Contributor
Joined
Mar 21, 2016
Messages
2,818
Likes
191
Location
Australia
One thing you can do is use Audacity's Change Speed on the recorded file. Speed it up slightly (0.001% is possible). At some point the files will come into time alignment, and for a fraction of a second somewhere they will null out quite well, giving you an idea of how deep the null would be if they were perfectly matched in time. Older versions of Audacity let you Change Speed by 0.0000003%.
Don't you hate it when they 'improve' software and reduce its capability?! :mad:

Most of the captures have been clocked "perfectly", because the process was locked onto the original digital stream - in the first batch I'm looking at, only one track is slowly losing sync, but it holds sync long enough to see how well it matches the others.

When Audacity runs out of puff to do the work needed, I'll ship whatever needs processing over to SoX - the latter powers Audacity's resampling, and has far more flexibility, and precision, to get the job done.
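
For reference, both jobs map onto standard SoX invocations - a sketch driving it from Python, with hypothetical file names; the rate effect's -v flag selects SoX's very-high-quality converter, and the speed effect applies a fine speed factor:

```python
import subprocess

# Upsample a capture to 2.8224 MHz with SoX's very-high-quality
# rate converter (file names are hypothetical).
subprocess.run(["sox", "capture.wav", "upsampled.wav",
                "rate", "-v", "2822400"], check=True)

# Fine speed trim, e.g. a 0.0001% speed-up, mirroring Audacity's
# Change Speed trick discussed earlier.
subprocess.run(["sox", "loopback.wav", "trimmed.wav",
                "speed", "1.000001"], check=True)
```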
 

Wombat

Master Contributor
Joined
Nov 5, 2017
Messages
6,722
Likes
6,459
Location
Australia
Don't you hate it when they 'improve' software and reduce its capability?! :mad:

Most of the captures have been clocked "perfectly", because the process was locked onto the original digital stream - in the first batch I'm looking at, only one track is slowly losing sync, but it holds sync long enough to see how well it matches the others.

When Audacity runs out of puff to do the work needed, I'll ship whatever needs processing over to SoX - the latter powers Audacity's resampling, and has far more flexibility, and precision, to get the job done.


I hate it when loved hardware is made redundant by computer OS upgrades.
 

fas42

Major Contributor
Joined
Mar 21, 2016
Messages
2,818
Likes
191
Location
Australia
As usual for me these days, I went off the boil in terms of looking at these loop captures - but I've just tried the next increment of examining the waveforms for visual differences in the above-10k band. I had managed to trick myself earlier into thinking I was seeing variations only 20dB down :oops:. What I now have is errors of the order of 40dB down, relative to the instantaneous level of the HF band - so, progress is being made ...
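
Expressing an error "relative to the instantaneous level" of a band can be done with an analytic-signal envelope. A sketch of one way to do it - my construction for illustration, not necessarily how fas42 scales his comparisons; residual and ref_hf are assumed equal-length arrays from the >10k comparison:

```python
import numpy as np
from scipy.signal import hilbert

def relative_error_db(residual, ref_hf, floor=1e-9):
    """Residual in dB relative to the reference's instantaneous level."""
    envelope = np.abs(hilbert(ref_hf))        # instantaneous level of the band
    rel = np.abs(residual) / np.maximum(envelope, floor)
    return 20 * np.log10(np.maximum(rel, floor))

# Values hovering around -40 dB would match the figure quoted above.
```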
 

fas42

Major Contributor
Joined
Mar 21, 2016
Messages
2,818
Likes
191
Location
Australia
And more progress ... I'm refining the mechanism for achieving the best null in the >10k band - and I'm still seeing errors of the order of 40dB down for one of the test setups. I've confirmed that a much deeper null is possible by running a copy of the original waveform through the same "jig".
 