
Why are there divergent opinions here from gearslutz?

KSTR

Major Contributor
Joined
Sep 6, 2018
Messages
2,733
Likes
6,102
Location
Berlin, Germany
DeltaWave attempts to answer a lot of questions that the GS thread leaves unanswered. I've communicated with Didier to see if he'd be interested in adding some new numbers to his GS thread, but he believes this will confuse everyone and that he'd prefer to stay with the simple, single number metric.
In other words, he's simply refusing to accept reality. His list already confused everyone (by providing wrong/irrelevant data) and really should be completely deleted and replaced by your much more competent list.
Personally, I find his (and his supporters') stance unacceptable. Making errors is not the problem; refusing to learn and do at least some damage control is.
 

RayDunzl

Grand Contributor
Central Scrutinizer
Joined
Mar 9, 2016
Messages
13,247
Likes
17,163
Location
Riverview FL

BDWoody

Chief Cat Herder
Moderator
Forum Donor
Joined
Jan 9, 2019
Messages
7,039
Likes
23,184
Location
Mid-Atlantic, USA. (Maryland)
I'm sorry, but the word slutz is an offensive term; please use the new, properly corrected term for the website, "space", so everyone can have a safe space and not get offended.

If someone comes here looking for a 'safe space,' they may be disappointed.
 

pkane

Master Contributor
Forum Donor
Joined
Aug 18, 2017
Messages
5,678
Likes
10,314
Location
North-East
In other words, he's simply refusing to accept reality. His list already confused everyone (by providing wrong/irrelevant data) and really should be completely deleted and replaced by your much more competent list.
Personally, I find his (and his supporters') stance unacceptable. Making errors is not the problem; refusing to learn and do at least some damage control is.

As I see it, the data posted in that thread is useful, but only in an "engineering" sense: it is not directly related to sound quality, and it's not proportional to anything meaningful, except to an engineer. A large error could simply mean that the filter isn't phase-linear, or that its frequency cut-off is large, or that the near-DC frequencies are filtered out. None of these is a major audible concern, but they can result in a huge, 30-40 dB difference in the RMS null value. To a non-technical person looking at these numbers, some interfaces will look much, much better. The reality is, they could sound the same as, or worse than, some of the better-measuring units.
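To make that concrete, here is a minimal sketch (my own illustration in Python with numpy/scipy, using a noise stand-in for the test signal; this is not DeltaWave code): a first-order 2 Hz DC-blocking high-pass removes nothing audible, yet it already limits a plain, un-EQ'd RMS null to roughly the -40 dB region against the unfiltered original.

```python
# Hypothetical illustration (not DeltaWave code): a sonically benign DC-blocking
# high-pass still leaves a large residual in a plain RMS null.
import numpy as np
from scipy import signal

fs = 48000
rng = np.random.default_rng(0)
x = rng.standard_normal(fs * 10)                 # 10 s of noise as a stand-in test signal
x /= np.max(np.abs(x))

# First-order 2 Hz high-pass: affects essentially only subsonic content
sos = signal.butter(1, 2.0, btype="highpass", fs=fs, output="sos")
y = signal.sosfilt(sos, x)

def null_depth_db(ref, test):
    """RMS of the difference relative to the RMS of the reference, in dB."""
    return 20 * np.log10(np.linalg.norm(test - ref) / np.linalg.norm(ref))

print(f"plain RMS null: {null_depth_db(x, y):.1f} dB")   # on the order of -40 dB despite being inaudible
```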
 

Grooved

Addicted to Fun and Learning
Joined
Feb 26, 2021
Messages
679
Likes
441
In other words, he's simply refusing to accept reality. His list already confused everyone (by providing wrong/irrelevant data) and really should be completely deleted and replaced by your much more competent list.
Personally, I find his (and his supporters') stance unacceptable. Making errors is not the problem; refusing to learn and do at least some damage control is.

OK, so I read your thread and it all makes sense ;-)
Now, is it possible to at least get close to the results of your test, but without needing a 3-hour process for each file?
Regarding the GS thread, I think it would be better to add other test results, but only if it's done for all the devices; that said, I can understand not adding them to the already-started thread. There's a lot of information there already, and it's better to keep it, even if a better test is found, as long as both tests can be linked and known by everybody.
It would be too much information in the same thread, and it may just be better to start a new one: imagine a better method comes along later, what would you do with three sets of test results in the same thread?

As I see it, the data posted in that thread is useful, but only in an "engineering" sense: it is not directly related to sound quality, and it's not proportional to anything meaningful, except to an engineer. A large error could simply mean that the filter isn't phase-linear, or that its frequency cut-off is large, or that the near-DC frequencies are filtered out. None of these is a major audible concern, but they can result in a huge, 30-40 dB difference in the RMS null value. To a non-technical person looking at these numbers, some interfaces will look much, much better. The reality is, they could sound the same as, or worse than, some of the better-measuring units.

This is exactly what I thought at first: testing a loop is not testing an input only or an output only, it's the combination of both, and it can't tell you whether both the ADC and the DAC are performing average, or whether the ADC is great and the DAC not so good, for example... Even with the best test results we could get, they will be specific to cases where you need a loop, which is clearly external analog processing during mixing and mastering, for engineers who don't work entirely in the box.
Whether the ADC and DAC are in one device or in two different devices, you ideally need the best ADC-DAC combo.
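A rough way to see why the split can't be recovered from a single loopback number (my own back-of-envelope, under the assumption that the DAC and ADC error contributions are uncorrelated so their powers add): very different DAC/ADC combinations can land on nearly the same combined figure.

```python
# Back-of-envelope only: assumes uncorrelated DAC and ADC residuals that add in power.
import numpy as np

def combined_residual_db(*levels_db):
    """Power-sum of individual residual levels given in dB."""
    return 10 * np.log10(sum(10.0 ** (level / 10.0) for level in levels_db))

print(combined_residual_db(-90.0, -90.0))    # ~ -87.0 dB (both converters equally good)
print(combined_residual_db(-87.3, -105.0))   # ~ -87.2 dB (one converter dominates)
```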

Did you find a method that would be at least somewhere between the GS process and the one that KSTR has developed?
 

KSTR

Major Contributor
Joined
Sep 6, 2018
Messages
2,733
Likes
6,102
Location
Berlin, Germany
Now, is it possible to at least get close to the results of your test, but without needing a 3-hour process for each file?
Basically the process is pretty systematic and could be more or less fully automated/scripted... if only the tools I'm using allowed for that. The time for analysis could be reduced to minutes, and then the recording time becomes the dominant factor. Long recording times -- for massive averaging of 100 takes or so -- are only needed when we need to dig down really deep, which is not the case for a comparison of original vs. loopback recording. Averaging is also the main reason my method only works with sample-synced recording, DAC and ADC running from the same clock in some way or another... which in most cases can be achieved.
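To make the averaging point concrete, here is a small numpy sketch (my own illustration, not KSTR's actual tool chain), assuming perfectly sample-synced takes that share the same deterministic content and differ only in uncorrelated noise; averaging 100 takes lowers that noise by about 10*log10(100) = 20 dB.

```python
# Illustration of coherent averaging over sample-synced takes (assumes ideal alignment).
import numpy as np

fs = 48000
n = fs * 2
rng = np.random.default_rng(1)
t = np.arange(n) / fs
clean = 0.5 * np.sin(2 * np.pi * 1000.0 * t)        # identical deterministic part in every take

takes = [clean + 1e-3 * rng.standard_normal(n) for _ in range(100)]
averaged = np.mean(takes, axis=0)                   # only valid if takes are sample-synced

def residual_db(x, ref):
    """Residual noise level relative to the reference, in dB."""
    return 20 * np.log10(np.linalg.norm(x - ref) / np.linalg.norm(ref))

print(f"single take:      {residual_db(takes[0], clean):6.1f} dB")
print(f"100-take average: {residual_db(averaged, clean):6.1f} dB")   # ~20 dB lower
```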

@pkane's DeltaWave is now mature enough to be fully sufficient for the task. My method differs mainly in that the residual is cleaner, so one can better judge by listening what's going on, and for some kinds of analysis this also has advantages. W.r.t. the numbers, plain RMS null and perceptually weighted RMS null, it makes little difference, as the artifacts have low energy. Plus we have his PK metric, which is even better than the simple RMS metrics.
As for the source material, DW does the best job when the test file has a rich, full-range spectrum at all levels, which can be achieved by placing in the file a section of pink noise slowly ramping down in volume. Averaging the input data will also help to get the noise down, but again with the constraint of requiring sample-synced data.
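For reference, here is a rough sketch of the kind of test segment described above (parameters are my own assumptions, not an official DeltaWave test file): pink-ish noise, generated by 1/sqrt(f) spectral shaping of white noise, with a slow level ramp so the full spectrum gets exercised at many levels. The soundfile package is an assumed dependency for writing the WAV.

```python
# Sketch of a full-range test segment: pink-ish noise slowly ramping down in level.
import numpy as np
import soundfile as sf   # assumed to be available for 24-bit WAV output

fs = 48000
dur = 30.0                               # arbitrary ramp length
n = int(fs * dur)
rng = np.random.default_rng(2)

# Pink-ish noise via frequency-domain shaping: white spectrum scaled by 1/sqrt(f)
spec = np.fft.rfft(rng.standard_normal(n))
f = np.fft.rfftfreq(n, 1.0 / fs)
f[0] = f[1]                              # avoid division by zero at DC
pink = np.fft.irfft(spec / np.sqrt(f), n)
pink /= np.max(np.abs(pink))

# Slow ramp from roughly -6 dBFS down to -66 dBFS over the whole segment
gain = 10.0 ** (np.linspace(0.0, -60.0, n) / 20.0)
sf.write("pink_ramp_48k.wav", 0.5 * pink * gain, fs, subtype="PCM_24")
```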
 

Grooved

Addicted to Fun and Learning
Joined
Feb 26, 2021
Messages
679
Likes
441
@KSTR regarding your process, or even what @pkane said about adding other settings, I would like to be sure about one thing:
is the process doing extra analysis, or is it correcting the file the way it should be and then doing the analysis?

@pkane can you confirm for me the settings to change to get the three other data points (RMSEQ dBFS and dBA, and PK Metric), please?
I did a test and got a different result on several files. I will record several devices (RME, MOTU, Universal Audio,...) again later, but I want to be sure I have the right settings ;-)
If there is anything extra to say about it, I will do it in your thread: https://www.audiosciencereview.com/...-comparison-software.6633/page-31#post-709167

Do you also know if there is a way to correctly test a DAC with this loopback method, using any of the recording devices I have as the ADC? I made a test with the Khadas Tone2 Pro and the DW analysis gives me no result, but I can't sync it to the ADC, as the SPDIF output is not enabled with the current firmware.
 

pkane

Master Contributor
Forum Donor
Joined
Aug 18, 2017
Messages
5,678
Likes
10,314
Location
North-East
@pkane can you confirm for me the settings to change to get the three other data points (RMSEQ dBFS and dBA, and PK Metric), please?
I did a test and got a different result on several files. I will record several devices (RME, MOTU, Universal Audio,...) again later, but I want to be sure I have the right settings ;-)
If there is anything extra to say about it, I will do it in your thread: https://www.audiosciencereview.com/...-comparison-software.6633/page-31#post-709167

Here are the settings used to run the analysis for the GS files:
[Screenshot: DeltaWave settings used for the GS file analysis]
 

Grooved

Addicted to Fun and Learning
Joined
Feb 26, 2021
Messages
679
Likes
441
OK, I had only checked "Level EQ" and "Phase EQ", but not "Non-linear Drift Correction", nor had I changed the FFT Size.

So after applying your settings, I got a better (deeper null) result for my MOTU 828 MK2 (USB, but I can test with a Firewire version too) than your result, so the file you used most likely had a problem (I saw it was asymmetric; mine is symmetric):
-56.7 -56.7 -81.2 -84.2 -88.8 on your page, while I get -56.7 -56.7 -90.3 -100.3 -107.9

[Screenshot: DeltaWave results for the MOTU 828 MK2 (USB)]


It seems more logical, as this device shares the same ADC and DAC chips as the 2408 MK3 and the 24 I/O (which use a PCI/PCI Express link instead), and their boards are very similar.
There are other things to discuss, so I will continue on the dedicated DW thread ;-)
 