
Why are Audio Science Review's measurements so different compared to other sources?

Racheski

Major Contributor
Forum Donor
Joined
Apr 20, 2020
Messages
1,116
Likes
1,701
Location
Chicago
Audibility shouldn't be at the front of mind for electronics. That should be reserved for system compatibility and optimization. Audibility is a speaker thing. A small proof is that speaker discussions here always reference tonality, while electronics discussions are mostly technical, making sure the signal is passed from one component to the next without problems (like the common "How many watts do I need?" question).
Not sure I'm following. I'll try to illustrate my point a bit more.
  • DAC "A" has been on the market for 6 months and has previously been reviewed & recommended on ASR.
  • DAC "B" is released and is reviewed at ASR.
  • DAC "A" and DAC "B" have similar features, functionality, and build quality.
  • The only notable difference is DAC "B" uses a more recent D/S chip and costs $100 more than DAC "A".
  • DAC "A" has a SINAD of 105 dB. DAC "B" is measured to have a SINAD of 120 dB.
Should DAC "B" be recommended?
 

pozz

Слава Україні
Forum Donor
Editor
Joined
May 21, 2019
Messages
4,036
Likes
6,827
Not sure I'm following. I'll try to illustrate my point a bit more.
  • DAC "A" has been on the market for 6 months and has previously been reviewed & recommended on ASR.
  • DAC "B" is released and is reviewed at ASR.
  • DAC "A" and DAC "B" have similar features, functionality, and build quality.
  • The only notable difference is DAC "B" uses a more recent D/S chip and costs $100 more than DAC "A".
  • DAC "A" has a SINAD of 105 dB. DAC "B" is measured to have a SINAD of 120 dB.
Should DAC "B" be recommended?
Yup, recommended.

What I mean is that discussions of audibility should be severed from electronics other than to establish ground rules. With gear the point isn't to go for an audible difference (unless your previous kit was insufficient in some defined way). When a new piece of gear comes out the question shouldn't be how much better it will make everything sound, but what it enables you to do that you couldn't do before. Great measurements mean fewer obstacles in the way. The system optimization question is simpler to answer and takes away some of the gearlust.

Great measuring speakers, on the other hand, mean great sound.
 

Tks

Major Contributor
Joined
Apr 1, 2019
Messages
3,221
Likes
5,496
Where does the -120 dB number come from? I know it is a strict threshold, but I do not see how it relates to actual psychoacoustic results.

@solderdude

Do you have the graph that you showed a few times concerning audibility? I think that would come in handy for the fellow here.

---------------------------------------------------------------------------------------------------

And as for you, Keith, I think you had a pretty tough time on the first page, and three pages in you say:

Now, the second part of my question asked whether there is a hidden agenda, or whether the sponsorship of many audio forums is interfering with the objectivity and interpretation of measurements. There are many myths that this site and Amir have a hidden agenda, and the other side states that with Schiit sponsoring forums such as SBAF and Head-Fi, objectivity could be (or is being) censored. I don't know if any of this is true, and I would like to know more specifics and how they relate to the measurements and statements from both parties.

So here's the thing. He does have conflicts of interest with certain brands, which he discloses, through his company Madrona Digital, which sells audio products. Some of those products are good, and some are a standard deviation away from, if not flat-out, shovelware. He states this potential bias for any brand his company sells.

The thing is, though, this portion of your question is irrelevant in terms of why anyone should care. I like that you are 'all about the truth', or 'all about the measurements'. And I'm sure you've gotten answers about the specific comparisons that led you to make these skeptical claims here (as bad as those examples were, like the Schiit one where the testing levels were worlds apart). But here are the problems with your skepticism:

  1. His bias is irrelevant. If you're all about the measurements, then you would focus on those, even if he were straight-up biased in totality.
  2. There are other people with AP machines who can fully test and verify these results; we have a resident here who sometimes tests the same products Amir does.
  3. Manufacturers would call out some of the results if they were lies (though this is debatable, considering some bigger AVR or boutique manufacturers don't test their products, like the morons or lying market scum that they are, of course).
  4. The incentive simply isn't there. Fudging a few tests (unless you want to claim he fudges them all) would make every single expert and manufacturer that strolls around here a bunch of played fiddles. There is simply no way someone would risk lying about something of this nature, in the same way that any scientist caught lying EVEN ONCE about their research has their reputation tarnished in perpetuity. Incorrect interpretations are always possible, and at times common for some esoteric measurement techniques, but those anomalies usually spark a new set of measurements that reveal more of the truth of the matter.
One such example of #4 was thermal performance of devices over time. And no, we're not talking about idiotic "burn-in" (which, comically enough for subjectivists, somehow always results in a device that sounds better, never worse). But a large discrepancy was revealed when user Wolfx-700 measured an SMSL M500 and showed a swing in overall SINAD as time went on. There was also the Sabaj D5, which had proper thermal heatsinks on dies of the PCB, where it took some time before the ESS chip was hitting its rated performance metrics. We now (not always, and I'll conclude with comments on this) get measurements of SINAD over a span of 15 minutes or so.

So while you are "all about the truth", that's fine. But if that's the case, then the bias shouldn't worry you even if he sold all products under the sun. One purposeful dishonest move, and he'd be finished. The calculus for that kind of risk makes no logical sense unless he thinks we're all 100% complete morons, and even then it wouldn't make much sense.

I said I would conclude with comments on a portion of the measurements.

There is one large discrepancy that does occur in the measurements, and that is the number of tests run per device. Of course there are some devices you simply can't test a certain way (devices without proper drivers, or other oddities of that nature). But there is an inconsistency that I think has gotten worse ever since speaker testing started: test metrics that are included for some devices but missing for others.

Take DACs, for instance: I don't remember the last time I saw the thermal-performance-over-time measurement. Likewise, for DACs I'm not seeing all the inputs being tested (Bluetooth I can forgive, as it's an insane $7,000 module that AP offers), but we're not seeing TOSLINK much anymore, for example. There was also a new SINAD vs. level measurement that plots SINAD performance over the volume range of the device; sadly, that seems to be MIA as well.

There's also the issue of output-level testing (there is leeway here, considering the power output of many devices and the number of "volume steps" a device has), but the issue I've seen is a slight imprecision in output level. Sometimes we get 3.98 V, sometimes 4.03 V (which are fine), but then sometimes we get something like 4.31 V. Sure, it's not a big deal, but I just don't see the reason for it (then again, I don't have an AP, so I don't know how easy it is to set an output level accurately). We also sometimes get MAX output measurements, and sometimes not. Other times, we get gain-level measurements that don't line up with older devices (like medium gain being plotted against the device being tested at its low-gain setting). Other missing metrics include digital filters (unexciting to most, but believe it or not there's almost no way of knowing what they are, because manufacturers are morons who can't post the proper results, and we have seen devices with weird filter performance that violates the spec sheets of the DAC chips, for example). Though I think, with the signature you have, you're with me on the need to reveal whether a device is doing proper brickwall filtering ;) which AKM devices are sadly slow to do.

The final inconsistency (as it's subjectivity-based) is that sometimes he recommends a device and sometimes he doesn't; most of the time we can see why, but if you get really nitty-gritty you'll see inconsistencies between similarly performing devices (even in build quality). To me this last one doesn't matter, but for people new to audio it could lead them a bit astray, seeing as recommendations appear to be purely subjective at the time of testing. So if your device is ugly, watch out ;P
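Just to put those output-level wobbles in perspective, here is a tiny sketch (using the voltages mentioned above, and assuming 4 V is the intended reference) converting them to dB:

Code:
import math

nominal_v = 4.0  # the 4 V level the examples above appear to be aiming at

for measured_v in (3.98, 4.03, 4.31):
    delta_db = 20 * math.log10(measured_v / nominal_v)
    print(f"{measured_v:.2f} V is {delta_db:+.2f} dB relative to {nominal_v:.2f} V")

# 3.98 V -> -0.04 dB, 4.03 V -> +0.06 dB, 4.31 V -> +0.65 dB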

The actual valid critiques you could have are ones in the same realm as mine (a seemingly missing checklist, as he seems to do the measurements from memory or something), or that he's just rushing, seeing as speaker testing has him swamped: it's a whole new sector for him, and the transition to doing it efficiently is taking a toll.

But this is a far cry from "Hmm, but idk, he's probably paid by the audio deep-state interest group to exclude I2S input measurements for X device." (Obviously I'm strawmanning your claim with that caricature, but you get my gist: your worries rest on something with an insanely low probability of actually being the state of affairs.)

As for being 'all about the truth', well, for that you're not going to have anyone do the legwork for you. When measurement discrepancies show up, you have two choices. Either:

A) Deep State Amir

B) User error, or a new phenomenon (like the thermal-performance-over-time discovery) that calls for new measurements to properly explain how the discrepancy came about in the first place.

I'd like to know, though (though who knows, you could be lying just like anyone else): which camp do you fall into after the conclusion of my post?
 
Last edited:

Inner Space

Major Contributor
Forum Donor
Joined
May 18, 2020
Messages
1,285
Likes
2,938
Should DAC "B" be recommended?

Are 15 inaudible dBs worth a hundred dollars? Depends who you ask. I know many members here are or were engineers, in all kinds of disciplines, working in the real world, where the answer would be no. Making something better than it needs to be is a waste of time and money. But deep down, engineers are ... engineers, fascinated by superlatives and limits and challenges. DAC B is pushing a threshold. Engineers all know some serious talent and thinking went into the achievement. Music at home ain't the day job. Of course a success like that is worth a hundred bucks.
 

Alice of Old Vincennes

Major Contributor
Joined
Apr 5, 2019
Messages
1,426
Likes
920
Talk about a disingenuous, well-written thread, simply meant to pit forum against forum. This has been done many times over, and explained so many times. I swear the Yggy threads always create such a fit. You really have to sit and read BOTH reviews and then see Jude's measurements, and you can see how the scales on the graphs are different and that what each reviewer finds "ok" is just personal slant. What is wrong with that? If you want a device that can't do DSD, is barely 20-bit capable, has high distortion by design (shown in all reviews), has power-mains issues (shown by each reviewer), and allegedly requires a week to stabilize, then buy it. Most subjective reviews seem to like the $2,400 device. The Yggy is designed to be warm, and Schiit and most of its users WANT the elevated 2nd-order harmonics.

Input voltage, measuring equipment, testing methodology, hidden software upgrades (see Head-Fi for how certain review units were upgraded, wink wink), paid bias (do you know how much money Head-Fi makes per month from sponsorship?) and the skill of the person using the same equipment will always vary. All of this has been explained over and over again. Amir is one person, just like all the other reviewers. No one person sets the standard. If you disagree, that is fine; just take the time to either read all the threads, or at least produce your own counter with detailed science.

When Amir has made a mistake, he gets mad, but guess what: he has engaged with people, changed things MULTIPLE times, or at minimum allowed people to keep explaining why they think he got it wrong, instead of banning them (see SBAF and Head-Fi).

I mean, c'mon, he bought an AP out of his own pocket, measured the Yggy multiple times, from multiple owners, on different units. Guess what Schiit did? Bought an AP and started improving their manufacturing standards. Now look at the last few devices reviewed here and how improved they are. We all win.

I bet the next Yggy will measure far better than the current one; I mean, look at the Modius, Asgard 3 and Heresy. No way these devices would have existed a few years ago.

Nobody is perfect, but no need to try and create fights on the internet. Audio forums are not gangs.
Censoring gangs.
 

Tks

Major Contributor
Joined
Apr 1, 2019
Messages
3,221
Likes
5,496
Not sure I'm following. I'll try to illustrate my point a bit more.
  • DAC "A" has been on the market for 6 months and has previously been reviewed & recommended on ASR.
  • DAC "B" is released and is reviewed at ASR.
  • DAC "A" and DAC "B" have similar features, functionality, and build quality.
  • The only notable difference is DAC "B" uses a more recent D/S chip and costs $100 more than DAC "A".
  • DAC "A" has a SINAD of 105 dB. DAC "B" is measured to have a SINAD of 120 dB.
Should DAC "B" be recommended?

The hypothetical changes vastly depending on the actual final price. If the device is $3,000, I'd say that for $100 more you're better served sleeping easier at night (as people move from one OCD to another). But if one device costs $100 and the other $200 ($100 more, as you laid out), I wouldn't say it's worth a recommendation, on the notion that recommendations take target demographics into consideration. A device with an MSRP of $100 is aimed at a certain segment (that doesn't mean its performance or features are worse; it could just be cheaper casework). So I think if you're going to tell people on that budget to buy something nearly double the price, it had better have the best damn casework, or include many more features. For folks in the market for $3,000 devices, I'd think you were weird if you wanted to skimp on 15 dB of performance to save $100 on top of that MSRP.

So, which are you talking about... or are you going to hold my feet to the fire with "what if one device costs $300, and the other is $400?" Because THAT's when we start getting into brain-twisting territory, and for that I'd appeal to the majority here (though we'd need everyone's discretionary income disclosed; as I'm sure you would agree, what a billionaire recommends for dinner is a far cry from what a small-time farmer does).
 
Last edited:

Alice of Old Vincennes

Major Contributor
Joined
Apr 5, 2019
Messages
1,426
Likes
920
Geee your response may be politically incorrect.
 

JohnBooty

Addicted to Fun and Learning
Forum Donor
Joined
Jul 24, 2018
Messages
637
Likes
1,593
Location
Philadelphia area
I don't know what's audible and what isn't when and under what circumstances.

"Threshold of audibility for a given listener in a given environment" is a large, gray, fuzzy area but we can make some definitive statements there.
  • if we're playing 44.1/16khz music we don't need more than 96dB of dynamic range or reproduction of frequencies > 22khz
  • we probably don't need reproduction of frequencies near 22khz anyway unless we have extremely unusual ears
  • improvements in cumulative playback chain SINAD past a certain point will be inaudible; that point is certainly somewhat low (I would argue it's likely to actually be < 80 in many environments; it's certainly not > 100
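A quick sanity check on the 96 dB figure in the first bullet, assuming ideal 16-bit PCM (the familiar textbook sine formula lands a couple of dB higher):

Code:
import math

bits = 16
ideal_span_db = 20 * math.log10(2 ** bits)   # full scale vs. one LSB: ~96.3 dB
sine_snr_db = 6.02 * bits + 1.76             # full-scale sine over quantization noise: ~98.1 dB

print(f"16-bit span: {ideal_span_db:.1f} dB, full-scale sine SNR: {sine_snr_db:.1f} dB")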
One thing is for sure, if equipment measures better it won't sound worse. Some people do like colorations but I can't imagine making a buying decision based on someone else's preferences.
I think we all agree those are bad. :D
Measurements, parts quality, build quality, fit and finish, ergonomics, functionality, and no obnoxious behavior (turn on or off thumps, switching noises, static, etc) are what carry buying decision weight. Well, and cost. I can't imagine how to judge audibility as a decision-making factor for electronics. Do you disagree?
I certainly agree with all the factors you listed, but I cannot fathom how one would not include "audibility" as one of their guiding factors.

Whether our budgets are $50 or $500,000 we're all working with budgets. While I wouldn't call somebody wrong if they had different priorities, it would be hard for me to think of somebody as an "audiophile" if they didn't put some serious thought into getting the most (audible) performance for their money.

Surely we all do that already? To some extent? Suppose my budget is $4,100. Should I spend $4000 on a Benchmark AHB2 and $100 on a pair of cheap Sony bookshelf speakers? Well, if it makes me happy, sure, but I'm probably going to be disappointed when the end result doesn't sound very amazing -- and everybody here would declare me legally insane. Hopefully I would be a little more savvy and make my selections with more of a focus on actual audible improvements.
 

Racheski

Major Contributor
Forum Donor
Joined
Apr 20, 2020
Messages
1,116
Likes
1,701
Location
Chicago
The hypothetical changes vastly depending on the actual final price. If the device is $3,000, I'd say that for $100 more you're better served sleeping easier at night (as people move from one OCD to another). But if one device costs $100 and the other $200 ($100 more, as you laid out), I wouldn't say it's worth a recommendation, on the notion that recommendations take target demographics into consideration. A device with an MSRP of $100 is aimed at a certain segment (that doesn't mean its performance or features are worse; it could just be cheaper casework). So I think if you're going to tell people on that budget to buy something nearly double the price, it had better have the best damn casework, or include many more features. For folks in the market for $3,000 devices, I'd think you were weird if you wanted to skimp on 15 dB of performance to save $100 on top of that MSRP.

So, which are you talking about... or are you going to hold my feet to the fire with "what if one device costs $300, and the other is $400?" Because THAT's when we start getting into brain-twisting territory, and for that I'd appeal to the majority here (though we'd need everyone's discretionary income disclosed; as I'm sure you would agree, what a billionaire recommends for dinner is a far cry from what a small-time farmer does).
Good point - I should have specified that DAC "A" is $600 and DAC "B" is $700 (I have a specific example in mind which I will not reveal), and we are assuming a middle-class income.

FWIW, my answer is...it depends. In a market with perfect information and rational actors, the more expensive DAC should not be recommended for consumers because it has an objectively worse value proposition, assuming that 15 dB of SINAD has no practical benefit to an end user (outside of bragging rights). So from a pure consumer-oriented economics perspective, I say don't recommend.

From an engineering perspective, I see arguments for both sides. If the development team for DAC "B" had a goal to manufacture a device with at least 120 dB SINAD, who's to say their SOTA engineering shouldn't be rewarded for moving the goal line forward? On the contrary, one could argue that DAC "B" was over-engineered and a waste of resources.

I think that most people on ASR have both the consumer and engineer perspectives in mind, with various distributions in each category depending on the individual. With all that said, I want to emphasize how much I appreciate the ASR community for helping consumers make informed purchase decisions of audio devices. I don't want this to read as a rant against Amir's recommendations; my intention was to be "self critical" as Mr. Savage put it.
 

JohnBooty

Addicted to Fun and Learning
Forum Donor
Joined
Jul 24, 2018
Messages
637
Likes
1,593
Location
Philadelphia area
Not sure I'm following. I'll try to illustrate my point a bit more.
  • DAC "A" has been on the market for 6 months and has previously been reviewed & recommended on ASR.
  • DAC "B" is released and is reviewed at ASR.
  • DAC "A" and DAC "B" have similar features, functionality, and build quality.
  • The only notable difference is DAC "B" uses a more recent D/S chip and costs $100 more than DAC "A".
  • DAC "A" has a SINAD of 105 dB. DAC "B" is measured to have a SINAD of 120 dB.
Should DAC "B" be recommended?
Should DAC "B" recommended vs. DAC "A" in the abstract sense? Sure. It's a better performer. And that's how ASR recommendations for DACs and amplifiers generally work.

One just hopes that anybody actually buying a DAC has a realistic understanding of what "B" will and won't contribute to their listening experience vs. "A" so that they can choose to spend (or not spend) their extra $100 wisely.

Amir has generally decided that the latter consideration is outside of ASR's scope. And that's fine. Every successful action in life requires focus. Nobody can do it all, and he does a lot already.

One final thing I'd add, though, is that my concern with audibility is not abstract. I have seen newcomers to the hobby putting together systems where they spend like $200 on the DAC, $200 on an amp, and $79 on some Pioneer speakers and things like that. (edit for clarity: I've seen that elsewhere, not on ASR)

And really, you hate to see that. I hate to see that nearly as much as seeing folks squander money on pure snake oil like "audiophile" USB cables. I want to see them succeed and get the best result for their money and effort. While ASR is pretty explicitly not a beginner-oriented site, beginners do read it and jump to some wacky conclusions due to the figures published here.
 
Last edited:
OP
KeithPhantom


Addicted to Fun and Learning
Forum Donor
Joined
May 8, 2020
Messages
642
Likes
658
Seriously, thanks for answering. Taking the time to write an answer like that is flattering, to say the least.

Actually, I expected to be roasted by some, because I'm questioning something that, first of all, has been discussed to death (but never given a clear explanation, which is the reason the debate exists), and I'm also questioning the very basis of the forum, which can be interpreted as an attack.

This forum has been a great help in providing information about products and in spreading testing methodologies and knowledge of how equipment works. As a community, you have a stronghold in measurements, and with different members who can replicate and interpret the results, you have set up a public peer-review system that even outsiders can access freely. I have to thank everybody who stopped and wrote an answer for me; whether positive or negative, all feedback has been received and considered. Now, with all of this information, it's time for me to learn and move on to learn even more, and this would not be possible without the support you provide here. Thanks.
 
Last edited:

JohnBooty

Addicted to Fun and Learning
Forum Donor
Joined
Jul 24, 2018
Messages
637
Likes
1,593
Location
Philadelphia area
[The Klipsch ProMedia computer speakers] make me think of Sam Adams beer. It was my first foray, back in the late 1980s, into beer better than the crap being sold. I haven't bought it in a couple of decades, but I thank that company for opening my eyes to what was possible.
As a rule, analogies suck. This one, though? Pretty close to perfection!
 
OP
KeithPhantom


Addicted to Fun and Learning
Forum Donor
Joined
May 8, 2020
Messages
642
Likes
658
Should DAC "B" recommended vs. DAC "A" in the abstract sense? Sure. It's a better performer. And that's how ASR recommendations for DACs and amplifiers generally work.
As a personal recommendation, I would consider each specific case before recommending, but first I would make sure both devices already measure well. As an engineering feat, I believe you should be pushing the limit even if it does not make any sense in the present. Maybe in the future your seemingly useless research can support a new discovery or implementation.

Also, for transparency's sake, we should not be relying on masking effects to hide distortion. The best way to have a transparent system is to fight distortion, and good engineering takes care of that.
 

pozz

Слава Україні
Forum Donor
Editor
Joined
May 21, 2019
Messages
4,036
Likes
6,827
Should DAC "B" recommended vs. DAC "A" in the abstract sense? Sure. It's a better performer. And that's how ASR recommendations for DACs and amplifiers generally work.

One just hopes that anybody actually buying a DAC has a realistic understanding of what "B" will and won't contribute to their listening experience vs. "A" so that they can choose to spend (or not spend) their extra $100 wisely.

Amir has generally decided that the latter consideration is outside of ASR's scope. And that's fine. Every successful action in life requires focus. Nobody can do it all, and he does a lot already.

One final thing I'd add, though, is that my concern with audibility is not abstract. I have seen newcomers to the hobby putting together systems where they spend like $200 on the DAC, $200 on an amp, and $79 on some Pioneer speakers and things like that. And really, you hate to see that. I hate to see that nearly as much as seeing folks squander money on pure snake oil like "audiophile" USB cables. I want to see them succeed and get the best result for their money and effort. While ASR is pretty explicitly not a beginner-oriented site, beginners do read it and jump to some wacky conclusions due to the figures published here.
I can't remember seeing a "give me a recommendation" thread where a newbie was explicitly told to buy the more expensive thing. Usually the advice is the opposite: spend money on speakers/headphones, getting a better DAC is not necessary, choose this cheap but very good alternative, make sure you know the difference between a DAC and an amp, don't worry about add-ons like PSUs, cables, filters and so on, certain differences you can make up for with EQ and other DSP.

It's not brought up too much anymore, given all the options at every price range, but the KTB and Atom used to be the "ASR combo": $200 for a flawless desktop setup, minus some cosmetics.

What else should be done? (I'm not being flippant.)
 

maverickronin

Major Contributor
Forum Donor
Joined
Jul 19, 2018
Messages
2,527
Likes
3,310
Location
Midwest, USA
We laugh at products like the Sony bookshelves with "supertweeters" that play frequencies no human can hear.

Why don't we have the same skepticism and pragmatism when it comes to other measurements (SINAD, etc) that, past a certain threshold, guarantee transparency? If there's a real audible difference, why not do listening tests?

I think it's because complete perfection (as compared to the limits of human hearing) is possible with the electronics but not with transducers. If many products can deliver that at a reasonable price (with DACs and headphone amps at least), then I think it's fair to criticize other similarly priced products which don't.

Imagine comparing those Sonys to the KEF R3 or the Revel M106 if those sold for only a few hundred dollars a pair instead of 2K. That's kind of what the DAC market is like...

if we're playing 44.1 kHz/16-bit music we don't need more than 96 dB of dynamic range or reproduction of frequencies > 22 kHz

Agreed on the ultrasonics, but more dynamic range is always useful for DSP headroom and digital volume control.
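A minimal sketch of why the extra range helps with digital volume control, under the simplifying assumption that the DAC's analog noise/distortion floor stays put while the signal is attenuated digitally (example numbers only):

Code:
def usable_range_after_digital_volume(dac_sinad_db, attenuation_db):
    """Crude model: the analog noise/distortion floor stays fixed, so every dB
    of digital attenuation comes straight out of the usable range."""
    return dac_sinad_db - attenuation_db

for sinad_db in (105, 120):
    left = usable_range_after_digital_volume(sinad_db, 30)
    print(f"{sinad_db} dB SINAD DAC at -30 dB digital volume: ~{left} dB left above the floor")

# 105 dB -> ~75 dB remaining; 120 dB -> ~90 dB remaining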
 

Sgt. Ear Ache

Major Contributor
Joined
Jun 18, 2019
Messages
1,895
Likes
4,162
Location
Winnipeg Canada
For me, the issue with DACs and amps at this point isn't "should I spend an extra hundred to get 10 dB further below the threshold of audible distortion?" It's "should I spend an extra hundred (or thousand or more) and get something that's 10 dB CLOSER to the threshold of audible distortion?" My interest is bang for the buck. If I can get something for $300 that measures better than something selling for $1,000 - even if that improved measurement is not audible - I'm happy. And if we're talking about choosing from a list of items all around the same price range, again the best-measuring one (audibly or not) is how I make my choice.

So for me, would I recommend paying an extra hundred for option B compared to option A? No, I'd go for option C, the one that costs less than A and measures even better than B. lol...
 
Last edited:

SimpleTheater

Addicted to Fun and Learning
Forum Donor
Joined
Jun 6, 2019
Messages
928
Likes
1,812
Location
Woodstock, NY
For me, the issue with DACs and amps at this point isn't "should I spend an extra hundred to get 10 dB further below the threshold of audible distortion?" It's "should I spend an extra hundred (or thousand or more) and get something that's 10 dB CLOSER to the threshold of audible distortion?" My interest is bang for the buck. If I can get something for $300 that measures better than something selling for $1,000 - even if that improved measurement is not audible - I'm happy. And if we're talking about choosing from a list of items all around the same price range, again the best-measuring one (audibly or not) is how I make my choice.

So for me, would I recommend paying an extra hundred for option B compared to option A? No, I'd go for option C, the one that costs less than A and measures even better than B. lol...
Exactly - assuming all things are equal. I'm looking at the RME ADI-2 DAC, not because it's better sounding than what I currently have, but because it has a built-in EQ that can save 20 settings. I have closed-back headphones and open-back ones that sound different, and there are times when I have to use the closed-backs so others around me aren't bothered by the sound emanating from my open-backs. I'd love to simply hit a preset EQ setting when switching phones. Additionally, I have desktop speakers that could use EQ as well. The RME ADI-2 costs about $1,150, which is quite a price hike over my JDS Element. The RME has better specs than my JDS Element, but if JDS built EQ into an Element III for under $900, I'd buy the JDS.
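As a rough sketch of the per-headphone preset idea (done in software here rather than in the RME's hardware EQ; the headphone names and filter values are made up for illustration), using the standard RBJ peaking-EQ biquad applied with SciPy:

Code:
import numpy as np
from scipy.signal import lfilter

def peaking_biquad(fs, f0, gain_db, q):
    """Peaking-EQ coefficients (b, a) from the RBJ Audio EQ Cookbook."""
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return b / a[0], a / a[0]

# Hypothetical presets: (center frequency Hz, gain dB, Q) per pair of headphones.
presets = {
    "closed_back": [(150, -3.0, 1.0), (5500, 2.0, 2.0)],
    "open_back":   [(60, 4.0, 0.7), (9000, -2.5, 3.0)],
}

def apply_preset(audio, fs, name):
    """Run the signal through each band of the chosen preset, in series."""
    for f0, gain_db, q in presets[name]:
        b, a = peaking_biquad(fs, f0, gain_db, q)
        audio = lfilter(b, a, audio)
    return audio

fs = 48_000
test_signal = np.random.randn(fs)            # one second of noise as a stand-in
equalized = apply_preset(test_signal, fs, "open_back")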
 

JohnBooty

Addicted to Fun and Learning
Forum Donor
Joined
Jul 24, 2018
Messages
637
Likes
1,593
Location
Philadelphia area
I can't remember seeing a "give me a recommendation" thread where a newbie was explicitly told to buy the more expensive thing.
Apologies. I should have been more clear. I've seen it elsewhere, certainly not on ASR! I'll edit my post for clarity.

I've been a moderator of r/BudgetAudiophile for a few years. I've seen hundreds and hundreds of posts from newcomers (and the many replies to each of them...) wanting advice on piecing together their first steps into hi-fi.

The trend of newcomers having mistaken spending priorities is pretty real; we've seen it over and over.

It's particularly wacky with fans of a certain YouTuber whose moniker begins with a "Z" for reasons I don't quite understand. He doesn't actively recommend those kinds of odd priorities, but his fans seem the most likely to have them.
 
Last edited: