@solderdude
Do you have the graph that you showed a few times concerning audibility? I think that would come in handy for the fellow here.
---------------------------------------------------------------------------------------------------
And as for you, Keith: I think you had a pretty tough time on the first page, and 3 pages in, you say:
So here's the thing. He does have conflicts of interest with certain brands, which he discloses in relation to his company Madrona Digital, which sells audio products. Some of those products are good, and some are a standard deviation below average, if not flat-out shovelware. He states this potential bias for any brand his company sells.
The thing is, though, this portion of your question is irrelevant to why someone would sensibly care. I like that you are 'all about the truth', or 'all about the measurements'. And I'm sure you've gotten answers on the specific comparisons that led you to make these skeptical claims here (as bad as those examples were, like the Schiit one where testing levels were worlds apart). But here are the problems with your skepticism:
- His bias is irrelevant. If you're all about the measurements, then you would focus on those, even if he were straight-up biased in totality.
- There are other people with AP machines who are able to fully test and verify these results; we have a resident here who sometimes tests the same products Amir does.
- Manufacturers would call out some of the results if they were lies (though this is debatable, considering some bigger AVR or boutique manufacturers don't test their products, like the morons, or lying market scum, that they are, of course).
- The incentive simply isn't there. Fudging a few tests (unless you want to claim he fudges them all) would make every single expert who strolls around here, and every manufacturer who strolls around here, a bunch of played fiddles. There is simply no way someone would risk lying about something of this nature, in the same way any scientist caught lying EVEN ONCE about their research has their reputation tarnished in perpetuity. Incorrect interpretations are always possible, and are common at times with some esoteric measurement techniques, but those anomalies usually spark a new set of measurements that reveal more of the truth of the matter.
One such example of that last point was the thermal performance of devices over time. And no, we're not talking about idiotic "burn-in" (which, comically enough for subjectivists, somehow always results in a device that sounds better, never worse). But a large discrepancy was revealed when user Wolfx-700 measured an SMSL M500 and showed a swing in overall SINAD as time went on. There was also the Sabaj D5, which had proper thermal heatsinks on the dies of the PCB, where it took some time before the ESS chip hit its rated performance metrics. We now (not always, and I'll conclude with comments on this) get measurements of SINAD over a span of 15 minutes or so.
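To make the warm-up point concrete: SINAD is just the ratio of the fundamental's power to everything else (noise plus distortion) in a captured tone, which is why thermal drift shows up as a SINAD swing over time. Here's a rough Python sketch of that computation on a synthetic 1 kHz capture; the tone, harmonic level, and noise floor are all made up for illustration, and this is nothing like the AP's actual analysis chain:

```python
import numpy as np

np.random.seed(0)

def sinad_db(x, fs, f0=1000.0):
    """Estimate SINAD of a captured tone: fundamental power vs.
    everything else (noise + distortion), in dB."""
    n = len(x)
    win = np.hanning(n)
    spec = np.abs(np.fft.rfft(x * win)) ** 2
    freqs = np.fft.rfftfreq(n, 1 / fs)
    # Fundamental bin plus a few neighbours to absorb window spread
    k0 = np.argmin(np.abs(freqs - f0))
    band = slice(max(k0 - 3, 1), k0 + 4)
    p_sig = spec[band].sum()
    p_rest = spec[1:].sum() - p_sig  # skip the DC bin
    return 10 * np.log10(p_sig / p_rest)

fs = 48000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1000 * t)
# Add a -80 dB second harmonic and some noise to fake a DAC capture
x = tone + 1e-4 * np.sin(2 * np.pi * 2000 * t) + 1e-5 * np.random.randn(fs)
print(f"SINAD = {sinad_db(x, fs):.1f} dB")  # roughly 80 dB here
```

A device that drifts thermally would show that number sliding over the first several minutes after power-on, which is exactly what the M500 incident revealed.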
So while you are "all about the truth", that's fine. But if that's the case, then the bias shouldn't worry you even if he sold every product under the sun. One purposefully dishonest move and he'd be finished. The calculus for that kind of risk makes no logical sense unless he thinks we're all 100% complete morons, and even then it wouldn't make much sense.
I said I would conclude with comments on certain measurements, so here goes.
There is one large discrepancy that does occur with the measurements, and that is the number of tests run per device. Of course there are some devices you simply can't test a certain way (devices without proper drivers, or other oddities of that nature). But there is an inconsistency that I think has gotten worse ever since speaker testing started: test metrics that are included for some devices but missing for others.
Take DACs, for instance:
- I can't remember the last time I saw the thermal-performance-over-time measurement.
- I'm not seeing all the inputs being tested. Bluetooth I can forgive, as AP's module is an insane $7,000, but we're not seeing TOSLINK much anymore, for example.
- There was also a new SINAD vs. level measurement that plots SINAD performance across the volume range of the device. That seems to be MIA as well, sadly.
- There's a slight imprecision in output level testing (there is leeway here, considering the power output of many devices and the number of "volume steps" a device has). Sometimes we get 3.98 V, sometimes 4.03 V (which are fine), but then sometimes something like 4.31 V. Sure, it's not a big deal, but I just don't see the reason for it (then again, I don't have an AP, so I don't know how easy it is to accurately set an output level).
- We sometimes get MAX output measurements, and sometimes not.
- Other times, we get different gain levels plotted against older devices (like a medium-gain result plotted against the low-gain performance of the device being tested).
- Digital filter measurements are also sometimes missing. Unexciting to most, but believe it or not, there's almost no other way of knowing what the filters do, because manufacturers are morons who can't post the proper results, and we have seen devices with weird filter performance that violates the spec sheets of their DAC chips. Though with the signature you have, I think you're with me on the need to reveal whether a device is doing proper brickwall filtering, which AKM chips are sadly slow to do.

The final inconsistency (as it's subjectivity-based) is that sometimes he recommends a device and sometimes he doesn't. Most of the time we can see why, but if you get real nitty-gritty, you'll see inconsistencies between similarly performing devices (in build quality as well). To me this last one doesn't matter, but it could lead people new to audio a bit astray, seeing as recommendations appear to be purely subjective at the time of testing. So if your device is ugly, watch out ;P
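For perspective on the output-level nitpick: the spread looks bigger in volts than it is in dB. A quick throwaway sketch, assuming a 4.00 V target (my assumption, since that's the usual balanced reference level in these tests):

```python
import math

def level_error_db(measured_v, target_v=4.0):
    """How far a measured output level sits from the target, in dB."""
    return 20 * math.log10(measured_v / target_v)

for v in (3.98, 4.03, 4.31):
    print(f"{v:.2f} V -> {level_error_db(v):+.2f} dB")
# 3.98 V -> -0.04 dB, 4.03 V -> +0.06 dB, 4.31 V -> +0.65 dB
```

So 3.98 V and 4.03 V are within a few hundredths of a dB, while 4.31 V runs about two-thirds of a dB hot, which is still small, but it can shift where on a device's SINAD-vs-level curve you're measuring when comparing reviews.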
The actually valid critiques you could make are either in the same realm as mine (a seemingly missing checklist, as he appears to do the measurements from memory or something), or that he's simply rushing, seeing as speaker testing has him swamped; it's a whole new sector for him, and the transition to efficiency is taking its toll.
But this is a far cry from "Hmm, but idk, he's probably paid by the audio deep state interest group to exclude I2S input measurements for X device." (Obviously I'm strawmanning your claim with that caricature, but you get my gist: your worries rest on something with an insanely low probability of actually being the state of affairs.)
As for being 'all about truth', well for that, you're not going to have anyone do the legwork for you. When measurement discrepancies show up, you have two choices. Either:
A) Deep State Amir
B) User error, or a new phenomenon (like the thermal-performance-over-time discovery) that calls for new measurements to properly interpret how the discrepancy came into existence in the first place.
I'd like to know, though (then again, you could be lying, as anyone could): which camp do you fall into now, having reached the end of my post?