In this AB testing, four people scored 10/10, so the difference was clearly audible. That’s what matters — it proved audibility, not preference.
Wow, 'proved', hey? Your biases are easily confirmed! And framing it as "the difference was clearly audible" is highly misleading. What I think you meant to say is "there appear to be a few outlier people who can tell them apart, but most of us cannot."
The only caveat is whether the files were processed perfectly; if dithering or level-matching wasn’t handled right, the difference could have come from artifacts rather than bit depth itself.
No, that is definitely not "the only caveat"! Let's start with the obvious caveat that it isn't a controlled test. There is no supervision. Anyone who wants to cheat can, and in any way they like. I mean, look at the ego of one of the 10/10 respondents, who said, and I quote, "My hearing is extremely superior to anything humans can imagine", and "My lineup is extremely beyond anything that people can even imagine", and "The differences are huge. It's like comparing a VW versus a Ferrari." Wow, referring to humanity in the third person! No wonder Archimago wondered if 'he' was some kind of AI-supplemented response. But one doesn't have to be an egomaniac to cheat. A lot of people just do it for fun.
The first thing an experimenter would do with those results is say to the outliers, "come on in for some further, supervised testing for verification". The last thing that would enter an experimenter's mind from those results would be to say what you said and announce that we have proof.
And I note that the test varied only bit depth, not sample rate (it was 24/88.2 vs 16/88.2), so in a sense it was a test of noise floor audibility. And guess what, that is nothing new. Anyone can (and someone may well have done in this survey) select the quietest passage they can find, play it in isolation, A/B/A/B, with the volume cranked up to insane levels (levels that would destroy both your hifi and your hearing in seconds on the louder passages), and listen for noise floor differences. Easy peasy. 10/10 every time. In fact, Amir used this very method to demonstrate that he can hear high-res vs standard-res audio, and spoke about it in his video on that topic. Unlike you, though, he sensibly concluded that he would have no chance of distinguishing them when playing recorded music at home on his hifi.
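For anyone who wants to put rough numbers on that noise-floor gap, here's a quick sketch. It uses the textbook ideal-quantiser estimate (6.02 × bits + 1.76 dB); the 60 dB boost is purely an illustrative figure I've picked, and real dither, recordings and playback chains will move things around:

```python
# Rough sketch: ideal quantisation noise floors for 16-bit vs 24-bit PCM,
# using the textbook estimate SNR ~ 6.02 * bits + 1.76 dB.
def ideal_noise_floor_dbfs(bits: int) -> float:
    return -(6.02 * bits + 1.76)

for bits in (16, 24):
    print(f"{bits}-bit noise floor: ~{ideal_noise_floor_dbfs(bits):.0f} dBFS")

# Crank a near-silent passage up by an (illustrative) 60 dB and the 16-bit
# floor lands around -38 dBFS, potentially audible at that gain setting,
# while the 24-bit floor is still way down around -86 dBFS.
boost_db = 60
for bits in (16, 24):
    print(f"{bits}-bit floor after +{boost_db} dB: ~{ideal_noise_floor_dbfs(bits) + boost_db:.0f} dBFS")
```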
And also, we have to allow for the odd person with special hearing abilities. It's not impossible. Mutation is a thing, right? Right?? (Those who deny evolution are excused from answering!)
So we have caveats all over the place. And I do love it when readers like yourself draw the opposite conclusion to the experimenter, in this case Archimago's own conclusion: "this test adds to the body of evidence that suggests the absence of audible benefits with "high-res" audio."
[to Fidji] Not sure where you count 4; Archi counts 2:
"There were 2 respondents out of the 121 who selected Sample A (24-bits) as preferred and at a level of certainty of 10/10! "
I assume you're including the 'other' two who scored 10/10 but preferred the sound of the 16-bit over the 24-bit. LOL.
But even 4 of 121 could be lucky guesses?
With purely random guessing across 10 trials, only about 1 in 1000 respondents would score 10/10. So yes, 4 out of 121 is not impossible via guessing, but that observed rate is roughly 1 in 30, about thirty times what chance would predict. I would conclude that it is most likely something other than randomness, but jumping straight from that to the conclusion Fidji drew is unnecessary. See my response to him in this post.
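For what it's worth, here's the back-of-envelope arithmetic, under the same assumption I made above (and walk back in the edit below) that a 10/10 score means ten correct forced-choice picks:

```python
from math import comb

# Back-of-envelope check: how surprising are 4 perfect scores among 121
# respondents if everyone were just coin-flipping on 10 trials?
def binom_tail(n: int, k: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p_perfect = 0.5 ** 10        # chance of guessing 10/10, roughly 1 in 1000
n, k = 121, 4                # 121 respondents, 4 perfect scores observed

print(f"P(10/10 by guessing):          {p_perfect:.5f}")
print(f"Expected perfect scores:       {n * p_perfect:.2f}")
print(f"P(at least {k} by luck alone):   {binom_tail(n, k, p_perfect):.2e}")
```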
[edit: see posts #337 and 338 below. I have wrongly assumed that the scores related to accuracy in picking the 16 bit vs 24 bit]
Now we're coming to fully agreed points. Atmos via lossy DD+ vs Atmos via TrueHD represents a wider spread in data rates than 16/44 vs 24/88 (some rough numbers toward the end of this post).
It's a shame our music data is being compromised with DD+ lossy compression but I keep the hope alive that like with 2ch streaming, as time improves the situation of high speed internet, the need to use lossy forms will become less a necessity to the cost of streaming.
I agree in principle and disagree in practice.
I agree that lossy audio at 'sufficient' bitrates (i.e. the high range of normal: 320 kbps for stereo, 768 kbps for Atmos) is still not a guarantee of being completely indistinguishable from lossless in every way. But the way lossy artefacts manifest, if the bitrate is 'sufficient', is in the odd moment in a recording, not in the overall sound of the music itself. Yes, I am happy to say, as a perfectionist, that we don't want that 'odd moment'. But honestly, enjoying the music less is unnecessary if the only perceivable differences are in the odd moment. (And as we see in the above Archimago experiment, and in other experiments of mp3 vs lossless stereo, people are just as likely to prefer the one that they shouldn't!)
I could easily imagine lossless Atmos distinguishing itself from the lossy DD+ variant in terms of spatial quality when the consumer has an extreme home system, let's say better than 9.1.6. Maybe. I would like to see it tested first.
As Floyd Toole has said (although he was referring to stereo bass in the subwoofer frequencies), "much ado about even less".
And finally, I don't think we have to demand lossless Atmos music (at around 6000 kbps) over streaming services before we can be happy. Maybe they can move to a higher lossy bitrate: 1000, 1500, 2000 kbps?
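To put some rough numbers on the data-rate spread I mentioned above (the PCM rates are exact for two channels; the DD+ and TrueHD figures are typical/assumed values that vary by service and title):

```python
# Rough data-rate comparison (kbps). PCM rates are exact for 2-channel;
# the Atmos figures are typical/assumed values, not guaranteed.
def pcm_kbps(channels: int, bits: int, sample_rate_hz: int) -> float:
    return channels * bits * sample_rate_hz / 1000

rates = {
    "16/44.1 stereo PCM": pcm_kbps(2, 16, 44_100),   # ~1411 kbps
    "24/88.2 stereo PCM": pcm_kbps(2, 24, 88_200),   # ~4234 kbps
    "DD+ Atmos (streaming, typical)": 768,           # commonly quoted figure
    "TrueHD Atmos (assumed)": 6000,                  # ballpark figure used above
}

for name, kbps in rates.items():
    print(f"{name:32s} {kbps:7.0f} kbps")

print(f"\nPCM spread   : {rates['24/88.2 stereo PCM'] / rates['16/44.1 stereo PCM']:.1f}x")
print(f"Atmos spread : {rates['TrueHD Atmos (assumed)'] / rates['DD+ Atmos (streaming, typical)']:.1f}x")
```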
cheers