
Now That Atmos Is Everywhere… Real vs. Phantom Center in a 5.1 Music-Focused Setup

The best way to keep one's hat on is to remember that lossy compression is effectively inaudible above a certain bitrate. Atmos audio at 768 kbps is considered transparent within the industry and academia.

The argument "it must be lossless" is just another form of the "it must be high res" argument. It draws heavily on endless anecdotes, but when put to the test with real music, it fails.

The argument about sketchy spatial remastering "done by AI" also reeks of anti-spatial propaganda. Done by AI from what? Apple forbids spatial uploads that are upmixed from stereo. You have to have access to the multitrack masters. If AI creeps into DAW software and enables a better result for the same time investment, then it isn't the weak link. The real issue is the same as for stereo: the general standard of mastering, i.e. the frequency of hits and misses.

cheers
 
15 years ago, testing showed 320 kbps CBR and ~250 kbps VBR mp3 was where blind tests often failed. Yet we've seen a push for higher and higher bitrates as if that wasn't a thing.
 
Yet we've seen a push for higher and higher bitrates as if that wasn't a thing.
Mostly $-driven: each time you can resell a recording due only to upsampled bitrate numbers, it's a win-win for everyone
(except the consumer) :facepalm:
 
15 years ago, testing showed 320 kbps CBR and ~250 kbps VBR mp3 was where blind tests often failed. Yet we've seen a push for higher and higher bitrates as if that wasn't a thing.
This is consistent with the data rates specified by Dolby for Atmos:

Edit - keeping in mind that the Dolby specification is for 5.1/7.1.4 channels of data, plus objects vs. stereo.

For Broadcast and OTT delivery
  • Dolby Digital Plus with Dolby Atmos encoders support data rates of 384*, 448, 576, 640, 768 and 1,024 kbps.
* At 384 kbps, the number of channels / objects is limited to 11.1.

For Blu-ray Disc
  • Dolby Digital Plus with Dolby Atmos encoders support data rates of 1,152, 1,280, 1,408, 1,512, 1,536 and 1,664 kbps
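Those totals can be put in rough per-channel terms with some back-of-envelope arithmetic. The channel counts below are illustrative assumptions (a 7.1.4 bed = 12 channels); real Atmos streams share bits between bed channels and objects, so these are only crude averages, not how the codec actually allocates bits.

```python
# Back-of-envelope bitrate-per-channel arithmetic. Channel counts are
# illustrative assumptions (7.1.4 bed = 12 channels); real Atmos streams share
# bits between bed channels and objects, so these are only rough averages.

def per_channel_kbps(total_kbps: float, channels: int) -> float:
    """Average bitrate available per channel, ignoring object overhead."""
    return total_kbps / channels

print(per_channel_kbps(320, 2))    # mp3 stereo at 320 kbps -> 160.0 kbps/ch
print(per_channel_kbps(768, 12))   # DD+ Atmos OTT at 768 kbps -> 64.0 kbps/ch
print(per_channel_kbps(1664, 12))  # DD+ Atmos Blu-ray top rate -> ~138.7 kbps/ch
```

By this crude yardstick, even the top Blu-ray DD+ rate gives each channel less than a 320 kbps stereo mp3 does, though joint channel/object coding makes modern codecs more efficient than a straight division suggests.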
Reference:

 
Notation in other places is very different.

1,500.00 in $US would mean 1500.00 $US elsewhere. or 1.5E3 or 1.5k $US in engineering notation. Personally I'd prefer engineering notation and a universal monetary reference. :-)
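For fun, the engineering-notation idea can be sketched as a tiny formatter (a toy: the function name and suffix table are made up for illustration, not from any standard library):

```python
# Toy formatter for engineering notation with SI-style suffixes (1500 -> "1.5k").
# Illustrative only; clamps at +/- 10^9 and ignores rounding niceties.

def to_engineering(value: float) -> str:
    suffixes = {-9: "n", -6: "u", -3: "m", 0: "", 3: "k", 6: "M", 9: "G"}
    if value == 0:
        return "0"
    v, exp = abs(value), 0
    while v >= 1000 and exp < 9:   # shift down by factors of 1000
        v /= 1000.0
        exp += 3
    while v < 1 and exp > -9:      # shift up by factors of 1000
        v *= 1000.0
        exp -= 3
    sign = "-" if value < 0 else ""
    return f"{sign}{v:g}{suffixes[exp]}"

print(to_engineering(1500))     # "1.5k"
print(to_engineering(1500000))  # "1.5M"
```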
 
For Blu-ray Disc
  • Dolby Digital Plus with Dolby Atmos encoders support data rates of 1,152, 1,280, 1,408, 1,512, 1,536 and 1,664 kbps
I wonder why so many extremely similar rates were considered?

Notation in other places is very different.

1,500.00 in $US would mean 1500.00 $US elsewhere.
Those two are one and the same.
But we don't put a comma where the period should be, and vice versa???
Talk about extremely confusing. :facepalm:
or 1.5E3 or 1.5k $US in engineering notation. Personally I'd prefer engineering notation and a universal monetary reference. :-)
Heck, the Brits are still using that dang metric thingee.
 
The real issue is the same as for stereo: the general standard of mastering, i.e. the frequency of hits and misses.

There is not much that can go wrong in mastering for Atmos, at least not when it comes to loudness, as all Dolby Atmos tracks must be kept under -18 Integrated LUFS to get a pass on release.
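As a trivial sketch of that gate: the -18 LUFS integrated limit quoted above can be checked like this. The function name and interface are mine, and actually measuring integrated loudness requires the full ITU-R BS.1770 gated K-weighted algorithm; here the LUFS value is assumed to be already measured.

```python
# Toy pass/fail check against a -18 LUFS integrated delivery target.
# Assumes integrated loudness has already been measured per ITU-R BS.1770.

def atmos_loudness_check(integrated_lufs: float, limit: float = -18.0):
    """Return (passes, gain_db_needed_to_meet_limit)."""
    passes = integrated_lufs <= limit
    gain_needed = 0.0 if passes else limit - integrated_lufs
    return passes, gain_needed

print(atmos_loudness_check(-19.2))  # (True, 0.0) -- passes as delivered
print(atmos_loudness_check(-16.5))  # (False, -1.5) -- needs 1.5 dB attenuation
```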
 
The best way to keep one's hat on is to remember that lossy compression is effectively inaudible above a certain bitrate. Atmos audio at 768 kbps is considered transparent within the industry and academia.

The argument "it must be lossless" is just another form of the "it must be high res" argument. It draws heavily on endless anecdotes, but when put to the test with real music, it fails.

The argument about sketchy spatial remastering "done by AI" also reeks of anti-spatial propaganda. Done by AI from what? Apple forbids spatial uploads that are upmixed from stereo. You have to have access to the multitrack masters. If AI creeps into DAW software and enables a better result for the same time investment, then it isn't the weak link. The real issue is the same as for stereo: the general standard of mastering, i.e. the frequency of hits and misses.

cheers

And yet, there are people who have a proven ability to discern 16-bit vs Hi-Res.

BTW, you need to do it with properly neutral headphones, and it is just a “look how I can juggle 3 balls while standing on 1 foot” type of useless ability; there is no way to make money from it. Proper mastering trumps any resolution difference, and here most of the best digital variants/masterings are 16-bit CD mixes from before the Loudness War. They beat “Remastered Hi-Res” every time on SQ.

And re DD+ vs TrueHD - here it gets even more contentious, but it needs to be said: you need to have proper gear. Period. Luckily we have an industry standard in the form of CEDIA RP22 https://cedia.org/site/assets/files/6057/cedia-cta_rp22_v1_2_sept_2023.pdf#page89

You need to be on Performance Level 3 minimum, ideally with as many parameters as possible making it to Level 4. With 5.1.2 in a living room you do not need to even bother, and you can enjoy thousands of songs available on Apple Music, as you are more than good with DD+ at 768 kbps. So yes, for most people it will be the same, and for a lot of the source material it will be irrelevant, as it does not make use of the full TrueHD bandwidth.

With the right setup, and once you get to something like this in terms of source material (extensive use of objects, a dense mix, a lot of layers of sound), the limits of lossy-compressed DD+ are clearly audible, as it truncates objects. You can even observe the difference in such crude metrics as Dynamic Range between the DD+ and TrueHD versions. The two attached pictures are the same track: one DD+ on Apple Music, the other TrueHD from Blu-ray Audio.

I do not like subjective descriptors, but the term I would use as a differentiator is “resolution”: the ability to clearly pick out more instruments in parallel. It also usually results in a more dynamic presentation, subjectively. And before somebody asks: yes, I have tried blind testing, with a 10/10 result.
[Attached: IMG_0876.jpeg and IMG_0875.jpeg - Dynamic Range readouts for the same track in DD+ and TrueHD]
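For anyone wondering what a crude "Dynamic Range" number measures, here is a toy crest-factor (peak-to-RMS) sketch. The DR meter behind readouts like the attached ones is more involved (per-channel, windowed, based on the loudest sections), so treat this only as an illustration of the idea:

```python
import math

# Toy crest-factor ("peak-to-RMS") calculation in dB. Real DR meters are
# more sophisticated (per-channel, windowed, loudest-sections statistics);
# this only illustrates the basic peak-vs-RMS idea.

def crest_factor_db(samples):
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(peak / rms)

# Sanity check: a full-scale sine wave has a crest factor of about 3.01 dB.
sine = [math.sin(2 * math.pi * 997 * n / 48000) for n in range(48000)]
print(f"{crest_factor_db(sine):.2f} dB")
```

Heavily compressed masters push the RMS up toward the peak, shrinking this number; more dynamic masters keep it large.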
 
And yet, there are people who have a proven ability to discern 16-bit vs Hi-Res.
<snip>
and yet, if you read the referenced article, the 16/44 version was preferred (33%) over the 24/88 version (31%). The largest group (38%) had no preference. The same number of people (2) were totally sure that each version (16 vs 24) was better.

So, lower resolution and sampling rate sounds better (if you believe this test).

Of course, another possibility is the results were just random. In tests of preferences between glasses of identical juice, about 60-80% will make a choice and can describe why they like one over the other and 20-40% will correctly say there's no difference. I think it's likely that's what we're seeing in the archimago test.
 
<snip>
and yet, if you read the referenced article, the 16/44 version was preferred (33%) over the 24/88 version (31%). The largest group (38%) had no preference. The same number of people (2) were totally sure that each version (16 vs 24) was better.

So, lower resolution and sampling rate sounds better (if you believe this test).

Of course, another possibility is the results were just random. In tests of preferences between glasses of identical juice, about 60-80% will make a choice and can describe why they like one over the other and 20-40% will correctly say there's no difference. I think it's likely that's what we're seeing in the archimago test.
Unless I am reading it wrong this looks like it was a simple A/B test, not a true A/B/X test.

To really prove whether there was an audible difference, I think an A/B/X test would answer this question much more easily, assuming it was done properly with at least 10 trials per sample so you can hit a 95% confidence level.

A/B is more of a preference test, without a real control to see if one is really hearing a difference or not. A/B/X isn't about preference at all; it is a difference test: is there an audible difference between A and B such that a person can determine the identity of X? And you repeat that for at least 10 trials per music sample.
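The 10-trial, 95%-confidence point can be made concrete with standard binomial arithmetic (a sketch of the statistics, not any particular ABX tool's scoring):

```python
from math import comb

# Probability of getting `correct` or more right out of `trials` ABX trials
# by pure guessing (p = 0.5 per trial). With 10 trials, 9/10 or better is
# needed to clear the usual p < 0.05 significance threshold.

def p_value(correct: int, trials: int) -> float:
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(f"8/10:  p = {p_value(8, 10):.4f}")   # 0.0547 -- not significant
print(f"9/10:  p = {p_value(9, 10):.4f}")   # 0.0107 -- significant
print(f"10/10: p = {p_value(10, 10):.4f}")  # 0.0010
```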
 
<snip>
and yet, if you read the referenced article, the 16/44 version was preferred (33%) over the 24/88 version (31%). The largest group (38%) had no preference. The same number of people (2) were totally sure that each version (16 vs 24) was better.

So, lower resolution and sampling rate sounds better (if you believe this test).

Of course, another possibility is the results were just random. In tests of preferences between glasses of identical juice, about 60-80% will make a choice and can describe why they like one over the other and 20-40% will correctly say there's no difference. I think it's likely that's what we're seeing in the archimago test.

In this AB testing, four people scored 10/10, so the difference was clearly audible. That’s what matters — it proved audibility, not preference. The only caveat is whether the files were processed perfectly; if dithering or level-matching wasn’t handled right, the difference could have come from artifacts rather than bit depth itself.

On the bigger question of hi-res versus CD, I think that debate has been over for years. Most people can’t hear any difference in controlled tests, and streaming services don’t even push hi-res as a paid upgrade anymore. The real bottleneck isn’t resolution, it’s mastering. A crushed, low-DR master sounds bad no matter the format. That’s why I stopped caring about 16/44 vs 24/192 and instead focus on finding the best master available. Atmos has actually been a pleasant surprise here, since spatial mixes mostly avoid heavy compression and preserve dynamics. That alone makes them sound better than most 2CH “hi-res” releases, even lossy streaming Atmos.

But this was not the most important part of what I wanted to say.

The SAME industry that was peddling the “Hi-Res is better than CD” narrative for years is today telling you “nah, you do not need lossless Atmos, lossy is good enough for you”.

For me, lossy DD+ versus TrueHD isn’t even a close call — the difference is obvious, not subtle. It’s the same as comparing Netflix “4K” to a proper UHD/Kaleidescape: both might be labeled 4K, but it is a different 4K. That’s why, while I don’t see any reason to pay extra for hi-res stereo, it still makes perfect sense to pay a premium for lossless Atmos or multichannel audio. The jump in quality is real and easy to hear.

I’d rather not put any trust in the industry suddenly acting in my best interests. But luckily, we have an alternative. I’m more than happy with my ~300 MCH/Atmos music and concert discs, and even happier that new releases are still coming out. For me, that’s where the real excitement is — not chasing lossless for its own sake, but enjoying properly mastered, lossless immersive formats that actually deliver something extra.
 
In this AB testing, four people scored 10/10, so the difference was clearly audible. That’s what matters — it proved audibility, not preference.
I don't think that proves an audible difference. That 10 of 10 doesn't mean they proved they heard a difference on 10 trials. The 10/10 is a self-reported scale of how strongly they perceived a difference between the two files. It is literally just someone saying "yup, I heard a difference" and rating their assurance at 10 on a scale of 1 to 10.

There is literally no mechanism in that test to run multiple trials to see if they could actually hear a difference between the two files.
 
In this AB testing, four people scored 10/10, so the difference was clearly audible. That’s what matters — it proved audibility, not preference. The only caveat is whether the files were processed perfectly; if dithering or level-matching wasn’t handled right, the difference could have come from artifacts rather than bit depth itself.
Not sure where you count 4, Archi counts 2
"There were 2 respondents out of the 121 who selected Sample A (24-bits) as preferred and at a level of certainty of 10/10! "
But even 4 of 121 could be lucky guesses?

The SAME industry that was peddling the “Hi-Res is better than CD” narrative for years is today telling you “nah, you do not need lossless Atmos, lossy is good enough for you”.
For me, lossy DD+ versus TrueHD isn’t even a close call — the difference is obvious, not subtle.
Now we're coming to fully agreed points. Atmos via lossy DD+ vs Atmos via TrueHD shows a wider spread in data rates than 16/44 vs 24/88 does.
It's a shame our music data is being compromised with DD+ lossy compression, but I keep the hope alive that, as with 2-channel streaming, improving high-speed internet will make lossy formats less of a necessity for keeping streaming costs down.
 
In this AB testing, four people scored 10/10, so the difference was clearly audible. That’s what matters — it proved audibility, not preference.
Wow, 'proved', hey? Your biases are easily confirmed! And the way you frame it as "the difference was clearly audible", is highly misleading. What I think you meant to say, is "there appear to be a few outlier people who can tell them apart, but most of us cannot."

The only caveat is whether the files were processed perfectly; if dithering or level-matching wasn’t handled right, the difference could have come from artifacts rather than bit depth itself.
No, that is definitely not "the only caveat"! Let's start with the obvious caveat that it isn't a controlled test. There is no supervision. Anyone can cheat who wants to. They can cheat in any way they want to. I mean, look at the ego of one of the 10/10 respondents, who said and I quote, "My hearing is extremely superior to anything humans can imagine", and "My lineup is extremely beyond anything that people can even imagine", and "The differences are huge. It's like comparing a VW versus a Ferrari." Wow, referring to humanity in the third person! No wonder Archimago wondered if 'he' was some kind of AI-supplemented response. But one doesn't have to be an egomaniac to cheat. A lot of people just do it for fun.

The first thing an experimenter would do with those results is say to the outliers, "come on in for some further, supervised testing for verification". The last thing that would enter experimenters' minds from those results would be to say what you said and announce that we have proof.

And I note that the test only varied in bit depth and not sample rate, so it was 24/88.2 vs 16/88.2, so in a sense it was a test of noise floor audibility. And guess what, that is nothing new. Anyone can, and may very well have done in the survey, select the quietest passage they could find, and play that passage in isolation, A/B/A/B with the volume cranked up to insane levels (that would destroy both your hifi and your hearing in seconds if used for the louder passages), and listen for noise floor differences. Easy peasy. 10/10 every time. In fact, Amir used this very method to prove that he can hear high res vs standard res audio, and spoke about it in his video on that topic. Unlike you, though, he sensibly concluded that he would have no chance of distinguishing them when playing recorded music at home on his hifi.
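For reference, the textbook arithmetic behind that noise-floor difference (ideal quantizer, full-scale sine; real converters and dithered masters will differ):

```python
# Theoretical SNR of an ideal quantizer for a full-scale sine wave:
# SNR ~= 6.02 * bits + 1.76 dB. This is the textbook figure; dither,
# noise shaping, and real converter performance all shift it in practice.

def quantization_snr_db(bits: int) -> float:
    return 6.02 * bits + 1.76

print(f"16-bit: {quantization_snr_db(16):.1f} dB")  # ~98.1 dB
print(f"24-bit: {quantization_snr_db(24):.1f} dB")  # ~146.2 dB
```

Hence the "crank the volume on a quiet passage" trick: the roughly 48 dB gap between the two noise floors is only reachable at playback gains far beyond normal listening.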

And also, we have to allow for the odd person with special hearing abilities. It's not impossible. Mutation is a thing, right? Right?? (Those who deny evolution are excused from answering! ;))

So we have caveats all over the place. And I do love it when readers like yourself draw the opposite conclusion to the experimenter, in this case Archimago's own conclusion, "this test adds to the body of evidence that suggests the absence of audible benefits with "high-res" audio."

[to Fidji] Not sure where you count 4, Archi counts 2
"There were 2 respondents out of the 121 who selected Sample A (24-bits) as preferred and at a level of certainty of 10/10! "
He is including the 'other' two who scored 10/10 but preferred the sound of the 16 bit over 24 bit. LOL.

But even 4 of 121 could be lucky guesses?
With purely random guessing over 10 trials, a 10/10 score has odds of about 1 in 1,000, so the chance of even one such score among 121 pure guessers is only about 1 in 9, and four would be vanishingly unlikely. I would conclude that it is most likely something other than randomness, but jumping straight from that to the conclusion Fidji drew is unnecessary. See my response to him in this post.
[edit: see posts #337 and 338 below. I have wrongly assumed that the scores related to accuracy in picking the 16 bit vs 24 bit]
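For what it's worth, the guessing arithmetic can be sketched as follows, under the (since-corrected) reading that a "10/10" meant 10 correct answers in 10 forced-choice trials:

```python
# Guessing arithmetic sketch, assuming a "10/10" meant 10 correct answers
# in 10 two-way forced-choice trials (a reading corrected later in the thread).

n_respondents = 121
p_single = 0.5 ** 10                  # one guesser going 10/10: ~1 in 1024
expected = n_respondents * p_single   # expected 10/10 guessers: ~0.12
p_at_least_one = 1 - (1 - p_single) ** n_respondents  # ~0.11, about 1 in 9

print(f"P(single guesser hits 10/10) = {p_single:.6f}")
print(f"Expected 10/10 guessers among {n_respondents} = {expected:.3f}")
print(f"P(at least one 10/10 guesser) = {p_at_least_one:.3f}")
```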

Now we're coming to fully agreed points. Atmos via lossy DD+ vs Atmos via TrueHD shows a wider spread in data rates than 16/44 vs 24/88 does.
It's a shame our music data is being compromised with DD+ lossy compression, but I keep the hope alive that, as with 2-channel streaming, improving high-speed internet will make lossy formats less of a necessity for keeping streaming costs down.
I agree in principle and disagree in practice.

I agree that lossy audio at 'sufficient' bitrates (i.e. the high range of normal: 320 kbps for stereo, 768 kbps for Atmos) is still not a guarantee that it would be completely indistinguishable from lossless in every way. But the way lossy artefacts manifest, if the bitrate is 'sufficient', is in the odd moment in a recording, not in the overall sound of the music itself. Yes, I am happy to say as a perfectionist, we don't want that 'odd moment'. But honestly, enjoying it less is unnecessary if the only perceivable differences are in the odd moment. (And as we see in the above Archimago experiment, and in other experiments of mp3 vs lossless stereo, people are just as likely to prefer the one that they shouldn't! :cool: )

I could easily imagine lossless Atmos distinguishing itself from the lossy DD+ variant in terms of spatial quality when the consumer has an extreme home system, let's say better than 9.1.6. Maybe. I would like to see it tested, first.

As Floyd Toole has said (although he was referring to stereo bass in the subwoofer frequencies), "much ado about even less".

And finally, I don't think we have to demand lossless Atmos music (at around 6000 kbps) over streaming services before we can be happy. Maybe they can move to a higher lossy bitrate: 1000, 1500, 2000?

cheers
 
Unless I am reading it wrong this looks like it was a simple A/B test, not a true A/B/X test.

To really prove whether there was an audible difference, I think an A/B/X test would answer this question much more easily, assuming it was done properly with at least 10 trials per sample so you can hit a 95% confidence level.

A/B is more of a preference test, without a real control to see if one is really hearing a difference or not. A/B/X isn't about preference at all; it is a difference test: is there an audible difference between A and B such that a person can determine the identity of X? And you repeat that for at least 10 trials per music sample.


good guide for starting your journey in ABX testing and for understanding what was going on in that specific test. I trust you are now able to connect the dots properly, as I think you were missing this piece of information.
 
I am very aware of how to do A/B/X testing and have done so for decades with my QSC ABX Comparator.

The linked test of bit-depth differences isn't an A/B/X test; it is a simple A/B test.

If one actually takes the time to read the procedure it flat out says:

"I'll also ask how much difference you heard - so if you're very confident, rate this as 10/10 ("obvious audible difference")"

So anyone can report that they heard a 10/10 difference and claim it was an obvious audible difference.

There is literally nothing in that test that serves as a control to see whether they could actually hear a difference. If it were an A/B/X test and they had correctly identified X on 10 out of 10 trials, that would be a very different thing.

But that DID NOT occur in the test you linked to.

This is why scientific research has peer review. To check procedures for flaws and to look for things that could have influenced the results beyond the test itself.
 
With purely random guessing for 10 tests there would be about 1 in 1000 10/10 scores. So yes, 4 out of 121 is not impossible via guessing, but the chances are only 1 in 30-ish. I would conclude that it is most likely something other than randomness, but jumping straight from that to the conclusion Fidji drew is unnecessary. See my response to him in this post.
Read the procedure, nobody scored 10 out of 10 on any trials.

The 10 out of 10 on the site is really just a Likert scale of whether a person thought there were audible differences between the two files.

They are not the result of identifying X 10 out of 10 times.
 
Read the procedure, nobody scored 10 out of 10 on any trials.

The 10 out of 10 on the site is really just a Likert scale of whether a person thought there were audible differences between the two files.

They are not the result of identifying X 10 out of 10 times.
Wow, thanks, that's interesting. I do seem to have been influenced, by Fidji claiming it was proof of audibility, into thinking it was the score of correctly identifying the differences.

Now it means even less than I thought!

cheers
 
The best way to keep one's hat on is to remember that lossy compression is effectively inaudible above a certain bitrate. Atmos audio at 768 kbps is considered transparent within the industry and academia.
I agree that lossy audio at 'sufficient' bitrates (i.e. the high range of normal: 320 kbps for stereo, 768 kbps for Atmos) is still not a guarantee that it would be completely indistinguishable from lossless in every way.
Actually, what is it now? Transparent? Not transparent?
https://en.wikipedia.org/wiki/Argument_from_authority . Read it thoroughly, and be honest with yourself.

I could easily imagine lossless Atmos distinguishing itself from the lossy DD+ variant in terms of spatial quality when the consumer has an extreme home system, let's say better than 9.1.6. Maybe. I would like to see it tested, first.
Actually, for one row of seating, 9.1.6 in a normally shaped room is enough to get full spatial coverage in terms of angles. I tried adding an LS1 pair at some point and it was detrimental to the sound. Please refer to this guide; it might make some things easier to comprehend. A very practical guide for those who are thinking of getting a proper immersive setup.

 
Read the procedure, nobody scored 10 out of 10 on any trials.

The 10 out of 10 on the site is really just a Likert scale of whether a person thought there were audible differences between the two files.

They are not the result of identifying X 10 out of 10 times.
Archimago is a nice guy. Feel free to contact him directly via his webpage and clarify, instead of assuming.
 