
Atmos finally decoded in PC/Mac

VoidX

Member
Joined
Jul 18, 2022
Messages
79
Likes
143
which is normally seen as IP theft
Oh, you're also doing a law degree, so let me help you pass your exams: supporting a standard either through reverse engineering or through a guide literally given to you by the manufacturer is not theft. The first is your own IP, the second is a granted right. Also, software cannot even be patented in the EU, where I live.

I see the guys who made FFmpeg, which supports DD+ through the same guides and TrueHD through reverse engineering, getting sued all the time by Dolby. Oh, wait, that is your reality, not this one.
 

edechamps

Addicted to Fun and Learning
Forum Donor
Joined
Nov 21, 2018
Messages
910
Likes
3,620
Location
London, United Kingdom
@VoidX I didn't realize at first that you single-handedly implemented an AC-4 (i.e. Atmos) decoder, including the immersive audio part, all by yourself, complete with a visualizer and renderer, and open source to top it all off. That's quite impressive. Congrats! I just gave it a try with a piece of E-AC-3 JOC (a.k.a "Dolby Digital Plus with Dolby Atmos"). Sadly I couldn't get it to work, but hopefully it's just a quick fix away. I hope the necessary specs (or reverse engineering) will come out for MLP/TrueHD Atmos eventually.

@sarumbear I don't know what's up with you, but you might want to calm down. As @VoidX explained, he didn't reverse engineer anything - he implemented a public specification (specifically this one and this one), which is quite literally direct engineering, not reverse engineering. @VoidX never accused you of advertising, he just said the Dolby paper you linked is advertising, which it clearly is (you can't seriously expect Dolby to be unbiased - they are trying to sell a product here). @VoidX never said Dolby was outright lying, he just said that the paper described the very best case scenario and that the full potential of the format doesn't seem to be used in content seen in the wild (again, none of this should come as a surprise to anyone). @VoidX reporting spec bugs to Dolby doesn't make him a "Dolby consultant" - that's ridiculous. You really need to slow down and actually read the posts you are responding to.

There is hard evidence that @VoidX knows the intricate details of the format very well (presumably better than anyone else in this thread), because he had to read and understand the full ~550-page spec in order to implement his decoder. There are very few people who are willing to go through the trouble of doing that kind of gruelling, tedious, deep-in-the-trenches work, especially for an open source project. That makes him a de facto expert on the subject. If you want to accuse him of not knowing what he's talking about, I would suggest you back up your claims with hard evidence (and no, Dolby's marketing materials don't count as "hard evidence"). Right now you are out of line and making a fool of yourself.
 
Last edited:

edechamps

Addicted to Fun and Learning
Forum Donor
Joined
Nov 21, 2018
Messages
910
Likes
3,620
Location
London, United Kingdom
By the way @VoidX, since I have you here… there's something I've been wondering about for quite some time now. The point of Audio Objects (i.e. Atmos) is to support non-standard speaker layouts. But let's say I do have a standard speaker layout (e.g. 5.1 placed according to ITU-R BS.775). Is there any benefit to being able to decode the object layer in this case? Since I'm using a standard layout, wouldn't the standard 5.1 channel decoding (e.g. by the current ffmpeg which doesn't support objects) give identical results to a full-featured Atmos decoder? If not, do we know if the difference is significant or negligible on typical content? I guess I could try to gather the evidence by myself by using your converter and comparing the output to the core 5.1 stream, but perhaps the answer is already out there…
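If anyone beats me to gathering that evidence, the comparison itself is just a null test. A rough Python sketch of what I have in mind (hypothetical helper names, assuming both decodes are exported as sample-aligned float PCM arrays):

```python
import numpy as np

def null_test_db(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Per-channel RMS of (a - b) in dB relative to a, for two aligned
    (samples, channels) float arrays. -inf means bit-identical channels."""
    n = min(len(a), len(b))
    diff = a[:n] - b[:n]
    rms_diff = np.sqrt(np.mean(diff ** 2, axis=0))
    rms_ref = np.sqrt(np.mean(a[:n] ** 2, axis=0))
    with np.errstate(divide="ignore"):  # log10(0) -> -inf for null channels
        return 20 * np.log10(rms_diff / rms_ref)

# Toy demo in place of real decodes: channel 0 identical, channel 1 rendered 1 dB lower.
t = np.linspace(0, 1, 48000, endpoint=False)
core = np.stack([np.sin(2 * np.pi * 440 * t)] * 2, axis=1)
rendered = core.copy()
rendered[:, 1] *= 10 ** (-1 / 20)
residual = null_test_db(core, rendered)  # channel 0: -inf, channel 1: ~ -19.3 dB
```

A channel that nulls to -inf is bit-identical between the two decodes; anything finite is where the object layer (or some other decoder difference) shows up.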
 

prerich

Senior Member
Joined
Apr 27, 2016
Messages
312
Likes
228
Ok, let's say you're right. Thus, it has been established that the streamed audio tracks are of lower quality than those present on the discs.
So, let's get back to the root of the problem: if you want to decode Dolby Atmos tracks of the best possible quality, you need to decode the ones on the discs. And the tracks on the discs are encoded in Dolby TrueHD. That's it. ;)
or a 1:1 rip of the file ;)
 

ThatM1key

Major Contributor
Joined
Mar 27, 2020
Messages
1,048
Likes
882
Location
USA

VoidX

Member
Joined
Jul 18, 2022
Messages
79
Likes
143
By the way @VoidX, since I have you here… there's something I've been wondering about for quite some time now. The point of Audio Objects (i.e. Atmos) is to support non-standard speaker layouts. But let's say I do have a standard speaker layout (e.g. 5.1 placed according to ITU-R BS.775). Is there any benefit to being able to decode the object layer in this case? Since I'm using a standard layout, wouldn't the standard 5.1 channel decoding (e.g. by the current ffmpeg which doesn't support objects) give identical results to a full-featured Atmos decoder? If not, do we know if the difference is significant or negligible on typical content? I guess I could try to gather the evidence by myself by using your converter and comparing the output to the core 5.1 stream, but perhaps the answer is already out there…
Dolby doesn't respect BS.775: their 5.1 placement, according to the Atmos renderer, is in the corners of the room plus a center. To make Cavernize GUI as true to Atmos as possible, it uses channel positions extracted from Dolby's own 9.1.6 downmix demos.

I never tried remixing content to 5.1, but the result should be somewhat different. Objects can be mixed from any channel using 10 different filters (making 50 or 70 possible slices to mix together). Imagine the special case where you have a sound in a specific subband on both the front left and top front left channels. The downmix should carry both sounds on the front left, but the JOC needs them placed on separate channels. In these cases, it might deliberately downmix some objects to the wrong channels, after which Atmos provides the fix. This has very noticeable effects, like the removal of some frequencies from the speech in Core Universe, which sounds like a compression artifact. It can be masked, though; I just need to find a good method for it.
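To illustrate why the bed alone can't keep those two sounds apart, here is a toy Python sketch. The speaker positions and the panning law are illustrative only, not Dolby's actual renderer and not what Cavernize does:

```python
import numpy as np

# Toy model: a 5.1 bed has no height, so any object gets projected to the
# floor before panning. Positions are (x, y, z): x -1 left..+1 right,
# y -1 back..+1 front, z 0 floor..1 ceiling. Layout is illustrative.
SPEAKERS_51 = {
    "L": (-1.0, 1.0), "R": (1.0, 1.0), "C": (0.0, 1.0),
    "Ls": (-1.0, -1.0), "Rs": (1.0, -1.0),
}

def pan_to_bed(pos):
    """Power-pan an object onto the 5.1 bed, ignoring its height (z)."""
    x, y, _z = pos
    dist = {n: max(1e-9, np.hypot(x - sx, y - sy)) for n, (sx, sy) in SPEAKERS_51.items()}
    w = {n: 1.0 / d for n, d in dist.items()}
    norm = np.sqrt(sum(v * v for v in w.values()))  # preserve total power
    return {n: v / norm for n, v in w.items()}

ear_level = pan_to_bed((-1.0, 1.0, 0.0))  # front left object
height    = pan_to_bed((-1.0, 1.0, 1.0))  # top front left object
# Both collapse onto identical bed gains: the downmix alone cannot tell
# them apart, which is exactly what the JOC side information restores.
```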
 

Miker 1102

Active Member
Forum Donor
Joined
May 21, 2021
Messages
235
Likes
127
Man, what a hell of a thread to read.

When I used to use my PC as an HTPC, I would just use VLC to bitstream the audio to my AVR and let it decode. Nowadays I use my LG 4K UHD Player because using a PC as an HTPC is a pretty horrible experience. You can't even output Dolby Vision on a regular PC unless you got a "Dolby

There is absolutely no way a large corporation would try to inflate their numbers to please investors. It's literally impossible that the industry working with their product is seeing numbers that are lackluster compared to the official ones. Engineers must have united to upload videos and make visualizers that disprove Dolby's claims just to make them look bad. Actually, there's no lie there. You can set up 24.1.10, it just won't have too many active channels.
Good answer.
 

VoidX

Member
Joined
Jul 18, 2022
Messages
79
Likes
143
Sadly I couldn't get it to work but hopefully it's a just a quick fix away.
Now it's completely implemented: the latest downloadable version plays DD+ Atmos audio in the visualizer without the old workaround. This makes the decoder complete; now I want to make it real-time in any media player. Can you help me get started with a DirectShow filter? You're the expert on that. ;)

Is there any benefit to being able to decode the object layer in this case?
There is a funny addition to this question. When you decode the PCM data of Dolby test tones, the levels are off: one of the surrounds is mixed at a seemingly random gain. Applying the Atmos metadata fixes this; the test tones are at the same level after remixing from 5.1 to 5.1.
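If you want to check this on your own decodes, a few lines are enough (illustrative sketch, assuming the PCM is already loaded as a float NumPy array):

```python
import numpy as np

def channel_levels_dbfs(pcm):
    """Per-channel RMS level in dBFS of a (samples, channels) float array."""
    rms = np.sqrt(np.mean(pcm ** 2, axis=0))
    return 20 * np.log10(np.maximum(rms, 1e-12))  # floor avoids log10(0)

def flag_outliers(levels, tol_db=0.5):
    """Indices of channels whose level deviates from the median by > tol_db."""
    med = np.median(levels)
    return [i for i, lv in enumerate(levels) if abs(lv - med) > tol_db]

# Toy 5.1 test tone: equal-level sines, with surround channel 4 dropped 3 dB.
t = np.linspace(0, 1, 48000, endpoint=False)
tone = np.sin(2 * np.pi * 997 * t)
pcm = np.stack([tone] * 6, axis=1)
pcm[:, 4] *= 10 ** (-3 / 20)
bad = flag_outliers(channel_levels_dbfs(pcm))  # [4]
```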
 

krabapple

Major Contributor
Forum Donor
Joined
Apr 15, 2016
Messages
3,169
Likes
3,717
What "fair listening test"?

Blinded and level matched for starters.

I've mixed and listened to hundreds of movies in my life and I can recognize a lossy track from a lossless one, especially in scenes with greater sound content. Of course, you are free not to believe me. But that does not authorize you to discredit my opinion. Do you think a lossy track is indistinguishable from a lossless one? Perhaps this is true if you listen with a poor sound system or even with TV speakers.

If you knew anything about the history of lossy audio compression and the testing thereof, you would know that it is 'perhaps true' even on very good listening setups. Even on headphones.


Or if the listener does not have the necessary listening education.

Here, you have a point -- if one is well trained in the sound of specific artifacts that different levels of lossy compression can create, one certainly has a better chance of telling lossy from lossless. But at high quality settings with a good codec -- because remember, though you keep not acknowledging it, 'lossy audio' is not all the same -- even that training is not something that would let you walk into a room with a surround system and immediately say, oh, that one's lossy, that one's not. You'd still need to listen very carefully, and pretty likely not in the manner you normally would.

That's part of the 'fairness' too. People like you claim to identify lossy vs lossless effortlessly, without much in the way of forensic effort. A fair test is one where you could do that consistently when the two audio samples are 'blinded' and level matched, and the lossy codec/setting is high quality.

Good luck, btw, separating out level matching as a variable for lossless vs Dolby Digital vs DTS. But you really must do that, if your argument is that you hear 1) a degradation and 2) it's due to *lossiness* -- data reduction itself -- not simple level mismatch.
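For the record, the broadband part of level matching is trivial to do in software before any comparison. A minimal Python sketch (assuming both clips are decoded to float arrays):

```python
import numpy as np

def level_match(ref: np.ndarray, test: np.ndarray) -> np.ndarray:
    """Scale `test` so its overall RMS equals that of `ref`."""
    gain = np.sqrt(np.mean(ref ** 2) / np.mean(test ** 2))
    return test * gain

# Toy demo: same signal, one copy 2 dB hotter.
t = np.linspace(0, 1, 48000, endpoint=False)
a = np.sin(2 * np.pi * 440 * t)
b = 10 ** (2 / 20) * a
matched = level_match(a, b)
mismatch_db = 20 * np.log10(np.sqrt(np.mean(matched ** 2) / np.mean(a ** 2)))
# mismatch_db is ~0 after matching
```

This only removes overall gain; per-channel or frequency-dependent mismatches (as between different codec chains) need more care.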


Lossy tracks are a sign, one of many, of these times of cultural impoverishment, where audio listening takes place with a mobile phone, a tablet, or a TV. Sad but true.
The listening quality lies elsewhere.

Lossy encoding based on psychoacoustic modelling was a sign of *human inventiveness*, and it was critical when bandwidth (on the internet) and storage (on physical discs) were at a premium.
 

krabapple

Major Contributor
Forum Donor
Joined
Apr 15, 2016
Messages
3,169
Likes
3,717
Of course it's correct that validating those observations in the scientific sense requires controls for bias, etc. It's also observable that ASR catechism invokes chanting "ABX, DBT" to suppress outbreaks of anecdata. But as they say, absence of evidence isn't evidence of absence. And your knowledge and experience may reflect reality. So that triggers an automatic stalemate.

Utter nonsense. By this logic every scientific result, regardless of its rigor, is equally strongly countered by the truism 'but it could still be wrong'. Or by the idea that 'you can't prove I'm wrong'.
Russell's teapot: You say microscopic teacups orbit the sun. I say, the fact that I can't 'prove' there aren't is not reason enough for me to believe it.
Or Hitchens' maxim: that which is asserted without evidence, can be dismissed without evidence.

But you could counter to Hitch: "I assert I heard a difference between A and B, under condition X; that's 'evidence'". But is it good evidence? If X is a condition where false positives are well established to flourish (sighted listening) and/or there are well-established reasons (measured or psychoacoustic) to expect a difference to be difficult or impossible to hear, then no, it's not good evidence. The burden of proof is on *you*, the listener, to provide better evidence.
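And 'good evidence' has a simple arithmetic side: an ABX score either beats guessing or it doesn't. A minimal sketch of the one-sided binomial test (my own helper name, nothing standardised):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided p-value: probability of getting >= `correct` answers right
    out of `trials` by pure guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12 of 16 correct is significant at p < 0.05 (p ~ 0.038);
# 9 of 16 is entirely consistent with guessing (p ~ 0.40).
p_good = abx_p_value(12, 16)
p_guess = abx_p_value(9, 16)
```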


@krabapple is going in far too convoluted here to deconstruct without effort, but "sounds worse" involves differentiation and preference, which are separate issues: "profoundly false claim" is the usual adversarial tosh, which you can safely ignore.

Both difference and preference are subject to 'sighted' bias, and thus properly tested under controlled conditions -- e.g., blind. So, 'if it's lossy, it's going to sound worse' remains profoundly false as an axiom.

@voodooless is on firmer ground: your experience isn't proof, in and of itself. Where the burden of proof lies depends on the context. Logically we take it step by step: TrueHD is lossless (can be verified technically); lossless signal can be differentiated from lossy (ditto); people can perceive the difference (observable, requires controlled test for proof); people prefer lossless (ditto, also requires sufficient n to generalise to a population).

You left out so many qualifying conditions as to make this 'logic' simply 'tosh'.
 
Last edited:

krabapple

Major Contributor
Forum Donor
Joined
Apr 15, 2016
Messages
3,169
Likes
3,717
I can’t understand how an audible phenomenon can be quantified to be submitted as proof, other than through the results of controlled listening tests.

Such tests were done by the Fraunhofer Institute in extreme detail during the development of MP3. They published papers, and one of the researchers even wrote a book. Compression can be heard. They had the proof in abundance.

No one claims compression can't ever be heard.

That doesn't mean all claims of 'I heard it' should be accepted as true.
 

krabapple

Major Contributor
Forum Donor
Joined
Apr 15, 2016
Messages
3,169
Likes
3,717
When you reduce it to those words, of course you are correct. Then again, people do hear differences between amplifiers that are supposed to be wires with gain and where no difference is measurable, and do that in blind tests. "You can't hear compression" is false, but "you’ll really have a damn hard time hearing any difference. Certainly not an obvious night and day one." is true. In your earlier posts you were implying the former.

But amplifiers *can* sound different, in blind tests, under some conditions. No one says amplifiers can *never* sound different. And they certainly measure different. But they typically don't sound different. The measured differences aren't big enough to hear, and the amps aren't being stressed, and they don't have 'coloration' designed in (tubes). If you assert you heard an amp difference, you'll need to explain more.

Cables can sound different (e.g. due to large differences in gauge and length). No one says that's impossible. And they certainly *measure* different. But typically, cables don't sound different, because typically gauge and length differences aren't big enough to matter. If you assert you heard a cable difference, you'll need to explain more.
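The cable case is easy to sanity-check with back-of-envelope arithmetic. A sketch, using approximate copper resistance figures and a nominal, purely resistive 8-ohm load (itself a simplification):

```python
import math

# Approximate copper resistance per metre at 20 degC, ohm/m, by AWG.
OHMS_PER_M = {12: 0.00521, 18: 0.0210, 24: 0.0842}

def cable_loss_db(awg: int, length_m: float, load_ohms: float = 8.0) -> float:
    """Level drop caused by round-trip cable resistance into a resistive load."""
    r = 2 * length_m * OHMS_PER_M[awg]  # two conductors, there and back
    return 20 * math.log10(load_ohms / (load_ohms + r))

# 5 m runs: 12 AWG ~ -0.056 dB, 18 AWG ~ -0.225 dB, 24 AWG ~ -0.869 dB.
losses = {awg: cable_loss_db(awg, 5.0) for awg in (12, 18, 24)}
```

Even the absurdly thin 24 AWG run stays under 1 dB of broadband loss; a sane speaker cable is well under a tenth of a dB.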

Do you see yet where the analogy to lossy audio lies? Lossy compression can make audio sound different. But that doesn't mean it always should/does, to you. If you assert you heard a difference due to lossiness, you'll need to explain more.
 

krabapple

Major Contributor
Forum Donor
Joined
Apr 15, 2016
Messages
3,169
Likes
3,717
It depends on:
1) Your listening environment and audio equipment
2) Your listening skills/education.

Let's expand (1) properly to include these under 'environment':
a) listening sighted or not
b) specifics of the lossy encoding
c) isolating lossiness as the only variable
 