
Atmos finally decoded in PC/Mac

edechamps

Addicted to Fun and Learning
Forum Donor
Joined
Nov 21, 2018
Messages
870
Likes
3,330
Location
London, United Kingdom
Yes, you can't tell the OS that you need more than 8 channels; it won't grant you an output even if the device has way more. This is only possible at a very low level.

That's not true. There is hard evidence of all Windows audio APIs (MME, DirectSound, WASAPI - both shared and exclusive - and WDM-KS) working with at least 10-channel output with a real hardware audio device, as long as the audio device is configured properly. You can verify this for yourself using a virtual driver such as Virtual Audio Cable. I don't think it's easy to find actual hardware exposing this many channels as a single Windows audio device, though.
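Incidentally, the way Windows describes such layouts is the `dwChannelMask` field of the `WAVEFORMATEXTENSIBLE` structure, where each set bit is a speaker position defined in ksmedia.h; the mask's popcount has to match the channel count. A minimal sketch of a 10-channel (7.1 bed plus two top-front speakers) mask, with the particular position bits chosen purely for illustration:

```python
# Speaker-position bits from Windows' ksmedia.h.
SPEAKER_FRONT_LEFT      = 0x1
SPEAKER_FRONT_RIGHT     = 0x2
SPEAKER_FRONT_CENTER    = 0x4
SPEAKER_LOW_FREQUENCY   = 0x8
SPEAKER_BACK_LEFT       = 0x10
SPEAKER_BACK_RIGHT      = 0x20
SPEAKER_SIDE_LEFT       = 0x200
SPEAKER_SIDE_RIGHT      = 0x400
SPEAKER_TOP_FRONT_LEFT  = 0x1000
SPEAKER_TOP_FRONT_RIGHT = 0x4000

# A 7.1 bed plus two height channels: 10 positions in total.
mask = (SPEAKER_FRONT_LEFT | SPEAKER_FRONT_RIGHT | SPEAKER_FRONT_CENTER
        | SPEAKER_LOW_FREQUENCY | SPEAKER_BACK_LEFT | SPEAKER_BACK_RIGHT
        | SPEAKER_SIDE_LEFT | SPEAKER_SIDE_RIGHT
        | SPEAKER_TOP_FRONT_LEFT | SPEAKER_TOP_FRONT_RIGHT)

channels = bin(mask).count("1")
print(hex(mask), channels)  # 0x563f 10
```

Nothing in the mask itself stops at 8 set bits, which is consistent with the APIs accepting >8-channel formats once the device is configured for them.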
 

VoidX

Member
Joined
Jul 18, 2022
Messages
71
Likes
119
That's not true. There is hard evidence of all Windows audio APIs (MME, DirectSound, WASAPI - both shared and exclusive - and WDM-KS) working with at least 10-channel output with a real hardware audio device, as long as the audio device is configured properly. You can verify this for yourself using a virtual driver such as Virtual Audio Cable. I don't think it's easy to find actual hardware exposing this many channels as a single Windows audio device, though.
This is why I said low level specifically. There's no other way than directly using the Windows API.
 

edechamps

Addicted to Fun and Learning
Forum Donor
Joined
Nov 21, 2018
Messages
870
Likes
3,330
Location
London, United Kingdom
This is why I said low level specifically. There's no other way than directly using the Windows API.

I don't know what you mean by "low level". In software engineering, the term "level" is short for "level of abstraction" and is typically applied to APIs, so your message doesn't make sense to me. If you're saying that merely using an API is "low level", then everything is "low level".

WDM-KS is the lowest-level Windows audio API.
WASAPI is one level above WDM-KS.
DirectSound and MME are one level above WASAPI.

Then there are even higher-level APIs like DirectShow or Media Foundation, but they're less relevant because they're really media playback APIs, not audio APIs, and their use is entirely optional (plenty of apps output audio without going through them).

DirectSound and MME are high-level as far as Windows audio APIs are concerned yet they can handle 10-channel output just fine.
 

VoidX

Member
Joined
Jul 18, 2022
Messages
71
Likes
119
I don't know what you mean by "low level". In software engineering, the term "level" is short for "level of abstraction" and is typically applied to APIs, so your message doesn't make sense to me. If you're saying that merely using an API is "low level", then everything is "low level".

WDM-KS is the lowest-level Windows audio API.
WASAPI is one level above WDM-KS.
DirectSound and MME are one level above WASAPI.

Then there are even higher-level APIs like DirectShow or Media Foundation but they're less relevant because they're really media playback APIs, not audio APIs and their use is entirely optional (plenty of apps output audio without going through them).

DirectSound and MME are high-level as far as Windows audio APIs are concerned yet they can handle 10-channel output just fine.
Only the following are considered high level, literally by Microsoft:
  • DirectSound
  • DirectMusic
  • Windows multimedia waveXxx and mixerXxx functions
  • Media Foundation
None of these go beyond 8 channels.
 

prerich

Active Member
Joined
Apr 27, 2016
Messages
172
Likes
100
This is just based on subjective reviews. It's meaningless.

Where is the proof then? It can't be in the format, because they all encode 24-bit audio just fine. There is no reason that DD+ would have less dynamic range, other than that it is a different mix or has different metadata applied.
Interesting post I found here https://www.reddit.com/r/hometheater/comments/hw91vs
I think I might go home this week and compare the Disney+ Marvel titles against my identical lossless Marvel titles. This may prove interesting.
 

Digimaster

Member
Joined
Sep 30, 2022
Messages
16
Likes
4
An incredibly ignorant thing to claim. Psychoacoustic modelling works. Codecs matter. Compression quality options matter.
What a bad reply. Even idiots know that there is scientific research behind lossy compression codecs. Are they useful? Yes. Are they perfect to the point of being able to replace a lossless encoding? In my opinion, and not only mine (there is evidence of this; thanks @prerich for your good post), no.
Please pay attention to your writing; it is offensive.
 

voodooless

Master Contributor
Forum Donor
Joined
Jun 16, 2020
Messages
6,119
Likes
10,143
Location
Netherlands
Are they perfect to the point of being able to replace a lossless encoding? In my opinion, and not only mine (there is evidence of this; thanks @prerich for your good post), no.
Again, your opinion… that is not proof.

And the only thing @prerich showed is that the versions are clearly not the same. That is not proof that the lossy codec would actually sound worse if it were to compress the exact same audio tracks. In and of itself it is an interesting find, though. But it has everything to do with creating an artificial difference between streaming vs physical media, not with lossy vs lossless.
 

edechamps

Addicted to Fun and Learning
Forum Donor
Joined
Nov 21, 2018
Messages
870
Likes
3,330
Location
London, United Kingdom
Only the following are considered high level, literally by Microsoft:
  • DirectSound
  • DirectMusic
  • Windows multimedia waveXxx and mixerXxx functions
  • Media Foundation
None of these go beyond 8 channels.

DirectSound has been shown to play 10 channels. There is hard evidence, with complete trace logs, showing DirectSound properly streaming 10-channel audio, down to individual API calls. You can try this yourself using e.g. Virtual Audio Cable and, say, the FlexASIO DirectSound backend, or (presumably) any other DirectSound-based app that lets you play 10-channel files. The only requirement is that the audio interface be properly configured in the Windows audio control panel for 10-channel output.

DirectMusic is deprecated and not used by any modern software as far as I know. It's also designed for specific use cases (sampling, MIDI) which are not relevant here.

"Windows multimedia waveXxx and mixerXxx functions" is just another name for MME. MME, despite being incredibly old (1991), does seem to work with 10-channel output.

I already mentioned Media Foundation. It's a higher level playback API, i.e. it's more of a media player framework and thus seems less relevant here. Some video player apps use Media Foundation, but many don't. For example VLC uses DirectSound, while DirectShow players like MPC-HC typically use DirectSound by default but can be customized to use pluggable audio renderers that can in turn use the Windows audio API of their choice.

Audio only, no problem. Correct. Please check this thread:

The problem has nothing to do with audio vs. video. Video players use audio APIs to play audio. One example of one piece of software struggling to play one file does not make a rule.
 
Last edited:
OP
retro

Active Member
Joined
Sep 22, 2019
Messages
164
Likes
283
The problem has nothing to do with audio vs. video. Video players use audio APIs to play audio. One example of one piece of software struggling to play one file does not make a rule.

Ok. So do you have an example of what Win software actually can..?
 

edechamps

Addicted to Fun and Learning
Forum Donor
Joined
Nov 21, 2018
Messages
870
Likes
3,330
Location
London, United Kingdom
Ok. So do you have an example of what Win software actually can..?

I haven't tried to play a 10+ channel file in any video player app if that's what you're asking. That's not the point I want to make. I'm just responding to claims that high-level Windows audio APIs such as DirectSound cannot be used for >8-channel output. These claims are not true. If a particular app is unable to play a >8-channel file, that's a problem with the app itself, not a problem with the Windows audio API it's using.
 

prerich

Active Member
Joined
Apr 27, 2016
Messages
172
Likes
100
Again, your opinion… that is not proof.

And the only thing @prerich showed is that the versions are clearly not the same. That is not proof that the lossy codec would actually sound worse if it were to compress the exact same audio tracks. In and of itself it is an interesting find, though. But it has everything to do with creating an artificial difference between streaming vs physical media, not with lossy vs lossless.
Well then, can we say that there's an audible, measurable difference between physical media and streamed media, in terms of dynamic range? I wonder whether Dolby Digital Plus takes away audible information due to the diminished bitrate and size available.
 
OP
retro

Active Member
Joined
Sep 22, 2019
Messages
164
Likes
283
I haven't tried to play a 10+ channel file in any video player app if that's what you're asking. That's not the point I want to make. I'm just responding to claims that high-level Windows audio APIs such as DirectSound cannot be used for >8-channel output. These claims are not true. If a particular app is unable to play a >8-channel file, that's a problem with the app itself, not a problem with the Windows audio API it's using.

Ok. But the fact is, on Windows, NO app or software can play more than 8 channels from a combined MKV. Again, audio-only works 100%.
Yes, I think I have tried them all. If not, please advise..? macOS works. Linux works. Not Windows.
 

krabapple

Major Contributor
Joined
Apr 15, 2016
Messages
2,174
Likes
2,409
What a bad reply. Even idiots know that there is scientific research behind lossy compression codecs. Are they useful? Yes. Are they perfect to the point of being able to replace a lossless encoding? In my opinion, and not only mine (there is evidence of this; thanks @prerich for your good post), no.
Please pay attention to your writing; it is offensive.

Nice goalpost moving. As if anyone was talking about, or even defining, 'perfection'. What's required is proof that one really hears a difference that is due to lossy data compression.

Do 'even idiots' know that in normal listening there are codecs and settings where most people simply *cannot* identify such a difference in a fair (that is, level-matched, blind) test? You tell me.

Which isn't to say no audible difference could ever be detected. For example, if you take a signal that is difficult to encode even for the best codecs at the best settings, subject it to forensic A/B listening with headphones, and focus on just a small segment where a sharp transient occurs, playing it over and over, you may start to be able to tell a difference verifiable in an ABX test. You may through such means train yourself to identify such a difference under such conditions. Such tedious forensic listening to a tiny bit of audio is not normal listening. It doesn't mean that afterwards you could walk into a room with a surround system, plop yourself down, and within seconds or minutes casually ID audio based purely on its *lossiness* or not. And it sure doesn't mean someone with zero training for lossy artifacts could do it. Those who claim they can are indulging in clownish audiophile braggadocio.

So NO, it would not be enough to just say 'if it's lossy, it's going to sound worse'. That would be a profoundly false claim, easily disproved by a fair listening test.
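To put a number on what a "fair listening test" establishes: an ABX run is usually scored as a one-sided binomial test against chance. A minimal sketch of that arithmetic (the 13-of-16 and 10-of-16 trial counts below are illustrative, not from this thread):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """Probability of scoring at least `correct` out of `trials`
    ABX trials by pure guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(round(abx_p_value(13, 16), 4))  # 0.0106 -> below 0.05, unlikely to be guessing
print(round(abx_p_value(10, 16), 4))  # 0.2272 -> entirely consistent with guessing
```

This is also why a handful of casual trials proves nothing in either direction: the test only becomes informative with enough trials plus proper level matching and blinding.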


prerich's posts did not prove your point. He did not isolate the variables that would be required to prove your point about *lossiness*.
 
Last edited:

Digimaster

Member
Joined
Sep 30, 2022
Messages
16
Likes
4
So NO, it would not be enough to just say 'if it's lossy, it's going to sound worse'. That would be a profoundly false claim, easily disproved by a fair listening test.
What "fair listening test"? I've mixed and listened to hundreds of movies in my life, and I can recognize a lossy track from a lossless one, especially in scenes with denser sound content. Of course, you are free not to believe me, but that does not authorize you to discredit my opinion. Do you think a lossy track is indistinguishable from a lossless one? Perhaps that is true if you listen with a poor sound system or even with TV speakers, or if the listener does not have the necessary listening training.
Lossy tracks are a sign, one of many, of these times of cultural impoverishment, where audio listening takes place with a mobile phone, a tablet, or a TV. Sad but true.
The listening quality lies elsewhere.
 

Digimaster

Member
Joined
Sep 30, 2022
Messages
16
Likes
4
Again, your opinion… that is not proof.

And the only thing @prerich showed is that the versions are clearly not the same. That is not proof that the lossy codec would actually be worse sounding if it were to compress the exact same audio tracks. In of itself it is an interesting find though. But has everything to do with creating an artificial difference between streaming vs physical media, not with lossy vs lossless.
Ok, let's say you're right. Thus, it has been established that the streaming audio tracks are of lower quality than those present on the discs.
So, let's get back to the root of the problem: if you want to decode Dolby Atmos tracks of the best possible quality, you need to decode those on the discs. And the tracks on the discs are encoded in Dolby TrueHD. That's it. ;)
 

sarumbear

Master Contributor
Forum Donor
Joined
Aug 15, 2020
Messages
5,899
Likes
5,328
Home Atmos is not ready for true 3D, only height.
What about the surround beds? Don't they add the two remaining dimensions?
 


Axo1989

Major Contributor
Joined
Jan 9, 2022
Messages
1,036
Likes
891
What "fair listening test"? I've mixed and listened to hundreds of movies in my life, and I can recognize a lossy track from a lossless one, especially in scenes with denser sound content. Of course, you are free not to believe me, but that does not authorize you to discredit my opinion. Do you think a lossy track is indistinguishable from a lossless one? Perhaps that is true if you listen with a poor sound system or even with TV speakers, or if the listener does not have the necessary listening training.
Lossy tracks are a sign, one of many, of these times of cultural impoverishment, where audio listening takes place with a mobile phone, a tablet, or a TV. Sad but true.
The listening quality lies elsewhere.

Of course it's correct that validating those observations in the scientific sense requires controls for bias, etc. It's also observable that ASR catechism invokes chanting "ABX, DBT" to suppress outbreaks of anecdata. But as they say, absence of evidence isn't evidence of absence. And your knowledge and experience may reflect reality. So that triggers an automatic stalemate.

@krabapple's post is far too convoluted here to deconstruct without effort, but "sounds worse" involves differentiation and preference, which are separate issues: "profoundly false claim" is the usual adversarial tosh, which you can safely ignore.

@voodooless is on firmer ground: your experience isn't proof, in and of itself. Where the burden of proof lies depends on the context. Logically we take it step by step: TrueHD is lossless (can be verified technically); lossless signal can be differentiated from lossy (ditto); people can perceive the difference (observable, requires controlled test for proof); people prefer lossless (ditto, also requires sufficient n to generalise to a population).
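The first two steps in that chain need no listening at all; they can be checked with a null test, i.e. decoding both tracks to PCM and subtracting. A minimal sketch with made-up sample values (`null_test` and the arrays are hypothetical; a real comparison needs decoded, time-aligned tracks):

```python
def null_test(pcm_a, pcm_b):
    """Sample-by-sample residual of two decoded PCM tracks.
    An all-zero residual means the decodes are bit-identical; anything
    else quantifies how the signals differ, though not whether that
    difference is audible (that still needs a controlled test)."""
    assert len(pcm_a) == len(pcm_b), "tracks must be time-aligned"
    return [a - b for a, b in zip(pcm_a, pcm_b)]

lossless = [0, 1203, -4096, 8191, -12000]   # hypothetical decoded samples
lossy    = [0, 1201, -4100, 8191, -11987]

print(all(s == 0 for s in null_test(lossless, lossless)))  # True: bit-identical
print(max(abs(s) for s in null_test(lossless, lossy)))     # 13: peak difference
```

A nonzero residual establishes the signals differ; whether people can hear or prefer one over the other remains a question for the controlled tests described above.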
 
Last edited:

sarumbear

Master Contributor
Forum Donor
Joined
Aug 15, 2020
Messages
5,899
Likes
5,328
@voodooless is on firmer ground: your experience isn't proof, in and of itself. Where the burden of proof lies depends on the context. Logically we take it step by step: TrueHD is lossless (can be verified technically); lossless signal can be differentiated from lossy (ditto); people can perceive the difference (observable, requires controlled test for proof); people prefer lossless (ditto, also requires sufficient n to generalise to a population).
I can’t understand how an audible phenomenon could be quantified and submitted as proof other than through the results of controlled listening tests.

Such tests were done by the Fraunhofer Institute in extreme detail during the development of MP3. They published papers, and one of the researchers has even written a book. Compression can be heard. They had proof in abundance.
 