
UAC2 in Windows 10/11 - do we really need "dedicated" DAC driver?

KomoGomolaa

Member
Joined
Jun 10, 2022
Messages
26
Likes
12
I am starting this thread to discuss our opinions/experiences regarding the need for a "dedicated" driver for a DAC under Windows 10/11, given that Windows already ships a native UAC2 driver.

Let me briefly explain my current Windows DAC setup: a Topping DX7 Pro+ under Windows 10. I have been using the Thesycon-supplied Topping driver from the very start, but lately I've been getting small but annoying sound glitches more often than I'd like. The glitch shows up during very ordinary tasks (opening the Chrome browser and navigating to a URL, say) as a sudden static pop or slight crackle in whatever is playing in the background (Spotify, for example), as if samples were being doubled; you probably know what I mean. I have not been able to fix this by increasing the Playback Buffer Size in the Thesycon DAC control panel. (Interestingly, this has never happened to me under ASIO playback with that same Thesycon driver.)

And you know what? Recently I came across Archimago's measurements of a "dedicated" DAC driver versus the generic UAC2 one. The measured difference? Nothing, nada, perfectly equal. So I decided to give it a try on my system!

I uninstalled the "official" Topping driver and my DX7 Pro+ was instantly recognised by Windows 10 and works perfectly fine without it. The outcome? To my surprise and happiness, the sound glitches described above are completely gone, playback is more stable, and I can't hear any difference in sound quality. (Yes, the UAC2 implementation under Windows is limited to 32bit/384kHz, but I don't miss that since all my "needs" are in the 16bit/24bit/44.1kHz domain. And I strongly doubt that upping the resolution would improve quality anyway.)

The interesting part? Did you know that Microsoft contracted Thesycon to develop the native UAC2 driver for its OS? I didn't. Thesycon is credited by Microsoft as the official author of the UAC2 driver in Windows.

So I don't think I'll stick with the "dedicated" Thesycon driver for my DAC anymore, because I'm enjoying it under UAC2 just as it is. Please share your experiences too.
 

MaxwellsEq

Major Contributor
Joined
Aug 18, 2020
Messages
1,627
Likes
2,423
My Sabaj A10d 2022 worked out of the box with the Windows 11 native UAC2 driver. I installed the Sabaj driver, but did not notice any benefit.
 

phofman

Senior Member
Joined
Apr 13, 2021
Messages
489
Likes
319
Yes, the UAC2 implementation under Windows is limited to 32bit/384kHz
IMO that is a limitation of the Windows audio subsystem, not the driver. My tests with WASAPI exclusive showed no real limit to the sample rate of the stock Windows UAC2 driver.

 

Ron Texas

Master Contributor
Joined
Jun 10, 2018
Messages
6,074
Likes
8,908
Stock Windows driver working here. Only ASIO requires a proprietary driver in my experience.
 

Trell

Major Contributor
Joined
May 13, 2021
Messages
2,752
Likes
3,285
RME recommends using the default Microsoft UAC2 driver for their ADI-2 DAC FS rather than their ASIO driver for common usage. For normal use, their ASIO driver is really only needed when updating the firmware.

I use the ASIO driver because I also have an audio interface (an RME Fireface UCX II) connected to it via optical, so I need the sample rate and buffer size to be the same for both devices. The UCX II needs an ASIO driver anyway.
 

Chr1

Addicted to Fun and Learning
Forum Donor
Joined
Jul 21, 2018
Messages
793
Likes
601
Can anyone explain any advantages ASIO has? I've been using it for years as I remember it being recommended to me, but I can't remember why. Thanks.
 

DVDdoug

Major Contributor
Joined
May 27, 2021
Messages
2,916
Likes
3,831
Can anyone explain any advantages ASIO has?

From what I've read, ASIO was mainly designed for low latency (less delay). That's important when you are recording and monitoring through your computer, because you don't want an audible delay in your headphones. It's not so important when listening to a Beatles recording with 60 years of "latency". It could also matter when gaming. It's also not important if you are recording and your monitoring path doesn't go through the computer. (If you are listening to a backing track while recording, that latency can be automatically compensated for to keep both recorded tracks aligned.)

But... latency is mostly caused by buffers. Buffers are needed with multitasking operating systems so the audio can flow smoothly in and out while the system is interrupted to do something else, and a buffer is a delay. The operating system is always multitasking, even if you are only running one application. A larger buffer is a longer delay, and it gives the system more time to finish what it's doing before getting back to moving/processing the audio. If the buffer doesn't get re-written in time you get buffer underflow and a glitch (buffer overflow when recording). So the buffer size you can get away with depends on how fast your computer is and whatever else it is doing in the background. With ASIO you can adjust the buffer size, and I think there is always a way to access the setting, whereas with Windows there is usually no way to change it. Again, for playback a few milliseconds of latency doesn't matter, and there is nothing wrong with a big buffer.
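As a rough illustration of the buffer arithmetic above (the function name and buffer sizes below are just examples, not taken from any real driver):

```python
# Latency contributed by one audio buffer = frames / sample_rate.
# Illustrative sketch only; real drivers add further buffers along the chain.

def buffer_latency_ms(frames: int, sample_rate: int) -> float:
    """Return the delay (in milliseconds) that one buffer of `frames` adds."""
    return frames / sample_rate * 1000.0

# Some typical ASIO buffer sizes at 48 kHz:
for frames in (64, 256, 2048):
    print(f"{frames} frames @ 48 kHz = {buffer_latency_ms(frames, 48000):.2f} ms")
```

Even the large 2048-frame buffer adds well under 50 ms, which is why a big buffer is harmless for playback but annoying for live monitoring.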

Also, I believe ASIO doesn't resample. But I've read some conflicting information about that, so it may depend on the application or driver. That's obviously good if you want "bit perfect" 96kHz playback... but it won't play if your soundcard/interface doesn't support 96kHz. (The bit depth will always be adjusted up or down to match the hardware.)

Windows drivers will happily resample (without telling you), so you can play a 384kHz file on any old soundcard, just like you can print a high-resolution photo on a low-resolution printer (and Windows won't warn you). Same thing with recording: with Audacity you can record at 384kHz even if your soundcard/interface doesn't support it.

I have a saved link to this page about the various audio protocols.

P.S.
A dedicated Windows driver (not ASIO) may give you access to hardware features (maybe EQ or something?) that's not accessible with the Microsoft-supplied drivers.
 

Chr1

Addicted to Fun and Learning
Forum Donor
Joined
Jul 21, 2018
Messages
793
Likes
601

From what I've read ASIO was mainly designed for low latency (less delay). […]
Thanks!

Yes, I think I will stick with the Topping ASIO drivers, as they seem to have no issues and are thankfully still compatible with MathAudio RoomEQ DSP via Foobar2000.
 

phofman

Senior Member
Joined
Apr 13, 2021
Messages
489
Likes
319
ASIO started in the 90s, a long time before MS introduced WASAPI in Windows Vista. Until WASAPI there was no low-latency direct-access driver model in Windows.

IMO today WASAPI exclusive is a better option. It allows the same direct access to the soundcard DMA buffer (provided the driver does so, but that's up to the vendor; the WASAPI exclusive layer allows it). It allows one process to communicate with multiple audio devices, unlike ASIO, whose original client library uses static variables for the connection, so multiple accesses need to be hacked (e.g. https://www.diyaudio.com/community/threads/diana-a-software-distortion-analyzer.315785/post-7121977 ). But most importantly, IMO, it does not bind capture and playback into one bufferSwitch callback like ASIO does, which prevents reliable use of different capture and playback devices running on independent clocks (e.g. https://audiosciencereview.com/foru...est-spec-adc-chip-currently.13469/post-871503 )
 

Trell

Major Contributor
Joined
May 13, 2021
Messages
2,752
Likes
3,285
ASIO started in the 90s, a long time before MS introduced WASAPI in Windows Vista. […] It allows one process to communicate with multiple audio devices, unlike ASIO, whose original client library uses static variables for the connection […]

RME has had multi-client ASIO drivers for a very long time, and any other manufacturer not supporting that today is indicative of pretty poor drivers. Not everyone can write good drivers, just saying.

Your ASR link is about the ASIO4ALL "wrapper" ASIO driver; how is that relevant to other drivers?
 

phofman

Senior Member
Joined
Apr 13, 2021
Messages
489
Likes
319
RME has had multi-client ASIO drivers for a very long time, and any other manufacturer not supporting that today is indicative of pretty poor drivers. Not everyone can write good drivers, just saying.
IIUC the multi-client ASIO mode means that the driver accepts connections from multiple processes. But I am talking about a single process talking to multiple ASIO drivers/devices, e.g. one for capture and another for playback. That is not possible because a process can have only one instance of a library loaded, and the driver data structures in the ASIO client library are static, i.e. only one instance is possible. One solution would be modifying the C code in the ASIO SDK to make the data structures non-static; another is introducing a second process which would talk to the second device, with properly thread-safe passing of samples between the two processes (a rather complex task).
Your ASR link is about the ASIO4ALL "wrapper" ASIO driver; how is that relevant to other drivers?
It shows that if two devices are linked to the one ASIO bufferSwitch callback, it results in glitches by design. The reason is obvious: the callback API is designed to pass an equal amount of data in both directions at each callback, yet each device runs at a slightly different pace. The use case is e.g. using REW with different capture and playback devices, both via ASIO. Recent versions of REW have WASAPI Exclusive connectors, where the WASAPI API works independently with capture and playback devices, allowing the REW generator thread to run at a different pace than the REW capture thread, each using a different device if needed.
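A back-of-the-envelope simulation of that clock-drift problem. All numbers here are hypothetical: a 100 ppm clock offset and 512 frames of slack are just plausible examples, not measured values from any device:

```python
# Two "devices" with slightly different clocks share one coupled callback
# that always moves equal block counts in both directions. The real-time
# sample counts drift apart until the slack buffer over/underruns.

nominal_rate = 48000.0
capture_rate = nominal_rate * (1 + 100e-6)   # capture clock runs 100 ppm fast
block = 256                                   # frames per bufferSwitch call
buffer_frames = 512                           # slack before an audible glitch

callbacks = 0
drift_frames = 0.0
while abs(drift_frames) < buffer_frames:
    callbacks += 1
    # each callback moves `block` frames both ways, but during that wall-clock
    # time the faster capture device actually produced slightly more audio
    elapsed = block / nominal_rate
    drift_frames += capture_rate * elapsed - block

print(f"glitch after {callbacks} callbacks "
      f"(about {callbacks * block / nominal_rate:.0f} s of audio)")
```

With these numbers the drift is only ~0.026 frames per callback, so the glitch arrives after a couple of minutes; a larger clock offset or smaller slack brings it on proportionally sooner.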
 

phofman

Senior Member
Joined
Apr 13, 2021
Messages
489
Likes
319
I suspect that I am better off with ASIO as it works flawlessly for me and I don't actually want other programs making noises through my sound card...
WASAPI exclusive bypasses the Windows audio subsystem just like ASIO. The author of the great FlexASIO connector made a very nice diagram explaining the situation: https://github.com/dechamps/FlexASIO/blob/master/BACKENDS.md

I tried to show where ASIO lags behind WASAPI exclusive. If your use case does not hit the ASIO design limitations, ASIO is perfectly fine.
 

Trell

Major Contributor
Joined
May 13, 2021
Messages
2,752
Likes
3,285
IIUC the multi-client ASIO mode means that the driver accepts connections from multiple processes. But I am talking about a single process talking to multiple ASIO drivers/devices, e.g. one for capture and another for playback. […]

It shows that if two devices are linked to the one ASIO bufferSwitch callback, it results in glitches by design. […]

But here you assume that everyone is using the ASIO SDK, and that is not necessarily true: they can implement their own driver as long as it conforms to the ASIO Interface Specification. I guess they can also take the ASIO SDK source code and modify it as needed, as long as they conform to the license.

On my Windows PC I've got two different ASIO drivers: one for the RME Fireface UCX II and another for the RME ADI-2 DAC FS. Both work at the same time.

Some (most?) applications using ASIO have a built-in restriction of using just one driver for input/output, but not all applications have that.

Here is one example on ASR where a reviewer is using an RME ADI-2 Pro and an RME Fireface UCX II at the same time with their ASIO drivers:

>>>
NB: This is a measurement I wanted to achieve for a long time but could never get to work.
Here it does, since the ADI-2 and UCX II have 2 different drivers. And MultiInstrument then allows to set a different clock for each device for a measurement. :)
<<<

 

phofman

Senior Member
Joined
Apr 13, 2021
Messages
489
Likes
319
But here you assume that everyone is using the ASIO SDK, and that is not necessarily true: they can implement their own driver as long as it conforms to the ASIO Interface Specification. […]
I am just showing official ASIO issues, and a few ways around them (quite obvious, no rocket science). There are several SW products which can communicate with multiple ASIO devices (I know about Diana, and one DAW whose name was mentioned in related discussions on diyaudio and which I forgot), and they clearly use some workarounds. But I dare say the majority of projects compile against the stock client library provided by the ASIO SDK and have the limitation.


On my Windows PC I've got two different ASIO drivers: one for the RME Fireface UCX II and another for the RME ADI-2 DAC FS. Both work at the same time.
Of course, but do they work from the same process at the same time? E.g. capturing and playing to different devices in Audacity or in REW.

Some (most?) applications using ASIO have a built-in restriction of using just one driver for input/output, but not all applications have that.
That restriction is inherent to the way the client part of the ASIO SDK is designed, not imposed by the applications. I gave examples of applications whose authors circumvented this restriction (with some major effort). My post was comparing ASIO to WASAPI exclusive: there is no extra effort required to talk directly to multiple devices in WASAPI, as the API is designed for that. Having to do so much extra work to implement such an obvious feature as having different capture and playback devices is a flaw in the API design, IMO.
 

Trell

Major Contributor
Joined
May 13, 2021
Messages
2,752
Likes
3,285
I am just showing official ASIO issues, and a few ways around them (quite obvious, no rocket science). […]

But again, here they are using the ASIO SDK. Other implementations may not have those particular issues.

Of course, but do they work from the same process at the same time? E.g. capturing and playing to different devices in Audacity or in REW.

For Audacity (ASIO not enabled) I can set up the UCX II as playback device and the ADI-2 DAC as recording device. The ADI-2 DAC can record from the SPDIF input, and I've connected the UCX II to the ADI-2 DAC using ADAT (optical). I tested that just now to be sure I recalled correctly.

Note that normally I use the ADI-2 DAC for monitoring while recording from the UCX II, as the ADI-2 offers "zero" latency monitoring, while the UCX is set up to delay the mic input by 200 ms for simple lip sync during video calls.

That restriction is inherent to the way the client part of the ASIO SDK is designed, not imposed by the applications. […]

Again, this depends on the ASIO SDK used. Some manufacturers put much effort into making excellent drivers; others, not so much.

I agree that for most consumers, using WASAPI/UAC2 for playback with a DAC or DAC/HP amp combo works very well, and it is what RME recommends for the ADI-2 DAC FS, as I wrote earlier.
 

phofman

Senior Member
Joined
Apr 13, 2021
Messages
489
Likes
319
For Audacity (ASIO not enabled) I can set up the UCX II as playback device and the ADI-2 DAC as recording device. […]

Well, my Audacity does not support concurrent capture and playback from two different ASIO devices:

[attached screenshot]


It's just what the PortAudio source code says:



PortAudio (the audio API library used by many projects, such as Audacity) is a standard ASIO client, using ASIO SDK (if compiled with it). It cannot run duplex with two different ASIO devices.
 

phofman

Senior Member
Joined
Apr 13, 2021
Messages
489
Likes
319
Of course, concurrent capture and playback requires monitoring to be enabled in Audacity. Without this option, only one device is used at a time: either capture or playback.

[attached screenshot]
 

Trell

Major Contributor
Joined
May 13, 2021
Messages
2,752
Likes
3,285
Well, my Audacity does not support concurrent capture and playback from two different ASIO devices: […] PortAudio (the audio API library used by many projects, such as Audacity) is a standard ASIO client, using ASIO SDK (if compiled with it). It cannot run duplex with two different ASIO devices.

Here you go, with the setup described earlier using two different drivers; as I wrote, I don't have ASIO enabled in Audacity since I use the official release. It still works, though, but not all manufacturers' drivers are created equal.

Again, there seems to be a limitation of the ASIO SDK used, assuming you've built Audacity yourself with ASIO support. Is there anything in the ASIO Interface Specification that precludes "duplex with two different ASIO devices"?

You wrote "two different ASIO devices", but RME can support multiple ASIO devices using the same driver, so I guess that would work with your example. Note that not all RME devices use the same driver, of course.

[attached screenshot]
 

phofman

Senior Member
Joined
Apr 13, 2021
Messages
489
Likes
319
Here you go, with the setup described earlier; as I wrote, I don't have ASIO enabled in Audacity as I use the official release. It still works, though, but not all manufacturers' drivers are created equal.
?? You are using WASAPI instead of ASIO in Audacity. I do not understand your argument in favour of ASIO behaviour in Audacity when you are using Audacity without ASIO support. Of course WASAPI, unlike ASIO, supports duplex to different drivers in Audacity - see https://audiosciencereview.com/foru...-need-dedicated-dac-driver.42810/post-1533247
Again there is a limitation of the ASIO SDK used, assuming you've built Audacity yourself with ASIO support. Is there anything in the ASIO Interface Specifications that precludes "duplex with two different ASIO devices."?
I do not know of any other officially supported and recommended way to access the ASIO API on the client side than using the official ASIO SDK. And that SDK precludes one process talking to two different ASIO drivers, for technical reasons explained in my post above. There are complicated workarounds: hacking the SDK library, using multiple processes, or maybe other methods.

You can talk to two audio devices using one ASIO driver (e.g. ASIO4All or FlexASIO), but because it's just one driver with one bufferSwitch callback, any two independently clocked devices will eventually experience glitches.
 

Trell

Major Contributor
Joined
May 13, 2021
Messages
2,752
Likes
3,285
You are using WASAPI instead of ASIO in Audacity. I do not understand your argument in favour of ASIO behaviour in Audacity when you are using Audacity without ASIO support. Of course WASAPI, unlike ASIO, supports duplex to different drivers in Audacity
You're the one that asked about Audacity, though, and I'm not going to build it with ASIO support to test, as I don't really use that software.

In any case, as I wrote earlier, WASAPI/UAC2 works just fine for many/most. For many audio interfaces, though, ASIO is needed to take advantage of all their features, or at least the driver must be installed.
 