
RME Digiface USB as a digital mixer & router for my home office stereo setup? (How close can I get?)

rcstevensonaz

I recently had an "Aha Eureka, Wow!!!" revelation... only to be followed a few days later by a "Wait, Oh Crud!!!" realization. Reaching out to this list for any insights on whether it is possible to shift from Oh Crud back to Aha Eureka.

Some context about my home office stereo environment:
  • I have several input sources (Home PC, Work laptop, WiiM Pro, SBv3, turntable, CD/DVD, Alexa Input, etc.) that I want to be always active (barge through), hence I use a stereo mixer for my inputs rather than the traditional input selector as found on most preamps.
  • I also have four distinctly different listening points — and want each of them to always be active so that I can easily shift between them without fiddling:
    1. Active speakers on my desktop while sitting in my desk chair
    2. Headphones while sitting in my desk chair
    3. Active speakers for listening while sitting in the comfy chair in the corner
    4. Headphones while sitting in the comfy chair in the corner
  • Many of the inputs are actually digital sources, but I am currently putting a DAC on each of them so the audio can be routed into the stereo mixer.
The challenge:
I'm buying a DAC for every digital audio source so that it can be routed into the stereo mixer, even though it is then converted back to digital for RoomEQ processing.

The "Aha Eureka, Wow" revelation:
I could instead just route my digital sources (plus the ADC output of the analog stereo mixer) into an RME Digiface USB.

The audio chain flow would then be:
  • Send the digital sources and the AD'ed analog into the Digiface USB, which then makes any input-side leveling adjustments and creates submixes
  • Route the submix(es) out to a PC to perform the RoomEQ & Bass Management DSP calculations
  • The PC then routes back four separate submixed streams, each uniquely EQed for each of the four different listening positions
  • The Digiface USB then sends each of those four streams out to the respective optical output channel
The attached diagram shows this audio Nirvana:
Digiface USB Mix & Route - Daft 01.png
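To make the PC-side DSP step concrete, here is a rough sketch of the kind of per-position processing I have in mind, assuming the submix arrives from the interface as NumPy blocks. The filter values, the position names, and the process_block() helper are all made up for illustration (real corrections would come from measurements at each spot), and bass management is left out to keep it short:

```python
import numpy as np
from scipy import signal

FS = 96_000  # common sample rate agreed with the interface

def peaking(f0, gain_db, q, fs=FS):
    """RBJ-cookbook peaking EQ, returned as a second-order section."""
    a = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = [1 + alpha * a, -2 * np.cos(w0), 1 - alpha * a]
    den = [1 + alpha / a, -2 * np.cos(w0), 1 - alpha / a]
    return signal.tf2sos(b, den)

# Made-up corrections, one chain per listening position.
corrections = {
    "desk_speakers":    np.vstack([peaking(55, -5, 4), peaking(180, 3, 2)]),
    "desk_headphones":  peaking(3200, -2, 1.5),
    "chair_speakers":   np.vstack([peaking(42, -6, 5), peaking(120, 2, 2)]),
    "chair_headphones": peaking(6000, -1.5, 1),
}

def process_block(submix):
    """One stereo submix block in (shape: samples x 2), four EQed copies out.
    (A real streaming version would also carry sosfilt state between blocks.)"""
    return {name: signal.sosfilt(sos, submix, axis=0)
            for name, sos in corrections.items()}
```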
But the "Wait, Oh Crud" realization:
  • Some of my input sources run at variable rates (i.e., they can change from 48 kHz to 96 kHz or 192 kHz depending on what is being played). However, the Digiface USB does not have input rate detection that automatically follows a change in the incoming stream.
  • Some of my inputs arrive at different sample rates (e.g., 44.1, 96, 192 kHz), but all audio going to the USB Record (Send) has to be at the same rate, and the Digiface USB does not have Sample Rate Conversion (SRC).
  • Similar issue on the output side: some devices support different rates, but the USB Play (Receive) can only support a single sample rate, and it has to be the exact same sample rate as on the Send side. This isn't so bad on the output side, since I would only need to default down to 96 kHz ... except that I'm facing a challenge on the input side, where the lowest common sample rate is 44.1 kHz.
  • [Edit] As noted by @voodooless, there is also the issue that the S/PDIF clocks of the upstream devices are all different.
Is my understanding of the fundamental flaw in my plan correct? That is, the Digiface USB does not support variable rates and/or mixing together different sample rates on the input side? And, are there any suggested workarounds that are easy to implement?
 
The RME Digiface USB does support sample rate change on-the-fly but only when used in ASIO mode (not with WASAPI)
I happen to have one for sale :)
Message me if you are interested
 
Supporting sample rate changes does not mean it handles mixed sample rates. And even if the sample rates are the same, the clock domains are not. It cannot mix them properly.

You’ll need something like ADI-642 for that.
 
Supporting sample rate changes does not mean it handles mixed sample rates. And even if the sample rates are the same, the clock domains are not. It cannot mix them properly.
I forgot about the clocking issue as well.

With something like the Digiface USB, is there any way to handle input from different sources that do not share the same clock master? E.g., to tell those upstream consumer devices to pull their source clock from the Digiface USB (which in turn is slaved to something else in the chain that is the master, e.g., an RME ADI-2 Pro)?

Or is that simply a non-starter right from the get-go?
You’ll need something like ADI-642 for that.
A bit overkill I fear — I can just buy a lot more DACs at that rate. :cool:

Besides, my devices are all S/PDIF (typically supporting either Optical or Coax) and it is not clear the ADI-642 has support for S/PDIF.
 
The RME Digiface USB does support sample rate change on-the-fly but only when used in ASIO mode (not with WASAPI)
Just to be clear, I'm talking about the upstream devices on the Optical Input. For example, the WiiM Pro shifting from 44.1 to 192 as I change the inbound streaming service between LMS, Spotify, Qobuz, etc. I think that is a completely separate issue from being able to change the rate over the USB connection between the Digiface and the PC. Or is that what you mean as well?
 
Just to be clear, I'm talking about the upstream devices on the Optical Input. For example, the WiiM Pro shifting from 44.1 to 192 as I change the inbound streaming service between LMS, Spotify, Qobuz, etc. I think that is a completely separate issue from being able to change the rate over the USB connection between the Digiface and the PC. Or is that what you mean as well?

No, I only meant playback - sample rate change works automatically in that scenario
But if you have multiple devices with different sample rates on your inputs to the Digiface then that won't work..... :(
 
Besides, my devices are all S/PDIF (typically supporting either Optical or Coax) and it is not clear the ADI-642 has support for S/PDIF.
The AES inputs are SPDIF compatible. Sadly no optical inputs other than MADI.

I realize it's overkill, but there aren't very many options I'm afraid.
 
No, I only meant playback - sample rate change works automatically in that scenario
But if you have different sample rates on your inputs to the Digiface then that won't work..... :(

EDIT: if you have one device as the input and the sample rate changes, then that shall work fine, too
You set the clock source to the Input and the sample rate will auto-change if you are in ASIO mode and if your host DAW supports it (usually they should)
 
EDIT: if you have one device as the input and the sample rate changes, then that shall work fine, too
You set the clock source to the Input and the sample rate will auto-change if you are in ASIO mode and if your host DAW supports it (usually they should)
That is a helpful clarification. And it may be something that I do eventually fall back to. Though not as robust as my primary hope of using the Digiface to mix multiple digital consumer devices *in a home stereo environment*. Emphasis added because I am aware my use case is not what the Digiface USB was designed to handle. Just trying to find out what creative options (if any) exist to get me as close as possible.
 
IMO the whole setup could be handled by linux pipewire. Multiple SPDIF receivers over USB for capture, multiple simultaneous outputs, switchable/mixable/routable as needed. All async I/O devices are adaptively resampled to one common rate.

Pipewire can run DSP filters, e.g. https://github.com/Audio4Linux/JDSP4Linux looks quite interesting (not tested though), or CDSP in the chain. A great presentation of such a solution, with performance measurements, is https://embedded-recipes.org/2023/w...o-systems-Philip-Dylan-Gleonec-compressed.pdf (video)

Pipewire supports the USB gadget async feedback natively (e.g. like CDSP) https://gitlab.freedesktop.org/pipewire/pipewire/-/blob/0.3.84/spa/plugins/alsa/alsa-pcm.c#L584

If HDMI input audio or eARC were needed, the whole thing could run on one of the many RK3588 devices available, e.g. http://radxa.com/products/rock5/5itx/ which has all the required HW.

Of course it would take some linux configuration and integration work, probably some kernel configuration too for the RK3588, no ready-made tutorials for such combo available.
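For the wiring side, something along these lines (the node and port names below are just placeholders; the real ones are whatever pw-link reports on the actual box):

```sh
# see what PipeWire exposes
pw-link --output          # all capture / source ports
pw-link --input           # all playback / sink ports

# patch one USB S/PDIF receiver into a CamillaDSP (or JDSP4Linux) filter node
pw-link "alsa_input.usb-spdif_rx_1.iec958-stereo:capture_FL" "cdsp:input_FL"
pw-link "alsa_input.usb-spdif_rx_1.iec958-stereo:capture_FR" "cdsp:input_FR"

# and the filter node out to one of the playback devices
pw-link "cdsp:output_FL" "alsa_output.usb-desk_dac.analog-stereo:playback_FL"
pw-link "cdsp:output_FR" "alsa_output.usb-desk_dac.analog-stereo:playback_FR"
```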
 
MOTU has the 8D and the LP32
Thanks for that suggestion! The MOTU 8D is a very viable option, and I think it would solve this use case since it provides sample rate conversion on all four of the input channels (the LP32 does not), DSP capabilities, and can even run in standalone mode.

MOTU 8D User Guide (pg 23): Digital I/O with SRC: The 8D provides two pairs each of AES3 and S/PDIF digital input and output in all standard sample rates from 44.1 to 96 kHz. All inputs are equipped with sample rate conversion for troublefree digital transfers, even when the 8D and other connected devices are not resolved with one another.

On the flip side: it has now been 7 years since the MOTU 8D was released (similar to the Digiface USB), it only supports up to a 96 kHz sample rate, and it sells for $595. It would also pull me away from using TotalMix and the ARC USB, though I'm not sure how critical that is since MOTU provides equivalent software.
 
I have a question about Sample Rate Conversion (e.g., on a MOTU 8D, RME ADI-2 Pro FS R, XMOS S/PDIF card, etc.) when the input source is an S/PDIF device that is not synchronized. I assume the SRC process would account for normalizing both sample rate and clock alignment:
  • Normalize sample rate — e.g., 48 kHz -> 96 kHz
  • Clock alignment — e.g., S/PDIF device's clock cycles are running 7 ms ahead of internal clock
My question is whether the SRC's clock alignment mechanism will time-align the audio streams using interpolation, or simply synchronize the audio by adding a 7 ms delay?
  • Interpolation: In a recording or mixing environment where various audio tracks (e.g., vocals, instruments, drum kits, sound effects) need to be tightly time-aligned for coherence, I could imagine the SRC process would use resampling to interpolate from the packets arriving from the S/PDIF device, and then pass along a sample representing what that waveform will be 7 ms after the last packet.
  • Delay: In an environment where autonomous audio streams from different devices are simply being patched in (e.g., playing music from a CD between sets) and it doesn't matter that the music is delayed by 7 ms, the SRC process could align the clocks simply by holding off 7 ms before adding the S/PDIF device's audio into the master mix.
So in the case of the Delay approach (assuming the input was already at 96 kHz with no sample rate change), the output stream would be bit perfect with a 7 ms initial gap. Whereas with the Interpolation method, the output stream would be time-aligned with all other audio sources but not bit perfect. But I couldn't find any details on how time alignment is achieved in the different manuals that I reviewed.
 
My question is whether the SRC's clock alignment mechanism will time-align the audio streams using interpolation, or simply synchronize the audio by adding a 7 ms delay?
7 ms delay would be no issue, nor would the system know. There is no absolute time reference.

There are only the two clock domains, and the two clocks aren't in sync; they are simply not running at the same speed. So the ASRC tries to compute the sample values of the source signal in the target clock domain. In order to do this, a PLL is used to stabilize the incoming clock and figure out the time difference. This is what the algorithm will use. How exactly this is implemented can differ. There are multiple ways to get it done, but usually it involves oversampling to a high rate, doing some more filter magic, and then downsampling to the new rate.

The process will never be bit perfect, not even when the sample rates are the same. The clocks just aren’t equal, so there is always something to compensate for.
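A toy numerical illustration of that last point, with a made-up 50 ppm clock offset and plain linear interpolation standing in for the real oversampling/filtering stage:

```python
import numpy as np

fs_nominal = 96_000            # both sides nominally run at 96 kHz
clock_ratio = 1.0 + 50e-6      # source clock is 50 ppm fast (made-up number)

# One block of source audio, timestamped against the source clock
n_src = 4096
t_src = np.arange(n_src) / (fs_nominal * clock_ratio)
src = np.sin(2 * np.pi * 1000 * t_src)     # 1 kHz test tone

# Sample instants of the destination clock; a real ASRC estimates clock_ratio
# continuously with a PLL, here it is simply known.
n_dst = int(n_src / clock_ratio)
t_dst = np.arange(n_dst) / fs_nominal

# Real converters oversample and filter; linear interpolation is the crudest stand-in.
dst = np.interp(t_dst, t_src, src)

# Even with identical nominal rates, the recomputed samples differ from the
# originals, so the result is never bit perfect.
print(np.max(np.abs(dst - src[:n_dst])))   # > 0
```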
 
7 ms delay would be no issue, nor would the system know. There is no absolute time reference.

There are only the two clock domains, and the two clocks aren't in sync; they are simply not running at the same speed. So the ASRC tries to compute the sample values of the source signal in the target clock domain. In order to do this, a PLL is used to stabilize the incoming clock and figure out the time difference. This is what the algorithm will use. How exactly this is implemented can differ. There are multiple ways to get it done, but usually it involves oversampling to a high rate, doing some more filter magic, and then downsampling to the new rate.

The process will never be bit perfect, not even when the sample rates are the same. The clocks just aren’t equal, so there is always something to compensate for.
Thanks @voodooless. I still have questions, but this is getting off-topic from the original question... so I'll move that discussion to a separate thread later.
 
Can something like the following diagram work for setting up proper Master / Slave clocking between a Digiface USB and the various input and output S/PDIF devices?

Where all devices are locked at 96 kHz as a common sample rate:
  • an RME ADI-2 Pro FS R provides the master clock (connected on an output S/PDIF port)
  • the Digiface USB will become a slave, using the ADI-2 Pro for the clock source
  • any devices that attach to an input S/PDIF port would be configured to become a slave, using the Digiface USB as its clock source
  • similarly (but not shown) any other output S/PDIF devices would also use the Digiface USB for their clock as well
Digiface USB Mix & Route - Daft 02.png

And if this doesn't work, is there some other approach to master/slave clocking that would allow it to work for the set of devices shown in the diagram?
 
any devices that attach to an input S/PDIF port would be configured to become a slave, using the Digiface USB as its clock source
That is not how it works. To be a clock slave, you’ll need to get the clock from the master. Your digital devices have no way to do that.
 
That is not how it works. To be a clock slave, you’ll need to get the clock from the master. Your digital devices have no way to do that.
That was what I suspected...

So regarding the above diagram, would it work if the Digiface USB is set as the clock master and the ADI-2 Pro is set as slave? In that case, everything could reference the Digiface USB as clock master.

And if that does work, will it then break if the ADI-2 Pro is also connected to a USB host in multi-channel mode? That is, will the ADI-2 Pro change to using USB host as master clock (and not the Digiface USB anymore) if USB is connected?
 
So regarding the above diagram, would it work if the Digiface USB is set as the clock master and the ADI-2 Pro is set as slave? In that case, everything could reference the Digiface USB as clock master.
If you want all incoming SPDIF streams to be synchronous, all the SPDIF transmitters generating the stream must be clocked from common master clock. Which is not possible for most SPDIF sources.
 
If you want all incoming SPDIF streams to be synchronous, all the SPDIF transmitters generating the stream must be clocked from common master clock. Which is not possible for most SPDIF sources.
Yeah, I got the message loud and clear several comments back that the S/PDIF devices would need to be able to set themselves as slave to the Digiface. And I am aware that very few consumer products have that capability. That is why I dropped most of my devices from the diagram, and presume the ones remaining can be set as slaves to the Digiface.

And then even further, that it could only work if the Digiface USB itself is acting as the master clock.

My question now is whether the ADI-2 Pro can be a slave to a Digiface that is acting as the master clock? And if it can, does the answer change if the ADI-2 Pro is also connected by USB to a host PC or Mac?
 