
Optical SPDIF to AES XLR conversion - Clock transparency

Jitter is a non-issue as long as your PMC 6 can lock on to the signal.
Get the Hosa and forget it.


SIGNAL FLOW
Audio enters the monitor via an XLR connector which can accept analogue or digital AES3 signals. Analogue signals pass through a low-noise, low-distortion balanced input with variable gain staging, followed by a high-performance 24-bit 96 kHz ADC. The digital input accepts 16- or 24-bit AES3 signals up to 192 kHz. The crossover, user-control functions and non-invasive driver protection systems are implemented in a powerful 32-bit DSP engine before being distributed to independent 24-bit 96 kHz D-A converters. The signal path is completed by dedicated state-of-the-art Class-D power amplifiers directly coupled to each drive unit, reducing frequency-response irregularities and inter-modulation distortion.
 
Adaptive
In this mode the timing is generated by a separate clock.
A control circuit (a "sample rate guesser") measures the average rate of the data coming over the bus and adjusts the clock to match it.
Since the clock is not derived directly from a bus signal, it is far less sensitive to bus jitter than synchronous mode, but what is going on on the bus can still affect it:
it is still generated by a PLL that takes its control input from the circuits that see the jitter on the bus.
Just a note on the adaptive mode. IIUC, the theory is that synchronous mode derives its clock from the USB transport, the adaptive one from the host side, and the async one from the device side. Therefore adaptive mode would be ideal for passing master-clocked streams such as incoming SPDIF, RTP (including AES67/Dante), internet streaming, etc. The USB host would simply pass the data to the device at the incoming rate, as the USB device already slaves its clock to the USB stream's data rate.
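As an aside, the "average the incoming rate, steer a local clock" idea behind adaptive mode can be sketched in a few lines. This is a toy model, not any real driver or PLL design; the function name, the loop gain, and the simple first-order control are all my own illustrative choices:

```python
# Toy model (illustrative only) of adaptive-mode clock recovery:
# the device counts audio frames in each 1 ms USB frame, converts
# that to a rate, and slowly pulls a local clock estimate toward
# the measured average - a crude stand-in for a slow PLL.

def recover_rate(frames_per_usb_frame, nominal_hz=48000, gain=0.01):
    """Return the clock estimate (Hz) after each 1 ms USB frame."""
    clock_hz = float(nominal_hz)
    history = []
    for n in frames_per_usb_frame:
        measured_hz = n * 1000                       # n frames per 1 ms -> Hz
        clock_hz += gain * (measured_hz - clock_hz)  # slow pull toward average
        history.append(clock_hz)
    return history

# A host sending exactly 48 frames/ms keeps the estimate at 48 kHz;
# a slightly fast source (an extra frame every 4th packet) pulls it up.
steady = recover_rate([48] * 1000)
fast = recover_rate([48, 48, 48, 49] * 250)
```

The point of the slow gain is exactly the jitter insensitivity mentioned above: short-term variation in packet sizes barely moves the estimate, while a sustained rate offset does.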

However, the practical reality in Windows/OSX/Linux is different, because audio devices are clock masters through their audio APIs. There is no implemented method (that I know of) for audio clients to adjust the clock of the USB host driver for an adaptive device. The drivers are able to variably pace the data stream, because in async mode they have to respond to the rate control coming from the device over USB messages. But for adaptive mode the drivers simply split the incoming data stream into USB frames as evenly as possible (e.g. 48 audio frames every 1 ms USB frame), making the USB controller the clock master, just like in synchronous mode.
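That "as evenly as possible" splitting can be modeled with a simple accumulated-remainder scheme (the helper name and exact algorithm are my own illustration; real host controllers may pace differently):

```python
# Illustrative model of fixed host-side pacing: distribute
# sample_rate audio frames across 1000 one-millisecond USB frames
# as evenly as possible, with no feedback from the device.

def packet_sizes(sample_rate, n_packets=1000):
    """Audio frames per 1 ms USB frame, accumulated-remainder pacing."""
    sizes, acc = [], 0
    for _ in range(n_packets):
        acc += sample_rate
        n, acc = divmod(acc, 1000)
        sizes.append(n)
    return sizes

# 48 kHz: a constant 48 frames per packet.
# 44.1 kHz: a repeating pattern of nine 44-frame packets and one
# 45-frame packet, exactly the pattern discussed in this thread.
```

Note there is no input besides the nominal rate: the pacing is decided entirely by the host's own clock, which is why this behaves like synchronous mode even for an adaptive device.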

If someone has more info on this, I would be happy to learn. There is no such control in Linux ALSA snd-usb-audio, and I have not seen any such control in Windows WASAPI for the USB driver either. As for rate-adjustment control, I know only of the Linux USB gadget driver (the async-mode device side), Linux ALSA loopback, and the OSX Blackhole loopback - all of them, btw, nicely supported by CamillaDSP.
 
I have not seen any such control in windows wasapi
It has nothing to do with the audio driver.
It is part of UAC1 & 2
What I understand: in isochronous mode, a quasi-real-time stream is generated. The host sends just enough data to the DAC to maintain the sample rate. The bus speed is of course fixed: Full Speed in the case of UAC1 and High Speed in the case of UAC2.

In adaptive mode the buffer management is done by the USB receiver. It lowers or raises the speed of the clock driving the D/A conversion to avoid buffer under/over-runs. Indeed, it adapts to the incoming stream.

In asynchronous mode, the USB receiver is again the one doing the buffer management. This time it sends control commands to the host: send a bit less or a bit more data. Now the DAC can use a free-running clock.
Adaptive mode adapts the speed of the conversion to the data rate.
Asynchronous mode adapts the data rate to the clock speed.
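The async feedback loop described above can be sketched like this. It is a deliberately crude model: a bang-bang ±1-frame policy stands in for the real UAC feedback value format, and all names and numbers are illustrative:

```python
# Crude model (not the real UAC feedback protocol) of async mode:
# the device watches its buffer fill and asks the host for one audio
# frame more or fewer per packet; the DAC consumes at its own
# free-running rate, here slightly faster than nominal.

def run_async_feedback(n_packets=2000, nominal=48, dac_rate=48.002,
                       buf_target=256):
    """Return the buffer fill (frames) after n_packets of 1 ms each."""
    buf = float(buf_target)
    for _ in range(n_packets):
        # Device feedback: request more data if the buffer is low,
        # less if it is high (bang-bang stand-in for the rate value).
        request = nominal + (1 if buf < buf_target else
                             -1 if buf > buf_target else 0)
        buf += request      # host delivers `request` frames
        buf -= dac_rate     # DAC consumes at its free-running rate
    return buf

# The buffer hovers near the target even though the DAC clock is
# offset from nominal - the data rate follows the DAC clock.
```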

Practice is probably more complex, as modern DACs use ASRC, allowing them to use a free-running clock regardless of the protocol used.
 
It has nothing to do with the audio driver.
You are talking from the POV of the device, I am talking about the host side.
Adaptive mode adapts the speed of the conversion to the data rate.
Actually, in both sync and adaptive modes, the bit clock and master clock for the DAC must be generated in the device somehow. Data are sent in short chunks every USB frame; such a chunky signal cannot be used as a clock without a PLL in either case. Adaptive mode: PLL based on the incoming bits; sync mode: PLL based on the incoming USB frames (e.g. 48 times the SOF rate to get the 48 kHz audio frame clock).

That's what happens on the device side. But the host side must somehow generate the bitstream rate. While the synchronous mode says "send 48 audio frames every 1 ms USB frame for 48 kHz", the adaptive mode definition has no such prescription, since the device is able to recover any bitstream rate (within some specced range around the base data rate, of course). Yet in practice the drivers, which are always clock masters in the PC audio APIs, do not allow fine-tuning the rate and work just like in the synchronous mode: 48 frames every USB frame, or nine 44-frame packets followed by one 45-frame packet for 44.1 kHz. While the async-mode frame rate (i.e. the number of audio frames in each USB frame) is controlled by the device, the adaptive-mode rate is not controlled by the host software as needed (e.g. to fit an incoming clock-master RTP stream), but is fixed-clocked by the USB controller, just like in the synchronous mode. Even though the adaptive mode was designed and intended to allow slaving the whole chain to the incoming master-clock data stream, the existing host-side implementations create a new clock domain. Software must then resample between the incoming clock domain and the USB host driver's clock domain; no adaptation of the rate on the USB line happens.
 
If I may add [...] The incoming jitter cannot be eliminated completely, although to a large extent, depending on the async resampling method.
Well, if I also may add: technically, it would absolutely be possible to eliminate it completely, even with "no-feedback-to-the-sender" synchronous methods such as S/PDIF, by using large enough* buffers.

* In order to compensate for the drift of the data rate due to the jitter. In the very long run one would always underrun or overflow the buffer, but for usual playback times it shouldn't be a big issue buffer-size-wise, provided the (hardcore audiophile) user accepts the delay between start and actual playback.
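For a sense of scale, the required buffer headroom is easy to estimate if one assumes the dominant long-term effect is a fixed frequency offset between source and sink clocks (the numbers below are my own illustrative choices):

```python
# Back-of-the-envelope estimate: with a worst-case mismatch of `ppm`
# parts per million between source and DAC clocks, the buffers drift
# apart by sample_rate * ppm * 1e-6 audio frames per second.

def buffer_frames_needed(sample_rate, ppm, seconds):
    """Frames the buffer must absorb over `seconds` of playback."""
    return sample_rate * ppm * 1e-6 * seconds

# Example: a generous 100 ppm mismatch at 48 kHz over a two-hour
# playback drifts by 34560 frames, i.e. about 0.72 s of buffer.
drift = buffer_frames_needed(48000, 100, 2 * 3600)
```

So a buffer of a second or so of audio covers even hours of playback at realistic clock tolerances, which is why the added start-up delay is the main practical cost.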
 
I'm not a fan of trying to hear differences on a YouTube video [...]
Any medium acts like a "filter" due to its imperfections, but as long as its quality or resolution is higher than that of what is to be differentiated, it can be a legitimate attempt.

As for the serious and trustworthy source Julian Krause: the demonstrated effects are severe enough. The opposite example is that GoldenSound guy, where one is supposed to hear differences between high-end DACs through a "lot worse anyway" YouTube video, which, besides the virtual impossibility of distinguishing between modern DACs in the first place, may be considered the climax of ridiculousness and absurdity.
 
One additional question regarding volume control: I see that the SPDIF on my sound interface has a volume control. My question is, when the signal is converted to AES, will it preserve the volume levels, or will it send at a constant volume? I've read on a few forums that AES doesn't control volume, and I'm curious if this will be an issue.

Thanks!
 
One additional question regarding volume control: I see that the SPDIF on my sound interface has a volume control. My question is, when the signal is converted to AES, will it preserve the volume levels, or will it send at a constant volume? I've read on a few forums that AES doesn't control volume, and I'm curious if this will be an issue.
The volume control acts on the digital samples; the conversion from SPDIF to AES changes nothing.
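To illustrate the point: a digital volume control is just a multiply on the PCM sample words, and the SPDIF-to-AES conversion carries those words through untouched. A minimal sketch (the function name and the 24-bit clamping are my own example):

```python
# Digital volume control as applied to PCM samples before the
# digital output: scale by a dB gain, round, clamp to 24-bit range.
# The same sample words then travel over SPDIF or AES unchanged.

def apply_volume_db(samples, gain_db):
    """Scale 24-bit PCM samples by gain_db, clamped to the 24-bit range."""
    gain = 10 ** (gain_db / 20)
    full = 2**23 - 1
    return [max(-full - 1, min(full, round(s * gain))) for s in samples]

attenuated = apply_volume_db([1000000, -2000000], -6.0)
# -6 dB -> roughly half amplitude
```

So if the interface's volume control is applied in the digital domain before the output, the attenuated levels are preserved on the AES side; what AES3 itself lacks is merely a volume command in the link protocol, not the ability to carry attenuated samples.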
 