ASR is not (yet?) in the business of testing audio systems. That is to say, holistically.
The measurements presented here are of a device-under-test, akin to a unit or module test in software.
IMO, it is neither fair nor useful to expect otherwise. However, I'm sure contributions to that effect would be welcomed.
@DDF, the problems you've discussed sound like OS scheduler problems.
It also sounds like you were/are saddled with equipment that is known to be problematic. Maybe only discovered after the fact, but still...
At some point, one might ask "what is my time worth?", and simply research and purchase more appropriate gear.
General purpose OS kernels are wondrously complex beasts!
All the more so considering they're expected to support the zoo of hardware they're interfaced with.
Audio is a soft real-time application: best-effort only, as there are no severe consequences to failure.
AFAIK, glitches have never caused a fatality.
To compound this, isochronous USB transfers (audio) are not required to support retries upon error detection (CRC checksum failure).
Error correction is not part of the protocol, although it could conceivably be tacked on (e.g., Hamming ECC or similar) at the expense of extra bandwidth consumption and endpoint complexity.
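For illustration: USB data packets carry a 16-bit CRC (polynomial x^16 + x^15 + x^2 + 1), so a receiver can detect corruption, but an isochronous endpoint can never request a retry. A minimal sketch of that detect-only check (function names are mine, not from any real USB stack):

```c
#include <stdint.h>
#include <stddef.h>

/* USB data-packet CRC16, polynomial x^16 + x^15 + x^2 + 1 (0x8005),
 * bit-reflected implementation (0xA001), seed all-ones, per the USB 2.0
 * spec; the spec transmits the one's complement of the remainder. */
static uint16_t usb_crc16(const uint8_t *data, size_t len)
{
    uint16_t crc = 0xFFFF;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 1) ? (crc >> 1) ^ 0xA001 : crc >> 1;
    }
    return (uint16_t)~crc;
}

/* An isochronous endpoint can only detect the error and then discard or
 * conceal the frame -- the protocol offers no retry for isochronous data.
 * Assumes received_crc was extracted with matching bit order. */
int iso_frame_ok(const uint8_t *payload, size_t len, uint16_t received_crc)
{
    return usb_crc16(payload, len) == received_crc;
}
```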
Glitches may not be fatal, but this PC had me on the ledge a few times.
These issues unfortunately aren't so uncommon (see the Archimago post, or even one from just today: https://www.audiosciencereview.com/...io-grade-usb-2-0-port.7399/page-2#post-172712). Injecting noise on the USB bus (conducted emissions) would be more a test of the DUT's ability to reject outside impairments than a real system test. I look at it as the same philosophy as the jitter-rejection tests already conducted. There's a precedent and a need, but I'm definitely being greedy.
I was a very early audio designer for VoIP and had to work out the jitter buffers and packet-loss handling from first principles: managing variable latency, priority flags, unscheduled interrupts, etc. I'd love to understand more about how USB audio is handled: packet sizes, is the ASIO buffer the only buffer, etc.? I've read anecdotally that any lost USB audio data would be large enough to cause a click, but that depends on the smallest packet size supported, which I've yet to find, and on how the USB interface itself deals with lost packets (i.e., does XMOS interpolate, or zero-stuff, or...?).
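On the interpolate-vs-zero-stuff question: what a given XMOS firmware actually does isn't established in this thread, but the two strategies are easy to sketch. A hypothetical illustration (the functions and their assumptions are mine, not any vendor's code):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Two common ways a receiver might conceal one lost isochronous frame
 * of 16-bit mono samples. */

/* Option 1: zero-stuffing -- replace the missing samples with silence.
 * Cheap, but the step to/from zero is itself an audible click. */
void conceal_zero_stuff(int16_t *frame, size_t n)
{
    memset(frame, 0, n * sizeof(int16_t));
}

/* Option 2: linear interpolation -- ramp from the last good sample
 * before the gap to the first good sample after it. Far less audible
 * for a gap of only a few samples (~0.1 ms at 44.1 kHz). */
void conceal_interpolate(int16_t *frame, size_t n,
                         int16_t last_good, int16_t next_good)
{
    for (size_t i = 0; i < n; i++) {
        frame[i] = (int16_t)(last_good +
            ((int32_t)(next_good - last_good) * (int32_t)(i + 1)) /
            (int32_t)(n + 1));
    }
}
```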
Under my watch, we ditched the Windows XP audio pipeline completely for Vista and later. We had very strict performance criteria to not slow down any system with the new pipeline, so the algorithms are not state of the art. They are a huge step up from XP, though.
In the context of the main discussion -- claimed sound differences from different audio path implementations and player software -- all the above detail on USB etc. is pretty much irrelevant.
We have three cases here to consider:
1) The transmission itself is broken (dropped/corrupted packets, buffer underruns, etc.); this is trivial, and it is really seldom an issue in the real world, IME. At least not with simple 2-channel playback, even with high-res formats.
2) The sample stream reaching the actual DAC chip is *not* correct, i.e. not bit-identical to the source data, but we assume a correct transmission. This also is trivial, and it is not far-fetched that this kind of error can be audible (especially resampling). It might be difficult to assert bit-perfect transmission, though. Again, RME devices have a nice built-in check for bit-perfect data with known bit patterns, provided in special bit-test .WAVs by RME.
3) The sample stream reaching the actual DAC chip is 100% bit-perfect and free of any transmission errors. I'm tempted to think this should be the normal case for most users when a bit of care is applied. Still, it seems that people report sound differences changing from ASIO to WASAPI, from foobar2000 to JRiver, from FLAC to WAV, sourcing data from HDD vs. SSD, what have you... yet all of this always yields the same correct input data for the DAC.
So the only way for the analog output to change is a, let's face it, mediocre implementation of the DAC, where slightly(!) different patterns of jitter, analog signal quality (of the actual low-level digital link), EMI pollution, and USB supply noise (in the case of USB) disturb the DAC so much that it puts out significantly different analog waveforms. On top of that, we might have slightly different balancing currents if there is an electrical connection between source and DAC grounds, which can disturb an incompetent DAC as well (but mostly affects unbalanced cabling, as I've shown in an article here on ASR some time ago).
While it is not completely impossible that a) a DAC is that bad at noise rejection and b) a source PC is also very fragile and reacts to minor setup changes (like swapping player software) with actual -- measurable -- significant noise differences on the digital audio interface mechanism used, I would think this really is very far-fetched, and without proper measurements and blind listening tests the credibility of the majority of these reports is zero.
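As an aside on asserting bit-perfect delivery (the sticking point between cases 2 and 3): the principle behind a bit-test like RME's can be sketched simply: play a known deterministic pattern and compare what arrives at the device. A hypothetical illustration, not RME's actual format (the generator and names are my choices):

```c
#include <stdint.h>
#include <stddef.h>

/* Reference generator: any deterministic PRNG works as long as the
 * player and the checker agree on it; xorshift32 chosen arbitrarily. */
static uint32_t next_ref(uint32_t *state)
{
    uint32_t x = *state;
    x ^= x << 13;
    x ^= x >> 17;
    x ^= x << 5;
    return *state = x;
}

/* Compare captured 24-bit samples against the expected pattern.
 * Returns the index of the first mismatch, or -1 if bit-perfect.
 * Any resampling or volume stage in the path trips this immediately. */
long check_bit_perfect(const int32_t *captured, size_t n, uint32_t seed)
{
    uint32_t state = seed;
    for (size_t i = 0; i < n; i++) {
        int32_t expected = (int32_t)(next_ref(&state) & 0x00FFFFFF);
        if ((captured[i] & 0x00FFFFFF) != expected)
            return (long)i;
    }
    return -1;
}
```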
The real question (with a long intro): the author of "Virtual Audio Cable", Eugene Muzychenko, who has more knowledge about Windows audio in his left little finger than I have overall, was unsuccessful in providing a WASAPI implementation with full duplex in exclusive mode (in PortAudio). I tried to do something about it, but I was unable to find documentation about how to set up one single stream with input and output.
Is that you, Eugene? I have absolutely no problem with VAC, only with PA/WASAPI.

I'm not sure exactly what you're asking (are you having a problem with Virtual Audio Cable? With PortAudio? With WASAPI in general?), but, just FYI, I strongly suspect there is at least one bug in PortAudio that causes glitches (discontinuities) when using WASAPI Exclusive in full-duplex mode. I might come around to fixing it in PortAudio at some point, but I've been procrastinating on that because the PortAudio codebase is not exactly fun to work with.
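For reference, the general shape of a single full-duplex WASAPI Exclusive stream through PortAudio would look something like the sketch below. Device choice, format, buffer size, and error handling are placeholders of mine, and this does nothing about the suspected PortAudio bug mentioned above; it only shows where the exclusive-mode flag and the single input+output stream go:

```c
#include <string.h>
#include <portaudio.h>
#include <pa_win_wasapi.h>

/* Pass-through callback: copy input straight to output (2ch float). */
static int duplexCallback(const void *input, void *output,
                          unsigned long frameCount,
                          const PaStreamCallbackTimeInfo *timeInfo,
                          PaStreamCallbackFlags statusFlags,
                          void *userData)
{
    (void)timeInfo; (void)statusFlags; (void)userData;
    memcpy(output, input, frameCount * 2 * sizeof(float));
    return paContinue;
}

int main(void)
{
    Pa_Initialize();

    /* Request WASAPI Exclusive via host-API-specific stream info. */
    PaWasapiStreamInfo wasapiInfo = {0};
    wasapiInfo.size = sizeof(PaWasapiStreamInfo);
    wasapiInfo.hostApiType = paWASAPI;
    wasapiInfo.version = 1;
    wasapiInfo.flags = paWinWasapiExclusive;

    PaStreamParameters in = {0}, out = {0};
    in.device = Pa_GetDefaultInputDevice();   /* placeholder: pick a WASAPI device in practice */
    in.channelCount = 2;
    in.sampleFormat = paFloat32;
    in.suggestedLatency = 0.01;
    in.hostApiSpecificStreamInfo = &wasapiInfo;

    out = in;
    out.device = Pa_GetDefaultOutputDevice(); /* placeholder */

    PaStream *stream = NULL;
    /* One stream with both input and output parameters: full duplex. */
    Pa_OpenStream(&stream, &in, &out, 44100.0, 256,
                  paNoFlag, duplexCallback, NULL);
    Pa_StartStream(stream);
    Pa_Sleep(5000);                           /* run for 5 seconds */
    Pa_StopStream(stream);
    Pa_CloseStream(stream);
    Pa_Terminate();
    return 0;
}
```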
For now, the hunt continues regarding how small the bit error could be and how DACs handle it. Thanks for the USB PHY spec, but after a few minutes of review it doesn't seem to answer these questions. BTW, I used to be an Ethernet HW designer, and I don't think these questions would be answered at the physical layer. Until this is known, everything else is hearsay.
Not sure what else you are looking for. The protocol specifies a packet size of up to 1024 bytes, transmitted every 125 µs. The number of bytes in the frame depends on the number of channels, the sampling frequency, and the size of the samples. In addition, the receiver can request a change in the number of samples to help control buffer overflow/underflow. There's no error recovery in the protocol, just error detection. If an error is detected, it's up to the receiver to either throw away the whole frame or do something else, like interpolating the data.
At a 44.1 kHz sample rate a frame will normally contain 5 to 6 samples. Throwing this away due to a CRC error will result in a drop-out of about 0.1 ms of audio.
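To make that arithmetic concrete, here's a small sketch (my own illustration of the numbers above) computing samples and payload bytes per 125 µs microframe for a few common rates:

```c
#include <stdio.h>

/* Samples and payload bytes per 125 us high-speed USB microframe.
 * Shows why 44.1 kHz alternates between 5- and 6-sample frames:
 * 44100 * 0.000125 = 5.5125 samples per microframe on average. */
int main(void)
{
    const double rates[] = { 44100.0, 48000.0, 96000.0, 192000.0 };
    const int channels = 2, bytes_per_sample = 3;  /* 24-bit stereo */

    for (int i = 0; i < 4; i++) {
        double per_frame = rates[i] * 125e-6;
        printf("%8.0f Hz: %.4f samples/microframe, avg %.1f bytes\n",
               rates[i], per_frame,
               per_frame * channels * bytes_per_sample);
    }
    /* A dropped microframe at 44.1 kHz therefore loses 5-6 samples,
     * i.e. 125 us (~0.1 ms) of audio -- well under the 1024-byte cap. */
    return 0;
}
```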