AnalogSteph
Major Contributor
"1. Regarding UAC1, I am oblivious to any potential ramifications of using isochronous operation. Anything to keep an eye out for there?"

Isochronous operation requires regenerating the master clock from the USB data stream.
Asynchronous operation means that the device generates the clock internally for the selected sample rate, and the host adjusts the data stream based on feedback from the device.
These approaches are sufficiently different that I'd think making a device that can do both isn't entirely trivial.
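To make the feedback mechanism a bit more concrete, here's a minimal sketch of the rate feedback an asynchronous device reports to the host, assuming high-speed USB's 16.16 fixed-point samples-per-microframe encoding (the helper name is made up for illustration):

```python
# Sketch of the rate feedback an asynchronous UAC2 device reports to
# the host. On high-speed USB the value is samples per microframe
# (8000 microframes/s) in 16.16 fixed point. Hypothetical helper, not
# from any specific USB stack.
def feedback_value(sample_rate_hz: float, sof_rate_hz: int = 8000) -> int:
    samples_per_microframe = sample_rate_hz / sof_rate_hz
    return round(samples_per_microframe * 2**16)

print(hex(feedback_value(48000)))   # 48 kHz -> 6.0 samples/microframe -> 0x60000
print(hex(feedback_value(96000)))   # 96 kHz -> 12.0 samples/microframe -> 0xc0000
```

In practice the device measures its actual sample consumption against its own free-running clock and nudges this reported value up or down, which is exactly what lets the master clock stay local instead of being recovered from the bus.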
I can't quite find the article on the design challenges of the original BB PCM17xx right now, but here's another that gives some insight into the perils of isochronous USB audio:
It's really quite a pain, and I suspect the whole jitter problem is the reason why a bunch of USB audio codecs barely make it past 16-bit performance levels.
UAC2 makes use of asynchronous transfers instead, and that's pretty much required for high-performance USB audio.
Some other isochronous audio interfaces include HDMI and S/P-DIF. HDMI seems to be rather on the annoying side as well, as there have been AV receivers with gnarly digital performance due to high jitter. S/P-DIF is still a bit of a challenge but seems more manageable.
"4. In regards to upsampling, I have read a little bit and it sounds interesting. Seems to offload some of the job of the external DAC to allow similar results with less aggressive filtering? Is this more of a budget min/max method, or something you find people doing across the spectrum of low/high end?"

It's traditionally been more of a thing on the low-end side, as more filter taps meant that they took up more die area and hence increased cost. (This is also why filters have traditionally been of the half-band variety, recognized by being 6 dB down at fs/2. Their set of coefficients is mirrored, so you basically only have to store half of them.)
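The half-band storage saving can be illustrated with a quick windowed-sinc sketch (the design method and tap count here are just for illustration, not any particular chip's filter):

```python
import numpy as np

# Illustrative half-band lowpass: windowed sinc with the cutoff at a
# quarter of the filter's own sample rate. Not any real chip's filter.
N = 31
n = np.arange(N) - N // 2            # tap index centered on the main tap
h = 0.5 * np.sinc(0.5 * n) * np.hamming(N)

# Coefficients are mirrored...
assert np.allclose(h, h[::-1])
# ...and every second off-center tap is exactly zero, so only a fraction
# of the taps actually need storing:
assert np.allclose(h[N // 2 + 2::2], 0.0)

# Response is exactly -6 dB (a factor of 0.5) at the input Nyquist when
# the filter is used as a 2x interpolator:
H = np.fft.rfft(h, 4096)
print(round(abs(H[4096 // 4]), 6))   # -> 0.5
```

The -6 dB point falls out of the symmetry: at that frequency all the off-center taps contribute nothing and only the 0.5 center tap is left.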
This was arguably even more relevant on the recording side. A top-flight ADC (e.g. AK5394A) might have less than ±0.001 dB of periodic passband ripple and 110 dB of stopband rejection past 24.1 kHz, while a consumer-grade part (e.g. AK5358/59) might have to make do with ±0.04 dB and barely more than 68 dB of stopband rejection that isn't even reached until 25.7 kHz. Aliasing rejection up to 18.4 kHz thus isn't exactly great, and it's even worse above that. If you have a source full of ultrasonic nasties (like vinyl), that's not good news. The level of periodic ripple is concerning, too.
Oversampling the ADC to 96 kHz (or even 192 kHz) immediately makes both a lot less critical: the pre-echo associated with periodic ripple in FIR filters moves in closer to the main impulse, where it's harder to detect, and there's a lot less ultrasonic content going around past 56 or 112 kHz (plus whatever aliasing gets through is spread out across the spectrum a lot more).
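The arithmetic behind the relaxed filtering requirement is simple enough to sketch. Assuming a 20 kHz band to protect, the lowest frequency that can alias into it sits at fs minus 20 kHz (the `first_alias_hz` helper is made up):

```python
# Hypothetical helper: to keep the band up to band_hz alias-free, the
# decimation filter only needs full stopband rejection by fs - band_hz,
# the lowest frequency that can fold back into the protected band.
def first_alias_hz(fs_hz: int, band_hz: int = 20_000) -> int:
    return fs_hz - band_hz

for fs in (44_100, 96_000, 192_000):
    stop = first_alias_hz(fs)
    print(f"fs={fs} Hz: stopband needed by {stop} Hz, "
          f"transition band {stop - 20_000} Hz wide")
```

Note how 44.1 kHz gives exactly the 24.1 kHz stopband edge quoted for the AK5394A above, with a razor-thin 4.1 kHz transition band, while 96 and 192 kHz leave tens of kHz to play with.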
I have a 2011-ish Dell laptop with an IDT 92HD93 codec that I thought had wonky treble unless I used an increased sample rate (96, preferably 192 kHz). The only thing I could find that was measurably concerning in any way was about ±0.05 dB of periodic ripple in the digital filter (a value that I would previously have considered a bit high but not acutely concerning). Dynamic range and distortion checked out fine. By 192 kHz, the range up to 20 kHz is basically ruler flat, and sounds like it. I meant to recreate either the filter response or an EQ to compensate for it at one point, but only got partway into that and never finished.
In more recent times, a focus on decreased latency (critical in live monitoring) has been another factor - more taps also mean higher latency. Even AKM's current top-flight AK557x ADCs have a choice of a total of one filter that's a bit meh at 96 kHz and below (±0.03 dB of rather high-frequency ripple, 85 dB stopband).
DACs typically sport less than ±0.005 dB of periodic ripple these days, so I wouldn't be worried about that. (Still, digital filters citing <±0.00005 dB periodic ripple and 110 dB stopband were being made by the late 1980s. Even Realtek onboard audio is at the ±0.0005 dB level.) You can get away with much higher ripple in an IIR filter, as those have no pre-echo. They also sport lower average latency (albeit with non-constant group delay, so not ideal for the measurement crowd). They were not typically used prior to the early 2000s or so due to accumulating rounding errors in feedback filters; higher computational accuracy has since made them viable.
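The pre-echo point can be demonstrated in a few lines: a linear-phase FIR's impulse response is symmetric about its center tap, so any ripple-related ringing shows up before the main peak as well as after it, while a causal IIR only rings afterwards. A windowed-sinc FIR and a hand-rolled one-pole lowpass serve as illustrative stand-ins here:

```python
import numpy as np

# Linear-phase FIR: impulse response is symmetric about the center tap,
# so ringing appears before the main peak (pre-echo) as well as after.
N = 63
n = np.arange(N) - N // 2
fir = 0.45 * np.sinc(0.45 * n) * np.blackman(N)

peak = int(np.argmax(np.abs(fir)))       # main tap sits in the middle...
pre_energy = np.sum(fir[:peak] ** 2)     # ...with equal ringing energy
post_energy = np.sum(fir[peak + 1:] ** 2)  # on both sides of it
assert peak == N // 2
assert np.isclose(pre_energy, post_energy)

# Causal IIR (one-pole lowpass y[n] = (1-a)*x[n] + a*y[n-1]): the
# impulse response starts at n=0 and decays strictly afterwards --
# nothing comes out before the input arrives.
a, y = 0.9, 0.0
iir = []
for x in [1.0] + [0.0] * 62:
    y = (1 - a) * x + a * y
    iir.append(y)
assert int(np.argmax(np.abs(iir))) == 0  # peak right at the start
```

The symmetric pre/post energy split is inherent to linear phase; a minimum-phase (typically IIR) design pushes all that ringing behind the impulse instead.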
"I have two weeks to test the headphones and see how I feel; everything sounds kind of flat to me. I really do wish my HyperX Cloud 2s hadn't kicked the bucket so I could compare. It's likely I am just used to a more coloured sound, but even in that department what is there isn't hugely impressive to me. Everything seems a bit distant, which may be the open-back nature, but even what is there doesn't sound very detailed."

To me, HD600s sound like "good speakers with a bit more treble", so chances are your hunch of being used to something decidedly not neutral is correct. Do check them out on lowly onboard audio or your phone though, and make sure they're not night-and-day different there... sometimes there are jacks and plugs that don't particularly like each other, and then you end up without a ground connection and only get (L-R) out, which sounds super phasey and weird. I think there was some concern about these kinds of tolerance issues with the jacks in the C200; you may want to check the review thread again.
This is the DPLL bandwidth setting for the ESS DAC; see the datasheet. It should be irrelevant for (async) USB audio and can be left at 1. It's mostly needed for sources with rather jittery S/P-DIF output, like LG and some Samsung TVs, where you tend to be faced with periodic audio dropouts from the DPLL unlocking/resyncing if the setting isn't adjusted. The issue was noticed when AKM S/P-DIF receivers were replaced by more rustic Cirrus parts (with a larger PLL bandwidth) following the 2020 AKM factory fire.

There is also a DP setting which defaults to 4, but I have no clue what it really does: "clock stability and input tolerance", according to the manual.