Mine showed up today, and the extra voltage is just what I wanted; thanks again for the review. Subjectively it sounds the same as the Apple dongle, with about half the gain on the volume knob.
"This benefit can be huge since quite a few enthusiasts are into sensitive IEMs. Some non-planar headphones are quite sensitive, too. For example, I use Sony MDR-MA900 (12 ohm rated) with the JCally JM20 Max. Even with a -5 dB EQ preamp cut, my Windows volume slider is at 15% for some loud recordings. Also my modded Sennheiser PX100 II is very sensitive (even with a -8 dB preamp cut) and I mostly use 15% to 20% on my Windows volume slider."

I am curious: did you ever measure what 15% on the Windows volume slider really means? For example, AIMP's volume slider at 15% gives around -16 dB (unless you activate the log option), which matches well enough. But with the volume slider on the right of the Windows 11 taskbar set to 15%, I get -43 dB (i.e., really 0.7%). IMHO any such % values indicate sloppy implementations and are quite useless.
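For reference, the percent-to-dB bookkeeping behind those figures is just 20·log10. A minimal Python sanity check, assuming the slider value is read as a plain linear amplitude percentage (which is exactly the assumption in question here):

```python
import math

def percent_to_db(percent):
    """Attenuation if 'percent' is taken as a plain linear amplitude scale factor."""
    return 20 * math.log10(percent / 100)

def db_to_percent(db):
    """Linear amplitude percentage corresponding to a given attenuation in dB."""
    return 100 * 10 ** (db / 20)

print(round(percent_to_db(15), 1))   # -16.5 dB: about what the AIMP slider does at 15%
print(round(db_to_percent(-43), 2))  # 0.71 %: the "real 0.7%" behind -43 dB
```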
I agree. I also learned that there's no fixed correspondence between Windows volume % and different USB audio devices, meaning the actual dB level of a given Windows volume % depends on each device. When I get a chance, I will measure the JM20 MAX's output levels corresponding to Windows % values.
Not really sure, but I believe the Windows volume percentage 'curve' is based on: 100% = 0 dB; 52% = -10 dB (1/10 power, roughly half perceived loudness); 27% = -20 dB (1/100 power); 14% = -30 dB (1/1000 power).
You can select between percentage and dB display in the Windows Sound control panel (at least in Windows 10).
Of course, Windows has no idea of the actual volume—that’s up to the device implementation.
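For what it's worth, those four points are consistent with the slider percentage shrinking by a factor of roughly 0.52 for every additional 10 dB of attenuation. A minimal sketch that checks that fit and evaluates 15% under the assumed taper (the real Windows mapping is undocumented here, and, as noted, what the device does with the value is another matter):

```python
import math

# Assumed taper fitted to the points quoted above:
#   percent ~= 100 * 0.52 ** (-attenuation_dB / 10), i.e. the percentage
#   roughly halves for every extra 10 dB of cut.
def taper_percent_to_db(percent, step_ratio=0.52):
    return -10 * math.log10(percent / 100) / math.log10(step_ratio)

for p in (100, 52, 27, 14, 15):
    print(f"{p:>3}% -> {taper_percent_to_db(p):6.1f} dB")
# 100% -> 0 dB, 52% -> -10 dB, 27% -> -20 dB, 14% -> -30 dB (the quoted points);
# 15% lands near -29 dB on this curve, nowhere near the -43 dB measured at the
# Windows 11 taskbar slider, which supports the point that the mapping is device-dependent.
```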
"How about we let other members express theirs, if anyone is still reading our posts?"

According to the specification sheet on page 34, Class H is implemented as follows when switching to the next higher voltage supply:
I'm pretty sure this combination is not good for producing high-quality sound. I have never come across a high-sensitivity headphone among the best-quality ones. Active use of the DAC as a preamplifier will surely degrade its performance, so using the regular amp gain switches is still much better.
"I measured the signal-to-noise ratios of the JM20 MAX and E1DA 9039S in stepped sine tone tests"

You would be better off using an APU for this test, or using cross-correlation. ADC noise makes it impossible to directly measure the noise of such quiet DUTs accurately. On my prototype 9039S (it is slightly noisier than the production samples) the noise goes down to -125 dB; I gave you that measurement in your thread about the SMSL DL200.
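For readers who haven't seen the cross-correlation trick: if the same DUT output is captured on two ADC channels, the DUT noise is common to both while each channel's own ADC noise is independent, so averaging the cross-spectrum over many segments pushes the uncorrelated ADC contribution down while the common part stays. A toy numpy sketch of the idea only (levels and segment counts are made up; this is not a model of the 9039S or of any Cosmos hardware):

```python
import numpy as np

rng = np.random.default_rng(0)
n_seg, seg_len = 500, 4096
n = n_seg * seg_len

# Toy model: the DUT noise is common to both channels and sits ~20 dB below
# each channel's own, independent ADC noise (illustrative numbers only).
dut = rng.normal(0.0, 1e-6, n)
ch_a = dut + rng.normal(0.0, 1e-5, n)
ch_b = dut + rng.normal(0.0, 1e-5, n)

win = np.hanning(seg_len)
S_aa = np.zeros(seg_len // 2 + 1)
S_ab = np.zeros(seg_len // 2 + 1, dtype=complex)
for i in range(n_seg):
    A = np.fft.rfft(win * ch_a[i * seg_len:(i + 1) * seg_len])
    B = np.fft.rfft(win * ch_b[i * seg_len:(i + 1) * seg_len])
    S_aa += (A * np.conj(A)).real   # auto-spectrum of channel A
    S_ab += A * np.conj(B)          # cross-spectrum: uncorrelated noise averages out

floor_single = 10 * np.log10(np.median(S_aa) / n_seg)
floor_cross = 10 * np.log10(np.median(np.abs(S_ab)) / n_seg)
print(f"single-channel floor: {floor_single:7.1f} dB (arbitrary reference)")
print(f"cross-spectrum floor: {floor_cross:7.1f} dB (arbitrary reference)")
# The cross-spectrum floor comes out well below the single-channel floor and keeps
# falling toward the common (DUT) noise as more segments are averaged.
```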
Of course, if you want the best possible low-level performance, there are alternative solutions, as studied here by @Rja4000. But practically, we cannot deny the utility of digital volume control in applications like dongle-type devices.
I examined your suggested method on my bench extensively. Before presenting my results, here is one critical thing we want to consider. Your measurement does not represent a realistic situation, given that the CS43131 is very sensitive to strong ultrasonic signals. The higher the signal frequency, the more sensitively it responds; see my measurements below, which show something not obvious in yours. Yes, I agree that it may be viewed as a design flaw. But the question is, how much does it affect realistic performance? What music/audio content has a -1 dB signal at 40,000 Hz?
In itself your measurement is quite accurate, but not very representative. All recordings contain intrinsic noise, which is higher than quantization noise. If you look at the best microphones, their SNR only goes up to ~90 dBA. I suggest using this mix to measure the CS43131 DR: a 1 kHz sine at -60 dBFS + a 40 kHz sine at -1 dBFS + white noise at -120 dB. If there are concerns that this added noise affects the result, there is the option of subtracting it mathematically.
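If anyone wants to reproduce that stimulus, here is a minimal numpy sketch of the mix. My assumptions: a 192 kHz sample rate so the 40 kHz tone is comfortably in band, dBFS referenced to sine peak, the white-noise figure read as -120 dBFS total RMS, and a 32-bit float WAV as the container; the file name is made up.

```python
import numpy as np
from scipy.io import wavfile

fs = 192_000                       # assumed rate, high enough to carry the 40 kHz tone
dur = 30.0                         # seconds, arbitrary
t = np.arange(int(fs * dur)) / fs

def amp(dbfs):                     # dBFS -> linear scale factor
    return 10 ** (dbfs / 20)

tones = (amp(-60) * np.sin(2 * np.pi * 1_000 * t)     # 1 kHz sine at -60 dBFS
         + amp(-1) * np.sin(2 * np.pi * 40_000 * t))  # 40 kHz sine at -1 dBFS

rng = np.random.default_rng(1)
noise = rng.standard_normal(t.size)
noise *= amp(-120) / np.sqrt(np.mean(noise ** 2))     # white noise scaled to -120 dBFS RMS

mix = (tones + noise).astype(np.float32)              # peak stays below full scale, no clipping
wavfile.write("cs43131_dr_mix_192k.wav", fs, mix)     # hypothetical output file name
```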
Here is the spectrum of the test signal in its pure form, without any conversions:
View attachment 446931
It is impossible to use a notch in the presence of the 40 kHz -1 dBFS tone, so I put the ADC into mono mode and activated cross-correlation. Checking the setup on a standard DR signal:
View attachment 446930
The resulting DR is 129.4 dBA. So there are no issues with the setup.
Next I feed the mix described above:
View attachment 446932
Taking into account a slight signal suppression due to the lower ADC sensitivity, the noise at the DAC output is -96 dB. The influence of the -120 dBFS white noise I added can't exceed 0.02 dB, so I see no point in accounting for it further. The reason for such a noisy result is that the added white noise defeated the noise shaping, roughly as will happen when playing real recordings instead of synthetic test signals.
According to this test we get DR = 36.0 + 60 = 96 dB
or DR = 100.9 - 62.7 + 60 = 98.2 dBA.
Do you still think that the DR = 130 dBA obtained by AES17 rules is not a fake?
Quite a chunk of the debate on this topic comes from the definition of dynamic range and its AES17 measurement procedure. Looking at this topic with better perspective now, I can say that the name we have given to this technique, "Dynamic Range Enhancement," is a misnomer, because DR is defined as the ratio of the loudest clean signal to the noise floor of a device. If you want to be faithful to this definition, the result of a DR measurement of the CS431xx using the AES17 procedure (as implemented in the AP as well) should be considered invalid, because the DRE in the chip tricks the method.
But as you guys agreed, the effect of DRE is real, and if implemented correctly, with no audible adverse side effects, it should be beneficial.
So, from now on, I suggest calling this technique "Adaptive Signal-to-Noise Ratio Enhancement" or ASNRE.
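For readers following along, the AES17 dynamic-range recipe being argued about amounts to: play a 1 kHz tone at -60 dBFS, measure the (typically A-weighted) noise-plus-distortion residual at the output with the tone removed, and refer that residual back to full scale, which is the same as adding 60 dB to the tone-to-residual ratio. A sketch of just that bookkeeping (the notch and A-weighting are assumed to happen in the analyzer):

```python
import math

def dr_db(full_scale_rms, residual_rms):
    """Dynamic range: full-scale output referred to the noise(+distortion) residual
    measured while a -60 dBFS tone is playing."""
    return 20 * math.log10(full_scale_rms / residual_rms)

def dr_from_tone_ratio(tone_to_residual_db):
    """Same number, computed from the ratio measured against the -60 dBFS tone itself."""
    return tone_to_residual_db + 60.0

print(dr_from_tone_ratio(36.0))   # 96.0 -> matches the unweighted figure worked out a few posts up
```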
Thanks for adding your input. I am aware of this information in the datasheet (and I'm sure @nick_l44.1 is too). But according to the test results described on the reference-audio-analyzer website, the crunchy behavior observed there does not seem to be due to this time-limited transition (in either direction). It's just illogical to explain it that way.
View attachment 446891
The "high dv/dt transient" may clip the outputs at the current (lower) voltage supply, and my uneducated guess is that this "transitory clipping" may be what the "Russian website" (reference-audio-analyzer.pro) shows.
And there is no delay here. The aforementioned 5.5 second delay only occurs when switching to the next lower voltage supply (see page 35).
However, I could be wrong.
"Quick question about gain staging the JM20 Max. When I connect it to my phone, the USB Audio Player Pro app pops up and the hardware volume is at -15 dB by default. Is that digital volume inside the CS43131 that's in line with the OS volume, or is it separate analog attenuation? I have the old Earstudio ES100, and that one has a separate analog attenuator. I was wondering how the internal volume in the JM20 Max works."

There's no analog volume control supported in the CS43131. I am not a user of UAPP, but I've heard that UAPP can access a device's internal volume control. I do not know if there's any difference in noise performance between OS volume attenuation and the DAC's digital attenuation. Maybe @CedarX can answer?
"So, which represents this dongle's realistic performance?"

I think it's too early to draw conclusions. I assumed that the white noise would keep the noise shaping from working, but it looks like that's not the case. Even before your post, I was unable to repeat the same experiment for the THD+N @ 1 kHz test.
"So, going forward, there's no need to debate on this particular topic."

Oh, sorry! I was incorrect; you didn't say exactly that. It's just my opinion that the results of DR AES17 and DRE performance are very closely related. The question was purely rhetorical. The most interesting task is to find a way to cheat the noise shaping.
Sure, no simple tests can uncover the entire picture of what is happening in the CS431xx. I am beginning to think it is not worth doing this painful 'reverse engineering'; there's simply no obvious way to fully test it without the original designer's level of knowledge. There are potentially multiple factors intertwined: SNR Enhancement, Class H operation, and noise shaping. Only a very few people like us are probably interested. Most (even technical) people will just look at the standard set of tests (like what Amir does) and subjectively evaluate its sound quality.
"the results of DR AES17 and DRE performance are very closely related."

I have no problem concurring with this part, which is obvious even in my measurements posted much earlier.
Take a look.
"Exactly. The pattern that looked like an early rise in THD+N is mainly due to the DRE being phased out gradually (when the output level increases):"

(graph attachment)

Thinking back about this, I just remembered the very bad findings by L7audiolab on two old Tempotec implementations of the CS43131, the Sonata E35 and E44.
Without knowing the exact measurement settings or the Tempotec's design surrounding the DAC chip, it is difficult to give a definite answer. But one thing is clear: in these stepped THD+N tests, the DRE or adaptive noise shaper in the CS43131 cannot make THD+N increase without a contribution from distortion. The worst case would be a flat THD+N relative to the fundamental tones. The rising THD+N must be due to rapidly rising THD (i.e., not due to the N part). I suspect there must be some design flaw in its power supply circuit.
E35
(graph attachment)
E44 even worse
(graph attachment)
Now I wonder: if this was also due to DRE, it seems a very bad implementation of it; then some registers must exist on the chip where you can manage DRE behavior, and here they messed them up completely.
But I could be totally off track.
Well, I guess that's the case; those dongles probably had flaws in the power supply or firmware. As I said, my technical knowledge is limited, and my main goal is to put on the table as much additional information as possible that could help draw the picture.
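To put the earlier point about the N part into numbers: THD+N relative to the fundamental is simply the absolute noise-plus-distortion level minus the signal level, in dB, so with a fixed floor the ratio improves 1 dB for every 1 dB of level, and only a term growing faster than the signal can make the curve climb. A tiny illustration (generic arithmetic, not a model of the CS43131):

```python
def thdn_rel_db(signal_dbfs, noise_plus_dist_dbfs):
    """THD+N expressed relative to the fundamental, from absolute levels in dBFS."""
    return noise_plus_dist_dbfs - signal_dbfs

# Fixed absolute floor of -120 dBFS while the fundamental is stepped up:
for level in (-60, -40, -20, 0):
    print(f"fundamental {level:>4} dBFS -> THD+N {thdn_rel_db(level, -120):>7.1f} dB")
# -60.0, -80.0, -100.0, -120.0 dB: with a constant floor the relative figure only improves
# as the level rises; at best (a floor that tracks the level) it stays flat, so a curve
# that worsens with level needs something growing faster than the signal, which is the
# basis of the argument above for blaming distortion.
```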