
Yamaha CDX-393 Review (CD Player)

(...)
Now, can we actually do better than that still...? Yes, yes we can - the 999.91 Hz he suggested gives basically perfect-sounding white noise. So we want a prime multiple of a small fraction of 1 Hz, eh? In this case that fraction is 0.01 Hz (999.91 Hz = 99991 x 0.01 Hz, and 99991 is prime), meaning the waveform only repeats every 100 seconds.
I had an algorithm search for all primes between 999999900 and 1000000099 (prime multiples of 1 µHz right around 1 kHz) and got lucky: it found 9.
Code:
999999929, 999999937, 1000000007, 1000000009, 1000000021, 1000000033, 1000000087, 1000000093, 1000000097
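Something like this minimal trial-division sketch does the job, by the way (not necessarily what I actually ran):
Code:
def is_prime(n):
    # plain trial division: plenty fast for a 200-number range near 1e9
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# candidate frequencies f = p * 1e-6 Hz, i.e. 999.999900 Hz .. 1000.000099 Hz
primes = [n for n in range(999_999_900, 1_000_000_100) if is_prime(n)]
print(len(primes), primes)  # 9 primes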
(...)
Quick follow-up on this one. For all the below FFTs, I measured WAV files directly, not through a device. I created the WAV files with Audacity or REW.


I created two test tones with REW, without dither. The first one at 997 Hz @ -0.01 dBFS:

[attachment: FFT of the 997 Hz tone, no dither]


The second one at 999.91 Hz @ -0.01 dBFS:

[attachment: FFT of the 999.91 Hz tone, no dither]


And you were right: the 999.91 Hz tone is self-dithering, which is very interesting for our purpose of testing devices.
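If you want to reproduce this without REW, here is a minimal numpy sketch of the same experiment (a -0.01 dBFS sine rounded to 16 bits with no dither; the window and FFT length are my own choices, not REW's settings):
Code:
import numpy as np

FS = 44100
N = FS * 10                 # 10 s of signal
AMP = 10 ** (-0.01 / 20)    # -0.01 dBFS

def tone16(freq):
    # -0.01 dBFS sine rounded to 16 bits, no dither
    x = AMP * np.sin(2 * np.pi * freq * np.arange(N) / FS)
    return np.round(x * 32767) / 32767

freqs = np.fft.rfftfreq(N, 1 / FS)
for f in (997.0, 999.91):
    spec = 20 * np.log10(np.abs(np.fft.rfft(tone16(f) * np.blackman(N))) + 1e-12)
    floor = spec[np.abs(freqs - f) > 50]  # everything but the tone itself
    print(f, "highest residual component:", round(floor.max(), 1), "dB (unnormalized)")
# 997 Hz leaves discrete quantization spurs well above the floor, while
# 999.91 Hz spreads the error into a smooth noise-like floor (self-dither).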


Now, here below is a 997 Hz tone with rectangular dither (from Audacity):

[attachment: FFT of the 997 Hz tone with rectangular dither]


We lose 3dB of resolution due to the dither.
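That 3 dB is what theory predicts, by the way: truncation noise has a power of q^2/12 (q = one LSB), and rectangular dither adds an independent uniform +/-0.5 LSB on top, i.e. another q^2/12, doubling the error power. A quick numpy check (my own sketch, not Audacity's actual dither code):
Code:
import numpy as np

rng = np.random.default_rng(0)
q = 1 / 32768                                   # one 16-bit LSB
x = 0.999 * np.sin(2 * np.pi * 999.91 * np.arange(441000) / 44100)

plain = np.round(x / q) * q - x                 # quantization error only
rpdf = np.round(x / q + rng.uniform(-0.5, 0.5, x.size)) * q - x

for name, e in (("no dither", plain), ("rectangular", rpdf)):
    print(name, round(10 * np.log10(np.mean(e ** 2) / (q ** 2 / 12)), 2), "dB re q^2/12")
# ~0 dB without dither, ~+3 dB with rectangular dither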


And the same 997Hz @-0.01dBFS from REW instead of Audacity (with dither again):

[attachment: FFT of the 997 Hz tone with REW's dither]


You can see that the noise floor rose by a further 1.7 dB, because REW uses a different dither.


Last but not least, this is the view of a 1 kHz tone @ -0.01 dBFS without dither:

[attachment: FFT of the 1 kHz tone, no dither]


As you said previously, we see a nice slew of 100 Hz harmonics, which reduces the THD measurement by 22 dB compared to 999.91 Hz!
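The 100 Hz spacing is no mystery: 1 kHz and 44.1 kHz share a common period of exactly 441 samples (10 ms), so the undithered quantization error is periodic at 100 Hz and all of its energy lands on multiples of 100 Hz. Quick check:
Code:
import numpy as np

fs = 44100
q = np.round(32767 * np.sin(2 * np.pi * 1000 * np.arange(2 * 441) / fs))
print(np.array_equal(q[:441], q[441:]))  # True: the pattern repeats every 441 samples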


Conclusion

I think the 999.91 Hz tone is a really nice choice. There is no need to dither, so we save 3 dB of noise compared to the rectangular dither from Audacity. It means that straight from the -0.01 dBFS tone, we can get a reliable measurement of SNR (although it should be measured with a -60 dBFS test tone as per the AES). We also get 5 dB more for the THD measurement, in case a DAC would be good enough to go below -130 dB THD :)

I suppose we have a winner for the new pseudo 1kHz measurement.
 
Very interesting! Thanks to you and AnalogSteph for your work and insight.

As an aside, this also shows why a properly documented test methodology and instrumentation are essential for correctly interpreting published specifications or test reports, since the source files of a digital signal can influence the results.
 
Looks like REW's ENOB calculation may be about half a bit off. As per the standard definition, it should be

ENOB = (SINAD - 1.76) / 6.02,

and there should be about 0.84 dB of difference between 20 and 22.05 kHz.
From your 999.91 Hz example, SINAD = -(THD+N) = 98.7 dB, minus 0.84 dB for the wider bandwidth.
Using the above formula that would be 15.96 bits for the full 22.05 kHz bandwidth or close enough to 16, as you would expect. Within 20 kHz it should be 16.10 bits then, which is decidedly not 16.6.
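Or, as a trivial check:
Code:
def enob(sinad_db):
    # ENOB = (SINAD - 1.76) / 6.02
    return (sinad_db - 1.76) / 6.02

print(round(enob(98.7), 2))         # 16.10 bits within 20 kHz
print(round(enob(98.7 - 0.84), 2))  # 15.96 bits over the full 22.05 kHz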

@JohnPM?
 
there should be about 0.84 dB of difference between 20 and 22.05 kHz
REW's figures are for the selected bandwidth, 20 Hz .. 20 kHz in the case of that measurement. The ENOB calculation is correct.
 
RTA will not let me go beyond 20'947Hz (it's a WAV file at 16bits/44.1kHz):

[attachment: REW RTA screenshot]


So with the full formula:

[attachment: screenshot of the full ENOB formula]


We get:

[attachment: calculation result]

And:
[attachment: calculation result]

There's something that escapes me.



Multitone seems to let me go to 22'050Hz and shows 16.7bits:

[attachment: Multitone screenshot]


Note that MT sees the tone at 999.948Hz.

BH7, 32 averages, 256k FFT and AES17 notch in both cases.

Test file created with Audacity (Generate -> Tone -> Freq=999.91Hz, Amplitude=0.999, no dither, 16bits/44.1kHz).
 
RTA will not let me go beyond 20'947Hz
Explanation is in the help (95% of the 22050 Hz Nyquist frequency is 20947.5 Hz):

The upper limit for data used in distortion calculations is 95% of the Nyquist frequency (i.e. 95% of half the sample rate) or the Distortion Low Pass, if enabled. The lower limit is the first FFT bin (DC is excluded) or the distortion High Pass, if enabled.
 
REW's figures are for the selected bandwidth, 20 Hz .. 20 kHz in the case of that measurement. The ENOB calculation is correct.
You sure?

In that very same 20 kHz bandwidth, we are seeing a SINAD of 98.7 dB. According to the formula above, that's 16.1 bits - not 16.6.

Half a bit would be 3 dB. I don't see that happening just from a 10% or so bandwidth reduction.

I suspect some sort of "fudge factor".
 
The extra half a bit in REW comes from the rms level of a full scale sine being -3.01 dB relative to digital full scale. That's likely a misinterpretation of the intention of "Fullscale amplitude", I can correct that in the next build.
 
Ah. So that's basically an inconsistency then, given that the level is displayed correctly otherwise.

There are (or at least were) some similar things going on with the level meter in Audacity. Mind you, how are you supposed to explain to somebody that a square wave has a +3 dBFS amplitude and a 0 dBFS peak level? Makes it impractical to read, too.
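Both readings in one small numpy sketch, for illustration (a sine and a square with the same 0 dBFS peak):
Code:
import numpy as np

t = np.arange(44100) / 44100
sine = np.sin(2 * np.pi * 1000 * t)
square = np.sign(sine)

rms_db = lambda x: 20 * np.log10(np.sqrt(np.mean(x ** 2)))
print(round(rms_db(sine), 2))    # -3.01: full-scale sine, 0 dBFS peak
print(round(rms_db(square), 2))  # -0.0: same peak, 3 dB more rms than the sine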
 
The extra half a bit in REW comes from the rms level of a full scale sine being -3.01 dB relative to digital full scale. That's likely a misinterpretation of the intention of "Fullscale amplitude", I can correct that in the next build.
Thank you John.

And since we are here, may I ask that you add a toggle on/off for the full formula in RTA?

The reason is that when testing a device, we need some headroom in the ADC, both to prevent clipping and to reduce distortion at the input of the measuring interface (be it a Cosmos, MOTU, RME, etc.), to ensure precise measurements. And with the full formula, REW calculates the ENOB from what the driver sees as "full amplitude" - but that is the full amplitude of the measuring interface, not of the device being measured.

So with the standard/simple formula, based only on the calculated THD+N/SINAD, we would get the ENOB of the device under test, not that of the interface (ADC).

Thank you!
 
@NTTY thank you for these old CD player measurements. This is very interesting to me.

If it was possible, I'd love to see measurements of the dCS RingDAC as implemented in the Arcam Alpha 9, CD92, and CD23 (there may be others). I auditioned the Alpha 9 back in the day, but the stars haven't aligned for me to bring a working unit home yet, unfortunately.
 
@tpd Thank you!
I just made an offer on a CD36 yesterday, but I'm not sure it will fly. (EDIT: it did not)
I did not know about the dCS Ring DAC being used in other players. I see 5 Arcam models have it. If I get the chance to find one, I'll go for it. They also use the PMD100/PMD200 oversampling filter from Pacific Microsonics, and I want to test that too :)
 
As you said previously, we see a nice slew of 100 Hz harmonics, which reduces the THD measurement by 22 dB compared to 999.91 Hz!
...which increases the THD measurement by 22dB. :)

Anyway, thanks for this thread and the one with the CD test audio files - quite good writing for the ASR forum.
 
...which increases the THD measurement by 22dB. :)

Anyway, thanks for this thread and the one with the CD test audio files - quite good writing for the ASR forum.
THD indeed increases by 22 dB, which means our capability to measure distortion from a CD player decreases by 22 dB, because of the distortion already present in the test file ;)
 
(999.99989 or 1000.00007 Hz did not yield the desired noise for some reason. Are we hitting float32 mantissa limits there? Seems so; even 999.9991 is actually 999.99908447265625.)
I think it's rather that they are so close to 1 kHz that, after rounding to 16 bits, they start forming much shorter patterns. For example, when you take 999.9991 Hz and round it to 8 bits, it also starts forming shorter patterns and is no longer so self-dithering:

[attachment: fft.png - spectrum of 999.9991 Hz rounded to 8 bits]
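As for the float32 limit mentioned in the quote, that part is easy to confirm, e.g. with numpy:
Code:
import numpy as np
from decimal import Decimal

# 999.9991 is not representable in float32; the nearest value is:
print(Decimal(float(np.float32(999.9991))))  # 999.99908447265625
# the gap between adjacent float32 values near 1 kHz:
print(np.spacing(np.float32(1000.0)))        # ~6.1e-05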
 
I think it's rather that they are so close to 1 kHz that, after rounding to 16 bits, they start forming much shorter patterns.
Yeah, even in 16 bit the residual noise spectrum gets a bit choppy at 999.9991 Hz, although I still can't hear it. At 1000.003 Hz it doesn't look super pretty either. No major complaints about 1000.03 or 999.91 Hz.

As a new strategy, I tried finding prime multiples of common FFT bin widths around 1 kHz. 65537 turned out to be prime, so I tried to get as close as possible to
1 kHz * 65537/65536 = 1000.0457770656829836752876872197 Hz as Audacity would let me, which would be 1000.045777 Hz. I like the result at least as much as the aforementioned.
1 kHz * 32771/32768 ~= 1000.091553 Hz turned out to be kind of a bust though, with visible peaks every few kHz.
1 kHz * 16381/16384 ~= 999.816895 Hz kinda looked like 1000.003.
1 kHz * 8191/8192 (look Ma, a Mersenne prime!) ~= 999.877929 Hz looks good, arguably a bit better than "properly" rounding up to 999.877930 Hz.
1 kHz * 4093/4096 ~= 999.267578 Hz is pretty good, nothing too special but fine.

The residual of 997 Hz looks and sounds worse than any of the above. 1000.7 has what looks to be very low-level harmonics. 1000.9 looks a bit uneven, not bad. They both sound fine though. 997.3 and 996.7 Hz seem about equally good.

Next I tried a prime fraction of 1 kHz.
10007/10009 * 1 kHz ~= 999.800180 Hz is another good one.
10009/10007 * 1 kHz ~= 1000.199860 Hz, ditto.
99991/99989 * 1 kHz ~= 1000.020002 Hz gives no reason for concern either.
99989/99991 * 1 kHz ~= 999.979998 Hz sort of reminds me of 1000.7, a bit meh in terms of subharmonic peaks but generally decent.
1000039/1000037 * 1 kHz ~= 1000.002000 Hz was probably asking for a bit much, it still sounds fine but the spectrum is not the prettiest looking.
100003/99991 * 1 kHz ~= 1000.120011 Hz is kind of decent with the slightest hint of odd-order harmonics.

It would be nice if there were a more formal (and, ideally, automated) way of determining and evaluating good candidates, "eyeballing the spectrum" is not exactly precision engineering. There's definitely not an infinite number since we're ultimately limited by the float32 representation of a number with 6 decimals, but that's still quite a lot.
 
It would be nice if there were a more formal (and, ideally, automated) way of determining and evaluating good candidates, "eyeballing the spectrum" is not exactly precision engineering.
I have this program (gist) that counts the number of unique samples and finds where the 10-sample pattern from the beginning appears again. It's probably more suited to discarding bad candidates, though; see the sketch at the end of this post.
(and hopefully there isn't any blunder in it :) )
Code:
         Frequency: 1000.000000
    Unique samples:       441
 Repeated subrange:       441

         Frequency: 993.000000
    Unique samples:      7185
 Repeated subrange:     14700

         Frequency: 997.000000
    Unique samples:     20543
 Repeated subrange:     44100
From your candidates, all of them use all 65535 possible sample values, and most show a repeating pattern after 10-20 seconds. Notable outliers at 92 and 262 seconds:
Code:
         Frequency: 999.816895
    Unique samples:     65535
 Repeated subrange:   4061604

         Frequency: 1000.199860
    Unique samples:     65535
 Repeated subrange:  11571600
but I'm not sure if that makes them any better.

Also, the program rounds half-away-from-zero, whereas AFAIK ffmpeg and Audacity round half-to-even. I'm not sure how much difference that makes.
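For anyone who doesn't want to dig up the gist, the whole idea fits in a short Python sketch (the actual program may differ in details):
Code:
import numpy as np

FS = 44100
SECONDS = 300   # search window; long repeats need a long window
PATTERN = 10    # length of the leading sample pattern to look for

def check(freq, bits=16):
    n = np.arange(FS * SECONDS)
    x = np.sin(2 * np.pi * freq * n / FS) * (2 ** (bits - 1) - 1)
    # round half-away-from-zero (numpy's own rounding is half-to-even)
    q = (np.sign(x) * np.floor(np.abs(x) + 0.5)).astype(np.int32)

    print(f"         Frequency: {freq}")
    print(f"    Unique samples: {len(np.unique(q)):>10}")
    head = q[:PATTERN]
    # only test offsets where the first sample of the pattern recurs
    for i in np.flatnonzero(q[1:-PATTERN] == head[0]) + 1:
        if np.array_equal(q[i:i + PATTERN], head):
            print(f" Repeated subrange: {i:>10}")
            return
    print(" Repeated subrange: not within the window")

check(1000.0)   # should reproduce the 441 / 441 figures above
check(999.91)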
 