
Denon DCD-3560 Review (CD Player)

Interesting, and thanks for the AES paper, which proves there was BB R&D in Japan. I started reading the paper but most of it is too technical for me, maybe one day I’ll reach that level of understanding :)
 
As I wrote in my above post #27 as follows,
....
Just for your possible interest and reference, I also keep ONKYO DAC-1000(S) (I bought brand new one in 2011 or 2012) which has two of Burr-Brown PCM1795 (max. 32 bit 192 kHz) in L and R channel independently, as I shared in my post here and here.
....
In those years and at least until around 2010, ONKYO too might have some valuable relationship with Burr-Brown and/or Burr-Brown Japan... So far, I myself have no info at all, though.
 
(ii) the newer BB PCM1738, which was designed by Burr Brown Japan, also existed in a Sony part number (the CXD9657 which made its debut in the SCD-XA777ES) possibly built in Sony Semiconductor factories
Fascinating. Are there any other known cases like that? There's a whole bunch of Sony ICs from like the mid-'90s onwards for which there just doesn't seem to be any documentation available. I still don't know what a CXA8042AS is all about, for example. It would make a ton of sense if a lot of these had better-known counterparts.
 
Fascinating. Are there any other known cases like that? There's a whole bunch of Sony ICs from like the mid-'90s onwards for which there just doesn't seem to be any documentation available. I still don't know what a CXA8042AS is all about, for example. It would make a ton of sense if a lot of these had better-known counterparts.
As far as the monaural Sony CXA8042 "Current Pulse" chip, or its more modest stereo version for lower-end applications, i.e. the CXA8055 (found in the Sony CDP-XB930), are concerned, the only piece of information I know of that must be related to these chips is a Sony patent application about a current buffer circuit to improve the power supply rejection ratio of the voltage-output CMOS PWM digital-to-analogue converter chips Sony used to build at the time. I understand a side benefit of this current buffer is to provide additional analogue gain at the following current-to-voltage conversion stage, further improving the signal-to-noise ratio.
It is my understanding that CXA8042 and CXA8055 are custom integrated circuits built by Sony to implement the ideas divulged in the patent application.

By the way, Sony neatly exploited the possibility of making the current source inside the CXA8042 variable to design a digitally operated analogue volume control in the SDP-EP9ES processor. A most interesting design for the time, I think, and an easy way to apply volume control to a 1-bit stream for those uninterested in, or unaware of, the possibility of doing volume control by digital computation directly on a 1-bit data-stream.

As for any other cases like the CXD9657/PCM1738, it is most probable that two other Sony digital-to-analogue converters of the early 2000s, the stereo CXD9674 and the 6-channel CXD9675 (found in some DVD players such as the DVP-NS705, NS905, NS930, NS999ES...), are identical to AKM DAC chips: the AK4383 and AK4357 respectively. The commonalities between the Sony and AKM chips were tracked down by some Russian electronics enthusiasts ages ago.

I also know of the case of an FM demultiplexer integrated circuit which is mostly identical under a Sanyo and a Sony part number. If memory serves me right, it is David Rich, or perhaps Peter Aczel, who gave that information in one issue of The Audio Critic. Whoever brought the case to light also wrote that, at the time, it was usual practice for two corporations to share the burden of the development of a new IC, the prominent partner benefiting from a short-lived exclusivity to exploit the IC under its own brand name.

Edit of 01/05/2025 and 01/09/2025 (citation and wrong hyperlink corrected):

I just found the exact issue of The Audio Critic in which this story is told. It is issue #24 (not #25 as I first wrote), page 62 of the magazine (54 of the pdf file), first column, §4, and the article is by David Rich. The FM demultiplexer chips are the Sanyo LA3450 and the Sony CXD1064S.
 
@Scytales : Super interesting, much more than a cross-over between two Netflix series, I love it!

On my side, and to come back to the Denon 3560: now that I know the two trimming pots are on the 1st and 2nd MSB, I followed BB's instructions for the adjustment, especially for the 1st MSB, which… does not work at all. And that is one more proof, from my perspective, that there’s a transformation of the digital signal “à la” BurrBrown colinear.

Adjusting the MSB per the BB PCM58 guide, I got nowhere (no impact at low levels and higher distortion at full scale). Following my thinking based on the split between the positive and negative sides of the sine, I achieved 12dB less distortion at full scale and 10dB less at -12dBFS.

Maybe I’ll write a standard routine about how to adjust distortion for the Denon DCD-3560 and DCD-3500/3500G :)
 
Denon DCD-3560 - Adjustment procedure for the BurrBrown trimming pots.

I will be talking about the 2 MSB (Most Significant Bit) adjustments of the Denon DCD-3560, which apply to the Denon DCD-3500RG as well. I can’t confirm everything I wrote here 100%, but many clues and my testing drove me to the understanding below. Should there be obvious mistakes in my text, please be so kind as to let me know, so I can update it.

The PCM58P DACs’ MSB adjustment procedure for the Denon DCD-3560 is complex (which is probably why it’s not documented by Denon), so don’t go for it unless you fully understand the below, or you will only make things worse.

I think Denon had to add the trimming pots to balance the output of the two DACs per channel, considering the unique and complex conversion architecture, but not as a convenience for the end users to play with them trying to improve the THD, although that’s exactly what I did. I’m sharing this experience.


Introduction

The Denon DCD-3560 uses 4 mono converters from BurrBrown, the PCM58P in its most linear “K” version (with a typical THD of -100dB at full scale).

The PCM58P is an early 18-bit R2R DAC, one of the best of its kind. At the time, BurrBrown performed laser trimming at the factory, so the achieved linearity was SOTA. To go further in pure performance, BurrBrown also offered optional manual trimming. The PCM58P was the first to allow MSB (Most Significant Bit) manual trimming to improve low-level precision, plus bit2, bit3 and bit4 trimming to lower the distortion near full scale.

The Denon DCD-3560 offers an undocumented possibility to trim bit1 (MSB) and bit2. But when I went for it, I realized it did not work for bit1, and I guess that is because of the specific and unusual DAC architecture. Denon did not provide a procedure because of the adjustment complexity (my guess), and in any case had to manually adjust the trimming pots before letting the CD player leave the factory (otherwise the distortion would have been way too high, per my testing).

The below describes how to perform the trimming adjustments, based on a detailed description of the DAC architecture and the function of the BurrBrown trimming pots in that context. It requires a very precise live distortion analyzer, able to display the harmonic content of the distortion up to H5 minimum, and fast enough to monitor the effects of the trimming live. I personally used REW for the purpose, with a high-precision ADC (a Cosmos E1AD or a Motu Mk5).


The basics

The PCM code uses a Binary Two’s Complement representation of the digital signal. It is a signed code meaning that the first bit (aka MSB for Most Significant Bit) identifies the positive from the negative sides of the signal. The remaining bits identify the amplitude of respective positive and negative sides as shown below (source : Digital Audio and Compact Disc Technology - Sony Service Center Europe):

BTC.jpg


In this representation, you see that around zero, the Binary Two’s Complement (BTC) code sees all bits switching. You can also clearly see that the first bit on the left is the one of the sign, and that is the MSB.

In addition, the Binary Two’s Complement (BTC) code is asymmetrical. The lowest value is -32768 (Hex code 8000) and the highest is +32767 (Hex code 7FFF), because there is only one value that defines 0 (Hex code 0000). So there is a one-code difference between the magnitudes of negative full scale (-FS) and positive full scale (+FS).

Assuming someone would like a 2V peak-to-peak output from this DAC, we can calculate, for an 18-bit DAC, that the smallest step would be 2V / 2^18 = 7.63µV. That means the negative full-scale value would be -1V, and the positive full-scale value would be 0.99999237V. The step size is called “1LSB”, and it is 7.63µV in this example.
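To make the above concrete, here is a small sketch of my own (using a 16-bit word for readability, the DAC itself being 18-bit) showing the BTC codes around zero and the 1LSB computation:

```python
# My own illustration of Binary Two's Complement (BTC) behaviour, not from
# the Denon or BB documentation.

def btc16(value):
    """Return the 16-bit BTC code of a sample as a bit string."""
    return format(value & 0xFFFF, '016b')

# Around zero, every bit of the BTC code flips between 0 and -1:
print(btc16(1))       # 0000000000000001
print(btc16(0))       # 0000000000000000
print(btc16(-1))      # 1111111111111111  <- all bits switch at the zero crossing

# The code is asymmetrical: one more negative code than positive.
print(btc16(-32768))  # 1000000000000000  (Hex 8000, -FS)
print(btc16(32767))   # 0111111111111111  (Hex 7FFF, +FS)

# 1LSB for an 18-bit DAC spanning 2V peak-to-peak:
lsb = 2.0 / 2**18
print(f"{lsb * 1e6:.2f} µV")  # 7.63 µV
```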

The BurrBrown PCM58P offered an optional manual MSB adjustment. It is indeed important to make sure that the MSB is precisely represented because it will influence the analog output at every crossing point to 0. Since the audio signal is made of sine waves that permanently go from negative to positive, they cross the zero point all the time. It makes a real difference with low-level signals, around the bipolar zero (BPZ), less so with high-level ones, since a 1LSB variation (7.63µV in the example) matters much less with a full-scale signal than with very small levels. Yet, for maximum performance, we want the MSB to be very precisely represented. By the way, Philips had similar requirements before stamping their famous TDA1541A with a “Crown”, and one of them was that the MSB precision should be <0.5LSB.

As per BurrBrown’s instructions, the MSB adjustment of the PCM58P should be performed either by comparing the output voltages at +FS and -FS to ensure the 7.63µV (in my example) difference is obtained, or (simpler) by sending a low-level signal (eg -80dBFS) and adjusting the trimming pot live until minimum distortion is seen. The PCM58P offered 3 additional adjustments (for bit 2, 3 and 4), to minimize distortion with high-level signals, and they had to be adjusted with a full-scale signal.

Unfortunately, as per my tests, the above adjustment procedure does not apply to the Denon DCD-3560, and that’s because of the way the BurrBrown DACs are implemented.


Denon “LAMBDA” - Theory of operation

Now that we understand the importance of the MSB and the need for precision, time has come to explain how Denon (with probable assistance of BurrBrown) wanted to resolve the MSB “issue” to improve the precision of the conversion at lower levels (their intention was to reach equivalent 20bits of resolution).

The problem, which they were not alone in identifying, was that for low-level signals the zero crossing point (called bipolar zero: BPZ) generated relatively high distortion compared to the signal being output. Even for the smallest symmetrical sine wave, where only the Least Significant Bit (LSB) should change, the MSB changes too; because of the BTC code, all bits flip from 1 to 0 and vice versa. Any small MSB error, together with the potential glitch generated by all bits switching at once, was the concern.

Some DACs solved the problem with a digital offset (plus a compensating analog offset) to move away from BPZ. Shifting the code to a higher digital level, and therefore output, reduced the BPZ error. By shifting 10 codes higher, the BPZ error was made 10% smaller, which was smart.

But instead of bit shifting, Denon went the hard way: simply removing the zero crossing and the BPZ error. For that purpose, I understand they inserted a code transformation between the output of the oversampling filter (a standard NPC part) and the inputs of the dual mono DACs per channel. They called it “Ladder-form Multiple Bias D/A”, aka LAMBDA. The high-level architecture is shown below:

1736359104081.png


Converging explanations in documentation provided by Denon and BurrBrown, which I already reported here, make me think their idea was to split the Binary Two’s Complement code in two: the positive code, for the positive side of the signal, going to one DAC, and the negative code, for the negative side of the signal, going to another DAC, and that for each channel. The main objective was to remove the MSB change.

They did it by changing the PCM code as shown below (from the AES Convention paper: 20 Bit "Colinear" DAC, a Solution to Low Level Problems):

Colinear.jpg


BurrBrown obviously contributed to this architecture, as it is the same concept they later released in the PCM63 and PCM1702, which they called “Colinear”. The above table comes from the AES convention paper written by Fred J. Highton and Toshio Murota from Atsugi, Japan, which talks about the “new PCM63” with previews of the achieved performance, and demonstrates the presence of a BB R&D center in Japan. You can also consult an old US patent describing the BB colinear concept, Modified sign-magnitude DAC and method, if you wish to go deeper into the details.

You can see, from the above figure, that the BTC code is changed to feed one “lower DAC A” and another “upper DAC B”. Since the code change allows for two representations of 0, it is necessary to account for the asymmetry of the initial BTC code. That is why there is +1LSB shown for the lower DAC.

Another representation of the same code change can be found in the PCM63 datasheet, which, again, was released later by BurrBrown. It might be easier to understand:

1736359035850.png


As you can see, with the new dual code there’s simply no BPZ, by definition. In addition, the newly generated code is basically the same for the upper and lower DACs. This is what BurrBrown leveraged in their subsequent DAC chips, as it simplifies the internals, reducing die size and production cost. But the Denon still uses pre-colinear DACs, the PCM58P, which only accept an unmodified BTC code. That's why I think Denon did a simple split of the code to feed the two PCM58P per channel, not the full change shown above, but the idea is the same: removing the BPZ in each DAC.
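To make my reading of the split concrete, here is a rough model. This is my own sketch of the idea, not Denon's or BurrBrown's actual logic, and it deliberately ignores the +1LSB bookkeeping that the tables show:

```python
# A rough model of the sign-magnitude ("colinear") split as I understand it:
# one DAC converts the positive half-wave, the other the magnitude of the
# negative half-wave, and an analog summing node reconstructs the signal.
# Neither DAC ever crosses its own zero mid-waveform, so there is no
# bipolar-zero glitch inside either converter.

LSB = 1  # work in units of 1LSB for clarity

def split(btc_code):
    """Split a BTC sample into (upper_dac_code, lower_dac_code)."""
    if btc_code >= 0:
        return btc_code, 0       # upper DAC carries the positive side
    else:
        return 0, -btc_code      # lower DAC carries the negative-side magnitude

def reconstruct(upper, lower):
    """Analog summing node: the upper DAC adds, the lower DAC subtracts."""
    return upper * LSB - lower * LSB

# The round trip is exact for every code:
for x in [3, 2, 1, 0, -1, -2, -3]:
    u, l = split(x)
    assert reconstruct(u, l) == x
print("round trip OK")
```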

The downside of the method, for Denon, is of course the increased cost associated with needing two DACs per channel instead of one. On top of that, all the code changes/splitting prior to feeding the DACs needed specialized computation, which meant more components.

The PCM58P-K used in the Denon DCD-3560 pre-dates the PCM63/PCM1702, but Denon implemented the concept using specific circuitry they called “LAMBDA”. They used a gate array (the undocumented TI CF37606 components and other logic gates) as shown below (from the service guide of the Denon DCD-3500RG; one channel shown):

IMG_0116.jpeg


Each of the dual TI CF37606, per the schematic of the player, feeds the upper and lower PCM58P-K DACs of one channel. They also drive a high-speed Toshiba switch at the output of the BB PCM58P-K, probably to shut off the upper DAC output when necessary(?).

The dual DACs per channel are therefore fed with a split BTC code for the upper and lower parts of the signal. The asymmetry of the initial signal had to be accounted for, as you can see in the above table. Theoretically, the weight of 1LSB needs to be added to the lower DAC at 0 and whenever the upper DAC is involved. That means the lower DAC should always be active, but the upper DAC could be shut off to reduce output noise (is that the reason for the high-speed switch at the output? I’m not sure). The two DAC outputs are paralleled to reconstruct the entire signal.

I think another advantage of the above was to virtually increase the DAC’s resolution by one bit. The PCM58P is an 18-bit DAC, but with this architecture one less bit needs to be processed by each DAC, because the sign no longer needs to be part of the code; it simply becomes a matter of which DAC processes the lower or upper part of the signal. With two DACs each processing half of the wave, and the no-longer-necessary sign bit (the MSB) dropped, 19-bit data can be processed with 18-bit DACs. This, together with the oversampling filter running at 8x instead of the 4x of previous generations, is how Denon could claim a real 20-bit conversion.

The downside is that this requires two DACs and more processing. In addition, the gain matching between the upper and lower DACs is critical for good operation, so as not to create massive amounts of non-linearity (and distortion). That’s why Denon had to go for the most linear K grade of the PCM58P, AND to manually set the trimming pots on top of that, before sending the player to its proud (I hope) owner.

And voilà: if anyone doubts that the CD players of this era deserve their reputation for over-engineering, I guess you have the answer. After trimming mine, I got very low levels of THD (for the period), between -108dB and -103dB, with textbook maximum SNR, and exceptional linearity down to -110dB (with shaped dither) with less than 1dB deviation.


Procedure for gain adjustments between DACs and minimized THD

So… time has come to talk about the procedure to adjust the two trimming pots per DAC. The below is not so complex, but it requires a lot of time, patience and dexterity. Be warned.

First, to get access to the trimming pots:
  • Remove the top cover
  • Remove the screws of the servo card
  • Lift the servo without unplugging it
  • Remove the copper plate below the servo
  • Position the servo card so it stays plugged in and allows access to the trimming pots (example given here)
  • The trimming pots are: for right channel VR601, VR602, VR605 and VR606, and for left channel VR501, VR502, VR505 and VR506
  • Take a picture of their current position so you can return pretty much to them in case you get lost
  • Connect a distortion analyzer at the output of the player, one that allows live reading of harmonic distortion changes, up to H5 minimum
  • Have a test CD to play a single sine tone around 1kHz, from -6dBFS to 0dBFS AND from -12dBFS to -9dBFS
The adjustments must be performed per channel. The mentioned trimming pots are connected to the bit1 and bit2 adjust pins of the BB PCM58. Theoretically bit1 is the sign bit, but not any more with this architecture, which is why the standard procedure documented by BurrBrown does not apply (and does not work).

Let’s start with right channel:
  1. Identify VR601 and VR605 and let’s balance the upper and lower sections of the DACs, with Bit1 adjust of the PCM58:
  2. Play a 1kHz test tone near full scale (0dBFS, minimum -6dBFS).
  3. Take a snapshot of the current distortion measurement so you have a view of the factory trimming results. You’ll now be trying to do better.
  4. Rotate VR601 a very small step in one direction and look at how the distortion pattern changes. It’s ok if it increases. You should see a change in H3 and H5: one will increase and the other will decrease.
  5. Rotate VR605 the opposite direction and verify if the distortion pattern cancels (on H3 and H5). It must cancel a potential imbalance of distortion that you might have created in step 4). This is to confirm upper and lower DACs gain balance.
  6. If the resulting total distortion did not decrease compared to the snapshot you took at step 3), then rotate VR601 back to where it was, and a tiny bit further.
  7. Rotate VR605 the opposite direction of what you did with VR601 in step 6). Check if the THD decreased.
  8. Repeat all the above multiple times until you find a good balance which will show through reduced harmonics. The lack of balance will show itself with H3 and H5 and you need to land at a similar level for both.
  9. You will also see that H2, as well as H4, are not so much impacted by the adjustments, but they will increase too if you go completely off track, at which point the THD will be too high, that means worse than -90dB.
  10. If you get lost, use the picture you took of the trimming pots, go back to these, and start all over again.
  11. Identify VR602 and VR606, and let’s further reduce distortion with high level signals, with Bit2 adjust of the PCM58:
  12. Play a 1kHz test tone between -12dBFS and -6dBFS.
  13. Take a snapshot of the current distortion measurement so you have a view of the starting point.
  14. Rotate VR602 a very small step in one direction, look at the distortion pattern changes. It’s ok if it increases slightly. You should see a smaller change than at bit1 adjust, and of a different pattern.
  15. Rotate VR606 the opposite direction and verify if the distortion pattern cancels. It must cancel a potential imbalance of distortion that you might have created in step 14).
  16. If the resulting total distortion did not decrease compared to the snapshot you took at step 13), then rotate VR602 back to where it was, and a tiny bit further.
  17. Rotate VR606 the opposite direction of what you did with VR602 in step 16). Check if the THD decreased.
  18. Repeat all the above multiple times until you find a good balance which will show through reduced harmonics. The sweet spot is extremely tiny, so don’t be surprised if it takes time to get there, be patient.
Repeat the same with left channel using VR501, VR502, VR505 and VR506.

Eventually, you should obtain the below results:
  • THD (without Noise) is below -100dB (from -6dBFS to 0dBFS) calculated on 9 harmonics.
  • THD (without Noise) is below -90dB @-12dBFS calculated on 9 harmonics.
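As a side note, a multi-harmonic THD figure like the targets above can be reproduced from the individual harmonic readings on the analyzer. This is a generic helper of my own, not something taken from REW:

```python
import math

def thd_db(harmonics_dbr):
    """Combine harmonic levels (in dB relative to the fundamental)
    into a single THD figure in dB, by summing their powers."""
    power = sum(10 ** (h / 10) for h in harmonics_dbr)
    return 10 * math.log10(power)

# Example: a -107dB H2 with the remaining 8 harmonics around -115dB
levels = [-107] + [-115] * 8
print(round(thd_db(levels), 1))  # -103.4
```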
Note to self: I think a voltage drop of 0V across R670 and R671 should indicate the null point of the 100k trimming resistors, while a voltage drop of ±7.63µV would correspond to 1LSB in current (because 1mA / 131'072 × 1kOhm = 7.63µV, where ±1mA is the analog output range of the PCM58 and 2^18 / 2 = 131'072). To test later (this would be one more clue to the implemented architecture).
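For what it's worth, a quick check of the arithmetic in that note (my own verification, assuming the PCM58's ±1mA output range and a 1kOhm sense resistor, as stated above):

```python
# Verify the 1LSB voltage across the sense resistor.
i_half = 1e-3        # half of the ±1mA output range, in amperes
codes = 2**18 // 2   # 131072 codes per half-range for an 18-bit DAC
r = 1_000            # assumed 1kOhm sense resistor

v_lsb = i_half / codes * r
print(f"{v_lsb * 1e6:.2f} µV per LSB")  # 7.63 µV per LSB
```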


Measurements

I wanted to finalize this long text with some measurements of the tangible benefits, not only of Denon's idea of removing the digital zero crossing from the equation (the LAMBDA concept), but also of the MSB trimming activity that I enjoyed.

For the below measurements, I used my newly acquired Cosmos E1AD (Grade O) and its scaler. It is more precise than my previous Motu Mk5.

First, from a low-level representation perspective, I reused a measurement from Stereophile. It is a sine tone (997Hz) at -90.31dBFS, which is the smallest symmetrical signal we can represent with 16-bit data (see More than we hear for more information). At this very low level, only 3 states can be represented: -1, 0 and 1. So you should see a band-limited square wave, that is, with the typical ringing due to the Gibbs phenomenon. The below is the measurement with the Denon:

Denon-DCD3560_997Hz_-90.31dBFS_XLR_R.jpg


It's extremely precise and clean. Of course, that also means the Denon DCD-3560 does not suffer much from low-level noise. The 3 DC levels are very well represented. That level of precision was rare at the time. So the Denon delivers on its promise of low-level precision around bipolar zero (BPZ).
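For anyone curious, the 3-level nature of this test signal is easy to verify numerically. This little sketch reproduces the idea of the tone, not Stereophile's actual test file:

```python
import math

# A 997Hz sine at -90.31dBFS, quantized to 16 bits: the peak amplitude is
# about 1 LSB, so only the codes -1, 0 and 1 can ever appear.
fs = 44_100
amp = 10 ** (-90.31 / 20) * 32768              # ~1 LSB peak amplitude
samples = [round(amp * math.sin(2 * math.pi * 997 * n / fs))
           for n in range(fs)]                 # one second of signal

print(sorted(set(samples)))  # [-1, 0, 1] - only 3 code values exist
```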

Low-level linearity should benefit from this as well. So, with the help of shaped dither noise added to the signal, we can go way below the theoretical -96dB of the Audio CD. Let's try -110dBFS:

1736524014065.png


For this view, I calibrated the Cosmos to read -1.43dBFS at its input when playing a 0dBFS test tone. At -110dBFS, we should therefore get -111.43dBFS, and we get -111.46dBFS. So the deviation is only an incredibly low 0.03dB. The software computes 19 bits of resolution in our most sensitive frequency range, before the noise shaping kicks in (6kHz). In these conditions, the dynamic range is 113dB. I'm tempted to say WOW :)

Last but not least, at full scale (999.91Hz @0dBFS, no dither), I got the below results:

Denon-DCD3560_999.91Hz_0dBFS_XLR_LR_AfterTrimming.jpg


Left and right channels are shown, but only the left channel is represented in the dashboard. The right channel has a little less noise and a little more distortion, but not at all levels, so they are close. The THD is as low as -107dB (0.0004%). The left channel keeps the distortion below -100dBr up to a -12dBFS signal, which is very good. The right channel does 5dB worse, but nobody will be able to hear that.

Last one: a view of THD vs frequency for the best channel of the Denon, compared to the best channel of the Sony CDP-3337ESD, which uses an unconventional conversion too, with two Philips TDA1541A fed with a staggered digital signal (THD vs frequency at -12dBFS):

1736527526000.png


It speaks for itself.

Conclusion

As I already wrote: if you were suspicious (I was) about the ability of these over-engineered Japanese CD players to make a tangible difference, or wondered if it was just marketing BS, you now have the answer. What I measured is crazy good for a device that is 35 years old...

So far, it's the best R2R conversion I've measured, but it is a very unusual implementation we have here. Now, let's see if someone else did better (I'm searching for other contenders; the Denon is out of reach of all the other R2R-type converters I own). I think it would be fun to compare it to the subsequent BB PCM63P-K in a CD player that allowed trimming bit1 and bit2 of the colinear conversion. There are not many...

I really enjoyed testing all of this. It was also quite a learning experience for me.
 
Very thorough and clear explanations. An interesting read, even from a purely intellectual point of view, for the theory of operation of the Denon and the later multibit Burr-Brown DAC chips is very well described!
 

The downside of the method, for Denon, is of course the increased costs associated to the need of two DACs per channel instead of one. Not only that, but all code changes prior to feeding the DACs needed some specialized computation, that meant more components.

The PCM58-K in use in the Denon DCD-3560 pre-dates the PCM63/PCM1702, but Denon implemented the concept, using a specific circuitry which they called “LAMBDA” They used a specific gate array (undocumented TI CF37606 components and other logical gates) as shown below (from the service guide of the Denon DCD-3500RG):

Each of the dual TI CF37606, as per the schema of the player, feeds the upper and lower PCM58-K DACs per channel. They also drive a high-speed Toshiba switch at the output of the BB PCM58-K, probably to shut the upper DAC output when necessary(?).

The dual DACs per channel are therefore fed with a new BTC code for the upper and lower part of the signal. The asymmetry of the initial signal had to be accounted for, as you see in the above table. Theoretically, the weight of 1LSB needs to be added to the lower DAC, at 0 and when the upper DAC is involved. That means the lower DAC should always be active, but upper DAC could be shut to reduce output noise (is it the reason for the high-speed switch at the output? I’m not sure). The two DACs output are paralleled to reconstruct the entire signal.

I think that another advantage of the above was to virtually increase by one bit the DAC’s resolution. The PCM58 was an 18bits DAC, but with this architecture, one less bit is required to be processed by the DAC, because the sign no longer needs to be part of the code, it simply becomes which DAC processes the lower or upper part of it. So, with two DACs processing half of the wave, we can virtually decrease the resolution by 1bit (dropping the MSB), but we need two DACs… I can only suppose that this is how Denon could claim “Real 20 bits converter”, because with two 18bits DAC, they could process 19bits data. This, together with the oversampling filter running at 8x instead of 4x in the previous generations, was how they could claim a real 20 bits conversion.

The downside, again, is that this requires two DACs and more processing. In addition, the gain adjustment between the upper and lower DACs was critical for good operation, not to create massive amount of non-linearities (and distortion). That’s the reason why Denon had to go for the most linear K version of the PCM58, AND to manually set the trimming pots of the BB on top before sending the Player to their proud (I hope) owners.

And voilà, if one doubts the Players of this time deserved their reputation of over-engineering, I guess you have the answer. After trimming mine, I got very low levels of THD (for the period) between -108dB and -103dB, with per the books max SNR, and an exceptional linearity down to -110dB with less than 1dB deviation.


Procedure for gain adjustments between DACs and minimized THD

So… time has come to talk about the procedure to adjust the two trimming pots per DACs. The below is not so much complex but requires a lot of time, patience and dexterity. Be warned.

First, to get access to the trimming pots:

  • Remove the top cover
  • Remove the screws of the servo card
  • Lift the servo without unplugging it
  • Remove the copper plate below the servo
  • Position the servo car so it stays plugged in and allows access to the trimming pot (example given here)
  • The trimming pots are: for right channel VR601, VR602, VR605 and VR606, and for left channel VR501, VR502, VR505 and VR506
  • Take a picture of their current position so you can return pretty much to them in case you get lost
  • Connect a distortion analyzer at the output of the player, one that allows live reading of harmonic distortion changes, up to H5 minimum
  • Have a test CD to play a single sine tone around 1khz, from -6dBFS to 0dBFS AND from -12dBFS to -9dBFS
The adjustments must be performed per channel. The mentioned trimming pots are connected to bit 1 and bit2 adjust pin of the BB PCM58. Theoretically the bit 1 is the signed bit, but no longer with that architecture, that is why the standard procedure documented by BurrBrown does not apply (and does not work).

Let’s start with right channel:

  • Identify VR601 and VR605 and let’s balance the upper and lower sections of the DACs, with Bit1 adjust of the PCM58:
  • Play a 1kHz test tone near full scale (0fBFS, minimum -6dBFS).
  • Take a snapshot of the current distortion measurement so you have a view of the factory trimming results. You’ll now be trying to do better.
  • Rotate VR601 a very small step in one direction, look at the distortion pattern changes. It’s ok if it increases. You should see a change in H3 and H5, one will increase and the other will decrease.
  • Rotate VR605 the opposite direction and verify if the distortion pattern cancels (on H3 and H5). It must cancel a potential imbalance of distortion that you might have created in step c). This is to confirm upper and lower DACs gain balance.
  • If the resulting total distortion did not decrease compared to the snapshot you took at step 1.b, then rotate VR601 back to where it was, and a tiny bit further.
  • Rotate VR605 the opposite direction of what you did with VR601 in step 1.e. Check if the THD decreased.
  • Repeat all the above multiple times until you find a good balance which will show through reduced harmonics. The lack of balance will show itself with H3 and H5 and you need to land at a similar level for both.
  • You will also see that H2, as well as H4, are not so much impacted by the adjustments, but they will increase too if you go completely off track, at which point the THD will be too high, that means worse than -90dB.
  • If you get lost, use the picture you took of the trimming pots, go back to these, and start all over again.
  • Identify VR602 and VR606, and let’s further reduce distortion with high level signals, with Bit2 adjust of the PCM58:
  • Play a 1kHz test tone between -12dBFS and -6dBFS.
  • Take a snapshot of the current distortion measurement so you have a view of the starting point.
  • Rotate VR602 a very small step in one direction, look at the distortion pattern changes. It’s ok if it increases slightly. You should see a smaller change than at bit1 adjust, and of a different pattern.
  • Rotate VR606 the opposite direction and verify if the distortion pattern cancels. It must cancel a potential imbalance of distortion that you might have created in step 2.c.
  • If the resulting total distortion did not decrease compared to the snapshot you took at step 2.b, then rotate VR602 back to where it was, and a tiny bit further.
  • Rotate VR606 the opposite direction of what you did with VR602 in step 2.e. Check if the THD decreased.
  • Repeat all the above multiple times until you find a good balance which will show through reduced harmonics. The sweet spot is extremely tiny, so don’t be surprised if it takes time to get there, be patient.
Repeat the same with left channel using VR501, VR502, VR505 and VR506.

Eventually, you should obtain the below results

  • THD (without Noise) is below -100dB (from -6dBFS to 0dBFS) calculated on 9 harmonics.
  • THD (without Noise) is below -90dB @-12dBFS calculated on 9 harmonics.


Note for self: I think a voltage drop of 0V across R670 and R671 should indicated the null point of trimming 100k resistors, while a voltage drop of +-7.62mV would correspond to a 1LSB in current (because 1mA/131’072 * 1kOhm = 7.62mV, were +-1mA is the analog output range of the PCM58 and 2power18 / 2 = 131’072). To test later (this would be one more clue to the implemented architecture).
Very interesting and thought provoking post. Thank you!
 
Sorry everyone, I largely updated my previous message, especially with measurements, to show some results and benefits of the Denon LAMBDA conversion, and after having played with the MSB trimming too.
 
I highly appreciate your intensive tuning efforts on DENON DCD-3560; very much interesting and amazingly impressive descriptions. I have never encountered so "in depth" tuning article and info on DCD-3560 and Burr-Brown DAC processor of the era.

I will be talking about the 2 MSB (Most Significant Bit) adjustments of the Denon DCD-3560, which applies to the Denon DCD-3500RG as well. ....
As I shared in my above posts #8 and #21, I am still keeping my beautiful DCD-3500RG (purchased brand-new in 1992) in perfectly functioning shape (it has received overhaul maintenance at DENON twice!); I am therefore very excited and happy to know that your impressive tuning descriptions/guidance also apply to my DCD-3500RG.

Since now I seldom power-on and seldom actually use my DCD-3500RG (I have already ripped all of my ca. 2,000 CDs into my digital music library in SSD; ref. here), at present I dare not perform the intensive tuning you described.

Nevertheless, I definitely would like to continue keeping my wonderful, beautiful and respected DCD-3500RG as it is now on my audio cabinet, as shown in #8 and #21.

Thank you again for your wonderful posts and invaluable information thereof.
 
Denon DCD-3560 - Adjustment procedure for the BurrBrown trimming pots.

I will be talking about the 2 MSB (Most Significant Bit) adjustments of the Denon DCD-3560, which apply to the Denon DCD-3500RG as well. I can’t confirm everything I wrote here 100%, but many clues and my testing drove me to the understanding below. Should there be obvious mistakes in my text, please be so kind to let me know, so I can update it.

The PCM58P DACs’ MSB adjustment procedure for the Denon DCD-3560 is complex (that’s probably why it’s not documented by Denon), so don’t go for it unless you fully understand the below; otherwise you would only make things worse.

I think Denon had to add the trimming pots to balance the output of the two DACs per channel, considering the unique and complex conversion architecture, but not as a convenience for the end users to play with them trying to improve the THD, although that’s exactly what I did. I’m sharing this experience.


Introduction

The Denon DCD-3560 uses 4 mono converters from BurrBrown, the PCM58P in its most linear “K” version (with a typical THD of -100dB at full scale).

The PCM58P is an ancient R2R 18bits DAC, one of the best of its kind. At the time, BurrBrown offered laser trimming from the factory, so the achieved linearity was SOTA. To go further in pure performance, BurrBrown also offered optional manual trimming. The PCM58P was the first to allow manual MSB (Most Significant Bit) trimming to improve low-level precision, and bit2, bit3, bit4 trimming to lower the distortion near full scale.

The Denon DCD-3560 offers an undocumented possibility to trim bit1 (MSB) and bit2. But when I went for it, I realized it did not work for bit1, and I guess that is because of the specific and unusual DAC architecture. Denon did not provide a procedure because of the adjustment complexity (my guess), and anyway had to manually adjust the trimming pots before letting the CD Player leave the factory (else the distortion would have been way too high, per my testing).

The below addresses how to perform the trimming adjustments, based on detailed description of the DAC architecture and the function of the BurrBrown trimming pots in that context. It requires a very precise live distortion analyzer, able to display the harmonic content of the distortion, up to H5 minimum, and fast enough to monitor live the effects of the trimming. I personally used REW for the purpose, with a high precision ADC (a Cosmos E1AD or a Motu Mk5).


The basics

The PCM code uses a Binary Two’s Complement representation of the digital signal. It is a signed code, meaning that the first bit (aka MSB, for Most Significant Bit) distinguishes the positive from the negative side of the signal. The remaining bits encode the amplitude of the respective positive and negative sides, as shown below (source: Digital Audio and Compact Disc Technology - Sony Service Center Europe):

View attachment 419680

In this representation, you see that around zero, the Binary Two’s Complement (BTC) code sees all bits switching. You can also clearly see that the first bit on the left is the one of the sign, and that is the MSB.

In addition, the Binary Two’s Complement (BTC) code is asymmetrical. The lowest value is -32768 (Hex code 8000) and the highest is +32767 (Hex code 7FFF), because there is only one value that defines 0 (Hex code 0000). So the negative full scale (-FS) is one step larger in magnitude than the positive full scale (+FS).
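To make the all-bits-switching point concrete, here is a tiny Python sketch (my own illustration, using the 16-bit values of the Sony figure) printing the BTC bit patterns around zero:

```python
# Illustration only: 16-bit Binary Two's Complement codes around zero,
# showing that every single bit toggles between -1 and 0.
def btc16(value):
    """16-bit Binary Two's Complement code as a bit string."""
    return format(value & 0xFFFF, "016b")

for v in (-2, -1, 0, 1, 2):
    print(f"{v:+d} -> {btc16(v)}")
# -1 -> 1111111111111111 and 0 -> 0000000000000000: all 16 bits switch at once.
```

You can also verify the asymmetry with the same helper: `btc16(-32768)` is `1000000000000000` while the largest positive code is `0111111111111111`.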

Assuming someone would like to have a 2V peak to peak output from this DAC, we can calculate, for an 18bits DAC, that the smallest step would be 2V / 2power18 = 7.63µV. That means the negative full-scale value would be -1V, and the positive full-scale value would be 0.99999237V. The difference between two adjacent codes is called “1LSB”, and it is 7.63µV in this example.
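The arithmetic of this example can be checked in a couple of lines of Python (same assumed 2V peak-to-peak, 18bits):

```python
# Quick numeric check of the 1LSB example (assumed 2V peak-to-peak, 18-bit DAC)
bits = 18
vpp = 2.0
lsb = vpp / 2**bits          # one code step, ~7.63 microvolts
neg_fs = -vpp / 2            # -1.0 V (negative full scale)
pos_fs = vpp / 2 - lsb       # 0.99999237 V, one LSB short of +1 V
print(f"1LSB = {lsb*1e6:.2f} uV, +FS = {pos_fs:.8f} V")
```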

The BurrBrown PCM58P offered an optional manual MSB adjustment. It is indeed important to make sure that the MSB is precisely represented because it will influence the analog output at every crossing point to 0. Since the audio signal is made of sine waves that permanently go from negative to positive, it means they cross the zero point all the time. It makes a real difference with low level signals, around the bipolar zero (BPZ), less so with high level ones, since a 1LSB variation (or 7.63µV in the example) has much less importance with a full-scale signal than with very small levels. Yet, for maximum performance, we want the MSB to be very precisely represented. By the way, Philips had similar requirements before stamping their famous TDA1541A with a “Crown”, and one of them was that the MSB precision should be <0.5LSB.

As per BurrBrown’s instructions, the MSB adjustments of the PCM58P should be performed either by comparing the output voltages at +FS and -FS to ensure the 7.63µV (in my example) difference is obtained, or (simpler) by sending a low-level signal (eg -80dBFS) and adjusting the trimming pot live until minimum distortion is seen. The PCM58P offered 3 additional adjustments (for bit 2, 3 and 4), to minimize distortion with high level signals, and they had to be adjusted with a full-scale signal.

Unfortunately, as per my tests, the above adjustment procedure does not apply to the Denon DCD-3560, and that’s because of the way the BurrBrown DACs are implemented.


Denon “LAMBDA” - Theory of operation

Now that we understand the importance of the MSB and the need for precision, time has come to explain how Denon (with probable assistance of BurrBrown) wanted to resolve the MSB “issue” to improve the precision of the conversion at lower levels (their intention was to reach equivalent 20bits of resolution).

The problem, which they were not alone in identifying, was that for low level signals, the zero crossing point (called Bipolar Zero: BPZ) generated relatively high distortion compared to the signal to output. Even for the lowest symmetrical sine wave, where only the Least Significant Bit (LSB) should change, the MSB would change too. Because of the BTC code, all bits would change from 1 to 0 and vice versa. Any small MSB error, together with the potential glitch generated by all bits switching at once, was the concern.

Some DACs solved the problem with a digital offset (plus a compensating analog offset) to move away from BPZ. Shifting the code to a higher digital level, and therefore output, had the benefit of reducing the BPZ error. By shifting 10 codes higher, the BPZ error was made 10% smaller, which was smart.

But instead of bit shifting, Denon went the hard way of simply removing the zero crossing and the BPZ error. For that purpose, I understand they inserted a signal code change between the output of the oversampling filter (which was a standard NPC) and the input of the dual mono DACs per channel. They called it “Ladder-form Multiple Bias D/A”, aka LAMBDA. The high-level architecture is shown below:

View attachment 419686

Converging explanations in the documentation provided by Denon and BurrBrown, which I already reported here, make me think their idea was to split the Binary Two’s Complement code in two: the positive code, for the positive side of the signal, going to one DAC, and the negative code, for the negative side of the signal, going to another DAC, and that for each channel. The main objective was to remove the MSB change.

They did it by changing the PCM code as shown below (from the AES Convention paper: 20 Bit "Colinear" DAC, a Solution to Low Level Problems):

View attachment 419682

BurrBrown obviously contributed to this architecture as this is the same concept that was later released by them in the PCM63 and PCM1702, what they called “Colinear”. The above table comes from the AES convention paper written by Fred J. Highton and Toshio Murota from Atsugi Japan, which talks about the “new PCM63” with previews on achieved performances, and demonstrates the presence of a BB R&D center in Japan. You can also consult an old US patent describing the BB colinear concept: Modified sign-magnitude DAC and method, if you wish to be more into the details.

You can see, from the above figure, that the BTC code is changed to feed one “lower DAC A” and another “upper DAC B”. Since the code change allows for two representations of 0, it is necessary to account for the asymmetry of the initial BTC code. That is why there is +1LSB shown for the lower DAC.

Another representation of the same code change, can be found in the PCM63 datasheet which, again, was later released by BurrBrown. It might be easier to understand:

View attachment 419685

As you can see, with the new dual code, there’s simply no BPZ, by definition. In addition, the newly generated code is basically the same for the upper DAC A and lower DAC B. This is what BurrBrown leveraged in their subsequent DAC chips, as it simplifies the internals, reducing size and cost of production. But the Denon still uses pre-colinear DACs, the PCM58P, which only accept an unmodified BTC code. That’s why I think what Denon did is a simple split of the code to feed the two PCM58P per channel, not the full change shown above; but the idea is the same, and the point is to remove the BPZ in each DAC.
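For illustration only, here is a minimal Python sketch of the “simple split” I am hypothesizing here. The function name and the exact handling of zero are my own assumptions; the only grounded property is that the two paralleled outputs must sum back to the original sample:

```python
# Hypothetical sketch of the "simple split" idea: each BTC sample is divided
# so one DAC sees only the non-negative part and the other only the
# non-positive part. Neither DAC then crosses its own bipolar zero while
# the signal stays on one side of zero.
def split_btc(x):
    """Return (upper, lower) codes whose analog sum reconstructs x."""
    upper = x if x > 0 else 0   # upper DAC: positive half-wave only
    lower = x if x < 0 else 0   # lower DAC: negative half-wave only
    return upper, lower

for x in (-3, -1, 0, 2):
    u, l = split_btc(x)
    assert u + l == x           # paralleled outputs reconstruct the signal
```

The real implementation additionally has to handle the +1LSB asymmetry offset shown in the tables, which this sketch deliberately ignores.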

The downside of the method, for Denon, is of course the increased cost associated with needing two DACs per channel instead of one. Not only that, but the code change/split prior to feeding the DACs needed some specialized computation, which meant more components.

The PCM58-K in use in the Denon DCD-3560 pre-dates the PCM63/PCM1702, but Denon implemented the concept using a specific circuitry which they called “LAMBDA”. They used a specific gate array (undocumented TI CF37606 components and other logic gates) as shown below (from the service guide of the Denon DCD-3500RG), only one channel shown:

View attachment 419710

Each of the dual TI CF37606, as per the schema of the player, feeds the upper and lower PCM58P-K DACs per channel. They also drive a high-speed Toshiba switch at the output of the BB PCM58P-K, probably to shut the upper DAC output when necessary(?).

The dual DACs per channel are therefore fed with a split BTC code for the upper and lower part of the signal. The asymmetry of the initial signal had to be accounted for, as you see in the above table. Theoretically, the weight of 1LSB needs to be added to the lower DAC, at 0 and when the upper DAC is involved. That means the lower DAC should always be active, but upper DAC could be shut to reduce output noise (is it the reason for the high-speed switch at the output? I’m not sure). The two DACs output are paralleled to reconstruct the entire signal.

I think that another advantage of the above was to virtually increase by one bit the DAC’s resolution. The PCM58P was an 18bits DAC, but with this architecture, one less bit is required to be processed by the DAC, because the sign no longer needs to be part of the code, it simply becomes which DAC processes the lower or upper part of it. With two DACs processing half of the wave, and dropping the no longer necessary signed bit (the MSB), 19bits data can be processed with 18bits DACs. This, together with the oversampling filter running at 8x instead of 4x in the previous generations, was how Denon could claim a real 20 bits conversion.
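A back-of-the-envelope check of that bit-count reasoning (my own arithmetic, not Denon’s):

```python
import math

# Two 18-bit DACs, each covering one half-wave in sign-magnitude fashion:
# roughly 2 * 2**18 distinct output levels overall, i.e. ~2**19 (19 bits).
levels_single = 2**18
levels_dual = 2 * levels_single
print(math.log2(levels_dual))   # 19.0
```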

The downside is that this requires two DACs and more processing. In addition, the gain adjustment between the upper and lower DACs was critical for good operation, so as not to create massive amounts of non-linearity (and distortion). That’s the reason why Denon had to go for the most linear K version of the PCM58P, AND to manually set the trimming pots of the BB on top, before sending the Player to their proud (I hope) owners.

And voilà: if one doubted that the CD Players of this era deserved their reputation for over-engineering, I guess you have the answer. After trimming mine, I got very low levels of THD (for the period), between -108dB and -103dB, with by-the-book max SNR, and an exceptional linearity down to -110dB (with shaped dither) with less than 1dB deviation.


Procedure for gain adjustments between DACs and minimized THD

So… time has come to talk about the procedure to adjust the two trimming pots per DAC. The procedure below is not very complex, but it requires a lot of time, patience and dexterity. Be warned.

First, to get access to the trimming pots:
  • Remove the top cover
  • Remove the screws of the servo card
  • Lift the servo without unplugging it
  • Remove the copper plate below the servo
  • Position the servo card so it stays plugged in and allows access to the trimming pots (example given here)
  • The trimming pots are: for right channel VR601, VR602, VR605 and VR606, and for left channel VR501, VR502, VR505 and VR506
  • Take a picture of their current position so you can return pretty much to them in case you get lost
  • Connect a distortion analyzer at the output of the player, one that allows live reading of harmonic distortion changes, up to H5 minimum
  • Have a test CD that plays a single sine tone around 1kHz, from -6dBFS to 0dBFS AND from -12dBFS to -9dBFS
The adjustments must be performed per channel. The mentioned trimming pots are connected to the bit1 and bit2 adjust pins of the BB PCM58. Theoretically bit1 is the sign bit, but no longer with this architecture; that is why the standard procedure documented by BurrBrown does not apply (and does not work).

Let’s start with right channel:
  1. Identify VR601 and VR605 and let’s balance the upper and lower sections of the DACs, with Bit1 adjust of the PCM58:
  2. Play a 1kHz test tone near full scale (0dBFS, minimum -6dBFS).
  3. Take a snapshot of the current distortion measurement so you have a view of the factory trimming results. You’ll now be trying to do better.
  4. Rotate VR601 a very small step in one direction, look at the distortion pattern changes. It’s ok if it increases. You should see a change in H3 and H5, one will increase and the other will decrease.
  5. Rotate VR605 the opposite direction and verify if the distortion pattern cancels (on H3 and H5). It must cancel a potential imbalance of distortion that you might have created in step 4). This is to confirm upper and lower DACs gain balance.
  6. If the resulting total distortion did not decrease compared to the snapshot you took at step 3), then rotate VR601 back to where it was, and a tiny bit further.
  7. Rotate VR605 the opposite direction of what you did with VR601 in step 5). Check if the THD decreased.
  8. Repeat all the above multiple times until you find a good balance which will show through reduced harmonics. The lack of balance will show itself with H3 and H5 and you need to land at a similar level for both.
  9. You will also see that H2, as well as H4, are not so much impacted by the adjustments, but they will increase too if you go completely off track, at which point the THD will be too high, that means worse than -90dB.
  10. If you get lost, use the picture you took of the trimming pots, go back to these, and start all over again.
  11. Identify VR602 and VR606, and let’s further reduce distortion with high level signals, with Bit2 adjust of the PCM58:
  12. Play a 1kHz test tone between -12dBFS and -6dBFS.
  13. Take a snapshot of the current distortion measurement so you have a view of the starting point.
  14. Rotate VR602 a very small step in one direction, look at the distortion pattern changes. It’s ok if it increases slightly. You should see a smaller change than at bit1 adjust, and of a different pattern.
  15. Rotate VR606 the opposite direction and verify if the distortion pattern cancels. It must cancel a potential imbalance of distortion that you might have created in step 14).
  16. If the resulting total distortion did not decrease compared to the snapshot you took at step 13), then rotate VR602 back to where it was, and a tiny bit further.
  17. Rotate VR606 the opposite direction of what you did with VR602 in step 16). Check if the THD decreased.
  18. Repeat all the above multiple times until you find a good balance which will show through reduced harmonics. The sweet spot is extremely tiny, so don’t be surprised if it takes time to get there, be patient.
Repeat the same with left channel using VR501, VR502, VR505 and VR506.

Eventually, you should obtain the below results:
  • THD (without Noise) is below -100dB (from -6dBFS to 0dBFS) calculated on 9 harmonics.
  • THD (without Noise) is below -90dB @-12dBFS calculated on 9 harmonics.
Note for self: I think a voltage drop of 0V across R670 and R671 should indicate the null point of the 100k trimming resistors, while a voltage drop of ±7.63µV would correspond to 1LSB in current (because 1mA/131’072 * 1kOhm = 7.63µV, where ±1mA is the analog output range of the PCM58 and 2power18 / 2 = 131’072). To test later (this would be one more clue to the implemented architecture).


Measurements

I wanted to finalize this long text with some measurements showing the tangible benefits, not only of Denon’s idea of removing the digital zero crossing from the equation (the LAMBDA concept), but also of the MSB trimming activity that I enjoyed.

For the below measurements, I used my newly acquired Cosmos E1AD (Grade O) and its scaler. It is more precise than my previous Motu Mk5.

First, from a low-level representation perspective, I reused a measurement from Stereophile. It is a sine tone (997Hz) @-90.31dBFS, which is the smallest symmetrical signal we can represent with 16bits data (see More than we hear for more information). At this very low level, only 3 states can be represented: -1, 0 and 1. So you should see a band-limited square wave, with the typical ringing due to the Gibbs phenomenon. The below is the measurement with the Denon:
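Where the -90.31dBFS figure comes from: a ±1 LSB 16-bit signal has an amplitude of 1/32768 of full scale, which works out as follows:

```python
import math

# Amplitude of the smallest symmetrical 16-bit signal: +-1 LSB out of
# a +-32768-step full scale, expressed in dBFS.
level_db = 20 * math.log10(1 / 2**15)
print(round(level_db, 2))
```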

View attachment 420073

It's extremely precise and good. Of course, that means the Denon DCD-3560 does not suffer too much from low level noise. The 3 DC levels are very well represented. That level of precision was rare at the time. So the Denon delivers on the promise of low level precision around bipolar zero (BPZ).

Low level linearity should benefit from this as well. So, with the help of shaped dither noise added to the signal, we can go way below the theoretical -96dB of the Audio CD. Let's try with -110dBFS:

View attachment 420074

For this view, I calibrated the Cosmos to read -1.43dBFS when playing a 0dBFS test tone. At -110dBFS, we should therefore get -111.43dBFS, and we get -111.46dBFS. So the deviation is an incredibly low 0.03dB. The software computes 19bits of resolution in our most sensitive frequency range, before the noise shaping kicks in (6kHz). In these conditions, the Dynamic Range is 113dB; I'm tempted to say WOW :)
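The deviation arithmetic, spelled out:

```python
# With the ADC calibrated so that a 0dBFS tone reads -1.43dBFS,
# a -110dBFS tone should read -111.43dBFS.
cal_offset = -1.43
expected = -110 + cal_offset       # -111.43 dBFS
measured = -111.46                 # dBFS, actual reading
deviation = measured - expected
print(f"{deviation:.2f} dB")
```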

Last but not least, at full scale (999.91Hz @0dBFS, no dither), I got the below results:

View attachment 420079

Left and Right channels are shown, but only the Left channel is represented in the dashboard. The right channel has a little less noise and a little more distortion, but not at all levels, so they are close. The THD is as low as -107dB (0.0004%). The left channel keeps the distortion below -100dBr up to a -12dBFS signal, which is very good. The right channel does 5dB worse, but nobody will be able to hear that.

Last one: a view of the THD vs Frequency of the best channel of the Denon compared to the best channel of the Sony CDP-3337ESD, which uses a non-conventional conversion too, with two Philips TDA1541A fed with a staggered digital signal (THD vs Frequency @-12dBFS):

View attachment 420092

It speaks for itself.

Conclusion

As I already wrote: if you were suspicious (I was) about the ability of these overengineered Japanese CD players to make a tangible difference, or suspected it was just marketing BS, you now have the answer. What I measured is crazy good for a device that is 35 years old...

So far, it's the best R2R conversion I've measured, but it is a very unusual implementation we have here. Now let's try to find out if someone else did better (I'm searching for other contenders; the Denon is out of reach of all the other R2R type converters I own). I think it would be fun to compare to the subsequent BB PCM63P-K in a CD Player that allowed trimming bit1 and bit2 of the colinear conversion. There are not many...

I really enjoyed testing all of this. It was also quite a learning for me.
Hello everyone, first post here on ASR.
I recently imported a Pioneer Pd-5000 from Japan.
It uses 1x PCM58P-EX per channel. On inspection I was surprised to find it also has 4 MSB trimmer pots per channel.

This was Pioneer's flagship in 1989 and weighs over 15kg.

Could this be a contender ?
 

Attachments

  • P_20250206_142049.jpg
  • P_20250206_143022.jpg
  • P_20250206_145329.jpg
Yep, not often we get the 4 MSB trimming pots indeed. It's not the same architecture but it would be a probable contender once calibrated.
 