I see it as a treat every time I find an album I like in the DSD format. I prefer my DAC to follow the DSD specification when I do, and not pop annoyingly.
Yes, but there are precise technical reasons why there is often a pop between tracks in DSD. Either you have a single DSD master and you chop it into tracks: then you get a pop when you start a track in isolation, i.e. not following the previous one. Or each track is its own master, and there may be a DC-level mismatch baked in between the end of one track and the beginning of the next. There are workarounds to avoid those pops, but they usually involve adding some samples between the tracks. Still, solutions exist, and shame on those who do not apply them.
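The effect is easy to see in a sketch. This is a hypothetical illustration using PCM-style floating-point samples (in an actual DSD workflow the fade would be applied before the 1-bit modulation): starting playback at a nonzero sample value is a step from digital silence, which is heard as a click, and a few milliseconds of inserted fade-in samples remove it.

```python
import numpy as np

FS = 44_100  # sample rate in Hz; illustrative value

# A track sliced out of a longer master: playback may begin at a
# nonzero instantaneous value, i.e. a step up from digital silence.
t = np.arange(FS) / FS
track = 0.5 * np.sin(2 * np.pi * 440 * t + 1.2)  # phase offset -> nonzero first sample

print(f"step at track start: {abs(track[0]):.3f}")  # audible click if played as-is

# Workaround: a short fade-in (equivalently, inserted ramp samples
# between tracks) removes the discontinuity without audible effect.
fade_len = int(0.005 * FS)  # 5 ms
faded = track.copy()
faded[:fade_len] *= np.linspace(0.0, 1.0, fade_len)
print(f"step after fade-in: {abs(faded[0]):.3f}")
```

The rest of the track is untouched; only the first few milliseconds are ramped, which is exactly the "adding some samples" workaround mentioned above.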
Note that a modern DAC converts everything internally to a multi-bit DSD-like representation anyway, PCM included. A native 1-bit signal will either stay single-bit (in which case it does not use the full potential of the DAC itself) or be processed and converted into a multi-bit format before the digital-to-analog step.
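To make the "DSD-like representation" concrete, here is a minimal first-order delta-sigma modulator. This is only a sketch of the principle, not what any real DAC chip runs (those use higher-order, multi-bit modulators): it turns a PCM signal with values in [-1, 1] into a 1-bit stream of +1/-1 whose local average tracks the input.

```python
import numpy as np

def delta_sigma_1bit(x):
    """Sketch of a first-order delta-sigma modulator: PCM in, 1-bit out."""
    out = np.empty_like(x)
    integrator = 0.0
    prev = 0.0
    for i, sample in enumerate(x):
        integrator += sample - prev          # accumulate the error vs. the fed-back output
        out[i] = 1.0 if integrator >= 0.0 else -1.0
        prev = out[i]
    return out

# Encode a slow sine; a simple moving average of the bitstream recovers it,
# because the quantization noise has been pushed to high frequencies.
n = 4096
x = 0.5 * np.sin(2 * np.pi * 4 * np.arange(n) / n)
bits = delta_sigma_1bit(x)
recovered = np.convolve(bits, np.ones(64) / 64, mode="same")
print(f"max reconstruction error: {np.max(np.abs(recovered - x)):.3f}")
```

The moving average stands in for the analog lowpass at the DAC's output; the point is that the 1-bit stream only represents the audio after that filtering.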
By the way, you claim that the data should not be processed or oversampled. In fact, this MUST be done to get a proper result, and this was known since before the beginning of the digital era. The bits of the carrier format are a representation of how you expect the waveform to be, but they are not the waveform, and converting them in a NOS way does NOT reproduce it: mathematically you MUST first upsample with a proper filter and only then pass through the digital-to-analog circuitry, to reduce the spurious effects of the latter. This is something that should not even be up for debate, like the fact that the earth is round and not flat and that vaccines do not cause autism. The proper way to convert IS to use aggressive oversampling (in the DAC itself, for instance), to avoid an attenuation of the treble and the introduction of distorted spectral images of the original signal mirrored around the Nyquist frequency.
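Both effects can be demonstrated numerically. In this sketch (rates and filter length are illustrative choices, not anything from a particular DAC), "NOS" is modeled as a zero-order hold, i.e. each sample simply repeated at the higher rate, while the proper path zero-stuffs and then applies a windowed-sinc lowpass cutting at the original Nyquist frequency. The spectrum of the NOS output shows a strong image of a 10 kHz tone mirrored to 38 kHz, plus droop of the tone itself; the filtered path suppresses the image.

```python
import numpy as np

FS, OS = 48_000, 4      # base rate and oversampling factor (illustrative)
N = 4800
f0 = 10_000             # a treble tone, below the 24 kHz Nyquist limit
x = np.sin(2 * np.pi * f0 * np.arange(N) / FS)

# NOS-style conversion: zero-order hold (each sample repeated),
# with no interpolation filter at all.
nos = np.repeat(x, OS)

# Proper conversion: zero-stuff to the higher rate, then lowpass with a
# windowed-sinc filter whose cutoff is the ORIGINAL Nyquist frequency.
stuffed = np.zeros(N * OS)
stuffed[::OS] = x
t = np.arange(255) - 127
h = np.sinc(t / OS) * np.hamming(255)   # cutoff FS/2, passband gain ~= OS
proper = np.convolve(stuffed, h, mode="same")

def mag_at(sig, f, fs):
    """Magnitude of the Hann-windowed spectrum at frequency f."""
    n = len(sig)
    spec = np.abs(np.fft.rfft(sig * np.hanning(n))) / n
    return spec[round(f * n / fs)]

for name, sig in (("NOS", nos), ("oversampled+filtered", proper)):
    fund = mag_at(sig, f0, FS * OS)        # the wanted 10 kHz tone
    image = mag_at(sig, FS - f0, FS * OS)  # its mirror image at 38 kHz
    print(f"{name}: image level = {image / fund:.3f} of the fundamental")
```

The zero-order hold's sinc-shaped response is exactly why NOS output shows both the treble attenuation and the images: it attenuates the images only mildly and droops the top of the audio band at the same time, whereas a proper interpolation filter does the job the hold cannot.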