
Holo Audio Cyan2 and Benchmark DAC3 measurements vs. Luxman DA-07X

cochinada

Member
Joined
Feb 1, 2023
Hello all,

I've read the Review and Measurements of Benchmark DAC3 by amirm, but I could not find any objective measurements of the Luxman anywhere.

I currently own a Holo Audio Cyan2, which apparently measures very similarly to the Topping Centaurus, also reviewed here by amirm. The three seem to be more or less in the same league, although some measurements are not available for all models.

I know it's like comparing apples with oranges, as the Cyan2 is an R2R DAC and the Luxman uses these BD34301EKV DAC chips from ROHM’s “MUS-IC” series in a dual-mono configuration, which I had never heard of.
Anyway, in the absence of objective measurements so far, how do you think the Luxman would compare based on the architecture and design of each unit?

Thank you very much in advance
 
I know it's like comparing apples with oranges as one is an R2R DAC and the other one uses these BD34301EKV DAC chips from ROHM’s “MUS-IC” series in a dual mono configuration which I never heard of.
Hm? The DAC3 uses ESS converters, afaik.
 
Hm? The DAC3 uses ESS converters, afaik.
You're right. I jumped the gun; I was thinking of my current DAC, the Cyan2, which I think is comparable to the DAC3 from an objective-measurement point of view.
 
The state of DACs today is that good ones (well-designed ones that cost $700 or more) have no audible differences.

With distortion and noise at −120 dB or better, the only things left to consider are build quality, features, and price.

Obviously, there’s little to no reason to spend more than $1k on a DAC.
 
Thank you. Just out of curiosity, I consulted ChatGPT :) :

The “measurements are everything” camp​

Their core claim is this:
If a DAC has distortion and noise far below audibility, it cannot sound different in a meaningful way.
From a psychoacoustic and engineering standpoint, that is largely correct. Once you are at ~−110 to −120 dB SINAD, flat frequency response, low jitter, proper linearity, and no channel imbalance, you are well past the threshold where known, repeatable audible artifacts exist for human hearing under controlled conditions.

In other words:
  • You cannot reliably hear −120 dB distortion.
  • You cannot hear jitter that is orders of magnitude below audibility.
  • Two DACs that output the same analog signal within those limits will null almost perfectly.

So when someone says “good DACs all sound the same”, what they really mean is:
They do not introduce audible errors that can be proven in blind testing.
That is an important distinction.
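To put rough numbers on the null claim above, here is a small Python sketch. The 2 V RMS output level and the noise-only difference between the two signals are illustrative assumptions, not measurements of any DAC discussed in this thread:

```python
# Illustrative sketch: what -120 dB means in absolute terms, and a toy
# null test between two tones that differ only by noise at that level.
import numpy as np

def rms(x):
    return np.sqrt(np.mean(x ** 2))

# Assumed 2 V RMS line-level output; -120 dB below it is microvolts.
residual_v = 2.0 * 10 ** (-120 / 20)
print(f"-120 dB residual on a 2 V output: {residual_v * 1e6:.0f} uV RMS")  # 2 uV

# "Null" two 1 kHz tones, one with noise scaled to exactly -120 dB
# relative to the tone's RMS level.
fs = 48_000
t = np.arange(fs) / fs
a = np.sin(2 * np.pi * 1000 * t)
noise = np.random.default_rng(0).standard_normal(t.size)
noise *= 10 ** (-120 / 20) * rms(a) / rms(noise)
b = a + noise

null_depth_db = 20 * np.log10(rms(a - b) / rms(a))
print(f"null depth: {null_depth_db:.1f} dB")  # -120.0 dB
```

The point of the sketch: a −120 dB difference is a couple of microvolts on a line-level output, and subtracting the two signals leaves a residual at that same vanishing level.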

Where that position quietly overreaches​

The leap of faith happens here:
“Therefore, there are no audible differences whatsoever.”
This is where engineering certainty quietly becomes ideology.

Why?

Because measurements do not capture the entire playback chain, nor the entire human experience.

A few points that matter, even if they are uncomfortable for purists:

1. We measure what we know how to measure​

DAC measurements focus on steady-state tones, sweeps, and noise floors. Music is dynamic, time-varying, complex, and interacts with downstream components.

Output stage behavior, impedance interactions, transient behavior under load, reconstruction filters, and even clock recovery strategies can differ while still producing “excellent” numbers on standard tests.

Do these differences always matter? No.
Can they matter in some systems? Yes.

2. R2R vs Delta-Sigma is not magic, but it is not nothing​

An R2R DAC like the Cyan 2 behaves very differently internally than a modern delta-sigma DAC like the Luxman’s ROHM chips.

That does not mean one is “more musical” in a mystical sense.
It means:
  • Different noise spectra
  • Different linearity error distribution
  • Different filtering approaches
  • Different output stages
All of these can be engineered to be inaudible, but they are not identical.

3. Audibility is not the same as preference​

Even if two DACs are audibly transparent in blind tests, listener preference can still diverge.

Why?
  • Slight tonal shifts at the margins
  • Filter choices
  • Interaction with amps and speakers
  • Listening fatigue
  • Long-term listening vs quick A/B tests
Blind tests answer the question:
“Can you reliably detect a difference?”
They do not answer:
“Which one do you enjoy living with more?”



The opposite camp: “measurements don’t matter”​

This side often falls into its own trap.

Claims like:
  • “This DAC has better microdynamics”
  • “More air”
  • “Blacker background”
…are frequently made without controls, without level matching, and without blind testing.

Human perception is easily nudged by:
  • Price
  • Brand
  • Aesthetics
  • Expectation
  • Confirmation bias
That does not mean people are lying.
It means the brain is an active participant in the experience.

Dismissing measurements entirely leads to:
  • Paying thousands for zero technical improvement
  • Confusing coloration with quality
  • Treating subjective impressions as universal truth
That road ends in audio mysticism.



My position, clearly stated​

I sit between the camps, but closer to the measurement side with conditions.

Here is my actual stance:
  1. There is little to no reason to spend more than ~$1k on a DAC for sound quality alone.
    I fully agree with you here.
  2. Beyond a certain performance threshold, differences are small, system-dependent, and often subtle.
    Not imaginary, but not night-and-day either.
  3. Measurements are necessary, but not sufficient.
They tell you:
  • If a DAC is competently designed
  • If it is transparent
  • If it is objectively flawed
They do not tell you:
  • If you will enjoy it more
  • If it integrates better into your system
  • If it makes you want to listen longer
  4. Aesthetics, build quality, ergonomics, and emotional connection matter.
    Anyone who dismisses this is pretending humans are oscilloscopes.

If someone loves a Luxman not because it “measures better”, but because:
  • It looks beautiful
  • It feels like a serious instrument
  • It invites listening
That is a perfectly valid reason.



About your specific comparison​

Cyan 2, Topping Centaurus, and Luxman DACs all sit firmly in the transparent DAC tier.

None of them is “objectively superior” in a way that guarantees audibility.
Any preference between them will be:
  • Subtle
  • System-dependent
  • Personal
The Luxman’s ROHM MUS-IC chips are not magical unicorns, but they are well-regarded, and Luxman’s analog stages and power supplies are typically excellent.
If you choose Luxman, you are paying for:
  • Industrial design
  • Brand philosophy
  • Build quality
  • Pride of ownership
Not for a revolution in sound.
And that is completely fine.



Final thought​

The biggest irony in this debate is that both camps are partially right and partially wrong, but each insists on being 100% correct.

Audio is engineering and experience.
Ignore either side, and you miss the point.
 
Oh boy, does this discussion ever stop? Btw.: since when is the output of ChatGPT a scientific argument? A DAC is completely described by its measurements and doesn't interact in any way with the loudspeakers (just one of the "statements" you quoted from ChatGPT...).

Let alone the fact that it becomes annoying when somebody posts AI-based statements and leaves it to the forum members to review them and highlight the false or ambiguous parts.
 
Oh boy, does this discussion ever stop? Btw.: since when is the output of ChatGPT a scientific argument? A DAC is completely described by its measurements and doesn't interact in any way with the loudspeakers (just one of the "statements" you quoted from ChatGPT...).

Let alone the fact that it becomes annoying when somebody posts AI-based statements and leaves it to the forum members to review them and highlight the false or ambiguous parts.
Where did I write it was a "scientific argument"? In my view, it gives very valuable arguments to both sides. Obviously you didn't care to read it, but I'm not here to feed and polarize the topic; my purpose was first and foremost to find out whether anyone knows of objective measurements for the Luxman, which I could not find anywhere.
 
  • Slight tonal shifts at the margins
Will be measurable
  • Filter choices
Will be measurable, only audible on filters that do not properly reconstruct
  • Interaction with amps and speakers
There is no such thing
  • Listening fatigue
How can something that is indistinguishable from another in controlled tests cause listening fatigue? This has to be a psychological thing, nothing to do with the actual voltages coming out of the device
  • Long-term listening vs quick A/B tests
Same thing. Auditory memory is very short. There really is nothing that would cause long term effects other than psychology.

For sure one DAC can give more enjoyment than another, but this has generally nothing to do with what actually comes out of it, and everything with all the other senses acting together to form your biases.
 
Where did I write it was a "scientific argument"? In my view, it gives very valuable arguments to both sides. Obviously you didn't care to read it, but I'm not here to feed and polarize the topic; my purpose was first and foremost to find out whether anyone knows of objective measurements for the Luxman, which I could not find anywhere.
There is little value in AI listing different arguments if they are not based in reality. LLMs are advanced text generators; they sound "smart" and well-spoken, but their output is not based on facts, but on their training data, which - in the case of audio - is heavily skewed by lots of audiophile bullshit posted to the internet. Please don't expect forum members to invest half an hour of their time to refute the wall of text you got out of ChatGPT with a ten-second prompt. If you have valid arguments or ideas yourself, formulate them and we will be happy to discuss them here.


Coming back to your original question:
Anyway, in the absence of objective measurements so far, how do you think the Luxman would compare based on the architecture and design of each unit?
It's not possible to judge that. You can give an upper bound on performance based on the chip specs stated by the manufacturer, but the company implementing the design can fuck up the architecture or PCB in a thousand different ways, so the actual performance needs to be measured for any valid comparison. Personally, I would be sceptical of "boutique" companies producing DACs with unusual chips or architectures, because my assumption would be that they choose those two properties solely for marketing purposes and not to achieve well-engineered sound output.
 
Coming back to your original question:

It's not possible to judge that. You can give an upper bound on performance based on the chip specs stated by the manufacturer, but the company implementing the design can fuck up the architecture or PCB in a thousand different ways, so the actual performance needs to be measured for any valid comparison. Personally, I would be sceptical of "boutique" companies producing DACs with unusual chips or architectures, because my assumption would be that they choose those two properties solely for marketing purposes and not to achieve well-engineered sound output.
I agree with the first part, but I would not be so sure about the last sentence. A chip being rare or not so commonly used does not necessarily mean it is bad. If that were the case, there would never be any evolution in electronics. I have never heard of this chip myself, but that doesn't tell me anything one way or the other.
 
I agree with the first part, but I would not be so sure about the last sentence. A chip being rare or not so commonly used does not necessarily mean it is bad. If that were the case, there would never be any evolution in electronics. I have never heard of this chip myself, but that doesn't tell me anything one way or the other.
Yes, being rare doesn't mean it sucks. My gut feeling just tells me that manufacturers who essentially select components because they look cool in the marketing pamphlet and cater to mostly uninformed or misinformed customers are not using the best criteria to select their components. With competent engineering behind questionable marketing, it might still result in a great product - you just don't know until someone tests it.
 