
Review and Measurements of CHORD Qutest DAC

jae

Major Contributor
Joined
Dec 2, 2019
Messages
1,208
Likes
1,508
Denafrips Ares 2 vs Chord Qutest? I have a Topping D10s, Drop THX 789, SMSL DA-9 and B&W 607 S2

Neither. Both are at best sidegrades or inaudibly different from your D10s, and at worst downgrades in various ways. If you need something with more inputs, balanced output, etc. and want to spend money, there are far better options out there. Topping's flagship is one example.
 

Purité Audio

Master Contributor
Industry Insider
Barrowmaster
Forum Donor
Joined
Feb 29, 2016
Messages
9,127
Likes
12,324
Location
London
Initially it is about detecting whether there is a difference; preference can come later.
Keith
 

reg19

Member
Joined
Jul 28, 2020
Messages
65
Likes
23
Initially it is about detecting whether there is a difference; preference can come later.
Keith

Agreed. And they did detect the difference correctly each time.
 

Purité Audio

Master Contributor
Industry Insider
Barrowmaster
Forum Donor
Joined
Feb 29, 2016
Messages
9,127
Likes
12,324
Location
London
I personally wasn't able to detect any difference between 'properly engineered' oversampling designs once level matched to 0.1 dB; I would buy on features and aesthetics.
Keith
 

reg19

Member
Joined
Jul 28, 2020
Messages
65
Likes
23
Use a multimeter

Why? Did you read what we were trying to achieve? We maintained the same SPL levels. I propose that if we changed the test a bit the second time around and changed the SPL levels by large amounts (1-3 dB), the testers would still be consistent in the characteristics attributed to 'DAC A' and 'DAC B'. The terms the testers threw around were that DAC A was 'more analytical', 'more clinical' and 'fatiguing', and DAC B was 'more refined' and 'musical'.

None of you are trying to answer the question I posted (what measurements point out the difference). Or should we perhaps postulate that the measurement set we currently use is necessary but not sufficient for pointing out the differences?

Edit: besides using those terms, 11 of the 15 testers also consistently described differences in soundstage (though most did not use or know that term). 'DAC A' had narrower, flatter staging and 'DAC B' was more 3D, such that the testers 'could point out where the musicians were'.
 
Last edited:

Jimbob54

Grand Contributor
Forum Donor
Joined
Oct 25, 2019
Messages
11,098
Likes
14,755
I personally wasn't able to detect any difference between 'properly engineered' oversampling designs once level matched to 0.1 dB; I would buy on features and aesthetics.
Keith

I hadn't realised even "good" SPL meters only have to be accurate to within a couple of dB. Given the changes I hear when I adjust EQ/preamp settings etc. by only half a dB or so, I see why voltage matching is so important.
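For anyone who wants the arithmetic behind that, here is a quick back-of-the-envelope sketch (the voltages are made up purely for illustration): the level difference follows from the ratio of the two output voltages, so a multimeter reading lets you match far more tightly than a couple-of-dB meter tolerance ever could.

import math

def level_difference_db(v1, v2):
    # Level difference in dB implied by two RMS output voltages
    return 20 * math.log10(v1 / v2)

# Hypothetical outputs differing by about 1.2% in voltage
print(round(level_difference_db(2.024, 2.000), 2))   # ~0.10 dB

# Voltage mismatch corresponding to a 1 dB level error
print(round((10 ** (1 / 20) - 1) * 100, 1))          # ~12.2 %

In other words, matching outputs to within a few millivolts keeps any residual level difference an order of magnitude below the couple-of-dB meter tolerance mentioned above.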
 

Jimbob54

Grand Contributor
Forum Donor
Joined
Oct 25, 2019
Messages
11,098
Likes
14,755
Aaaaaaaand here we go again.
 

Purité Audio

Master Contributor
Industry Insider
Barrowmaster
Forum Donor
Joined
Feb 29, 2016
Messages
9,127
Likes
12,324
Location
London
Why? Did you read what we were trying to achieve? We maintained the same SPL levels. I propose that if we changed the test a bit the second time around and changed the SPL levels by large amounts (1-3 dB), the testers would still be consistent in the characteristics attributed to 'DAC A' and 'DAC B'. The terms the testers threw around were that DAC A was 'more analytical', 'more clinical' and 'fatiguing', and DAC B was 'more refined' and 'musical'.

None of you are trying to answer the question I posted (what measurements point out the difference). Or should we perhaps postulate that the measurement set we currently use is necessary but not sufficient for pointing out the differences?
The differences you heard and your subjective interpretations of them are purely imaginary, but if you prefer one over the other, use that DAC. I find the RME's ability to correct bass peaks (and its plethora of other useful features) invaluable.
Keith
 

reg19

Member
Joined
Jul 28, 2020
Messages
65
Likes
23
The differences you heard and your subjective interpretations of them are purely imaginary,
Keith

And you concluded this from our DBT? If not, then how did you conclude?
I was not one of those testers.
 

jae

Major Contributor
Joined
Dec 2, 2019
Messages
1,208
Likes
1,508
Why? Did you read what we were trying to achieve? We maintained the same SPL levels. I propose that if we changed the test a bit the second time around and changed the SPL levels by large amounts (1-3 dB), the testers would still be consistent in the characteristics attributed to 'DAC A' and 'DAC B'. The terms the testers threw around were that DAC A was 'more analytical', 'more clinical' and 'fatiguing', and DAC B was 'more refined' and 'musical'.

None of you are trying to answer the question I posted (what measurements point out the difference). Or should we perhaps postulate that the measurement set we currently use is necessary but not sufficient for pointing out the differences?

My fault for chiming in randomly; I didn't read the back and forth in its entirety and did not realise using a meter had more or less already been suggested. I was just alluding to the fact that SPL meters are not the appropriate tool in this case, nor accurate enough for the task of objective volume matching.

It is a known fact that human beings can discern relative differences in SPL (fractions of a decibel) well within the range of error of any SPL meter you were likely to use (a decibel or greater). It is also a known fact that SPL and preference are highly correlated; level is one of the best predictors of subjective preference apart from raw frequency response, owing to human anatomy and the way the brain processes auditory signals. Since both these products have been reviewed on this site and the expected frequency response and other signal differences between the two are more or less negligible, it seems volume is almost certainly the culprit unless something else was malfunctioning.

I am simply applying Occam's razor here. You cannot reasonably conclude that some unmeasured or immeasurable thing is the reason for the difference when you did not properly control for a measurable one that we know can cause a difference (SPL). Your methodology for controlling SPL was error-prone in the first place, which is why others made that suggestion.
 

DSJR

Major Contributor
Joined
Jan 27, 2020
Messages
3,387
Likes
4,522
Location
Suffolk Coastal, UK
My comment regarding utterly precise level matching (which I don't believe you can do as accurately with a microphone and SPL meter) was to do with A-B switching. A relaxed (hopefully) demo at an audio dealer's place is different.

I do applaud the fact that some attempt was made to level match, but the question here is as much about the accuracy of that matching. Sorry to be anal here, but I've done similar myself and know beyond any shadow of doubt, in my case anyway, just how easily fooled we are - and it only takes half a dB on ONE CHANNEL (all else identical) to create confusion in an A-B test. I'm still stung by the 'difference' a Chord M-Scaler appeared to make before I knew how the levels were changed by passing through its processing...
 

reg19

Member
Joined
Jul 28, 2020
Messages
65
Likes
23
My fault for chiming in randomly; I didn't read the back and forth in its entirety and did not realise using a meter had more or less already been suggested. I was just alluding to the fact that SPL meters are not the appropriate tool in this case, nor accurate enough for the task of objective volume matching.

It is a known fact that human beings can discern relative differences in SPL (fractions of a decibel) well within the range of error of any SPL meter you were likely to use (a decibel or greater). It is also a known fact that SPL and preference are highly correlated; level is one of the best predictors of subjective preference apart from raw frequency response, owing to human anatomy and the way the brain processes auditory signals. Since both these products have been reviewed on this site and the expected frequency response and other signal differences between the two are more or less negligible, it seems volume is almost certainly the culprit unless something else was malfunctioning.

I am simply applying Occam's razor here. You cannot reasonably conclude that some unmeasured or immeasurable thing is the reason for the difference when you did not properly control for a measurable one that we know can cause a difference (SPL). Your methodology for controlling SPL was error-prone in the first place, which is why others made that suggestion.


I guess you still did not read what we were trying to do.

We were not evaluating preference.

We were testing to see whether one DAC could be identified to sound different from another in a DBT irrespective of preference.

Not only did ALL the testers identify it, their comments were consistent with each other and also consistent with the general sentiments about the two DACs found online.

And, my point is that the testers' comments regarding characteristics would be the same even if we changed volume by 2-3 dB.

Why don't you specify EXACTLY how you'd conduct the test? If I've not yet sold my ADI-2, we can try to do it.
 

jae

Major Contributor
Joined
Dec 2, 2019
Messages
1,208
Likes
1,508
I guess you still did not read what we were trying to do.
We were not evaluating preference.
We were testing to see whether one DAC could be identified to sound different from another in a DBT irrespective of preference.
Not only did ALL the testers identify it, their comments were consistent with each other and also consistent with the general sentiments about the two DACs found online.
And, my point is that the testers' comments regarding characteristics would be the same even if we changed volume by 2-3 dB.
Why don't you specify EXACTLY how you'd conduct the test? If I've not yet sold my ADI-2, we can try to do it.

OK, you can replace "preference" with "difference" and it changes absolutely nothing about my explanation.

I did. I own both DACs (RME for 6 months and the Chord for a month). We conducted DBTs for 15 people over the past two weeks.

Speakers: Focal Sopra 3, Amp: McIntosh MC402. No preamp (volume changed digitally on Roon / HQPlayer).

We did select the tracks, though, and played the same track on both DACs, volume matched. My friend (who helped conduct this) is a retired salesperson from this business.

Though the audience/testers did not even know the names of the DACs they were hearing, all could correctly identify which DAC ('DAC A' or 'DAC B') was playing. And everyone was consistent in the characteristics of each.

My point is not ‘told you so’. It is to question specifically what measurements (of the ones published) can bring out the differences between the two.

Both are similarly priced ($1.2k-$1.7k). I am thinking of buying a $4k discrete R2R DAC that has similarly good measurements (the Holo May). However, this would be without listening to it (it ships directly to the US from China). So, from the measurements published on ASR, what difference should I expect to hear from the other two?

Is this the post/test you are referring to? You would have to explain exactly how the test was conducted from start to finish if you want any sort of explanation of what could have produced your results or what could have been done better to gain more accurate ones.

Did you have 15 different audiences, or 15 individuals tested in one audience? Was it conducted using ABX methodology? Were the listening position and experience of each individual identical? Were these individuals allowed to communicate with each other? Were participants given feedback on their answers during the test, or ever unblinded? How were the responses recorded? Was the tester manually swapping hardwired inputs? Which parties were blinded, and how? Was the auditory stimulus for each DAC played for an equal amount of time? Was the delay between each stimulus identical regardless of DAC? Was the stimulus order randomised sufficiently? Were enough trials completed to determine statistical significance?

If volume accuracy was something you overlooked, there is the possibility of many other things you've overlooked. Just food for thought. There really are endless ways your test could have given you inaccurate data that you're now using to draw inaccurate conclusions.
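On that last question about statistical significance, here is a rough sketch of the kind of check that applies (my own illustration, assuming a simple forced choice with a 50% guess rate per trial, and with the trial counts picked only as examples):

from math import comb

def p_value(correct, trials, chance=0.5):
    # One-sided binomial probability of getting at least `correct`
    # answers right out of `trials` by guessing alone
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(correct, trials + 1))

# Example runs, purely illustrative
print(p_value(15, 15))   # all 15 correct: ~0.00003
print(p_value(9, 15))    # 9 of 15 correct: ~0.30, easily explained by chance

And even a very low p-value only tells you the listeners heard something; it says nothing about whether that something was the DACs themselves or a level mismatch.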
 
Last edited:

Chester

Senior Member
Joined
May 22, 2021
Messages
442
Likes
1,068
I guess you still did not read what we were trying to do.

We were not evaluating preference.

We were testing to see whether one DAC could be identified to sound different from another in a DBT irrespective of preference.

Not only did ALL the testers identify it, their comments were consistent with each other and also consistent with the general sentiments about the two DACs found online.

And, my point is that the testers' comments regarding characteristics would be the same even if we changed volume by 2-3 dB.

I think the point being made is that your testers probably could tell the DACs apart, regardless of preference: they could tell that one was slightly louder than the other.

Slightly louder can also often be perceived as ‘different’ sounding.

As has been said, all credit to you for going to this effort but you should probably be open to the potential flaws in the test, if you are genuinely interested in this stuff.
 

MaxBuck

Major Contributor
Forum Donor
Joined
May 22, 2021
Messages
1,544
Likes
2,203
Location
SoCal, Baby!
I'm open to the notion that similarly excellent DACs can interact with either upstream or downstream components sufficiently that audible differences could occur. I'm not knowledgeable enough to evaluate whether SPL metering is sufficient to A-B this properly. I'm dubious about whether using a voltmeter would really improve the test.
 

sjeesjie

Active Member
Joined
Aug 16, 2020
Messages
238
Likes
133
Question about the Qutest. Sometimes mine seems to pick the wrong sample rate, and then the sound that comes out is quite weird and tinny. The porthole is yellowish instead of red.

I always switch the unit off whenever I'm not using it. It's converting audio from an iPad running Spotify over USB. Turning it off and on again usually helps, but I'm worried something's wrong with the unit itself.
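For what it's worth, a rough sketch of why a rate mismatch can sound so strange (my own illustration, assuming the samples are simply being clocked out at a different rate than the source intended, with no resampling in between):

import math

def pitch_shift_semitones(source_rate, playback_rate):
    # Semitones of pitch shift if audio sampled at source_rate is
    # played back at playback_rate without resampling
    return 12 * math.log2(playback_rate / source_rate)

# Hypothetical mismatches, purely illustrative
print(round(pitch_shift_semitones(44_100, 48_000), 2))  # ~1.47 semitones sharp
print(round(pitch_shift_semitones(44_100, 88_200), 2))  # a full octave (12.0) up

Whether that is actually what is happening inside the Qutest over USB I can't say, but it is one mechanism that would produce exactly that weird, tinny character.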
 

Angsty

Major Contributor
Forum Donor
Joined
Apr 11, 2020
Messages
1,899
Likes
2,266
Location
North Carolina, U.S.
Question about the Qutest. Sometimes mine seems to pick the wrong sample rate, and then the sound that comes out is quite weird and tinny. The porthole is yellowish instead of red.

I always switch the unit off whenever I'm not using it. It's converting audio from an iPad running Spotify over USB. Turning it off and on again usually helps, but I'm worried something's wrong with the unit itself.
You're probably right. If the unit were not syncing at the correct sample rate, it should simply not work at all; it's not like you can be merely close to the right frequency with digital, the way you could with analogue radio. I'd suggest there is a broader firmware problem with the unit and it will need to be repaired or replaced.

To see how broad the issue is, try connecting another digital source, like a TV or CD player, and check whether the unit works properly with it.
 

sjeesjie

Active Member
Joined
Aug 16, 2020
Messages
238
Likes
133
You're probably right. If the unit were not syncing at the correct sample rate, it should simply not work at all; it's not like you can be merely close to the right frequency with digital, the way you could with analogue radio. I'd suggest there is a broader firmware problem with the unit and it will need to be repaired or replaced.

To see how broad the issue is, try connecting another digital source, like a TV or CD player, and check whether the unit works properly with it.
Well, I believe it happens when I switch back and forth with another DAC, so it might be the iPad not really knowing which driver to use? Or would that also mean it wouldn't work at all?
I'll try what you suggested: connect a TV as well, and if I see the problem there, I'll switch over to the other device.
I bought the unit secondhand, so repairing it would be a pain…
 

CR6

New Member
Joined
Oct 17, 2021
Messages
1
Likes
4
I’ve had experience of a Chord Qutest failing and it wasn‘t pretty. I bought a Qutest from Peter Tyson UK and it failed after 9 months, PT fobbed me off continuously for 6 weeks “ i‘ve emailed Mitch at Chord”. I’m not sure what Mitch at Chord does but in my experience it sure ain’t fixing things. This is a premium priced product and should not fail, if the unexpected happens it should be resolved swiftly.
Newer, less well known disrupters surely deserve a crack if this is what you receive from the premium, blue ribbon companies that fail to back up their supposed reputations.
 