
Cost no object dac/streamer

andreasmaaan

Master Contributor
Forum Donor
Joined
Jun 19, 2018
Messages
6,652
Likes
9,410
I am sure you have extensively listened to the Bartok and compared it with the Benchmark to be so sure. After all this is the audiosciencereview forum.
:)

I am an engineer so the "scientific" bit of this forum is something that I appreciate a lot. I am also fascinated what level of audio performance is possible today at very reasonable cost. I have a Chord Mojo for portable use that I am very happy with.
However, measurements are not everything. We can only measure certain parameters, and measuring instruments have limited resolution. Claiming that every device that measures the same sounds the same is as wrong as dismissing measurements and seeking justifications for why a device that measures badly "sounds good".
Having said that, my purchase of a dCS DAC was preceded by four lengthy single-blind listening sessions with two friends. In the given circumstances (system, room, etc.) there were significant differences in the sound of the different DACs. We did not listen to the Benchmark DAC3, so I cannot say how it would have compared. But for me, the dCS Rossini was the DAC that most gave me the impression of being at a live concert, hearing music and not hi-fi. It achieved this with both PCM and DSD material. Some of the comparison DACs were equally good with either PCM or DSD, but not both. The dCS was the best across both formats.
The Bartok is half the cost of the Rossini and 4-5x the cost of the Benchmark (not 20x). It has a Roon Ready streamer, is MQA capable, is future-proofed through firmware upgrades, and I would expect a useful life of more than 10, probably 20, years. Everyone must decide for themselves whether it is worth the money - and not everyone will come to the same conclusion.
BTW, I do not disagree with MZKM that room treatment is probably the single biggest return per dollar invested in an audio system. But the question here was about DAC/streamers, not room treatment.

Which other DACs did you listen to in this blind test?

Did you always know whether you were listening to “DAC A” or “DAC B” etc? In other words, was “DAC A” always known to be the same DAC, or were you in the dark as to that too?

And did you verify first that you could reliably discern differences between the DACs, ie before deciding which you preferred?

I don’t mean to attack your views btw. Just trying to evaluate what your tests might mean more broadly.
 

sonci

Active Member
Joined
Feb 18, 2018
Messages
233
Likes
112
Claiming that every device that measures the same sounds the same is as wrong as dismissing measurements and seeking justifications for why a device that measures badly "sounds good".
I agree with you.
If it measures good but sounds bad, it is BAD!
Some of the most musical DACs I've listened to are Audio Note DACs or TDA 1541 CD players, which are NOS designs, so they should measure pretty badly... (I don't know how to scientifically define "musical".)
But, you know, it's good when you want to listen to music all day. I listened to the Benchmark 2 and just wanted to run...
To the OP: for cost no object, I'd try an MSB DAC.
 

oneforward

Member
Joined
Nov 28, 2018
Messages
7
Likes
1
Can anyone recommend a music "streamer" or server with a built-in CD ripper? I have a lot of physical media, in addition to downloads. Thanks.
 

sergeauckland

Major Contributor
Forum Donor
Joined
Mar 16, 2016
Messages
3,480
Likes
9,236
Location
Suffolk UK
I agree with you.
If it measures good but sounds bad, it is BAD!
Some of the most musical DACs I've listened to are Audio Note DACs or TDA 1541 CD players, which are NOS designs, so they should measure pretty badly... (I don't know how to scientifically define "musical".)
But, you know, it's good when you want to listen to music all day. I listened to the Benchmark 2 and just wanted to run...
To the OP: for cost no object, I'd try an MSB DAC.

Absolutely wrong!! If it measures good but sounds bad to you, then YOU are at fault. There's nothing wrong with the equipment; the measurements prove that, so it must be that YOU just don't like decent-measuring equipment.

S.
 

sonci

Active Member
Joined
Feb 18, 2018
Messages
233
Likes
112
Absolutely wrong!! If it measures good but sounds bad to you, then YOU are at fault. There's nothing wrong with the equipment; the measurements prove that, so it must be that YOU just don't like decent-measuring equipment.

S.
Ha ha, my fault??
Of course, to me; I don't buy a DAC for the neighbors, and I need it for listening, not for looking at graphs.
 

Thomas savage

Grand Contributor
The Watchman
Forum Donor
Joined
Feb 24, 2016
Messages
10,260
Likes
16,310
Location
uk, taunton
Let’s not conflate personal preference with ultimate performance; you can easily measure performance. If, however, you prefer sauce on your steak, that's up to you, but putting it on does not make the steak a better representation of itself, lol.

These exchanges are fruitless though so please let’s not go down this road.
 

jacobacci

Member
Joined
Jan 3, 2018
Messages
91
Likes
84
Location
Switzerland
Which other DACs did you listen to in this blind test?
We had separate listening sessions on separate days for the following pairings of DACs:

dCS Delius / Purcell - Chord DAVE
dCS Delius / Purcell - Playback Systems Merlot
dCS Delius / Purcell - Weiss DAC202 USB
dCS Delius / Purcell - T+A DAC8DSD
dCS Delius / Purcell - Aune S16

In a final session we compared
dCS Delius / Purcell - dCS Rossini

So we always knew which two DACs we were comparing. However, we did not know which of the two we were hearing first, i.e. which was A and which was B.
The test operator always switched between A and B. He did not insert random sequences of A A or B B, which is why I said the test was single-blind, not ABX. I accept that a test with random sequences would have been more meaningful.
As I said, in about 90% of the around 10 sequences we correctly matched the playing DACs to A and B. In about 10% of the cases we had it backwards: what we thought was DAC A was actually DAC B.
The music used was mainly small-ensemble jazz, classical orchestra and opera. The same music was used for all the tests.
The two people listening came to a similar 90% hit rate, but they did not get the same pairings wrong.
Whether there was no real difference and the 90% hit rate was due to a faulty test design, or whether a difference actually existed, is in the eye of the beholder. Different people will come to different conclusions, depending on their mindset.
For me personally, the experience was sufficiently selective to replace my Weiss DAC202 with a dCS Rossini, and I have never since had the feeling I made the wrong decision. For me personally, this process eliminated expectation bias to a sufficient degree to come to a decision on whether to keep the Weiss or buy something else. I am aware that is a very subjective judgement, but so be it.
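For context, whether a ~90% hit rate over ~10 trials is distinguishable from guessing can be checked with a one-sided binomial calculation. The sketch below is illustrative only: it assumes (hypothetically) 9 correct out of 10 independent 50/50 trials, which the post's rounded figures do not guarantee.

```python
from math import comb

def binom_sf(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): probability of k or more
    correct answers by pure guessing."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical reading of the post: 9 of 10 sequences identified correctly.
p_value = binom_sf(9, 10)
print(f"P(>=9 of 10 correct by chance) = {p_value:.4f}")  # 0.0107
```

Under these assumptions the chance probability is about 1%, which is why randomized, repeated trials (and more of them) make such results much easier to interpret.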
 
Last edited:

andreasmaaan

Master Contributor
Forum Donor
Joined
Jun 19, 2018
Messages
6,652
Likes
9,410
We had separate listening sessions on separate days for the following pairings of DACs:

dCS Delius / Purcell - Chord DAVE
dCS Delius / Purcell - Playback Systems Merlot
dCS Delius / Purcell - Weiss DAC202 USB
dCS Delius / Purcell - T+A DAC8DSD
dCS Delius / Purcell - Aune S16

In a final session we compared
dCS Delius / Purcell - dCS Rossini

So we always knew which two DACs we were comparing. However, we did not know which of the two we were hearing first, i.e. which was A and which was B.
The test operator always switched between A and B. He did not insert random sequences of A A or B B, which is why I said the test was single-blind, not ABX. I accept that a test with random sequences would have been more meaningful.
As I said, in about 90% of the around 10 sequences we correctly matched the playing DACs to A and B. In about 10% of the cases we had it backwards: what we thought was DAC A was actually DAC B.
The music used was mainly small-ensemble jazz, classical orchestra and opera. The same music was used for all the tests.
The two people listening came to a similar 90% hit rate, but they did not get the same pairings wrong.
Whether there was no real difference and the 90% hit rate was due to a faulty test design, or whether a difference actually existed, is in the eye of the beholder. Different people will come to different conclusions, depending on their mindset.
For me personally, the experience was sufficiently selective to replace my Weiss DAC202 with a dCS Rossini, and I have never since had the feeling I made the wrong decision. In my judgement, this process eliminated expectation bias to a sufficient degree to come to a decision on whether to keep the Weiss or buy something else. I am aware that is a very subjective judgement, but so be it.

Interesting, thanks.

So just to clarify, the process was to listen first to DAC A and DAC B, knowing which was A and which was B but not knowing anything else, i.e. to get an idea of how DAC A sounded vs DAC B. Then you would listen to A or B and have to guess which of A or B you were listening to, then repeat. Is that right?

And with that process in place, on average you were both able to correctly guess whether you were listening to A or B around 90% of the time?
 

jacobacci

Member
Joined
Jan 3, 2018
Messages
91
Likes
84
Location
Switzerland
You got it.

Then you would listen to A or B and have to guess which of A or B you were listening to, then repeat. Is that right?

We would listen to around 10 sequences, each with around 10 switches back and forth between A and B (about 30 seconds to one minute each time). We took a short break between each sequence.
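The earlier point about the missing A A / B B runs is the core of an ABX design: the identity of the hidden unit is drawn at random on every trial, so the listener cannot exploit the alternation pattern. A minimal sketch of such a randomized trial list (the function name, seed, and trial count are illustrative, not from the thread):

```python
import random

def abx_sequence(n_trials, seed=None):
    """Randomized ABX trial list: on each trial the hidden X is
    independently chosen to be unit 'A' or unit 'B'."""
    rng = random.Random(seed)
    return [rng.choice("AB") for _ in range(n_trials)]

# Example: 10 trials; repeats like A A or B B occur naturally.
print(abx_sequence(10, seed=42))
```

Because each trial is independent, a strict alternation strategy scores no better than chance against such a sequence.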
 
Last edited:

Sal1950

Grand Contributor
The Chicago Crusher
Forum Donor
Joined
Mar 1, 2016
Messages
14,339
Likes
17,185
Location
Central Fl
What is your choice if everything was free for a dac/streamer?
Just as a note to your question: if money were no object, to be honest, I might get some gear that looks much more impressive for my entire rig. The kinds of things built in big custom-machined boxes that cost way more than the components inside them. But I used to be a machinist, and I appreciate and love the look of that stuff.
Otherwise I'll stick with the Emotiva DC-1 that Amir measured some time back.
https://www.audiosciencereview.com/...surement-and-review-of-emotiva-dc-1-dac.2306/
It didn't measure perfectly, but much better than many since, and I don't believe its minor measurement failings are audible.
It offers an excellent-measuring headphone amp with very low output impedance and plenty of power to drive my Sennheiser 650 cans, plus an analog level control with remote.
Small but handsome in its own right, it will most likely remain in my system till it Schitts the bed. ;)
 

sonci

Active Member
Joined
Feb 18, 2018
Messages
233
Likes
112
We had separate listening sessions on separate days for the following pairings of DACs:

dCS Delius / Purcell - Chord DAVE
dCS Delius / Purcell - Playback Systems Merlot
dCS Delius / Purcell - Weiss DAC202 USB
dCS Delius / Purcell - T+A DAC8DSD
dCS Delius / Purcell - Aune S16

In a final session we compared
dCS Delius / Purcell - dCS Rossini

So we always knew which two DACs we were comparing. However, we did not know which of the two we were hearing first, i.e. which was A and which was B.
The test operator always switched between A and B. He did not insert random sequences of A A or B B, which is why I said the test was single-blind, not ABX. I accept that a test with random sequences would have been more meaningful.
As I said, in about 90% of the around 10 sequences we correctly matched the playing DACs to A and B. In about 10% of the cases we had it backwards: what we thought was DAC A was actually DAC B.
The music used was mainly small-ensemble jazz, classical orchestra and opera. The same music was used for all the tests.
The two people listening came to a similar 90% hit rate, but they did not get the same pairings wrong.
Whether there was no real difference and the 90% hit rate was due to a faulty test design, or whether a difference actually existed, is in the eye of the beholder. Different people will come to different conclusions, depending on their mindset.
For me personally, the experience was sufficiently selective to replace my Weiss DAC202 with a dCS Rossini, and I have never since had the feeling I made the wrong decision. For me personally, this process eliminated expectation bias to a sufficient degree to come to a decision on whether to keep the Weiss or buy something else. I am aware that is a very subjective judgement, but so be it.
Did you have to do this to justify the purchase, or was it just for fun?
I mean, it's a lot easier to demo a DAC for some time and buy it if you like it. When I compare this way for a short time I get mad and may pick wrong.
It's like listening to bright speakers: you like the first two songs, but then you want to run...
 

digititus

Active Member
Joined
Jul 17, 2018
Messages
203
Likes
315
We had separate listening sessions on separate days for the following pairings of DACs:

dCS Delius / Purcell - Chord DAVE
dCS Delius / Purcell - Playback Systems Merlot
dCS Delius / Purcell - Weiss DAC202 USB
dCS Delius / Purcell - T+A DAC8DSD
dCS Delius / Purcell - Aune S16

In a final session we compared
dCS Delius / Purcell - dCS Rossini

So we always knew which two DACs we were comparing. However, we did not know which of the two we were hearing first, i.e. which was A and which was B.
The test operator always switched between A and B. He did not insert random sequences of A A or B B, which is why I said the test was single-blind, not ABX. I accept that a test with random sequences would have been more meaningful.

There was no control or "reference". Another unit should have been used as a reference for the A/B comparisons, and to ensure that your ossicles were all resonating within the same dynamic range at the beginning of each reference group test :facepalm:
 

garbulky

Major Contributor
Joined
Feb 14, 2018
Messages
1,510
Likes
829
Absolutely wrong!! if it measures good but sounds bad to you, then YOU are at fault.
How can it possibly be the listener's fault?
In subjective listening, the listener is always correct. The catch is that the judgement applies only to them, which isn't a bad thing at all.

"The unit measures well, therefore the listener doesn't like accurate equipment" is also a logical fallacy. The equipment may measure accurately on the tests performed, but that doesn't mean the listener didn't like accuracy; it means the listener didn't like what he was hearing. We can confirm the latter, but we can't confirm the former.
 

sergeauckland

Major Contributor
Forum Donor
Joined
Mar 16, 2016
Messages
3,480
Likes
9,236
Location
Suffolk UK
How can it possibly be the listener's fault?
In subjective listening, the listener is always correct. The catch is that the judgement applies only to them, which isn't a bad thing at all.

"The unit measures well, therefore the listener doesn't like accurate equipment" is also a logical fallacy. The equipment may measure accurately on the tests performed, but that doesn't mean the listener didn't like accuracy; it means the listener didn't like what he was hearing. We can confirm the latter, but we can't confirm the former.

If the equipment measured accurately, and the listener didn't like what he was hearing, then ipso facto, he didn't like accuracy. No logical failure at all.

We can confirm accuracy to any arbitrary level, we can't confirm liking, as that's entirely personal.

Consequently, if somebody doesn't like the sound of something proven to be accurate, they don't like the sound of accuracy and prefer something less accurate. Either that, or they imagine what they like based on sighted tests.

S.
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,959
Likes
38,101
How can it possibly be the listeners fault?
In subjective listening, the listener is always correct. The difference is that it applies to them which isn't a bad thing at all.

Since "the unit measures well, the listener doesn't like accurate equipment" is also a logical fallacy. The equipment may measure accurately on the tests performed. But that doesn't mean the listener didn't like accuracy. It meant the listener didn't like what he was hearing. We can confirm the latter but we can't confirm the former.
I think the point is that human hearing is not a reliable test instrument for assessing the accuracy of gear in reproduction. Whether some inaccuracy is preferred over accuracy is another question. Mixing up the two ideas causes many of the squabbles about audio.
 

andreasmaaan

Master Contributor
Forum Donor
Joined
Jun 19, 2018
Messages
6,652
Likes
9,410
You got it.



We would listen to around 10 sequences of around 10 switching back and forth between A and B (for about 30 seconds to one minute each time). We took a short break between each sequence.

Ok thanks again. So the purpose of the listening was to determine whether a difference could be detected? Or to determine a favourite?

And how did you level match the DACs?
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,959
Likes
38,101
Ok thanks again. So the purpose of the listening was to determine whether a difference could be detected? Or to determine a favourite?

And how did you level match the DACs?
Didn't he write earlier that they were matched with a voltmeter using a 1 kHz tone at the speaker terminals? Matched within 0.25 dB; I take it that's due to the volume control being in 0.5 dB steps.
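As a side note on the arithmetic: a level mismatch in dB is 20·log10 of the voltage ratio, so a 0.25 dB tolerance corresponds to roughly a 3% voltage difference. A small sketch (the 2.000 V / 1.943 V readings are made-up numbers, not from the thread):

```python
from math import log10

def level_diff_db(v1, v2):
    """Level difference between two voltage readings, in decibels."""
    return 20 * log10(v1 / v2)

# Hypothetical 1 kHz readings at the speaker terminals for two DACs:
print(f"{level_diff_db(2.000, 1.943):.2f} dB")  # ~0.25 dB
# A 0.25 dB step is a voltage ratio of 10**(0.25/20) ≈ 1.029, i.e. ~2.9%.
```

With a volume control in 0.5 dB steps, the best achievable match is indeed within half a step, i.e. 0.25 dB.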
 