
CHORD M-Scaler Review (Upsampler)

Rate this product:

  • 1. Poor (headless panther)

    Votes: 358 88.2%
  • 2. Not terrible (postman panther)

    Votes: 13 3.2%
  • 3. Fine (happy panther)

    Votes: 7 1.7%
  • 4. Great (golfing panther)

    Votes: 28 6.9%

  • Total voters
    406

DonR

Major Contributor
Joined
Jan 25, 2022
Messages
2,988
Likes
5,661
Location
Vancouver(ish)
I agree. Where is the proof that the M-Scaler doesn't do what it is claimed to in relation to timing accuracy? That is the claim of Amir's review and many on this forum. If you want proof from Chord, that's fair and reasonable too.
So to summarize: A claims X, B says he sees no sign of X, A claims X is not measurable but knows it's there and has heard it.
 

PassionforSound

Member
Reviewer
Joined
Jul 24, 2022
Messages
45
Likes
12
So to summarize: A claims X, B says he sees no sign of X, A claims X is not measurable but knows it's there and has heard it.

Almost. A claims X, B says he sees no sign of X while looking at Y. A shrugs and returns to enjoying X

At this point, I'm fairly comfortable showing myself out.
 

dc655321

Major Contributor
Joined
Mar 4, 2018
Messages
1,597
Likes
2,235
Thank you, but without any information about timing accuracy, this doesn't help

So, you're willing to accept a dubious claim based on theory, but not accept mathematical facts.

There is no way you read the links and absorbed the content in the brief time between posts.
You could have just said, "I don't understand that. Please express it differently".
 

DonR

Major Contributor
Joined
Jan 25, 2022
Messages
2,988
Likes
5,661
Location
Vancouver(ish)
So, you're willing to accept a dubious claim based on theory, but not accept mathematical facts.

There is no way you read the links and absorbed the content in the brief time between posts.
You could have just said, "I don't understand that. Please express it differently".
Hypothesis
 

iamsms

Member
Joined
Dec 14, 2020
Messages
35
Likes
126
At this point, I am fairly confident in saying that our friend @PassionforSound doesn't really understand the relationship between frequency-domain (amplitude and phase) and time-domain signals, and thinks there is something about timing accuracy that isn't shown in the frequency response.

While many might believe that you can explain reconstruction filters to people who are that naive (which is fine; I am sure I am just as naive in his area of expertise), I am not one of them.

Before someone claims that I am saying only engineers/scientists/math people are allowed to criticize DACs/upsamplers, all I am saying is that if you don't understand the fundamentals (as I said, I don't in most areas of knowledge), you should ask for subjective but blind-tested data.
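
To make the frequency-vs-time relationship concrete, here is a minimal Python sketch (my own, with an arbitrary sample rate and test tones; nothing here comes from Chord or from Amir's measurements): the amplitude-and-phase spectrum carries exactly the same information as the time-domain samples, so there is no separate "timing" information hiding outside the frequency response.

```python
import numpy as np

fs = 48000                                   # assumed sample rate for the demo
t = np.arange(1024) / fs
x = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 5000 * t + 0.7)

X = np.fft.rfft(x)                           # complex spectrum = amplitude and phase
amplitude, phase = np.abs(X), np.angle(X)

# rebuild the time-domain waveform purely from amplitude and phase
x_rebuilt = np.fft.irfft(amplitude * np.exp(1j * phase), n=len(x))

print(np.max(np.abs(x - x_rebuilt)))         # on the order of 1e-15: nothing is lost
```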
 

Dogcoop

Active Member
Joined
Feb 12, 2021
Messages
136
Likes
269
Almost. A claims X, B says he sees no sign of X while looking at Y. A shrugs and returns to enjoying X

At this point, I'm fairly comfortable showing myself out.
On the way out, be sure to leave a copy of the tests showing you hear a difference between properly functioning USB cables. A null test would also be useful.
Cheers
 

dc655321

Major Contributor
Joined
Mar 4, 2018
Messages
1,597
Likes
2,235
At this point, I am fairly confident in saying that our friend @PassionforSound doesn't really understand the relationship between frequency-domain (amplitude and phase) and time-domain signals, and thinks there is something about timing accuracy that isn't shown in the frequency response.

Yeah, but where's your evidence?
Oh... nevermind...
 

tmtomh

Major Contributor
Forum Donor
Joined
Aug 14, 2018
Messages
2,712
Likes
7,906
Based on my interview with Rob Watts, this can be measured and understood in the digital domain. As I understand it, this involves feeding in the input data and reviewing the output data. My understanding is that you cannot measure it in the analog domain, but that doesn't make it false. Also, not being able to measure something doesn't make it nonexistent. The Higgs boson couldn't be found/measured prior to 2012 but was theorised in 1964. Similarly, many of the medicines we are familiar with today (lithium, Tylenol, penicillin) are still not understood in terms of how they work. We just know that they do.

Further to that, my superficial understanding of the sinc function theory used as the foundation for the 1,000,000-tap design of the M-Scaler, and of how it is doing the upsampling (possibly more important than the fact that it is upsampling), is that the fine-detail accuracy of the reconstructed waveform gets greater as the filter moves towards the infinitely long ideal. If that is true and the sinc function theory isn't wrong, then does it not stand to reason that more processing of that algorithm will result in tighter timing accuracy?

Thank you for your thoughtful and detailed reply - much appreciated!

I think the issue with Watts’ reply to your question is that, as has been noted by others, we don’t (in fact, we can’t) listen to a digital signal in the digital domain. So yes, a higher sample rate will reduce the gaps between samples, and therefore the digital waveform will visually look “smoother” and more “refined” or “high-res” when you examine it in an audio editing program or a similar app or device.

But digital sampling theory tells us that this visual appearance does not correspond to any difference in sound. Any frequency we want to reproduce needs to be sampled only a little more than twice per cycle. This is not “just a theory,” or an “it sounds good enough most of the time” type of theory. This is a “mathematical truth,” “cell phones and the sound on all your favorite streaming services wouldn’t work at all if it weren’t true” type of theory. I know it seems almost inconceivable that Rob Watts would ignore this or be incorrect if he claimed it was untrue. But I don’t know what else to say: it’s true, and you don’t have to take my or Amir’s or anyone else’s word for it, as it’s copiously documented in the scientific literature and well established. The same can’t be said for Watts’ claim about a more “refined” sound coming from upsampling.

As I’ve written in another thread, we all experience this truth whenever we listen to music, because a bass drum at 50Hz gets 100 times more samples per cycle than a cymbal or a vocal harmonic at 5kHz, regardless of the sample rate. And no one ever claims that bass sounds are always more “refined” than midrange or treble sounds within the very same recording.

So if a bass drum that’s 100x oversampled compared to a cymbal doesn’t sound more refined or high-res, then upsampling a digital recording by only 2x or 4x (or 20x) isn’t going to do a thing.

Please note that the previous two paragraphs are just a possibly easy way to think about this idea - again, this is not a perceptual “take my word for it, you won’t hear the difference” rule; it’s a hard and fast “there is no difference, by definition” rule.
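
To put rough numbers on the bass-drum/cymbal point, here is a small Python sketch (my own illustration of textbook sinc reconstruction, with an arbitrary record length and test tones, not a measurement of the M-Scaler): a 50 Hz tone and a 5 kHz tone, both sampled at 44.1 kHz, reconstruct between the samples with essentially the same accuracy, even though the bass tone gets roughly 100x more samples per cycle.

```python
import numpy as np

fs = 44100
T = 1 / fs
n = np.arange(2000)                 # a short record of samples

def sinc_reconstruct(samples, t_eval):
    # Whittaker-Shannon interpolation: a sum of sinc pulses weighted by the samples
    return np.array([np.sum(samples * np.sinc(t / T - n)) for t in t_eval])

# evaluate half-way between samples, away from the edges of the record
t_eval = (np.arange(900, 1100) + 0.5) * T

for f in (50.0, 5000.0):
    samples = np.sin(2 * np.pi * f * n * T)
    truth = np.sin(2 * np.pi * f * t_eval)
    err = np.max(np.abs(sinc_reconstruct(samples, t_eval) - truth))
    print(f"{f:>6.0f} Hz  worst-case reconstruction error {err:.1e}")

# With an infinitely long sinc both errors would be exactly zero; the small
# residual here comes from truncating the interpolator to 2000 samples, not
# from how many samples per cycle each tone received.
```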
 

PassionforSound

Member
Reviewer
Joined
Jul 24, 2022
Messages
45
Likes
12
Thank you for your thoughtful and detailed reply - much appreciated!

I think the issue with Watts’ reply to your question is that, as has been noted by others, we don’t (in fact, we can’t) listen to a digital signal in the digital domain. So yes, a higher sample rate will reduce the gaps between samples, and therefore the digital waveform will visually look “smoother” and more “refined” or “high-res” when you examine it in an audio editing program or a similar app or device.

But digital sampling theory tells us that this visual appearance does not correspond to any difference in sound. Any frequency we want to reproduce needs to be sampled only a little more than twice per cycle. This is not “just a theory,” or an “it sounds good enough most of the time” type of theory. This is a “mathematical truth,” “cell phones and the sound on all your favorite streaming services wouldn’t work at all if it weren’t true” type of theory. I know it seems almost inconceivable that Rob Watts would ignore this or be incorrect if he claimed it was untrue. But I don’t know what else to say: it’s true, and you don’t have to take my or Amir’s or anyone else’s word for it, as it’s copiously documented in the scientific literature and well established. The same can’t be said for Watts’ claim about a more “refined” sound coming from upsampling.

As I’ve written in another thread, we all experience this truth whenever we listen to music, because a bass drum at 50Hz gets 100 times more samples per cycle than a cymbal or a vocal harmonic at 5kHz, regardless of the sample rate. And no one ever claims that bass sounds are always more “refined” than midrange or treble sounds within the very same recording.

So if a bass drum that’s 100x oversampled compared to a cymbal doesn’t sound more refined or high-res, then upsampling a digital recording by only 2x or 4x (or 20x) isn’t going to do a thing.

Please note that the previous two paragraphs are just a possibly easy way to think about this idea - again, this is not a perceptual “take my word for it, you won’t hear the difference” rule; it’s a hard and fast “there is no difference, by definition” rule.

Thanks for the thoughtful response. I really like your analogy of the bass drum example - that definitely piques my curiosity to explore that concept further.

The key piece to the puzzle for me (and I know some people here won't like this) is that the M-Scaler, when running at 16x into the TT2, creates a clear and obvious audible difference (to my ears), and that's without any specific expectations about what I would/wouldn't hear when I first tried it. I didn't own it and had no vested interests like affiliate arrangements, etc. I have multiple reviews on my channel that demonstrate my comfort in stating when a product makes little or no difference to my ears and, for that matter, plenty of reviews that are negative even when I have affiliate earning opportunities. My channel does not make much money and is not my main source of revenue (nor is it expected to be). But, with all that said, I clearly hear a difference with the M-Scaler running at 16x and therefore remain curious as to why this is.

As I said earlier, I am accepting the proposed explanation from Rob Watts in the absence of another explanation, but I am also intrigued by the bass drum example you've put forward, @tmtomh, and I look forward to exploring it further.
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,563
Likes
238,983
Location
Seattle Area
One final point. I borrowed an M-Scaler ages ago for my review. After spending time listening to it in all modes and using the TT2, I enjoyed its sound (in 16x mode) sufficiently that I purchased it for myself after the review.
I feel bad that there is so much misinformation and improper testing out there that it caused you to make this purchase decision. How about performing a blind test, repeating it a dozen times, and letting us know the results? Here is how to do it:
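
(The video covers the listening procedure itself. Purely as a supplement, here is a rough sketch of the statistics of a dozen such trials; the model is my own simplification, assuming each trial is a plain right-or-wrong call with a 50% chance of guessing correctly.)

```python
from math import comb

trials = 12
# probability of getting k or more of 12 trials right purely by guessing (p = 0.5)
for k in range(9, trials + 1):
    p_by_chance = sum(comb(trials, i) for i in range(k, trials + 1)) / 2 ** trials
    print(f"{k}/12 correct: p = {p_by_chance:.4f}")

# 9/12 -> ~0.073, 10/12 -> ~0.019, 11/12 -> ~0.003, 12/12 -> ~0.0002; roughly
# 10 or more correct out of 12 is commonly taken as evidence of a real, audible
# difference rather than lucky guessing.
```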

 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,563
Likes
238,983
Location
Seattle Area
The key piece to the puzzle for me (and I know some people here won't like this) is that the M-Scaler, when running at 16x into the TT2, creates a clear and obvious audible difference (to my ears), and that's without any specific expectations about what I would/wouldn't hear when I first tried it.
It should not be a puzzle at all. If I got a dozen members to do that test, a good number of them would arrive at the same conclusion as you. This happens even if there is no change to the sound coming out of the system! All that needs to be there is the knowledge that something has changed. That is enough for your brain to listen differently and arrive at the conclusion that the sound has improved. I cover this in my video above.

Let's remember again that nothing in research or engineering affirms your subjective conclusion. Otherwise every high-end DAC would be upsampling, as the logic for that is textbook stuff. Given this backdrop, you need to be more diligent in your testing than just that comparison.
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,563
Likes
238,983
Location
Seattle Area
I agree. Where is the proof that the M-Scaler doesn't do what it is claimed to in relation to timing accuracy? That is the claim of Amir's review and many on this forum. If you want proof from Chord, that's fair and reasonable too.
There is no claim related to timing accuracy in Chord marketing. Here is the sum total in the product page heading:

[Screenshot: the heading of Chord's M-Scaler product page]


That is all it is doing. It is replicating samples to match the new sample rate and filtering them. I tested the filtering and said it is extremely sharp. Dubious claims about it helping transients are just that: random stuff hoping to confuse the lay audiophile. Have Rob point to papers/research that shows how resampling audio improves transient response. Really, anything to back such an empty claim. Until then, I test the reality of the device, which means its impact on noise, distortion, filtering and sound. The latter included listening tests.
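
For readers who want to see that operation spelled out, here is a generic sketch in Python (a standard polyphase resampler with an arbitrary test tone, not Chord's proprietary filter): samples are added to reach the new rate and a low-pass filter at the original Nyquist frequency removes the images, so nothing above the original band is created.

```python
import numpy as np
from scipy.signal import resample_poly

fs_in = 44100
t = np.arange(4410) / fs_in
x = np.sin(2 * np.pi * 1000 * t)        # a 1 kHz tone at 44.1 kHz

# 16x upsampling: insert samples to reach 705.6 kHz, then low-pass filter at the
# original Nyquist frequency (22.05 kHz) to remove the images. Chord's filter is
# far longer and sharper, but it is the same class of operation.
y = resample_poly(x, up=16, down=1)

print(len(x), len(y))                   # 4410 -> 70560 samples, no new content added
```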
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,563
Likes
238,983
Location
Seattle Area
Thanks Amir, but I have no need of convincing others about the merits of what I'm hearing.
I thought you put that forward here as an argument. Why did you mention it then if it is not to tell us it improves audibility?
 

PassionforSound

Member
Reviewer
Joined
Jul 24, 2022
Messages
45
Likes
12
I feel bad that there is so much misinformation and improper testing out there that it caused you to make this purchase decision. How about performing a blind test, repeating it a dozen times, and letting us know the results? Here is how to do it:


Thanks Amir, but I have no need of convincing others about the merits of what I'm hearing. I came here to ask the questions I had about the evidence that I perceive to be missing from your review based on my experiences. I am not seeking to make anyone else right or wrong.


My testing approach for all reviews includes a lot of testing of assumptions and biases. Blind testing is not the only way to account for such biases. Bias in audio (and other similar hobbies) relies on the same mechanisms as groupthink, and there are many ways to overcome these if one is aware that the issue exists (which I agree it does) and is able to apply some of the techniques discussed in these articles (and many others):
  • Keebler, D. (2015). Understanding the Constructs of Groupthink and Learning Organizations. International Leadership Journal, 7(1), 93-97.
  • Haslam, S. A., Turner, J. C., Oakes, P. J., Reynolds, K. J., Eggins, R. A., Nolan, M., & Tweedie, J. (1998). When do stereotypes become really consensual? Investigating the group-based dynamics of the consensualization process. European Journal of Social Psychology, 28(5), 755-776. doi:10.1002/(SICI)1099-0992(199809/10)28:5<755::AID-EJSP891>3.0.CO;2-Z
  • Valine, Y. A. (2018). Why cultures fail: The power and risk of Groupthink. Journal of Risk Management in Financial Institutions, 11(4), 301-307.

It should not be a puzzle at all. If I got a dozen members to do that test, a good number of them would arrive at the same conclusion as you. This happens even if there is no change to the sound coming out of the system! All that needs to be there is the knowledge that something has changed. That is enough for your brain to listen differently and arrive at the conclusion that the sound has improved. I cover this in my video above.

Let's remember again that nothing in research or engineering affirms your subjective conclusion. Otherwise every high-end DAC would be upsampling, as the logic for that is textbook stuff. Given this backdrop, you need to be more diligent in your testing than just that comparison.

There is no doubt that we can create perceptions of things that don't exist, but this doesn't mean that everything perceived doesn't exist which is kind of the inference.

By the logic that this is all perceived placebo or bias, I would be immediately drawn to prefer the most expensive and best marketed products in every category, but time and time again that's not the case. I prefer the TT2 over the DAVE despite the marketing and pricing suggesting the DAVE is better. I prefer the Supra ISL interconnects over the more expensive Sword interconnects in some setups. And (I know this one is a controversial topic) I prefer cheaper USB cables over the very expensive AudioQuest Diamond. The placebo argument doesn't hold up to the experiential evidence.
 

PassionforSound

Member
Reviewer
Joined
Jul 24, 2022
Messages
45
Likes
12
I thought you put that forward here as an argument. Why did you mention it then if it is not to tell us it improves audibility?
Thanks Amir, but I have no need of convincing others about the merits of what I'm hearing. I came here to ask the questions I had about the evidence that I perceive to be missing from your review based on my experiences. I am not seeking to make anyone else right or wrong.
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,563
Likes
238,983
Location
Seattle Area
There is no doubt that we can create perceptions of things that don't exist, but this doesn't mean that everything perceived doesn't exist which is kind of the inference.
No it doesn't. But when strong evidence demonstrates that your subjective assessment is wrong, and the manufacturer is no help in addressing it either, then it is time for rigor in your listening tests. You said the difference is obvious. So it should be easy to detect in a blind test you run. But then you tell us you are not interested in convincing us that your listening test results are valid. But then why share it with people on YouTube? Don't you want to provide reliable information there?
 

Doodski

Grand Contributor
Forum Donor
Joined
Dec 9, 2019
Messages
21,486
Likes
21,764
Location
Canada
By the logic that this is all perceived placebo or bias, I would be immediately drawn to prefer the most expensive and best marketed products in every category, but time and time again that's not the case. I prefer the TT2 over the DAVE despite the marketing and pricing suggesting the DAVE is better. I prefer the Supra ISL interconnects over the more expensive Sword interconnects in some setups. And (I know this one is a controversial topic) I prefer cheaper USB cables over the very expensive AudioQuest Diamond. The placebo argument doesn't hold up to the experiential evidence.
Proverbially falling upon your own blade does not make your test results irrefutable. It just means that you have made some mistakes along the testing way.
 

PassionforSound

Member
Reviewer
Joined
Jul 24, 2022
Messages
45
Likes
12
No it doesn't. But when strong evidence demonstrates that your subjective assessment is wrong, and the manufacturer is no help in addressing it either, then it is time for rigor in your listening tests. You said the difference is obvious. So it should be easy to detect in a blind test you run. But then you tell us you are not interested in convincing us that your listening test results are valid. But then why share it with people on YouTube? Don't you want to provide reliable information there?

I do provide reliable information on the YouTube channel, and there is no evidence presented here, that I can see, which clearly demonstrates that the M-Scaler is not doing what it claims.
There is no claim related to timing accuracy in Chord marketing. Here is the sum total in the product page heading:

[Screenshot: the heading of Chord's M-Scaler product page]

That is all it is doing. It is replicating samples to match the new sample rate and filtering them. I tested the filtering and said it is extremely sharp. Dubious claims about it helping transients are just that: random stuff hoping to confuse the lay audiophile. Have Rob point to papers/research that shows how resampling audio improves transient response. Really, anything to back such an empty claim. Until then, I test the reality of the device, which means its impact on noise, distortion, filtering and sound. The latter included listening tests.

You highlighted here the digital filtering in the marketing of the Blu MkII / M-Scaler, but my point is that the explanation of the purpose of the 'advanced digital filter' (their words, not mine) goes beyond it being 'advanced' and states that it is intended to improve the accuracy of the reconstruction of the analog waveform in the time domain. I haven't seen any evidence provided that shows this is not occurring, so I don't understand how the claims that the M-Scaler does nothing can be considered reliable. It seems to me to be like testing a car that's been designed for maximum handling and cornering ability by driving it as fast as you can in a straight line.
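
For what it's worth, the "does a longer reconstruction filter measurably tighten time-domain accuracy" question can at least be framed numerically. The sketch below is only one way to do that, using a plain truncated-sinc interpolator of increasing length against an arbitrary test tone; it is my own construction, not anything Chord has published or Amir has measured.

```python
import numpy as np

fs, f0 = 44100, 5000.0                         # arbitrary test tone
x = np.sin(2 * np.pi * f0 * np.arange(20000) / fs)

def midpoint_error(taps_per_side):
    # interpolate half-way between samples with a truncated sinc and compare
    # against the exact band-limited waveform at those instants
    centers = np.arange(8000, 12000)           # stay well clear of the record edges
    k = np.arange(-taps_per_side + 1, taps_per_side + 1)
    w = np.sinc(0.5 - k)                       # sinc weights for a +0.5 sample offset
    est = x[centers[:, None] + k] @ w
    truth = np.sin(2 * np.pi * f0 * (centers + 0.5) / fs)
    return np.max(np.abs(est - truth))

for taps in (4, 16, 64, 256, 1024):
    print(f"{2 * taps:>5} taps: worst-case time-domain error {midpoint_error(taps):.1e}")

# The error does keep shrinking as the interpolating filter gets longer, exactly as
# the sinc theory predicts; whether the residual at practical filter lengths is
# audible is a separate question, which is what controlled listening tests address.
```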
 

PassionforSound

Member
Reviewer
Joined
Jul 24, 2022
Messages
45
Likes
12
Proverbially falling upon your own blade does not make your test results irrefutable. It just means that you have made some mistakes along the testing way.

Not sure how you reach that conclusion. @amirm suggested earlier that I am expecting to hear something and so I hear it. By that logic, the natural assumption (and indeed my expectations) when demoing more expensive, "better" gear is that it will be better. Thus, if our ability to conduct listening tests were so fallible, my results would always align with the more expensive "better" product sounding better. In reality, by applying the theory in the articles I linked above, I tested assumptions and challenged biases to arrive at the conclusions I did.
 