
CEntrance Hifi-M8 V2 Review (DAC and headphone amplifier)

radioman

Member
Audio Company
Joined
Oct 29, 2020
Messages
70
Likes
97
Location
Chicago
That's a different measurement altogether. See this guide on all of these tests: https://www.audiosciencereview.com/...derstanding-digital-audio-measurements.10523/
Awesome, hadn't seen that thread before. Will review tomorrow and work to give you an apples-to-apples plot then.

EDIT: Read the article. It's curious you mention Julian Dunn. I worked with Julian in the '90s on the development of AES-24, the digitally interfaced microphone standard. I co-chaired the Standards Group with Steve Green before David Josephson took the reins. Julian was a nice person indeed. Kind and soft-spoken. Great memories!
 

digicidal

Major Contributor
Joined
Jul 6, 2019
Messages
1,984
Likes
4,844
Location
Sin City, NV
But could you guide us on what « proper » training and skill is? I never doubted that you are. And you could even arguably be the most, but not the only.
It's not the only training, of course (doing dozens of ABX trials on file formats is another form), but I would recommend checking Sean Olive's blog posts on the Harman training software. There are a few posts here, but the easiest to find is HERE.

Yes, I've compared them blind, for fun; they are too different to mistake. But objectively, as a matter of common sense, I think we can say it's quite rigorous to believe that the odds of mistaking them would have been higher on day zero than on day 3000. You disagree with this?
I definitely can't speak for @amirm but yes, I would strongly disagree with that. There are really only two possibilities:
1) There is an obvious audible difference between the two DUTs - in which case it should remain so regardless of repetition. In fact, since you would almost certainly be much more focused in a zero-day test than you would after 10 years... it might even become inaudible to you eventually.
2) There is no significant audible difference at all. This means the preference was caused by something other than something being heard, and thus could easily present itself at any time during the 3000-day period or not.

In either case the odds of listening as carefully at the end as one did at the beginning are incredibly small. That's why people (myself included) are easily convinced of nearly any tweak making some difference in sound... because we started listening for it - and our brains take care of supplying it. That happens regardless of any actual differences. Hence the need for measurements by machines which are impervious to such limitations. ;)
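(For anyone who wants to try the blind route at home, here is a minimal sketch of how an ABX session is usually scored - plain Python, with the trial counts purely illustrative. Under the guessing hypothesis every trial is a fair coin flip, so the only question is how improbable the observed number of hits is.)

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided p-value: chance of scoring >= `correct` hits in
    `trials` forced-choice ABX trials purely by guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Illustrative numbers: 12 correct out of 16 trials
print(abx_p_value(12, 16))  # ~0.038 -> conventionally "significant"
```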
 

PeteL

Major Contributor
Joined
Jun 1, 2020
Messages
3,303
Likes
3,846
It's not the only training, of course (doing dozens of ABX trials on file formats is another form), but I would recommend checking Sean Olive's blog posts on the Harman training software. There are a few posts here, but the easiest to find is HERE.


I definitely can't speak for @amirm but yes, I would strongly disagree with that. There are really only two possibilities:
1) There is an obvious audible difference between the two DUTs - in which case it should remain so regardless of repetition. In fact, since you would almost certainly be much more focused in a zero-day test than you would after 10 years... it might even become inaudible to you eventually.
2) There is no significant audible difference at all. This means the preference was caused by something other than something being heard, and thus could easily present itself at any time during the 3000-day period or not.

In either case the odds of listening as carefully at the end as one did at the beginning are incredibly small. That's why people (myself included) are easily convinced of nearly any tweak making some difference in sound... because we started listening for it - and our brains take care of supplying it. That happens regardless of any actual differences. Hence the need for measurements by machines which are impervious to such limitations. ;)
Just for fun, let's forget for now the concept of skill and training, and the concept of spotting differences. Let's say we are talking about one subject, with its set of skills, good or bad, and one single product. Would you claim that the objective evaluation of all aspects of the performance of a product is more accurate after one test under one controlled condition than after 1,000 tests under random conditions?

Now let's get back to my example. On day one, I was able to hear a difference between speaker A and speaker B. That is not the point. The point is, on day 3000, I know more rigorously that speaker B is a better speaker than speaker A.

In any case, I agree with you on this: "measurements by machines which are impervious to such limitations"
 

digicidal

Major Contributor
Joined
Jul 6, 2019
Messages
1,984
Likes
4,844
Location
Sin City, NV
@PeteL I'd recommend moving this discussion to a different thread (there are many on here already - perhaps one of the threads regarding listener training, audio memory reliability, etc.). In the context of this thread there's a perfect example of the problem in your first question in the preceding dozen or so posts:

Would you claim that the objective evaluation of all aspects of the performance of a product is more accurate after one test under one controlled condition than after 1,000 tests under random conditions?

If the conditions are random, there is almost no repeatability... and therefore the results can't reliably be compared to previous results. Presuming there is no error in either equipment, parameters, etc. it should be possible to replicate the previous results (outside the margin of error or resolution of the test equipment itself). Just like statistics, by modifying the parameters sufficiently it may be possible to achieve significantly disparate results... which is why the control of as many variables as possible in every test is of paramount importance.
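(As a rough illustration of that repeatability criterion, a sketch - the 0.5 dB tolerance is a hypothetical stand-in for the rig's stated margin of error:)

```python
import statistics

def replicates(readings_db: list[float], tolerance_db: float = 0.5):
    """Treat repeated passes as replicating when their spread stays
    inside the instrument's margin of error."""
    spread = max(readings_db) - min(readings_db)
    return spread <= tolerance_db, statistics.mean(readings_db), spread

# e.g. three SINAD passes on the same DUT under the same conditions
print(replicates([98.6, 98.4, 98.7]))  # (True, ~98.57, ~0.3)
```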

Once truly controlled, any more than 2-3 passes (in a case where the validity of the results is in question) should make the source of the anomaly apparent. I can think of no case in which 1000 tests under random conditions would serve any purpose other than to test the durability of the analysis equipment, and the patience (or obsessive compulsions) of the operator. ;)
 

PeteL

Major Contributor
Joined
Jun 1, 2020
Messages
3,303
Likes
3,846
@PeteL I'd recommend moving this discussion to a different thread (there are many on here already - perhaps one of the threads regarding listener training, audio memory reliability, etc.). In the context of this thread there's a perfect example of the problem in your first question in the preceding dozen or so posts:



If the conditions are random, there is almost no repeatability... and therefore the results can't reliably be compared to previous results. Presuming there is no error in either equipment, parameters, etc. it should be possible to replicate the previous results (outside the margin of error or resolution of the test equipment itself). Just like statistics, by modifying the parameters sufficiently it may be possible to achieve significantly disparate results... which is why the control of as many variables as possible in every test is of paramount importance.

Once truly controlled, any more than 2-3 passes (in a case where the validity of the results is in question) should make the source of the anomaly apparent. I can think of no case in which 1000 tests under random conditions would serve any purpose other than to test the durability of the analysis equipment, and the patience (or obsessive compulsions) of the operator. ;)
Yes, I don't mind taking this elsewhere. Note that I'm only commenting and replying, though; I didn't ask "the first question". I was participating in a discussion that our host himself was engaging in, in this very thread, with statements that I disagreed with. But guide me and I'll answer there. We don't have to agree on everything, but reading what you say, I don't know if we are even talking about the same thing. Listening tests. The "margin of error of the test equipment"? Your ear? The "durability of the analysis equipment"? You mean testing whether you are going deaf? Random conditions is not about repeatability and comparison; it's about extending the scope of analysis. If you always have the same input in the same environment, with the same equipment under test, then yes, obviously, you'll always get the same output. But you'll know nothing about how it performs in limit cases, under stress, etc. Anyway...
 

digicidal

Major Contributor
Joined
Jul 6, 2019
Messages
1,984
Likes
4,844
Location
Sin City, NV
If you always have the same input in the same environment, with the same equipment under test, then yes, obviously, you'll always get the same output. But you'll know nothing about how it performs in limit cases, under stress, etc. Anyway...

My point was simply that for consumer audio products, used almost exclusively indoors, by humans... the number of those edge cases isn't as large or significant as your post seemed to imply (although I'm certainly willing to be proven wrong). If you are a manufacturer and are testing for purposes of extreme environmental durability or to qualify component selections and cost consideration then certainly it could be useful. In the case of reviewing a DAC for comparison to like products and adherence to manufacturing specifications... not so much.

Most of the tests are indeed ranges (of frequency, linearity, distortion, etc.) so they take into account both minimal and maximal values (and everything in between of course) - within reason at least.

If you have evidence of a commonly occurring variable/parameter which is missing - I'm sure @amirm would be happy to consider it, but you'd need to prove that it's not only common, but that it has significant impact on the device/signal. Looking at every possible combination for every device would take years to test anything... hence standards. Of course, if you are saying that as far as subjective analysis is concerned (which it sounded like) then the sky's the limit... but this forum has "science" in the name, so the limits are much more rational and, well... limited. ;)
 

PeteL

Major Contributor
Joined
Jun 1, 2020
Messages
3,303
Likes
3,846
Most of the tests are indeed ranges (of frequency, linearity, distortion, etc.) so they take into account both minimal and maximal values (and everything in between of course) - within reason at least.

If you have evidence of a commonly occurring variable/parameter which is missing - I'm sure @amirm would be happy to consider it,
Again, sorry, are we still talking about LISTENING tests?
I don't know where this is going, so I'll rephrase again.
Do you believe that it is a scientifically proven fact that the full evaluation of the performance of an audio product, by ear, is equally or more accurate after one listen than after years of evaluation? It is this claim, and only this, that I disagreed with and built an argument against.

"if you are saying that as far as subjective analysis is concerned (which it sounded like) then the sky's the limit... but this forum has "science" in the name, so the limits are much more rational and, well... limited. "

That was exactly my point; that's why I mentioned these limitations and said that it is not for the subjective evaluations that we (I'll admit that I should have implied only me) consult science- and measurement-based reviews. @amirm disagreed with this as well.
 

Panelhead

Senior Member
Forum Donor
Joined
Apr 3, 2018
Messages
348
Likes
137
Looking forward to seeing the tests run at the factory. The battery-powered tests, with a baseline at -150 dB, suggest 19+ bit resolution.
I am a CEntrance fanboy.
Currently listening with an almost 12-year-old CEntrance product. It was sold and marketed by another company, but CEntrance did all the heavy lifting. Integral hardware subwoofer outputs, fully balanced inputs and outputs, Room Perfect DRC, and a few other unusual features. I think they also provided the original FireWire drivers. It has AK4396 DAC chips, capable of more than 16-bit resolution.
Not bad at all for a product that went on sale in early 2009. I am sure they can do better today.
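(The usual back-of-envelope conversion behind that "19+ bit" figure, sketched below with illustrative numbers - with the caveat, raised later in this thread, that an FFT baseline sits below the integrated noise by the FFT's processing gain, so the bin floor alone overstates resolution:)

```python
import math

def enob(snr_db: float) -> float:
    """Textbook effective number of bits for a full-scale sine."""
    return (snr_db - 1.76) / 6.02

def fft_processing_gain_db(n_fft: int) -> float:
    """Each FFT bin sits ~10*log10(N/2) dB below the integrated noise."""
    return 10 * math.log10(n_fft / 2)

print(fft_processing_gain_db(32768))  # ~42 dB for a 32k FFT
print(enob(120))                      # ~19.6 bits from a 120 dB SNR
```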
 

radioman

Member
Audio Company
Joined
Oct 29, 2020
Messages
70
Likes
97
Location
Chicago
@radioman, when can we expect CEntrance’s measurements?
This will be short as it's been a busy week (is it only Wednesday?) We are basically done with our internal evaluations. I wrote a 2-page summary plus a set of design criteria today and will publish it shortly for the community to see. It will explain the reason for slightly elevated THD (hint: A digital volume pot was required to enable perfect channel tracking for balanced armature IEMs).
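(To make the tracking argument concrete, a schematic sketch - this shows the general principle of a digital volume control, not CEntrance's actual implementation: both channels are scaled by one and the same coefficient, so inter-channel matching is exact at any setting, whereas a dual-gang analog pot can diverge near the bottom of its travel - exactly where sensitive balanced armature IEMs live.)

```python
import numpy as np

def digital_volume(stereo: np.ndarray, attenuation_db: float) -> np.ndarray:
    """stereo: float samples of shape (n, 2). One gain coefficient is
    applied to both channels, so L/R tracking error is zero by design."""
    gain = 10.0 ** (-attenuation_db / 20.0)
    return stereo * gain
```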

We shared a noise graph the other day. Amir indicated he prefers a SINAD graph instead. Since SINAD combines noise and THD into one number, it obscures the fact that in HiFi-M8, SINAD is dominated by THD. Noise is actually quite good. It's on par with our own DACport HD, which Amir has recommended. HiFi-M8 has the same SINAD, more power and less noise, so it's certainly not worse than DACport HD. On top of that it has an internal battery and will work on the plane/train, etc., which DACport HD cannot do.
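(For readers who want to see how those three figures relate, a crude single-tone decomposition sketch - bin-centered test tone, simplified windowing, nothing like an Audio Precision analyzer, and all signal values purely illustrative:)

```python
import numpy as np

def sinad_thd_snr(x: np.ndarray, fs: int, f0: float, n_harm: int = 5):
    """Split a single-tone capture into fundamental, harmonic (THD) and
    residual (noise) power - showing how one SINAD number can be
    dominated by either term."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)

    def power_near(f: float, width: int = 3) -> float:
        b = int(np.argmin(np.abs(freqs - f)))
        return spec[max(b - width, 0):b + width + 1].sum()

    p_fund = power_near(f0)
    p_harm = sum(power_near(k * f0) for k in range(2, n_harm + 2) if k * f0 < fs / 2)
    p_noise = spec.sum() - p_fund - p_harm
    return (10 * np.log10(p_fund / (p_harm + p_noise)),  # SINAD
            10 * np.log10(p_harm / p_fund),              # THD
            10 * np.log10(p_fund / p_noise))             # SNR

# Illustrative: -60 dB 2nd harmonic plus very low noise. SINAD lands
# near 60 dB and is entirely THD-dominated - the situation described above.
fs, f0, n = 48000, 1000.0, 48000
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f0 * t) + 1e-3 * np.sin(2 * np.pi * 2 * f0 * t)
x = x + 1e-6 * np.random.randn(n)
print(sinad_thd_snr(x, fs, f0))
```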

We know that HiFi-M8 is not for everyone. Amir shared that he doesn't understand why it exists. He didn't like the industrial design and felt it was designed for the pro market. That's not the case. We acknowledge that HiFi-M8 simply serves the needs of a different customer than the typical ASR reader. HiFi-M8 is not competing to be the best Desktop DAC. It's another animal altogether.

If you want lowest THD, plenty of options around (hint: Get a desktop DAC). If you are ok with 0.004% THD, are on the move, have balanced armature IEMs and want pitch-black noise floor, 4 outputs (2 balanced and 2 unbalanced) and a whole bunch of portable-oriented features, then HiFi-M8 may be for you. We celebrate the difference in opinion and have a decent backorder to fill. As our research has shown, HiFi-M8 has its customer.
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,654
Likes
240,808
Location
Seattle Area
We shared a noise graph the other day. Amir indicated he prefers a SINAD graph instead.
I did not say that at all. I explained why your FFT was not a good graph to read the dynamic range from. If the driver had worked, I would have run these tests:

[Image: dynamic range measurements at full output (left) and at 50 mV (right)]


The one on the left is at full output (if not exceeding 2/4 volts respectively for unbalanced/balanced out) and the one on the right at much reduced 50 millivolt output to tease out the very point you are trying to make. Once there, I put it in rankings: (click on the image for larger size)

[Image: dynamic range rankings chart]


See this review for examples: https://www.audiosciencereview.com/...qudelix-5k-bluetooth-dac-headphone-amp.17386/
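(In voltage terms, the second test boils down to something like the sketch below - the 4 Vrms and 10 µV figures are hypothetical, not the HiFi-M8's measured values:)

```python
import math

def dynamic_range_db(full_scale_vrms: float, residual_noise_vrms: float) -> float:
    """Dynamic range: full-scale output over the residual noise that the
    reduced 50 mV test exists to expose."""
    return 20 * math.log10(full_scale_vrms / residual_noise_vrms)

print(dynamic_range_db(4.0, 10e-6))  # ~112 dB
```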
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,654
Likes
240,808
Location
Seattle Area
He didn't like the industrial design and felt it was designed for the pro market.
This point is also incorrect. I don't mind the look of it. The thing I take issue with is that it pretends to be a "Pro" product by copying the look of such devices but without delivering the performance, and it makes the device harder to use for consumer applications. Kind of reminds me of the El Camino:

[Image: Chevrolet El Camino]


Cutting the back open doesn't make a sedan a truck!
 

digicidal

Major Contributor
Joined
Jul 6, 2019
Messages
1,984
Likes
4,844
Location
Sin City, NV
Again, sorry, are we still talking about LISTENING tests?
I don't know where this is going, so I'll rephrase again.
Do you believe that it is a scientifically proven fact that the full evaluation of the performance of an audio product, by ear, is equally or more accurate after one listen than after years of evaluation? It is this claim, and only this, that I disagreed with and built an argument against.

"if you are saying that as far as subjective analysis is concerned (which it sounded like) then the sky's the limit... but this forum has "science" in the name, so the limits are much more rational and, well... limited. "

That was exactly my point; that's why I mentioned these limitations and said that it is not for the subjective evaluations that we (I'll admit that I should have implied only me) consult science- and measurement-based reviews. @amirm disagreed with this as well.

Well, in that case I would agree with you (I think?) that it is NEVER possible to have a "full evaluation of the performance of an audio product by ear." Period. Regardless of training it is only possible to provide a subjective evaluation of the sound - which could never be considered (by me and many others) as "full" in any sense. I would still disagree that any subjective analysis gets more accurate over time... in fact, it almost certainly gets less accurate as familiarity accumulates (or at the very least gets more biased). The first take is almost certainly the most free of bias and likely involves the most careful analysis.

That being said, I've never cared about the subjective analysis @amirm put forth on anything... I'm not here for those (but plenty of others are so I think they're a great add-on benefit). I don't have his ears, training, equipment, room, etc. - so those aren't nearly as useful as the IMD, SINAD, and level measurements (to me at least).

This point is also incorrect. I don't mind the look of it. The thing I take issue with is that it pretends to be a "Pro" product by copying the look of such devices but without delivering the performance, and it makes the device harder to use for consumer applications. Kind of reminds me of the El Camino:

[Image: Chevrolet El Camino]


Cutting the back open doesn't make a sedan a truck!

That probably would be a better analogy if that weren't an exceptionally sexy resto-mod El Camino... ;) I'm not a fan of the design either... but I wouldn't kick that one out of my garage. LOL.
 

radioman

Member
Audio Company
Joined
Oct 29, 2020
Messages
70
Likes
97
Location
Chicago
This point is also incorrect. I don't mind the look of it. The thing I take issue with is that it pretends to be a "Pro" product by copying the look of such devices but without delivering the performance, and it makes the device harder to use for consumer applications. Kind of reminds me of the El Camino:

[Image: Chevrolet El Camino]


Cutting the back open doesn't make a sedan a truck!
Thank you for the nice picture. We appreciate that you can inject levity into the discussion and deviate from the "science above all" position when discussing visual preferences and customer use cases. We hear you that beauty is in the eye of the beholder. To each his own, as they say ;)

We see a lot of trucks on the road that carry absolutely nothing in the bed. So it must be the feeling that counts. Perhaps the El Camino is more about the feeling than the truck. But we digress.
 

radioman

Member
Audio Company
Joined
Oct 29, 2020
Messages
70
Likes
97
Location
Chicago
I did not say that at all. I explained why your FFT was not a good graph to read the dynamic range from. If the driver had worked, I would have run these tests:

We are sorry -- we missed the fact that the driver on our website was older and didn't yet include the HiFi-M8. As this review happened out of the blue, we weren't properly prepared, and that caused you inconvenience and frustration. We will be publishing a new driver shortly; it's complete and in beta test now.
 

PeteL

Major Contributor
Joined
Jun 1, 2020
Messages
3,303
Likes
3,846
The first take is almost certainly the most free of bias and likely involves the most careful analysis.

Many showroom salesmen would LOVE to hear this. A LOT of speakers are designed specifically to give a great first impression: muscular bass and details are sexy, and you'll take her home, only to realize how annoyingly shouty she is after a few days...
 

digicidal

Major Contributor
Joined
Jul 6, 2019
Messages
1,984
Likes
4,844
Location
Sin City, NV
Many showroom salesmen would LOVE to hear this. A LOT of speakers are designed specifically to give a great first impression: muscular bass and details are sexy, and you'll take her home, only to realize how annoyingly shouty she is after a few days...
All of which is readily apparent in a simple FR sweep (and maybe a waterfall if it's a resonance vs. midrange peak causing that "shout") viewed prior to auditioning in the first place, right? And yes I'm familiar with BOSE displays and yes they sold millions of them. More to the point would be that unless the speaker was horrible... DSP and/or room treatment could be used to tame all but the most egregious disasters - not to mention that sales floor "auditions" are literally the opposite of a controlled environment evaluation.
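(For completeness, that FR sweep takes only a few lines to produce - a naive sketch of the usual exponential-sweep stimulus and a raw |Y/X| transfer-function estimate, with no Farina deconvolution or smoothing:)

```python
import numpy as np

def log_sweep(f1: float, f2: float, duration: float, fs: int) -> np.ndarray:
    """Exponential (log) sine sweep, the standard FR stimulus."""
    t = np.arange(int(duration * fs)) / fs
    k = np.log(f2 / f1)
    return np.sin(2 * np.pi * f1 * duration / k * (np.exp(t * k / duration) - 1))

def freq_response_db(stimulus: np.ndarray, recording: np.ndarray, fs: int):
    """Raw |H(f)| = |Y(f)/X(f)| in dB - enough to spot the midrange peak
    or boosted bass behind a flattering showroom first impression."""
    X, Y = np.fft.rfft(stimulus), np.fft.rfft(recording)
    freqs = np.fft.rfftfreq(len(stimulus), 1.0 / fs)
    return freqs, 20 * np.log10(np.maximum(np.abs(Y), 1e-12) / np.maximum(np.abs(X), 1e-12))
```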

You seem to keep moving the goal posts here, as we're again talking about speakers, not DACs, so I'll simply have to leave it there.
 

PeteL

Major Contributor
Joined
Jun 1, 2020
Messages
3,303
Likes
3,846
All of which is readily apparent in a simple FR sweep (and maybe a waterfall if it's a resonance vs. midrange peak causing that "shout") viewed prior to auditioning in the first place, right? And yes I'm familiar with BOSE displays and yes they sold millions of them. More to the point would be that unless the speaker was horrible... DSP and/or room treatment could be used to tame all but the most egregious disasters - not to mention that sales floor "auditions" are literally the opposite of a controlled environment evaluation.
You seem to keep moving the goal posts here, as we're again talking about speakers, not DACs, so I'll simply have to leave it there.
Sure, we'll leave it there, but I don't know who's moving the goal posts when you keep disagreeing with me on things I never disagreed with myself, and always coming back to measurements when it was never about that. I don't mind being disagreed with, and I love being proved wrong, but in this argument it's about being misunderstood, and in that case I'll just have to accept that my communication skills are poor, I guess.
What I AM saying is that:
1. Auditioning a piece of audio equipment in different situations, with different content, at different levels, for an extended period of time can reveal flaws or prowess that are not necessarily caught on a first listen.
2. Amir's subjective impressions and listening tests are valuable, but only because he is more skilled and trained than most of us, not because the PROCESS is more valid scientifically, or less biased.

You are welcome to express disagreement with these statements. I, like you, fully value the measurements being performed.
 

digicidal

Major Contributor
Joined
Jul 6, 2019
Messages
1,984
Likes
4,844
Location
Sin City, NV
1. Auditioning a piece of audio equipment in different situations, with different content, at different levels, for an extended period of time can reveal flaws or prowess that are not necessarily caught on a first listen.
2. Amir's subjective impressions and listening tests are valuable, but only because he is more skilled and trained than most of us, not because the PROCESS is more valid scientifically, or less biased.

You are welcome to express disagreement with these statements. I, like you, fully value the measurements being performed.
LOL. 1. Agreed. :eek: You might even hear something differently depending on how hungry you are at the time. That's the inherent problem with all subjective analysis. However, that is the problem of the listener... not the devices in the signal chain. So your point in the context of producing a review for publication by a single individual is confusing (to me at least). There is no possibility to ever test every potential... and even if there were the results would be meaningless in any other context, and certainly to any other person.

2. Disagree (ish). As long as the process is controlled as much as possible, it is more valid scientifically. I'm not saying more valid than the objective testing... just more valid than an uncontrolled random test. Even if there is an inherent bias... if all other variables are controlled, then even those biases will be far more consistent - and the variable (DUT) easier to analyze. I don't recall @amirm claiming to have no bias at all - in fact, he's commented on his biases in many cases (a preference for speakers which most closely approximate the same Harman house curve his Ultimas exhibit, for example). As long as the biases aren't further exacerbated by changing other variables... they become closer to a control themselves.

As I said previously, I don't even consider his subjective analysis in my decisions - I select the products I've purchased based on the measurements... but many people enjoy them, so I certainly don't see the problem with what usually amounts to less than 5% of the review's content, especially when the other 95% is very useful. I also don't get worked up over which panther is chosen, however... and that seems to get people's panties in a bunch also.
 

radioman

Member
Audio Company
Joined
Oct 29, 2020
Messages
70
Likes
97
Location
Chicago
Hello all! We want to thank this community for your continued inquiry.

Your feedback offered us a chance to go back, review our decisions and summarize why we designed the HiFi-M8.

We put together a short paper describing our design criteria for the product. Hopefully it will help you see our decision process and shed some light on why we made it this way (and for whom). In this paper we address THD and noise specs, as they were brought up in this thread. Click the link below to download the PDF paper (4 pages, 900 kb).

HiFi-M8 Design Criteria

Best wishes for your continued enjoyment of music!

Michael Goodman
Founder, CEO, Chief Product Architect
CEntrance
 