> I bought the parts and assembled it myself: Purifi modules from Purifi, Ghent case from Ghent, and Hypex PS from Hypex.
Totally off topic, but what did you end up spending on the amp?
I think I ended up paying around $1,400, which is really dumb considering that VTV sells it for a few hundred dollars less. I didn't realize that they had a Purifi Eval-1 based design, as they started out with just the Hypex buffer.
> Yeah, but we recalibrated each amplifier on each sample, so this would have introduced only random variation.
Would this process have been necessary at all, i.e. are the differences not audible at any volume (possibly hidden details, impulse response, ...)?
I love my Purifis for their speed, their detail retrieval, and their separation.
> On a few run-of-the-mill speakers, I really couldn't tell much of a difference, as they themselves just don't have the detail resolution of the S5.
And that's probably why people don't notice such differences between amplifiers: the speakers they use are often a bottleneck that degrades the actual signal, masking the amp's potential. And it looks like the Magico is not one of them and was a very good choice for this test.
Can't you even come up with an original excuse? The "your equipment isn't resolving enough" line is old enough to collect Social Security.
> I haven't had the patience to read through all of it... But has this thread grown over 100 posts because the OP refuses to do a DBT (or simply a BT) correctly?
You got it.
> If you live in the SF Bay Area, drop by and we can do some 'blind' testing.
I would if I were still there; I escaped in 2009.
> Again, thanks for the comments and ideas; we will definitely factor those into the next time we do a test.
Why did you omit the problem pointed out earlier, that you didn't bench test the actual amps that were used, including electrical FR at the speaker terminals when hooked up to the speakers used? (You could also do this acoustically, with a bit less precision.)
But while the problems that have been pointed out are valid, they don't change the fundamental problem of explaining how I ended up with exactly the same sequence of 1, 2, 3 six times in a row. I would love for a true statistician on this thread to do a better job, but let me take a quick stab at it.
Hypothesis: it is unlikely, if not impossible, to discern audible differences between amplifiers whose bench test results are beyond audibility. In other words, the standard suite of tests that we perform (frequency response, noise, IMD, etc.) fully characterizes the audio response, such that two amplifiers (or DACs) with similar results should be indistinguishable. (Of course, we assume the listening test is performed in a linear response region, i.e. not clipping.)
Below are the real-world limitations in the testing that have been pointed out, and, more importantly, how each could SKEW the results in a particular direction. Random skew doesn't matter, since with a sufficient number of samples it averages out.
1. Inaccurate level measurements due to the use of an acoustic reference instead of an electrical one. Random skew factor, as this would affect each amplifier measurement (1/36) equally.
2. Acoustic memory limitations. Random skew factor, as this would affect each amplifier measurement (1/36) equally.
3. Preamp impedance issues. This could be a systematic error that skews in a particular direction, but it seems highly unlikely with modern DACs: the D90SE has an output impedance of 100 Ω, while the Benchmark's input impedance is 50 kΩ and the Eval1's is 10.2 kΩ. It isn't clear why this would matter.
4. Clipping. None of the amplifiers were clipping during the listening selections; all levels were moderate, since our interest was in hearing differences, not in how loud the amps could get. This doesn't seem to be a plausible reason to invalidate the results.
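On point 3, it is easy to put rough numbers on the impedance interaction. A minimal sketch (assuming the D90SE's 100 Ω output impedance is purely resistive and flat with frequency) of the level drop each input impedance would cause:

```python
import math

def loading_loss_db(z_out: float, z_in: float) -> float:
    """Level drop (dB) from the z_out/z_in voltage divider at a preamp/amp interface."""
    return 20 * math.log10(z_in / (z_out + z_in))

z_out = 100.0  # D90SE output impedance in ohms (figure quoted above)
for name, z_in in [("Benchmark", 50_000.0), ("Eval1", 10_200.0)]:
    print(f"{name}: {loading_loss_db(z_out, z_in):.4f} dB")
```

Both drops come out under 0.1 dB and differ by roughly 0.07 dB, far below the level differences usually considered audible, which supports the conclusion that this interface is not a plausible systematic skew (setting aside any frequency dependence of the output impedance).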
So what is the probability of RANDOMLY picking the same order in all 6 tests? It is (1/6)^6, or 1 in 46,656. In this case the results strongly favor an audible difference that can't be explained by chance. Even if method errors invalidated half the trials, leaving only 3 valid tests, the odds would still be 1 in 216. That still doesn't favor the explanation that the result is random and the differences therefore inaudible.
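The arithmetic is easy to check. A minimal sketch, assuming each trial is an independent pure guess over the 3! = 6 possible orderings of three amplifiers:

```python
from fractions import Fraction
from math import factorial

def p_same_order_by_chance(n_trials: int, n_amps: int = 3) -> Fraction:
    """Probability of guessing one fixed ordering of n_amps amplifiers
    in every one of n_trials independent trials."""
    return Fraction(1, factorial(n_amps)) ** n_trials

print(p_same_order_by_chance(6))  # 1/46656: all six trials correct
print(p_same_order_by_chance(3))  # 1/216: only three trials counted
```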
Here is what I would strongly recommend: why don't a few others repeat the tests and see what you get? There is nothing like actually using the scientific method and testing a hypothesis, rather than theorizing about it. Remember, the basis for this hypothesis is that bench testing can measure anything we can discern audibly. How well have we tested that hypothesis? If anything, with the specs of every new DAC and amplifier approaching the limits of our test equipment, testing it should be getting easier and easier.
Correct. This forum is ultimately about the 'practice' of audio. What I have noticed since I posted this thread is the plethora of responses from folks offering theoretical objections in a forum focused on the practical (testing of real gear). Why don't more of you test some of these items out and see whether your experience in fact matches your expectations?
But even the theory does not always seem to be fully reflected here, if that is even possible. A good example is the repeated reference to Benchmark's DF paper.
The short paragraph in '7.4.3 Damping factor' is enough to show how complex the theory can be, and I think this one is closer to reality:
Link
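For readers new to the debate: the audible effect of damping factor reduces to how the amplifier's output impedance interacts with the speaker's impedance curve. A toy illustration with made-up numbers (a speaker swinging between a 3 Ω minimum and a 30 Ω peak; the impedances and DF labels are hypothetical, not taken from either paper):

```python
import math

def deviation_db(z_out: float, z_speaker: float) -> float:
    """Level at the speaker terminals relative to an ideal zero-impedance source."""
    return 20 * math.log10(z_speaker / (z_speaker + z_out))

# Hypothetical amplifier output impedances: a high-DF vs a low-DF case.
for z_out, label in [(0.01, "DF 800 re 8 ohms"), (0.8, "DF 10 re 8 ohms")]:
    swing = deviation_db(z_out, 30.0) - deviation_db(z_out, 3.0)
    print(f"{label}: frequency-response swing of {swing:.2f} dB")
```

With these numbers the low-DF amplifier tracks the impedance curve with a swing near 2 dB, clearly audible, while the high-DF one stays within a few hundredths of a dB. That is why DF matters up to a point and then stops mattering.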
> I don't think it's because of super-human hearing (at my age I certainly don't have that anymore), but the setup used plays a very big role. And it is not really an unexpected observation, because well-known manufacturers talk about it, and they should know best.
It does in fact assume super-human hearing acuity, because all properly conducted listening tests show that measured differences between excellent amplifiers such as these are below the threshold of human hearing.
> Also, recording studios and their engineers swear by this product, especially because of its high DF in the mid-to-high range: Link
You do love your advertisements, don't you?
This is interesting, because it is inconsistent with something in the first edition of Toole's book. I would like your comments on why the participants could not easily identify the difference between the German and Danish speakers, which were identical except that the crossovers were voiced differently (one with the German voicing, the other with the Danish voicing), even though the FRs were visually quite different.
> You do love your advertisements, don't you?
It was not intended as an advertisement, but simply to show that this behavior also plays a major role in the professional sector.
It's an advertisement. Oh, sorry, it's a LINK to an advertisement, sooooo different.