
Do objective YouTube reviewers exist?

eigenvalue

Member
Joined
Apr 29, 2021
Messages
20
Likes
38
I'm pretty tired of watching YouTubers who review equipment such as DACs and amplifiers. Many seem to spout stuff like "this amplifier's bass is recessed and mids are forward, closed soundstage, bad dynamics, etc.", despite the amp measuring as neutral as a flat line with inaudible distortion. So my question is: are there YouTubers who actually do objective comparisons and blind ABX testing, or does every single one of them just spout the same nonsense? I know of Amir's YouTube channel, which is great, but of course he doesn't have videos for everything.
 

LTig

Master Contributor
Forum Donor
Joined
Feb 27, 2019
Messages
5,814
Likes
9,532
Location
Europe
Erin's Audio Corner. No blind ABX tests, but measurements.
 

MOCKBA

Member
Joined
Sep 23, 2019
Messages
80
Likes
28
Preparing a good YouTube video requires a lot of effort. Obviously people want to be rewarded for that, so there is an incentive not to be objective. But YouTube actually does the job well: I like videos where a reviewer shows the product properly, and if I need technical detail I look it up elsewhere. So not many complaints from my side.
 

GoldenOne

Not Active
Joined
Jun 25, 2019
Messages
201
Likes
1,469
I've been quite open with my approach to reviews and my view on things.
I think that the polarisation between objectivists and subjectivists is getting worse, and it isn't good for anyone.

There are far too many people who will blindly claim that X level of spec Y is inaudible and that anything beyond it makes no difference, even when there is insufficient evidence to support that claim.
Or that so long as a product measures beyond say -120dB in THD/IMD/dynamic range etc, that it is "transparent" and therefore will sound identical to any other similarly well measuring product.

I do not believe this is the case, and hell even Audio Precision themselves are quite happy to tell you this:

There are many factors which cannot be shown in a standard test suite, and there are also many metrics for which there is surprisingly little evidence to support the currently assumed threshold of audibility.
Additionally, many studies were conducted with ordinary listeners rather than musicians or experienced listeners, who in various other studies have repeatedly been shown to have much better audibility thresholds in several areas.

Objective performance is important to me, and in my view there are almost no reasons to buy an objectively BAD product. In fact, just about everything in my main chain is still some of the best-measuring equipment in its category:
- Holo May, which is the best-measuring R2R DAC by a long shot and in many areas one of the best DACs, period
- Holo Serene, which to my knowledge is currently the best-measuring preamp, though there are many preamps with no publicly available measurements (previously I used the Goldpoint attenuator, which is as clean as it gets)
- Benchmark AHB2, which, again to my knowledge, is still the best-measuring speaker amp on the market, with the possible exception of some Class D options.

But I absolutely believe that just because a product measures well, it does not mean that it will sound identical to another well measuring product.
Take DDCs, for example. This is an area that many on this forum would laugh at, but there is an audible difference there. In an assisted double-blind test I can correctly choose, at a statistically significant level, which DDC is being used, even though both of them have measured inferred jitter that many here would consider below the threshold of audibility.

This either indicates that I have 'golden' hearing, which I do not believe to be the case, or that the assumed audibility threshold is incorrect. Given that there IS evidence that human hearing beats the Fourier uncertainty principle (and is also better in experienced listeners), the latter is probably the more likely explanation.
https://phys.org/news/2013-02-human-fourier-uncertainty-principle.html

Objective performance is important, but it's also important to understand when something is an assumption, rather than fact.


On the other hand, you have people who refuse to consider any objective evidence whatsoever, and will purchase products that they feel sound better even if objective performance is shown to be exceptionally poor (such as Audio GD R2R DACs) or shows no difference whatsoever (fancy power cables etc).

I don't believe this is good at all because it means you are leaving people's purchasing decisions up to marketing, word of mouth, and fluff, rather than actual engineering or performance.
Hell even the aesthetics of a product could likely play a part here. I'm sure if you made an amp orange people would suddenly perceive it as subjectively "warmer".
Parroting, placebo, expectation bias etc are all serious problems which far too many people are quite happy to ignore so that they can keep believing they are correct.
No one is immune from placebo. There have been times where I've been convinced I heard a difference or change, but then as soon as I did a blind test I couldn't pick it out at all.
It's important to check for this stuff. Both because then it filters out the placebo, and also can help confirm if something IS indeed there even if you don't have an explanation for it.
(Though it's important to note that a blind test cannot confirm that two things are the same, since you can expectation-bias yourself into not hearing a difference. A swimming teacher failing to make you float doesn't prove that humans can't swim.
It can, however, confirm to a statistically significant degree that a difference is present.)



I sit in the middle of objective/subjective.
I want stuff that has clearly been well thought through, and proper engineering has made the product possible.
But if I also have two DACs, both perfectly volume-matched, both measuring exceptionally well, and both supposedly "audibly transparent", and yet they don't sound the same, and I can reliably tell the difference in a double-blind test to a statistically significant level, I can't simply ignore that and pretend it doesn't exist.
I want to find out WHY that is. And it's frustrating when half of the community says "well measurements are pointless anyway so just use your ears and enjoy it" and the other half says "nope, impossible, it doesn't exist".

I just wish we could have more of a conversation about this stuff.
I'd love to find out WHY some of these things happen. WHY some things that "shouldn't" make a difference DO make a difference.

And to be clear, it's not like I'm peddling expensive stuff. There are of course many instances where getting the best in all areas is going to be pricey, but in many situations there are very expensive products that perform worse both objectively AND, to my ear, subjectively.
Going back to the DDC example: I normally use USB on my DAC, but when I was comparing a few DDCs I had the ~£1000 Denafrips Hermes and the ~£200 Pi2AES hooked up to my DAC. Both via I2S, both bit-perfect, no DSP or alteration.
One would assume either that there would be no difference, because anything below X level of jitter should be inaudible anyway, or that the Hermes, with its linear PSU, oven-controlled clocks, isolation and all sorts of other extras, would be the winner.

And yet, to my ears, the Pi2AES was slightly better: images were slightly more precise. And yes, I did an assisted blind test, and with a p-value of about 2% I could correctly choose which DDC was the source.
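For reference, the p-value in a forced-choice blind test like this comes from the binomial distribution against a fair-coin null. A minimal sketch of the arithmetic (the trial counts below are hypothetical illustrations, not the actual numbers from the test, which the post doesn't state):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided exact binomial p-value: the probability of getting
    at least `correct` answers right out of `trials` purely by
    guessing (chance = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Hypothetical examples: 10/12 correct gives p ~= 1.9%, roughly the
# "about 2%" quoted; 8/10 correct only reaches p ~= 5.5%.
print(f"{abx_p_value(10, 12):.4f}")  # 0.0193
print(f"{abx_p_value(8, 10):.4f}")   # 0.0547
```

Note how sensitive the result is to a single trial: the same listener scoring one fewer correct answer can move the result from "significant" to "not significant", which is one reason trial counts matter.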

I then measured the inferred jitter on both, and the Pi2AES was actually slightly better objectively, despite both having inferred jitter completely below -140 dB (~7 ps). It's also important to note that a J-Test should not be directly equated with harmonic distortion in terms of audibility: the J-Test is a clever way to measure jitter using an FFT, but its audibility thresholds should not be treated as the same as those for THD, IMD or noise.

So in this case the much cheaper option was the winner. And the fancy 'audiophile components' product wasn't as good.



I'd be quite happy to make a video doing some blind tests on some of this stuff. To be honest the only reason I haven't is because I just assumed that anyone who disagreed that whatever I was comparing could make a difference would simply claim I'd faked it somehow so there wouldn't be all that much point.



Anyway, to summarise:
I value objective performance and would likely never buy an objectively bad product unless it was specifically because it had some subjectively enjoyable quality (tube amps for example are known to be quite coloured in many cases but people like them a lot. So long as you understand the shortcomings then that's ok IMO).
I also do not think that just because something measures very well that it is guaranteed to sound excellent or the same as any other well measuring product.
We don't have explanations for everything that we subjectively experience. And so whilst I would love to be able to say with certainty that staging, imaging, timbre etc are due to factors X, Y and Z, we simply can't do that.
And so until then we can only describe the subjective result, and make assumptions as to the objective explanations for them.


The MQA video was a bit of a different area because it was effectively a black box. We have no idea exactly how it works and so I simply set out to test their own marketing claims.
I quite deliberately avoided giving any subjective impressions or my thoughts on MQA vs Lossless quality because in my opinion there simply isn't enough evidence to reach a conclusion.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,595
Likes
239,634
Location
Seattle Area
Or that so long as a product measures beyond say -120dB in THD/IMD/dynamic range etc, that it is "transparent" and therefore will sound identical to any other similarly well measuring product.

I do not believe this is the case, and hell even Audio Precision themselves are quite happy to tell you this:
No, he doesn't say that (he is a member here, btw, and I was the one asking him questions at the end of that video). If you believe what you typed above, then you are in the wrong forum.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,595
Likes
239,634
Location
Seattle Area
We don't have explanations for everything that we subjectively experience.
Subjective experience is the Gold Standard in audio evaluation. But you have to a) do them properly and b) show that you have critical listening abilities. Just playing something and pontificating about what you think you heard is storytelling, not anything to do with proper sound evaluation.

If you have not watched this, I highly suggest that you do so now before you mislead more people:

 

richard12511

Major Contributor
Forum Donor
Joined
Jan 23, 2020
Messages
4,335
Likes
6,702
While I disagree with most things @GoldenOne said in the post above: I beg to differ.

Some of his publications, such as his recent MQA deep dive, are quite valuable to this forum, IMHO.

Same. I'm in the "all DACs/amps/preamps generally sound the same unless something is broken (changing the frequency response)" camp, but GoldenOne's MQA video was one of the best audio videos I've ever seen, and definitely valuable to this community.

*Edit: And I think it's valuable to have subjectivists here. I used to be one when I was younger, and it's a good balance that keeps us in check :).
 

charleski

Major Contributor
Joined
Dec 15, 2019
Messages
1,098
Likes
2,240
Location
Manchester UK
I did an assisted blind test, and with a P-Value of about 2% I could correctly choose which DDC was the source.
Failing to disprove the null hypothesis doesn’t mean you proved anything.

If you do ten ABX tests and just guess each time, there's a (1 - 0.98^10) = 18.3% chance you'll get a 2% p-value or better on at least one of them. The key factor is repeatability, but too many people think that getting lucky on one test means they've shaken the world of science.
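The 18.3% figure is the family-wise error rate for repeated testing. A quick sketch of the arithmetic (the function name is my own shorthand):

```python
def family_wise_error(alpha: float, tests: int) -> float:
    """Probability of at least one false positive when running
    `tests` independent tests, each judged at significance `alpha`."""
    return 1 - (1 - alpha) ** tests

# Ten independent ABX sessions, each judged at p = 0.02:
print(f"{family_wise_error(0.02, 10):.3f}")  # 0.183
```

This is why repeatability matters: a single 2% result is only convincing if it wasn't cherry-picked from many attempts.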
 

GoldenOne

Not Active
Joined
Jun 25, 2019
Messages
201
Likes
1,469
Failing to disprove the null hypothesis doesn’t mean you proved anything.
And I agree fully
 
OP
eigenvalue

Member
Joined
Apr 29, 2021
Messages
20
Likes
38
Failing to disprove the null hypothesis doesn’t mean you proved anything.

Also, if the test was valid, why not go through the methodology, the results, the controls that were put in place, and so on?
 