
Master Thread: Are measurements Everything or Nothing?

welwynnick

Active Member
Forum Donor
Joined
Dec 26, 2023
Messages
245
Likes
200
I quite agree about the threshold of hearing, but I was talking about the audibility of jitter, which I have found to be quite audible with level-matched blind comparisons. That was a few years ago, and it could be that newer equipment has improved to the extent that it's inaudible now. I'm sure Topping and SMSL have got it licked, but with anything involving HDMI I'm yet to be convinced.
 

DLS79

Addicted to Fun and Learning
Forum Donor
Joined
Dec 31, 2019
Messages
744
Likes
971
Location
United States
I quite agree about the threshold of hearing, but I was talking about the audibility of jitter, which I have found to be quite audible with level-matched blind comparisons. That was a few years ago, and it could be that newer equipment has improved to the extent that it's inaudible now. I'm sure Topping and SMSL have got it licked, but with anything involving HDMI I'm yet to be convinced.


Even for this, papers exist.


In order to determine the maximum acceptable size of jitter on music signals, detection thresholds for artificial random jitter were measured in a 2 alternative forced choice procedure. Audio professionals and semi-professionals participated in the experiments. They were allowed to use their own listening environments and their favorite sound materials. The results indicate that the threshold for random jitter on program materials is several hundreds ns for well-trained listeners under their preferable listening conditions. The threshold values seem to be sufficiently larger than the jitter actually observed in various consumer products.
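For anyone unfamiliar with how "artificial random jitter" gets put onto program material for a test like this, here is a minimal sketch of one common approach: re-evaluating the samples at time instants perturbed by random timing error. This is only an illustration under that assumption, not a reproduction of the paper's actual processing chain, and the tone, sample rate, and 250 ns figure are just example values.

```python
# Minimal sketch: simulating random sampling jitter in the data domain by
# resampling a signal at perturbed time instants. Illustrative only; the
# paper's actual processing chain is not reproduced here.
import numpy as np

def add_random_jitter(signal, fs, jitter_rms_s, seed=0):
    """Return `signal` re-evaluated at instants t_n + dt_n, where dt_n is
    zero-mean Gaussian timing error with RMS `jitter_rms_s` seconds."""
    rng = np.random.default_rng(seed)
    t_nominal = np.arange(len(signal)) / fs
    t_jittered = t_nominal + rng.normal(0.0, jitter_rms_s, size=len(signal))
    # Linear interpolation stands in for the band-limited reconstruction a
    # real converter would perform.
    return np.interp(t_jittered, t_nominal, signal)

# Example: 1 kHz tone at 48 kHz with 250 ns RMS of added random jitter.
fs = 48_000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1_000 * t)
jittered = add_random_jitter(tone, fs, jitter_rms_s=250e-9)
print(f"worst-case sample error: {np.max(np.abs(jittered - tone)):.2e}")
```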
 

welwynnick

Active Member
Forum Donor
Joined
Dec 26, 2023
Messages
245
Likes
200
I've been quoting that paper for many years as an example of bad science that the gullible take as evidence to support their mistaken opinions.

Have another read and see if you can find the glaring faults.
 

DLS79

Addicted to Fun and Learning
Forum Donor
Joined
Dec 31, 2019
Messages
744
Likes
971
Location
United States
I've been quoting that paper for many years as an example of bad science that the gullible take as evidence to support their mistaken opinions.

Have another read and see if you can find the glaring faults.

I'm sorry, but that is not how scientific debate works. If you think something is wrong/incorrect, you single it out and describe/explain why you think it is wrong. Ideally you support your argument with research of your own or that of others!
 

SIY

Grand Contributor
Technical Expert
Joined
Apr 6, 2018
Messages
10,511
Likes
25,351
Location
Alfred, NY
Just because an IC can do it doesn't mean the manufacturers have configured it to do so, or added any auxiliary circuits needed to make sure it works as well as it possibly can.
Tell us you don't know how digital transmission works without saying you don't know how digital transmission works. It doesn't have to "work as well as it possibly can," it just has to be good enough, which it trivially is.

This is fabulously a non-problem up to the point where you get glitching and dropouts. Irrelevant to jitter and sound quality.
 

SIY

Grand Contributor
Technical Expert
Joined
Apr 6, 2018
Messages
10,511
Likes
25,351
Location
Alfred, NY
I've been quoting that paper for many years as an example of bad science that the gullible take as evidence to support their mistaken opinions.

Have another read and see if you can find the glaring faults.
So where is your data showing that picosecond-level jitter is audible?
 

antcollinet

Master Contributor
Forum Donor
Joined
Sep 4, 2021
Messages
7,751
Likes
13,086
Location
UK/Cheshire
Yes some of them are, but they're always significantly worse than other connections. Always has been, so it doesn't sound like a problem that's fully fixed.

If it really was fixed, every megabuck AVP would measure like a cheap thumb drive.
If it's inaudible, it's fixed. Fixing it more doesn't make it even more inaudible.

And AVRs have a whole load of other compromises they need to make that standalone DACs - or even stereo integrateds - simply don't have to deal with.
 

Newman

Major Contributor
Joined
Jan 6, 2017
Messages
3,530
Likes
4,371
I was talking about the audibility of jitter, which I have found to be quite audible with level matched blind comparisons.
Oh, this is exciting! POST YOUR FINDINGS!!
 

Newman

Major Contributor
Joined
Jan 6, 2017
Messages
3,530
Likes
4,371
That's a harsh reception everyone. Is that how you always greet people?
Do you mean greet people in general, or do you mean greet people who come here and lecture down to the forum with a ‘here’s how things really are’ post that is full of drivel?

Perhaps you, just to be consistent, have also berated him for how he greeted us? Maybe I missed that post of yours?

Or is that post missing because you agree with some of his post’s inaccuracies? So you are in effect taking sides, and that’s why you berate his critics for harshness, but not him?
 

Ghostofmerlin

Member
Forum Donor
Joined
Jan 16, 2024
Messages
37
Likes
61
I have now read the entirety of this thread, as well as the "golden ears" welcome thread, so I'm going to make a comment. I think it's pretty clear that most of these people who come to discuss how they "hear things" with any of their equipment are trolls.

It is really quite baffling that people who will claim to "hear" differences between two identically measuring devices or cables can claim the high ground. The onus would be on them to show that there is either a measurable difference in the tested device (which Amir takes care of) or that there is some sort of measurable difference in the measuring device, i.e. their ears. Just telling people you can hear something different is absolutely not provable without a blind test. Of course this is all information that the non-troll members here already believe, but it strikes me how similar the concept of measuring and testing how humans perceive sound is to biomedical research and the never-ending quest to get rid of biases.

Regarding the placebo effect, this term gets tossed around a lot in audio sales and consumption. The placebo effect is well studied, however, and is so important that the entire medical community considers it paramount in their research. There was a time when much of medical therapy was based on theory or "what works in my hands," but over time many treatments and medications thought to be effective have been proven to not be so. A common figure bandied about for the amount of effect that a placebo has is about 40%. But what does that 40% mean? It's not a plus/minus phenomenon. You can have patients who will have a placebo effect on top of an actual effect. That is why randomized, controlled, double blind clinical trials are done with medications. The goal is to eliminate bias.

There are some in the audio community who like to say there is no placebo effect with cancer drugs, or surgery, but this is not something that can be proven within the ethical constraints of modern medicine. You cannot ethically deny a treatment to a person, even if done in research, if there is a high likelihood of morbidity/mortality and the medication or treatment has a reasonable chance of working. You could only measure the effect of placebo on a surgery via a sham surgery, which just isn't going to be done on people outside of notably fascist regimes. When looking at a medication, a study may come back with a result of "works no better than placebo". This doesn't mean that you always get a placebo effect, though. Again, it's not a plus/minus thing, and some of that 40% mentioned before may not get as much "placebo" as another individual. It is likely that everything done in medicine has some form of placebo effect for some individuals, but we probably will never know for many treatments. There is also a defined outcome measure that is easily and unambiguously measurable with these sorts of treatments: mortality. If it decreases mortality, it is deemed to be "working", but we still don't know if there is a placebo component here; it's just assumed there is not because of the gravity of the situation. But it's not been measured, and likely won't be measured.

There are three different general concepts when it comes to medical research for a medication. The first is mechanisms, meaning what the tested drug actually does. You look at the molecular makeup, whether or not it blocks or upregulates an enzyme, occupies a receptor, binds a messenger ligand, etc. This is typically done in the laboratory where controls are easier to manage and will have results provided by instrumentation designed to measure all of those features to a highly detailed level. This would be akin to Amir's measurements, done on a benchtop in a controlled setting. And while this research is important, it is not going to tell you everything you need to know about how the medication will affect the body of a given human, or a group of humans. That is why we also have tests on pharmacodynamics and pharmacokinetics, how the treatment affects the body. These concepts tell us how a drug is metabolized in the body, what the distribution of the drug in the body will be, how long the time to effect is, how long effects last, and those sorts of things. Then you have tests designed to measure how well a designed treatment or drug performs to achieve the desired outcome, i.e. measuring blood pressure, heart rate, blood cell counts, etc. You can also measure two of the senses with high precision, with field of vision testing and audiology.

Where things get really fuzzy with medical research is when the end result is not measurable with an objective test. There are times when the patient has to be the measuring instrument. The best example for this is probably pain therapy. For evaluation of how effective a pain medication is we have to rely on things such as a visual analog scale, which can be reproducible and used for populations, but is never going to be perfect. And you really can't take a large research study on pain medication and confidently relate the results to a single person. There are a lot of reasons why this is the case, but some of it can be ascribed to the placebo effect. There is also malingering and secondary gain, often seen with pain medication prescribing. There will always be that person who says, "yeah, I understand that the research says ibuprofen and Tylenol work as well as Percocet, but only Percocet works for me." There's just no way to really deal with that as a medical professional. And then there is also unethical prescribing, also seen with pain medication prescribing. This sort of research is really most prone to bias because of the patient bias. For a pain example, a patient may look at an unblinded option of extended-release morphine versus high-dose ibuprofen/Tylenol, which may be equal on double blind studies, and say that obviously the extended-release morphine will work better. But blinded, the results are often different.

In summary, no ethical medical professional is going to go along with an unblinded study for medications or treatment. The medical world has known for decades that a randomized, controlled, double blind study is the only way to really determine if something works when it comes to the human body. There are certainly times when the best research isn't available and you have to fall back on theory and best experiences, but this isn't the case with audio. And so this seems to be where we are with the audio world when it comes to scientific accountability. Why would anyone believe an unblinded, non-controlled opinion on how well something sounds? Particularly, say, cables that cost thousands of dollars. Well, you have the placebo effect that is brought up here a lot. And it will undoubtedly explain most of the results people experience, and then there is the secondary gain of status. "Hey man, how does my $7,000 set of speaker cables sound? Good, right?". Thankfully nobody gets a buzz from snorting their copper. Is it possible that some people can have golden ears and hear something different for a given source? Absolutely. In fact, it is very likely that some do, if you look at probability curves. The numbers will be vanishingly low, however, and in the realm of statistically insignificant.
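To put a number on the "not provable without a blind test" point: the standard way to judge a blind, forced-choice listening result is a one-sided binomial test against guessing. A minimal sketch, assuming a protocol (e.g. ABX) with a 50% chance of a correct guess per trial; the trial counts below are made-up numbers, purely for illustration.

```python
# Minimal sketch: is a blind forced-choice listening result better than
# guessing? Trial counts below are hypothetical, purely for illustration.
from math import comb

def p_value_at_least(k_correct, n_trials, p_chance=0.5):
    """One-sided binomial probability of scoring k_correct or more out of
    n_trials by chance alone (p_chance = probability of a lucky guess)."""
    return sum(
        comb(n_trials, k) * p_chance**k * (1 - p_chance)**(n_trials - k)
        for k in range(k_correct, n_trials + 1)
    )

print(p_value_at_least(12, 16))  # 12/16 correct: p ~ 0.038, unlikely to be luck
print(p_value_at_least(9, 16))   # 9/16 correct:  p ~ 0.40, consistent with guessing
```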
 

Axo1989

Major Contributor
Joined
Jan 9, 2022
Messages
2,902
Likes
2,954
Location
Sydney
It might be more difficult to taste the difference if you live in the US, as over there any blended oil containing 51% olive oil can be labeled as such. Whereas in the EU it must be 100% olive oil.

51% ... heathens !!
 

Newman

Major Contributor
Joined
Jan 6, 2017
Messages
3,530
Likes
4,371
I have now read the entirety of this thread, as well as the "golden ears" welcome thread,
:eek:

You know, you really didn’t have to do that. ASR is not a hard-labour prison camp. ;)

But thanks for your very interesting contribution.

cheers
 

welwynnick

Active Member
Forum Donor
Joined
Dec 26, 2023
Messages
245
Likes
200
I think it's pretty clear that most of these people who come to discuss how they "hear things" with any of their equipment are trolls.
Trolls are malevolent, but are you suggesting that's also the motivation for subjectivists? That's a bit of a stretch.
I think that subjectivists are motivated by improving the audio state of the art just as much as objectivists.
I'll call them listeners and testers from now on.
Listeners are just as scornful of testers as testers are of listeners.
It is really quite baffling that people who will claim to "hear" differences between two identically measuring devices or cables can claim the high ground.
I'll offer something towards reconciliation.
I don't believe listeners really think they can hear things that can't be measured; it's more about what the measurements need to be.
Noise is never a good thing, but listeners do often prefer equipment with higher distortion, and that's certainly measurable.
There's a never-ending debate about artistic intent, and what the music is supposed to sound like, which may be different to what's actually recorded.
My guess is that we now have some domestic equipment that measures better than some studio equipment, so we don't hear what the artist wanted us to hear.
Listeners may prefer equipment that sounds better even when it measures worse, perhaps because it mirrors the studio equipment.
I was thinking about Amir's recent Bricasti Audio M1SE review.
It measured very well in every respect except that it had a large number of distortion harmonics, with the higher orders progressively reducing in amplitude.
The test results taken as a whole suggest that Bricasti are quite capable of making the DAC behave exactly how they want it to.
The distortion is higher than other DACs, but that's how they want it.
Equipment like Bricasti, Gustard, etc. follows a different course to Topping, SMSL, etc., who pursue the best measurements, and all credit to them.
It's easy to establish what's more accurate, but not so easy to establish what's preferable.
 

BDWoody

Chief Cat Herder
Moderator
Forum Donor
Joined
Jan 9, 2019
Messages
7,082
Likes
23,540
Location
Mid-Atlantic, USA. (Maryland)
I'll call them listeners and testers from now on.
Listeners are just as scornful of testers as testers are of listeners.

Seems a bit of a false starting point.
Maybe controlled testers vs. uncontrolled testers might be better.

It's hard to discuss this stuff rationally with folks who refuse to recognize some basic human limitations.

Noise is never a good thing, but listeners do often prefer equipment with higher distortion, and that's certainly measurable.

Can you provide evidence of this? I mean, beyond the common claims. I think people decide they MUST like distortion because they like THIS box and it has a lot, so I guess that proves they must like distortion. Doesn't work that way.

I was thinking about Amir's recent Bricasti Audio M1SE review.
It measured very well in every respect except that it had a large number of distortion harmonics, with the higher orders progressively reducing in amplitude.
The test results taken as a whole suggest that Bricasti are quite capable of making the DAC behave exactly how they want it to.

How is that suggested? Do you think they listened to that power supply spike and tuned it just right? What makes you believe, other than really wanting to, that the Bricasti didn't just turn out how it turned out, and that it was good enough to not be offensive under most circumstances? I think you give these people much more credit than is warranted.
It's easy to establish what's more accurate, but not so easy to establish what's preferable.

Maybe someone can establish that they hear a difference first, before getting into how people prefer it.
 

welwynnick

Active Member
Forum Donor
Joined
Dec 26, 2023
Messages
245
Likes
200
I'm sorry, but that is not how scientific debate works. If you think something is wrong/incorrect, you single it out and describe/explain why you think it is wrong. Ideally you support your argument with research of your own or that of others!
In the research that was performed in that paper, they simulated the effect of jitter by modifying the digital data, instead of actually adding jitter, which only affects the clock.
They did nothing to minimise jitter in their baseline test.
They didn't quantify the jitter in their baseline test set-up, even though it was PC based and full of jitter already.
This paper became popular because it's available for free, while proper scientific papers cost proper money.
This paper wasn't peer-reviewed and published by the AES, which is a trusted resource.
Their well-trained listeners couldn't distinguish between 250ns of simulated jitter and 250ns of real jitter, which proves nothing except how much jitter was in their baseline test.
It's analogous to testing a Purifi amplifier using a signal generator with 1% THD, and then concluding that the amplifier has 1% THD.
I explained this earlier in this very thread.
The paper is a sham, but you can't see it, yet you count this as scientific proof?
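For what it's worth, the "baseline floor" objection can be put into rough numbers. A minimal sketch, under the assumption that the test rig's own random jitter and the deliberately added random jitter are uncorrelated, so their RMS values combine in quadrature; the 200 ns baseline and the added amounts are hypothetical, not measurements of the paper's setup.

```python
# Minimal sketch of the "floor" objection: if the playback chain already has
# substantial uncorrelated random jitter of its own, adding more changes the
# total far less than the added amount suggests. All numbers are hypothetical.
import math

def total_rms_jitter_ns(baseline_ns, added_ns):
    """Uncorrelated random jitter sources combine in quadrature (RMS)."""
    return math.hypot(baseline_ns, added_ns)

baseline = 200.0  # hypothetical jitter already present in the test setup, ns RMS
for added in (50.0, 100.0, 250.0):
    total = total_rms_jitter_ns(baseline, added)
    print(f"add {added:5.0f} ns to a {baseline:.0f} ns floor -> "
          f"{total:5.1f} ns total (+{100 * (total / baseline - 1):.0f}%)")
```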
 

welwynnick

Active Member
Forum Donor
Joined
Dec 26, 2023
Messages
245
Likes
200
So where is your data showing that picosecond-level jitter is audible?
I'm quite sure that picosecond jitter is inaudible.

HiFi wasn't always as good as it is now, though. Re-clocking was never very effective, and some manufacturers depended on using a DAC-master configuration to get round the problem. This was done by using separate clock connections, or using flow-control protocols in i.LINK, Denon Link or, more recently, asynchronous USB. I found that whenever I used one of those connections instead of S/PDIF or TOSLINK, the sound quality was significantly improved. Jitter was probably quite bad then, but it was clearly audible with blind level-matched comparisons. I think things have improved since then.
 

welwynnick

Active Member
Forum Donor
Joined
Dec 26, 2023
Messages
245
Likes
200
The ones that I did at home using a Pioneer DV-757 and a Sony TA-DA9000ES. I played it to everyone I could persuade to listen, and they all heard the difference.
 

SIY

Grand Contributor
Technical Expert
Joined
Apr 6, 2018
Messages
10,511
Likes
25,351
Location
Alfred, NY
The ones that I did at home using a Pioneer DV-757 and a Sony TA-DA9000ES. I played it to everyone I could persuade to listen, and they all heard the difference.
I'm sure this was done with complete rigor and without coaching.
 