
"Things that cannot be measured"

The modern AVR that we are using is in fact a sound processing unit.
When Dirac Live is engaged, some strange side effects similar to the ones observed by a sound engineer could occur.
Then it’s Dirac, not your amp.

EDIT: It's also probably because one of your tweeters is damaged.
Post in thread 'Onkyo TX-RZ30'
https://audiosciencereview.com/forum/index.php?threads/onkyo-tx-rz30.56776/post-2277395

You cannot hope to solve any issues with any amp until you have your speaker issues fixed.

You could switch to headphones and work it from that angle, in fact I would recommend starting there and seeing if you are hearing what you are describing while you fix the speaker issue(s). If not, then confirm you have one good speaker and check everything in mono with that one good speaker.
 
Epistemologically, the question of whether there exist any immeasurable auditory phenomena is beside the point. The real issue hinges on the existence of qualia, which are real but not quantifiable, which are pivotal to why we even care about this stuff in the first place, and which can be communicated to others but (probably?) can never be identical among people.

But that's also beside the point, because either way, we're back where we started:
1. We can ascertain the most neutral, least colored or distorted tools to reproduce sound.
2. We cannot know without direct, personal, empirical experience whether that is the sound we prefer (even very rigorous preference studies say nothing about the preferences of any single individual, after all--they merely present probabilities, which are useful to designers and engineers but not so much to an individual listener).

Given 1. and 2., a devoted listener can either (A) chase non-neutral sounding tools hoping to land on ones that suit her personal preference or (B) seek out the most neutral tools possible and then use other tools (DSP, room treatment, etc.) to tune them to her personal preference, if necessary.

Some people love (A) and have the resources to pursue it. I don't see any reason to disdain or belittle them. But for the rest of us, the pragmatic path is (B), since it's eminently measurable and communicable and it's more cost-effective and flexible in allowing us time to identify our preferences and to adapt if and when those preferences change.
 
The answer is that his new amp is more revealing of sibilance than his prior amp, and it's very easy for him to satisfy himself of that.
That's a highly dubious claim unsupported by evidence.
 
...
Some people love (A) and have the resources to pursue it. I don't see any reason to disdain or belittle them.
It's not the choices they make for themselves. It's the feelings-evidenced advice they give to others that attracts disdain and belittlement.

And there's the notion that whatever formula of coloration (in the most general sense) they end up with works only for the music they used to arrive at it; given that nearly every recording will need a different formula, it's hard to imagine how an "A" system can be responsive to the objectives they express.

But there's also the problem that people claim to hear, feel, or otherwise divine these differences but are unable or unwilling to set up a test to show that they can do so predictably and reliably, without even the need to determine which is better or worse. If there is a difference a person claims to detect, their ability to detect it should be demonstrably repeatable, especially if they are prepared to assert their conclusions using the adjectives we routinely see associated with the differences.
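To make "demonstrably repeatable" concrete, here is a minimal sketch (my own illustration, not any particular forum's protocol) that scores a blind ABX run with a one-sided binomial test; the 16-trial count and 0.05 threshold are arbitrary illustrative choices.

```python
# Minimal sketch: scoring a blind ABX run with a one-sided binomial test.
# The 16-trial count and 0.05 threshold are illustrative choices, not a standard.
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """Probability of getting at least `correct` hits out of `trials`
    by guessing alone (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

if __name__ == "__main__":
    correct, trials = 12, 16
    p = abx_p_value(correct, trials)
    print(f"{correct}/{trials} correct -> p = {p:.4f} under pure guessing")
    print("repeatable detection" if p < 0.05 else "consistent with guessing")
```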

And then there's the problem that the purveyors of the products in the A category often claim those products are utterly (indeed, uniquely) devoid of any distortion that gets between the music and the listener. Followers of A therefore delude themselves that what they prefer is the most transparent, when in fact it may be the least transparent. Transparency is measurable. And absolute neutrality is not always the strategy for transparency. When I apply equalization to correct for room effects, I'm not making the recording more transparent. I am making it more accurate given the acoustic context of the system.
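For the curious, here is a minimal sketch of what that kind of corrective EQ can look like: a single RBJ-cookbook peaking biquad pulling down a hypothetical room mode. The 55 Hz / -6 dB / Q=4 values are made up for illustration, not a recommendation.

```python
# Minimal sketch: one peaking-EQ biquad (RBJ Audio EQ Cookbook formulas)
# used to attenuate a hypothetical room mode. Frequency, gain, and Q below
# are illustrative values only.
import numpy as np
from scipy.signal import lfilter

def peaking_eq(f0: float, gain_db: float, q: float, fs: float):
    """Return normalized (b, a) coefficients for a peaking EQ biquad."""
    a_lin = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return b / a[0], a / a[0]

fs = 48000
b, a = peaking_eq(f0=55.0, gain_db=-6.0, q=4.0, fs=fs)  # pull down a 55 Hz mode
t = np.arange(fs) / fs
test = np.sin(2 * np.pi * 55.0 * t)          # tone sitting on the mode
corrected = lfilter(b, a, test)
# Compare steady-state levels (second half of the signal, after settling).
print("level change at 55 Hz: %.1f dB" %
      (20 * np.log10(np.std(corrected[fs // 2:]) / np.std(test[fs // 2:]))))
```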

That in no way means that I don't respect the impressions of experienced listeners, especially when their observations are at least broadly supported by measured performance. Many (claim to) ignore Amir's editorial opinions on the stuff he measures, but I don't. That's also why I still do read reviews in Stereophile from time to time when researching some old piece of equipment I'm considering. Kal's opinion matters to me, for example--without it I don't think I would have bought my current speakers. But I don't think even Kal would suggest that people take his impressions as the be-all and ignore the measurements that John adds. I also suspect that Kal is the least nervous about his own impressions when John is able to find an explanation for them in the measurements, which he is not always able to do. I don't necessarily think Kal has golden ears, but I do think he's compared a LOT of good stuff in his listening environment, and I am also confident that he knows what actual performance sounds like, at least for the music I am most concerned about.

But for the rest of us, the pragmatic path is (B), since it's eminently measurable and communicable and it's more cost-effective and flexible in allowing us time to identify our preferences and to adapt if and when those preferences change.

It's also the path that can be recommended to others, given that it is supported by repeatable and verifiable data that others can compare to their own use case requirements.

Rick "would never buy something that measures poorly no matter how much some other guy raved about it, however" Denney
 
So clearly the propensity of an amplifier to enhance an already existing sibilance (or even to create it as a side effect of its DSP treatment) is not detected by the current measurements applied during standard audio product tests. We can hear it, but not measure it, for now.
This has NOT been established, and is vanishingly unlikely. I suspect you have completely misdiagnosed how your equipment is set up or is malfunctioning, along with the potential for your brain to fool you.

 
Yes, you can cook potatoes to each person's preference, the same way you could create speakers for each person's preference. What you can't do is test a speaker or test a cooked potato and predict whether a person not involved, who has not tasted it or listened to it, actually likes it. That is the point. Showing me a speaker's frequency response graph or any other data is not remotely predictive of whether I will like it or not; that's a matter of personal preference, as you describe it also is for cooking a potato, thanks.

No less than 3 members have pointed out flaws in your proposed potato experiment and attempts to draw analogies to measuring speakers. I can think of more flaws yet, but you have failed to properly respond to the initial ones. Suggest you need to do so or move on.
 
No less than 3 members have pointed out flaws in your proposed potato experiment and attempts to draw analogies to measuring speakers. I can think of more flaws yet, but you have failed to properly respond to the initial ones. Suggest you need to do so or move on.
If it is pronounced Poo-tat-oh does it then read better?
 
No less than 3 members have pointed out flaws in your proposed potato experiment and attempts to draw analogies to measuring speakers. I can think of more flaws yet, but you have failed to properly respond to the initial ones. Suggest you need to do so or move on.

The whole food/wine analogy is a mess, usually because people make assumptions that food/wine is less measurable than it is.

 
BTW, has this article been cited in this thread? Salty part below, but read the whole thing.


The burden of proof has always lain with those making questionable claims, and mere anecdotal experiences do not meet that standard. As the late Carl Sagan said, “Extraordinary claims require extraordinary evidence.” The interesting thing about MQA is that there was a consensus amongst the experts (in the fields of mathematics, communication theory and digital signal processing) regarding the fallacies presented by the purveyors of MQA. There was no requirement for any formal argument on the issue, yet the self-proclaimed authorities remained invested in their opinions long after the science was litigated, always falling back on personal observations as rebuttals.

The notion of "there are unmeasured things that affect sound quality" when they push cables (USB, network, power, etc.) that should merely be built to specifications, is essentially indistinguishable from the arguments of anti-vaxxers. These authors don't subscribe to any of the principles of the science that they supposedly follow, but they instead just want to turn their sphere of influence into a regressive theocratic structure with them at the top of the heap. This pulpit that they have preached from is slowly beginning to show cracks in its supporting foundation and the claims proclaimed from thereon are being questioned critically.

There often arises the comment that if one doesn’t find a magazine informative then one must simply not read it instead of spending time criticizing it. This conveniently overlooks the idea that the criticism isn’t aimed at the absence of information but is, in actual fact, aimed at the presence of misinformation and/or disinformation. There is a duty of sorts to challenge the propagation of disinformation because ignoring it eventually normalizes it, and this eventually leads to the Balkanization of ideas such that discussions based on common acceptable facts cannot occur — precisely because the disinformation propagates alternative realities based on wishful thinking.

Columnists (who don’t appear to live up to the definition of the term journalists) are more than happy to reside in their version of Narnia that has a tenuous connection to reality, but unfortunately, they lure a significant segment of today's audiophiles looking for some easily attainable sonic nirvana that they require to fulfil some need. They manage to get their online page views (and here we must not increase those by directly linking back to the site, because any publicity is literally monetarily beneficial) by running manufacturers' claims as is without any commentary on the *facts* that disprove these claims. Recent articles could lead some readers to think that the magazines are now mainly in the business of merely writing advertorials, and that is a far cry from journalism. Journalism has its roots in curiosity, even though, sadly, much like the cat, curiosity has killed journalists (Jamal Khashoggi and Daphne Caruana Galizia, to name but two).

Over the years the public has allowed the members of the mainstream audio press to turn their jobs into sinecures. The rationale for this charitable statement lies in a paraphrasing of Stephen Fry: “The only reason [members of the audio press] do not know much is because they do not care much. They are incurious. Incuriosity is the oddest and most foolish failing there is.”
 
Hoping to learn here... what is the thinking if the claim were not "there are things we can hear which cannot be measured" but instead a more moderate "there are things we can hear and capture that we do not yet know to look for"?
A real world example: after the better part of a decade on the market, it is only now (publicly) discovered that many Cirrus Logic CS431xx-based devices display errant behaviour producing plausibly audible distortion which is not caught by conventional testing methodology ("On the Distortion of Cirrus Logic CS431xx-Based Devices: A Comparative Review"). I'm referring not to the "Cirrus Hump" but to the "Part II" DRE-related distortion. Obviously these are measurable; we're looking at the measurements. But it takes targeted and fairly non-standard testing to identify that there is in fact an engineering failure involved, and that in real-world content it is very likely distortion levels rise well above their topline rated levels. Given how long these products have been on the market, it's likely that this escaped many of the engineers involved too.

Modern electronics are complex and you don't have to look far to find cases of esoteric hardware or firmware bugs which behave one way on test tones and another in real world content. This is obviously more an issue of exposing such a fault than it is the ability to even capture it at all. But I can certainly understand why some people become skeptical of how closely measurements reflect what we actually hear if their ears are telling them something the standard suite of charts is not.

I could probably use some homework as to why no one uses musical content for test signals, as the assumption that a chip would respond the same to complex signals as it would to test tones doesn't always seem to hold true. I understand the failing here (other than Cirrus' gaffe) is in test methodology, but I'm trying to understand how we could try to catch the next chip bug, as clearly our current methodology did not.
 
A real world example: after the better part of a decade on the market, it is only now (publicly) discovered that many Cirrus Logic CS431xx-based devices display errant behaviour producing plausibly audible distortion which is not caught by conventional testing methodology ("On the Distortion of Cirrus Logic CS431xx-Based Devices: A Comparative Review"). I'm referring not to the "Cirrus Hump" but to the "Part II" DRE-related distortion. Obviously these are measurable; we're looking at the measurements. But it takes targeted and fairly non-standard testing to identify that there is in fact an engineering failure involved, and that in real-world content it is very likely distortion levels rise well above their topline rated levels. Given how long these products have been on the market, it's likely that this escaped many of the engineers involved too.

Modern electronics are complex and you don't have to look far to find cases of esoteric hardware or firmware bugs which behave one way on test tones and another in real world content. This is obviously more an issue of exposing such a fault than it is the ability to even capture it at all. But I can certainly understand why some people become skeptical of how closely measurements reflect what we actually hear if their ears are telling them something the standard suite of charts is not.

I could probably use some homework as to why no one uses musical content for test signals, as the assumption that a chip would respond the same to complex signals as it would to test tones doesn't always seem to hold true. I understand the failing here (other than Cirrus' gaffe) is in test methodology, but I'm trying to understand how we could try to catch the next chip bug, as clearly our current methodology did not.
Yes, I'm much more open to the idea that there is a very situational/conditional phenomenon that our current suite of measurements does not capture, as opposed to the idea that there is some audible phenomenon not currently measurable in amplitude and frequency over time. However, pure tones tend to be more demanding and revealing of problems than music.
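As a small illustration of why a pure tone is so revealing: its harmonics land in known FFT bins, so distortion can be read straight off a single spectrum. The sketch below synthesizes its own "captured" tone with made-up harmonic content purely for illustration; real use would analyze a loopback recording.

```python
# Minimal sketch: why a pure test tone is diagnostic -- the harmonics land in
# known bins, so THD falls out of a single FFT. The "captured" signal here is
# synthesized with fake 2nd/3rd-harmonic distortion for illustration.
import numpy as np

fs, f0, n = 48000, 1000, 48000          # 1 s of a 1 kHz tone
t = np.arange(n) / fs
captured = (np.sin(2 * np.pi * f0 * t)
            + 0.001 * np.sin(2 * np.pi * 2 * f0 * t)    # ~ -60 dB 2nd harmonic
            + 0.0005 * np.sin(2 * np.pi * 3 * f0 * t))  # ~ -66 dB 3rd harmonic

spectrum = np.abs(np.fft.rfft(captured * np.hanning(n)))

def peak_near(freq):
    """Largest bin magnitude within +/-2 bins of the expected frequency."""
    k = int(round(freq * n / fs))
    return spectrum[k - 2:k + 3].max()

fund = peak_near(f0)
harmonics = [peak_near(k * f0) for k in range(2, 6)]
thd = np.sqrt(sum(h ** 2 for h in harmonics)) / fund
print(f"THD ~ {20 * np.log10(thd):.1f} dB")   # roughly -59 dB for this example
```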

 
A real world example: after the better part of a decade on the market, it is only now (publicly) discovered that many Cirrus Logic CS431xx-based devices display errant behaviour producing plausibly audible distortion which is not caught by conventional testing methodology ("On the Distortion of Cirrus Logic CS431xx-Based Devices: A Comparative Review"). I'm referring not to the "Cirrus Hump" but to the "Part II" DRE-related distortion.
Where did you get "plausibly" from?

I can report that ears were not involved in having the idea nor in the process of finding the test signal for Part II. So I don't see relevance with the claim "there are things we can hear and capture that we do not yet know to look for".
 
Where did you get "plausibly" from?

I can report that ears were not involved in having the idea nor in the process of finding the test signal for Part II. So I don't see relevance with the claim "there are things we can hear and capture that we do not yet know to look for".
It is repeatedly addressed in jkim's recent additions
[attached screenshot: excerpt from the article's "Additional Remarks after writing Part II"]
 
I could probably use some homework as to why no one uses musical content for test signals, as the assumption that a chip would respond the same to complex signals as it would to test tones doesn't always seem to hold true. I understand the failing here (other than Cirrus' gaffe) is in test methodology, but I'm trying to understand how we could try to catch the next chip bug, as clearly our current methodology did not.
For engineers, test signals have the advantage of being deterministic and testing the specific thing you want to test, with the resultant distortions offering insight into the error mechanism, and the test can be more sensitive (i.e., you can detect a sine wave buried deep in noise). Musical content can be used with modern technology, but it has less diagnostic power and the results need to be interpreted with care, as sonically irrelevant differences can cause large differences in a null comparison.
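For anyone curious what such a null comparison can look like in practice, here is a minimal sketch; the file names are hypothetical, mono files at the same sample rate are assumed, and clock drift (which dominates real null tests) is ignored.

```python
# Minimal sketch of a null comparison on musical content, assuming a reference
# track and a device capture that are mono and at the same sample rate.
# Clock drift and resampling, which dominate real null tests, are ignored.
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

fs_ref, ref = wavfile.read("reference.wav")       # hypothetical file names
fs_cap, cap = wavfile.read("device_capture.wav")
assert fs_ref == fs_cap, "resample first if the rates differ"
ref = ref.astype(np.float64)
cap = cap.astype(np.float64)

# Align: find the lag that maximizes cross-correlation over a short excerpt.
n_x = min(len(ref), len(cap), 10 * fs_ref)
lag = int(np.argmax(correlate(cap[:n_x], ref[:n_x], mode="full"))) - (n_x - 1)
cap = cap[lag:] if lag >= 0 else np.concatenate([np.zeros(-lag), cap])
n = min(len(ref), len(cap))
ref, cap = ref[:n], cap[:n]

# Least-squares gain match, then subtract and report the residual level.
g = np.dot(cap, ref) / np.dot(ref, ref)
residual = cap - g * ref
print("null residual: %.1f dB relative to the reference"
      % (20 * np.log10(np.std(residual) / np.std(g * ref))))
```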

If your device has some kind of "smart" switching behavior that static-amplitude signals do not exercise, you need a more suitable test signal.
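As an illustration of that last point, here is a minimal sketch of a test signal whose level steps up and down so that any level-dependent gain switching gets exercised; the levels, timing, and output file name are arbitrary choices, not a reference to any particular device's thresholds.

```python
# Minimal sketch of a "dynamic" test signal: a 1 kHz tone whose level steps
# across plausible gain-switching thresholds every half second, so that any
# level-dependent behaviour (auto-ranging, DRE-style gain staging) is exercised.
# The levels and timing are arbitrary illustrative choices.
import numpy as np
from scipy.io import wavfile

fs, f0, step_s = 48000, 1000, 0.5
levels_dbfs = [-6, -60, -6, -72, -6, -48]          # alternate loud and quiet
t = np.arange(int(step_s * fs)) / fs
segments = [10 ** (db / 20) * np.sin(2 * np.pi * f0 * t) for db in levels_dbfs]
signal = np.concatenate(segments)

# Short raised-cosine fades at each boundary to avoid clicks from the steps.
fade = int(0.005 * fs)
ramp = 0.5 * (1 - np.cos(np.pi * np.arange(fade) / fade))
for i in range(1, len(levels_dbfs)):
    edge = i * len(t)
    signal[edge - fade:edge] *= ramp[::-1]   # fade out previous segment
    signal[edge:edge + fade] *= ramp         # fade in next segment

wavfile.write("stepped_tone.wav", fs, (signal * 32767).astype(np.int16))
```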
 
It is repeatedly addressed in jkim's recent additions
In the "Additional Remarks after writing Part II" I can only see "although it is another matter whether a listener can hear it or not" and "[auditory perception] is subtle and sensitive holistically" (the last one doesn't mean anything concrete to me). I guess we may have different definitions of "plausibly".
 
It's not the choices they make for themselves. It's the feelings-evidenced advice they give to others that attracts disdain and belittlement.
I wasn't referring to reviewers or influencers--some "audiophile" types really do enjoy the constant churn of new gear, mixing and matching, chasing for something (however deluded or irrelevant to others), and I'm fine with their doing so.
 
Google AI seems to have information that could enhance @SIY's advice:

AI Overview

Sibilance, characterized by harsh "s" or "sh" sounds, is a common issue in audio amplifiers and recordings, especially in vocals. It's often caused by the amplification of high frequencies in the 2-8 kHz range, where our hearing is most sensitive. Sibilance can be managed through microphone placement, equalization (EQ), and de-essing techniques.
Can you show me (just one) amplifier, pre-amplifier or DAC that has a peak between 6 kHz and 8 kHz reaching over 1 dB?
I have read just about every review over the last few years here at ASR and have not seen that in a single review.
If you can prove that, then those devices can indeed have 'sibilance'; if not, you have no leg to stand on.
Instead... look at recordings and transducers/rooms/ears instead.
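To make the 1 dB challenge concrete, here is a minimal sketch that scans a measured frequency response for deviations larger than 1 dB in the 6-8 kHz band relative to the 1 kHz level; the CSV file name and its (frequency in Hz, level in dB) layout are assumptions.

```python
# Minimal sketch: scan a measured frequency response for deviations larger
# than 1 dB in the 6-8 kHz "sibilance" band, relative to the level at 1 kHz.
# The CSV file name and its (frequency_Hz, level_dB) column layout are assumed.
import csv

freqs, levels = [], []
with open("measured_response.csv", newline="") as f:
    for row in csv.reader(f):
        try:
            freqs.append(float(row[0]))
            levels.append(float(row[1]))
        except ValueError:
            continue  # skip a header row, if present

ref_db = min(zip(freqs, levels), key=lambda p: abs(p[0] - 1000))[1]  # level nearest 1 kHz
band = [(fr, lv - ref_db) for fr, lv in zip(freqs, levels) if 6000 <= fr <= 8000]
worst = max(band, key=lambda p: abs(p[1]))
print(f"largest deviation in 6-8 kHz: {worst[1]:+.2f} dB at {worst[0]:.0f} Hz")
print("peaky enough to matter" if abs(worst[1]) > 1.0
      else "flat within 1 dB -- look elsewhere")
```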


I wasn't referring to reviewers or influencers--some "audiophile" types really do enjoy the constant churn of new gear, mixing and matching, chasing for something (however deluded or irrelevant to others), and I'm fine with their doing so.
I'm fine with that too... it's their money.
The problem is that they claim 'better sound quality' and 'superior ears/gears' and make all sorts of claims of audibility.
Having a preference is fine; claiming that what they found is something other than their preference (like saying it is 'real' and 'easily audible') is another thing.
 