
How we could finally pin down flowery audiophile subjective descriptions

It's even worse than the chicken-and-egg thing.

The point is, the measurements give us the vocabulary we need. Numbers. DONE.

So... how would someone who hasn't got the faintest idea what the plot below tells the initiated (the numbers) know whether they might like the sound of this headphone?
Just telling a casual reader that it should 'follow the dotted line' as closely as possible (without showing a tolerance band) is not enough IMO.
There are loads of people who like this headphone, and plenty who would never buy it based on the numbers below alone.

[Attached: AKG K271 measurement plots]

Would you buy it?

This is what Amir said about it:

AKG K271 Listening Tests
While not perfect, out of the box tonality was close to what I expected to hear. My reference tracks immediately sounded (almost) right.

Would anyone (even the initiated) arrive at the same sound description as Amir based on the plots above?

How would someone 'describe' the sound of this headphone based on the numbers?
 
So how would someone who hasn't got the faintest idea what the plot below tells the initiated (the numbers) decide whether they would like the sound of this headphone?
I don't think these discussions even matter. Subjectivity is subjectivity. It cannot be agreed upon. Which is why we have measurements.

People should just go listen and buy. Some will realize their ears are the weakest links in the listening chain and some won't.

There simply is NO WAY to describe audio to another person when we live in a world where people think 3 foot lengths of AC cable make your system sound different.

If it measures good it is good. Good specs are the definition of high fidelity. FULL STOP.
 
I agree with everything you write.
But the casual reader does not know what 'measures good' means. Nor do many others.
Gear that does not measure well can still sound good; maybe not to very critical listeners, but to mere mortals.

Let me put it this way. Amir measured the K271 and listened to it and wrote:
AKG K271 Listening Tests
While not perfect, out of the box tonality was close to what I expected to hear. My reference tracks immediately sounded (almost) right.

They might well buy it and think... wow, for little money I have a headphone that sounds (almost) right.
Someone who only looks at the numbers (plots) would definitely not buy it, because... very poor measurements.

What is 'measuring good'? Which tolerances, and which aspects, need to comply with which 'standards'? How much deviation is allowed, and for whom?
What is perfectly acceptable? What are the limits for which measurements?

This is the difficult part: what is 'measuring good', and what is 'good enough'?
 
All of these words describe the subjective, inner-world impressions and reactions of the listener to an objective phenomenon. As such, their capability of mapping to anyone else's inner-world impressions and reactions, let alone to objectively measured data points, is tenuous at best, and almost certainly not worth the effort unless you're a fan of subjective audio reviewer poesy, because that's exactly what it is.
 
Someone wouldn't. The numbers "speak" for themselves. In other words, the numbers are a language, and that language is separate and different from the language we use to speak to each other.
And like any language, you have to learn it to be fluent in it.

Jim

While that is true, the vast majority of people who listen to music are not audiophile-minded (music-fidelity addicts); only a small portion of those actually understand the numbers, and a much smaller number of them are 'fluent' in interpreting them. The vast majority of 'music consumers' are not going to learn 'numbers language'. They want to read the 'flowery and positive' words of 'reviewers' instead. So those words, in practice, are more important than 'numbers' to the vast majority of people. They don't care about 'numbers'.
So a good 'translation' or 'vocabulary/description' is very important.
This 'numbers language' is no option for every day life. Flowery wording and 'descriptions' are a more universal language that is easier to learn.
A 'vocabulary' with descriptions is easier to understand than learning to interpret measurements and taking that skill far beyond 'mount stupid'.

Yes, numbers are a language, and when understood they say a lot more than the many 'flowery wording vocabularies', as these all differ anyway. But not everyone who goes on vacation has learned to speak fluent Swahili, not even those who often visit the countries where it is spoken.
 
I have two questions.

1. What the heck is "rhythm and pace" in an audio system? Most of the other flowery language I can pretty well suss out, but this has me stumped.

2. Why do ASR's reviews often include a purely subjective "Rate this device" poll? At best it is a measurement of how much agreement there is on the value of the measured objective performance. It seems antithetical to the main premise of this forum. If it were a poll of the subjective performance of the device under test by actual users of the device, that would potentially add real value.
 
You don't trust that they can hear if they don't hear so well?
Has nothing to do with trust. It's just another set of measurements to compare against my capability. I know the gaps in my hearing because of annual audiograms. Around 4 kHz I need a little boost.
 
...To translate subjective descriptions into objective measurements...
I think what you are trying to achieve is similar to the puzzle which has been baffling marketeers/researchers for decades, as to what makes a hit.:oops:
This question may be related to your other post "...try[ing] to classify all audiophiles..." thread.
Are you exploring blending such worthy audio discussions/topics with AI to take marketing to a higher plateau?
 
Has nothing to do with trust. It's just another set of measurements to compare against my capability. I know the gaps in my hearing because of annual audiograms. Around 4 kHz I need a little boost.
I was being facetious.
 
TAKE MEASUREMENTS.
It's even worse than the chicken-and-egg thing.

The point is, the measurements give us the vocabulary we need. Numbers. DONE.
I don't think these discussions even matter. Subjectivity is subjectivity. It cannot be agreed upon. Which is why we have measurements.

People should just go listen and buy. Some will realize their ears are the weakest links in the listening chain and some won't.

There simply is NO WAY to describe audio to another person when we live in a world where people think 3 foot lengths of AC cable make your system sound different.

If it measures good it is good. Good specs are the definition of high fidelity. FULL STOP.

Your keyboard is playing up?

If not, yep, hurling some boldface and all caps at the innumerates will sort 'em out.

You appear to have self-selected out of any discussion [of the topic]. I'm going to read NTK's linked paper, might be interesting.
 
Many of us throw up our hands when equipment is described as "fast", "slow", "crisp", "warm", etc. It seems impossible to relate these terms to measurable characteristics.

I have a slightly more optimistic view, in that subjective descriptions must be correlated with what people hear, and what people hear tends to be correlated (imperfectly) to measurable output.

To translate subjective descriptions into objective measurements (or the other way around, which might be more interesting), I propose that a machine learning model could be used.

The model would correlate subjective terms used (how often people say a speaker is "crisp", for example) with the measurements of the equipment relative to the median measurements of all equipment in its category.

The output would be a map of how semantically close certain audiophile words are to each other, and how use of those words correlates with measurable characteristics. Imagine a word cloud that groups words like "sharp, bright, tinny" together, and then displays measured characteristics that correlate with those terms: elevated response above 4 kHz, above-average H3 distortion in that range, etc.

This would be interesting in its own right, but if the ML were sophisticated enough, and fed enough data, it might also reveal trends in preference / subjective experience that go beyond the current preference score models. For example, you might unexpectedly find that some aspect of vertical directivity correlates with "warmth" or "speed". I don't know.

I don't have anywhere near the skills to execute such a project, but it seems like a way to solve the "problem" of people using flowery language that, to many of us, is currently worse than useless. It might also reveal that things audiophiles consider to be "beyond science" are actually very well correlated with simple measurements. Which would be progress in and of itself.
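As a toy illustration of the correlation step, here is a minimal sketch in Python. Everything in it is synthetic: the treble deviations, the response model linking treble excess to the word "bright", and the numbers themselves are all invented for demonstration, not taken from any real review corpus.

```python
import math
import random

random.seed(42)
n = 500

# Hypothetical data: each speaker's treble deviation from the category
# median (dB, positive = elevated above ~4 kHz). Purely synthetic.
treble_dev_db = [random.gauss(0.0, 2.0) for _ in range(n)]

# Simulated reviews: the more elevated the treble, the more likely a
# reviewer calls the speaker "bright" (a made-up response model).
said_bright = [1.0 if random.random() < 1.0 / (1.0 + math.exp(-1.5 * x)) else 0.0
               for x in treble_dev_db]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(said_bright, treble_dev_db)
print(f"correlation('bright', treble deviation): r = {r:.2f}")
```

By construction the correlation here comes out positive; on real review text, the interesting cases would be the terms where no correlation shows up at all.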
If this would even work, how would you get enough data to draw relevant conclusions, and how would you set up a machine learning algorithm to do it? Surely there's not enough organised data for this to work. Let alone the different rooms people listen in and other variables, though I get that, with enough data, some of that could be averaged out. I just don't know, firstly, how the system would gather the data in the first place; secondly, whether there would be enough data; and thirdly, how you would design the algorithm. I don't know much about machine learning, but it seems like there are some sticking points.
 
If this would even work, how would you get enough data to draw relevant conclusions, and how would you set up a machine learning algorithm to do it? Surely there's not enough organised data for this to work. Let alone the different rooms people listen in and other variables, though I get that, with enough data, some of that could be averaged out. I just don't know, firstly, how the system would gather the data in the first place; secondly, whether there would be enough data; and thirdly, how you would design the algorithm. I don't know much about machine learning, but it seems like there are some sticking points.

Just call it Full Self Listening and start taking orders.
 
I tried reading this, but it slammed me with syrupy undertones that left a screeching texture on my tubby palette.

Learn the language of good sound​

Better understand the difference a hi-fi system can make​


 
LOL overall I don't see the need to quantify/recognize the flowery audiophile creative writing exercises. I'd just prefer they kept to themselves.
 
I tried reading this, but it slammed me with syrupy undertones that left a screeching texture on my tubby palette.

Learn the language of good sound​

Better understand the difference a hi-fi system can make​


I managed to get through the section talking about bass, but then I couldn't go on!
 
LOL overall I don't see the need to quantify/recognize the flowery audiophile creative writing exercises. I'd just prefer they kept to themselves.
They won't, though, and as others have said ITT, most people will never learn the lingo or how to interpret measurements. So in some cases, interpreting someone's description of the audio as "crackly" or "scratchy" or "dull" is all you have.

For example, I sometimes used to handle escalated customer service inquiries when I worked for a consumer audio brand. And you have to take "scratchy" or "crackly" or whatever, and figure out what's wrong with the actual equipment. Is it broken, or is it user error? Etc.

Or, if a reviewer calls your system "tubby" or "anemic"... maybe you can make a better guess about what actually went wrong in their listening setup, if you have a good-enough guide to correlation between subjective terms and physical reality.

Brands are motivated to get better reviews, but many reviewers steadfastly refuse to characterize their impressions quantitatively. I don't think we are going to turn the tide of 50+ years of stereo reviewer flowery language tradition, but maybe we can do better in translating their lingo into something actionable.

I certainly don't have the skills to do this, but there are probably a fair number of speakers for which enough data exists. You would need one full set of measurements for the speaker, and as many subjective impressions of the speaker as possible. The model would be based on objective differences between speakers and the relative likelihood that a given deviation from ideal, or from the average, triggers the use of a certain term.

And, for those who hate the idea of legitimizing subjective reviewing, even a bit - consider this: If use of terminology is found to have NO correlation with quantitative performance, we can perhaps put the final nail in the coffin of this dubious school of reviewing.
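The "deviation triggers a term" idea could start even simpler than a full ML model: for each term, compare the average measured deviation of gear that attracted the term against gear that didn't. A toy sketch, where every speaker, term, and dB value is invented purely for illustration:

```python
# Toy data: per-speaker deviations from the category average, in dB,
# plus the subjective terms reviewers used. All values are invented.
speakers = [
    {"treble_dev": +3.1, "bass_dev": -1.0, "terms": {"bright", "thin"}},
    {"treble_dev": +2.4, "bass_dev": -2.2, "terms": {"bright", "anemic"}},
    {"treble_dev": -0.5, "bass_dev": +3.0, "terms": {"warm", "tubby"}},
    {"treble_dev": -1.8, "bass_dev": +2.1, "terms": {"warm"}},
    {"treble_dev": +0.2, "bass_dev": +0.3, "terms": {"neutral"}},
]

def term_profile(term, feature):
    """Mean deviation of speakers that attracted `term`, minus the mean
    deviation of speakers that didn't."""
    with_term = [s[feature] for s in speakers if term in s["terms"]]
    without = [s[feature] for s in speakers if term not in s["terms"]]
    return sum(with_term) / len(with_term) - sum(without) / len(without)

for term in ("bright", "warm"):
    print(f"{term}: treble {term_profile(term, 'treble_dev'):+.1f} dB, "
          f"bass {term_profile(term, 'bass_dev'):+.1f} dB")
```

With enough real reviews and real measurements, a profile like this (or its regression-model cousin) is exactly the kind of term-to-measurement map the OP is after.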
 
They won't, though, and as others have said ITT, most people will never learn the lingo or how to interpret measurements. So in some cases, interpreting someone's description of the audio as "crackly" or "scratchy" or "dull" is all you have.

For example, I sometimes used to handle escalated customer service inquiries when I worked for a consumer audio brand. And you have to take "scratchy" or "crackly" or whatever, and figure out what's wrong with the actual equipment. Is it broken, or is it user error? Etc.

Or, if a reviewer calls your system "tubby" or "anemic"... maybe you can make a better guess about what actually went wrong in their listening setup, if you have a good-enough guide to correlation between subjective terms and physical reality.

Brands are motivated to get better reviews, but many reviewers steadfastly refuse to characterize their impressions quantitatively. I don't think we are going to turn the tide of 50+ years of stereo reviewer flowery language tradition, but maybe we can do better in translating their lingo into something actionable.

I certainly don't have the skills to do this, but there are probably a fair number of speakers for which enough data exists. You would need one full set of measurements for the speaker, and as many subjective impressions of the speaker as possible. The model would be based on objective differences between speakers and the relative likelihood that a given deviation from ideal, or from the average, triggers the use of a certain term.

And, for those who hate the idea of legitimizing subjective reviewing, even a bit - consider this: If use of terminology is found to have NO correlation with quantitative performance, we can perhaps put the final nail in the coffin of this dubious school of reviewing.
I can easily ignore them, tho... and not in customer service either. Then there are those who use the terms differently, etc. Some of these guys are also describing differences in cables and amps and a variety of other silly things with the same language...
 
I suggested an approach similar to the OP's back in February 2022:
 
I don't think these discussions even matter. Subjectivity is subjectivity. It cannot be agreed upon. Which is why we have measurements.

People should just go listen and buy. Some will realize their ears are the weakest links in the listening chain and some won't.

There simply is NO WAY to describe audio to another person when we live in a world where people think 3 foot lengths of AC cable make your system sound different.

If it measures good it is good. Good specs are the definition of high fidelity. FULL STOP.

This is so amazingly, obviously wrong... it boggles the mind...
 