
Those of you who believe measurements aren't the whole story, do you have a hypothesis why that is?

Jim Matthews

Major Contributor
Forum Donor
Joined
Mar 25, 2021
Messages
1,051
Likes
1,288
Location
Taxachusetts
So, how do we perceive stereo with headphones?
I have always heard headphones as mono, unless there is a heavy pan in the mix (some microphone input emphasized to either channel).
 

Jimbob54

Grand Contributor
Forum Donor
Joined
Oct 25, 2019
Messages
11,112
Likes
14,777
I have always heard headphones as mono, unless there is a heavy pan in the mix (some microphone input emphasized to either channel).

Seriously? I'm struggling of late because I hear everything as too split. There's an odd kind of vacuum feeling when certain frequencies sit more towards one cup than the other. I'm experimenting with crossfeed to try to remedy it.

But I most definitely don't think of most recordings through HP as mono.
 

Raindog123

Major Contributor
Joined
Oct 23, 2020
Messages
1,599
Likes
3,555
Location
Melbourne, FL, USA
My point is that you assert (as far as I understand you correctly) that the brain processes audio the way our current DSP and related technology does. I see no evidence for this in the reading I have done on hearing and the brain.

If you have a reference that does provide evidence that it does I would love to read it.

No, I do not know how the brain processes audio, so I can't help you there.

Unsurprisingly, I come from the "mechanistic" side of the story - technological, algorithmic, processing. From that angle, I see that state-of-the-art technology (sensing, processing, and more and more cognition) can functionally offer and support pretty much everything a brain does:

At lower levels, (1) today's signal processing can identify components of the [audio] spectrum and process and manipulate them - through FFT, filtering, notching, EQ'ing, and selective modification - ultimately leading to signal-to-noise improvement and other desired shaping effects (e.g., auto-tuning). (2) Our understanding of wave physics - propagation, interference, interaction with objects/surfaces, Doppler, etc. - allows us to accurately calculate (near and far field), simulate, and again manipulate 3D acoustic energy propagation, leading to calculations of directions to sources, times of arrival, multi-path reflections, multi-receiver diversity & correlation, triangulation, … All this information gets processed into "situational awareness" - i.e., source location(s), our position, obstacles (walls, floors, ceilings).
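To illustrate point (2), here is a minimal sketch (not from the post; the two-microphone setup, spacing, and sample values are illustrative assumptions) of estimating direction to a source from the time difference of arrival at two receivers, using cross-correlation:

```python
import numpy as np

def estimate_tdoa(a, b, fs):
    """Estimate the time difference of arrival (seconds) between two
    microphone signals via cross-correlation. Positive result: `a`
    lags `b` (the sound reached `b` first)."""
    corr = np.correlate(a, b, mode="full")
    lag = np.argmax(corr) - (len(b) - 1)  # peak offset in samples
    return lag / fs

fs = 48000
click = np.zeros(256)
click[100] = 1.0            # sound reaches the right mic...
left = np.roll(click, 5)    # ...5 samples before the left mic

tdoa = estimate_tdoa(left, click, fs)   # about 104 microseconds

# Turn the delay into a rough direction estimate (hypothetical
# 17 cm mic spacing, speed of sound 343 m/s).
angle_deg = np.degrees(np.arcsin(np.clip(343.0 * tdoa / 0.17, -1, 1)))
```

This is essentially the interaural-time-difference cue the thread keeps circling around, expressed as a measurable computation.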

At even higher [processing] levels, (3) state-of-the-art "AI" technology can do, e.g., voice recognition, translation, music composition and performance… heck, Boston Dynamics robots even dance to a beat better than I do…

So again, as I've never studied brain operation, I cannot say with certainty that my brain uses exactly the same signal-processing and higher-cognition algorithms as modern computing systems. However, since modern signal-processing computers can perform the same functions based on fully measurable algorithmic implementations, my personal pragmatic engineering position is that even in our heads the "music listening function" (and everything leading to it) is quantifiable and measurable.
 

JJB70

Major Contributor
Forum Donor
Joined
Aug 17, 2018
Messages
2,905
Likes
6,158
Location
Singapore
Seriously? I'm struggling of late because I hear everything as too split. There's an odd kind of vacuum feeling when certain frequencies sit more towards one cup than the other. I'm experimenting with crossfeed to try to remedy it.

But I most definitely don't think of most recordings through HP as mono.

I tend to use crossfeed to avoid this strange effect; for some recordings I think it is essential when listening with headphones.
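For readers unfamiliar with the technique, here is a toy sketch of crossfeed (the delay and attenuation values are illustrative assumptions, not taken from any particular crossfeed product): each channel receives an attenuated, slightly delayed copy of the opposite one, loosely mimicking how each ear hears both speakers in a room.

```python
import numpy as np

def crossfeed(left, right, fs, delay_ms=0.3, atten_db=-6.0):
    """Toy crossfeed: add an attenuated, slightly delayed copy of each
    channel to the opposite one. Parameter values are illustrative."""
    d = int(round(delay_ms * 1e-3 * fs))   # cross-path delay in samples
    g = 10.0 ** (atten_db / 20.0)          # cross-path gain (linear)

    def delayed(x):
        return np.concatenate([np.zeros(d), x])[: len(x)]

    norm = 1.0 + g                         # keep peaks roughly in range
    out_l = (left + g * delayed(right)) / norm
    out_r = (right + g * delayed(left)) / norm
    return out_l, out_r

fs = 48000
left = np.zeros(64)
left[0] = 1.0                  # hard-left impulse
right = np.zeros(64)
out_l, out_r = crossfeed(left, right, fs)
# The right ear now hears a quieter, later copy instead of silence,
# which softens the "vacuum" feeling of hard-panned material.
```

Real crossfeed filters also shape the cross-path frequency response (the head shadows highs more than lows); this sketch omits that for brevity.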
 

LTig

Master Contributor
Forum Donor
Joined
Feb 27, 2019
Messages
5,835
Likes
9,577
Location
Europe
My apparent mistake was in thinking the same process was applied to audio signals, such that the position of an image could be determined from the signal sent from the speakers (or rather, reaching the ears, which is harder to measure). I certainly can (and have) adjusted the signals to the speakers to move a source around, including moving it outside the boundaries of the speakers themselves, using an acoustic rather than radar processor. I did that long ago using bucket-brigade devices for a true time delay, plus the usual analog circuitry to manipulate amplitude, phase, frequency response, and so forth. It was a fun project, presented to younger kids who were wowed by how we could move the sound around, but apparently inapplicable to this discussion.

If the brain (processor) is making decisions based on data unknown to me, then my basic premise (hypothesis) is wrong. Thus at this point I think my fundamental premise about what this thread is about, and how we perceive spatial sound fields, is wrong, and I do not want to waste more time (yours or mine). In this I am indeed ignorant, and following to learn.
No, you aren't. Your experiments with 2-channel soundstage illusions are valid. QSound exists and is used to create an artificial soundstage with 2 speakers, so the knowledge of how to do this already exists - just not here. :confused:
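The amplitude manipulation described in the quoted post can be sketched very simply. This is a minimal constant-power pan law (a standard textbook technique, not the specific bucket-brigade circuit or QSound algorithm mentioned above):

```python
import numpy as np

def pan_constant_power(mono, position):
    """Place a mono signal between two speakers with a constant-power
    (sine/cosine) pan law. position: -1.0 = hard left, +1.0 = hard right."""
    theta = (position + 1.0) * np.pi / 4.0   # maps [-1, 1] -> [0, pi/2]
    return np.cos(theta) * mono, np.sin(theta) * mono

tone = np.sin(2 * np.pi * 440 * np.arange(4800) / 48000)
l, r = pan_constant_power(tone, 0.0)   # centered phantom image
# cos^2 + sin^2 == 1, so total radiated power is the same
# wherever the phantom image is placed between the speakers.
```

Moving an image *outside* the speaker boundaries additionally needs interchannel delay and phase tricks (the "true time delay" the post describes), which this sketch deliberately leaves out.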
 

Jim Matthews

Major Contributor
Forum Donor
Joined
Mar 25, 2021
Messages
1,051
Likes
1,288
Location
Taxachusetts
Seriously? I'm struggling of late because I hear everything as too split. There's an odd kind of vacuum feeling when certain frequencies sit more towards one cup than the other. I'm experimenting with crossfeed to try to remedy it.

But I most definitely don't think of most recordings through HP as mono.
To be fair, I wear cans if and only if everyone at home is asleep.

I'm not mixing - most of my night-time tracks are low key.
 

escksu

Addicted to Fun and Learning
Joined
Jul 16, 2020
Messages
965
Likes
397
Many people don't care about measurements and audio science because they like distortion. This is certainly the case with tube enthusiasts. They don't care about replicating the authentic sound and may even find it disagreeable, both as a goal and in actuality. Additionally, they have an emotional attachment to the esoteric paraphernalia they employ: vintage tubes, tonearms, turntables, cartridges, etc. (really a variety of audiophile necrophilia). In short, plenty of people really enjoy distorted sound and couldn't care less about science. They like what they hear, plain and simple, and don't wish to be bothered by graphs, diagrams, Amir, etc. I don't pass judgment on them but do get annoyed when they criticize Class D and all the wonderful advances of modern audio, like Purifi, that sound amazing (and have a ton of power without weighing 50 lbs.). Some people like bad wine, so who is to judge them if they are enjoying themselves?

Yup, it's up to individual preference. It's their money; as long as they are happy, who are we to tell them what they should or should not like? As for criticizing, unless it's unlawful, people have the right to criticize anything they want. Just as you have the right to criticize others, others have the same right too.

This goes beyond audio too. Since you mentioned wine, there is food as well. You may have a favourite food that tastes awesome to you. You love it, but there will be others who say it tastes awful. Palates differ. This is part and parcel of life.
 

escksu

Addicted to Fun and Learning
Joined
Jul 16, 2020
Messages
965
Likes
397
The only solution is to read this forum and others, make a synthesis, order online, and return the item if not satisfied.
That is difficult for large and heavy items.

Always remember this: nobody likes to buy a used item that is sold as new.

It's extremely simple to return the item and get a refund. But someone else will have to buy what you returned. Not to mention the cost and hassle of checking the item and ensuring it's not damaged, even cosmetically.
 

escksu

Addicted to Fun and Learning
Joined
Jul 16, 2020
Messages
965
Likes
397
No, you aren't. Your experiments with 2 channel soundstage illusions are valid. Q sound exists and is used to create an artificial soundstage with 2 speakers so in fact the knowledge about how to do this exists already - just not here. :confused:

I challenge you to quantify these illusions.
 

egellings

Major Contributor
Joined
Feb 6, 2020
Messages
4,076
Likes
3,320
With speaker measurements, you are often including the room effects in the measurements. Maybe if you pulse the test speaker with a stimulus signal and quickly snap a mic measurement, catching the direct signal and then cutting the mic before a reflection can reach it, measurement repeatability might be achievable. Of course, electronics are not affected by the room they are in.
 

dc655321

Major Contributor
Joined
Mar 4, 2018
Messages
1,597
Likes
2,236
Maybe if you pulse the test speaker with a stimulus signal and quickly snap a mic measurement, catching the direct signal and then cutting the mic before a reflection can reach it, measurement repeatability might be achievable.

Or just truncate (in time) the impulse response, which is commonly called a gated measurement.
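A minimal sketch of that gating idea (the simulated room response and gate length are illustrative assumptions): truncate the impulse response before the first reflection arrives, then look at the frequency response of what remains.

```python
import numpy as np

def gated_magnitude(ir, fs, gate_ms, nfft=8192):
    """Truncate an impulse response before the first reflection and
    return the magnitude of its frequency response -- a 'gated'
    (quasi-anechoic) measurement."""
    n = int(gate_ms * 1e-3 * fs)
    return np.abs(np.fft.rfft(ir[:n], n=nfft))

fs = 48000
ir = np.zeros(4800)   # 100 ms of simulated room response
ir[10] = 1.0          # direct sound
ir[480] = 0.5         # hypothetical reflection arriving 10 ms later

mag = gated_magnitude(ir, fs, gate_ms=8.0)  # 8 ms gate excludes it
# With the reflection gated out, only the direct sound remains
# (a flat response here, since the direct sound is a pure impulse).
```

The price of gating is resolution: an 8 ms window cannot resolve features much narrower than roughly 1/0.008 s = 125 Hz, which is why low-frequency speaker data needs near-field, anechoic, or near-field-scanner methods instead.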
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,771
Likes
37,636
With speaker measurements, you are often including the room effects in the measurements. Maybe if you pulse the test speaker with a stimulus signal and quickly snap a mic measurement, catching the direct signal and then cutting the mic before a reflection can reach it, measurement repeatability might be achievable. Of course, electronics are not affected by the room they are in.
Which is exactly how REW and similar software work, using sweeps.
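REW specifically uses logarithmic sine sweeps; as a simplified stand-in, this sketch uses broadband noise as the excitation (an assumption for brevity) to show the core deconvolution step that turns a recording of the excitation back into an impulse response, which can then be gated:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 48000
excitation = rng.standard_normal(fs)  # noise stand-in for a sweep

# Simulated "room": a direct path plus one reflection.
h_true = np.zeros(512)
h_true[0], h_true[300] = 1.0, 0.4
recorded = np.convolve(excitation, h_true)  # what the mic captures

# Deconvolve: divide the spectra to recover the impulse response.
n = len(recorded)
H = np.fft.rfft(recorded, n) / np.fft.rfft(excitation, n)
h_est = np.fft.irfft(H, n)[: len(h_true)]
# h_est now matches h_true; truncating it before sample 300
# would isolate the direct sound from the reflection.
```

Sweeps are preferred in practice because they put energy at every frequency in a controlled way and push harmonic distortion products to negative time, where they can be excluded.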
 

Snarfie

Major Contributor
Forum Donor
Joined
Apr 30, 2018
Messages
1,184
Likes
935
Location
Netherlands
IMO room correction measurements are 70% of the story, 20% is how you set up your speakers, and 10% is decent gear, cables, etc. The considerably improved results I have now are for the most part because of measurements and the corresponding corrections.
 

Sombreuil

Active Member
Joined
Nov 20, 2020
Messages
236
Likes
242
[Sorry if the question has already been asked, I didn't feel like reading 9 pages. In case it has, feel free to redirect me to the relevant post.]

Anyway, I just wanted to know whether measurements can show what the soundstage is like (for headphones)?
 

Roland68

Major Contributor
Joined
Jan 31, 2020
Messages
1,460
Likes
1,279
Location
Cologne, Germany
I was never able to accept that many audiophiles believe there's something just mystical about human hearing that simply can't be captured by science. And frankly I don't really think they believe that. But at the same time I don't think I ever heard or read a hypothesis about it, no matter how far-fetched. OK, maybe there is the "typical measurements rely on steady state signals and average certain kinds of distortions" but that's pretty much dismantled. I really don't believe there's a black and white divide between engineering types and the ones that simply trust their hearing without any interest for scientific explanation, that's just an exaggeration of the Internet era, it does a perfect job of making all shades of gray appear black or white as we all know. I'm really hoping for an interesting discussion.

I don't believe in the mystical in audio and just as little in the mystical in the human hearing.
I think very highly of scientific measurements of audio equipment and of listening and comparing.
However, I believe that most "audiophiles" are not all that audiophile (see definition below), which leads to the feared gap between listening impressions and scientific measurements. And that's exactly what I would like to offer as food for thought.

In the last millennium I both gave hi-fi workshops with devices and components and took part in them myself. The controversial discussions and opinions at these workshops about the sound of the devices irritated me.
Until I asked the attendees how often they listen to live music and attend concerts or similar events.
On average, less than 20% of the participants had experience with live music.
So how should the remaining 80% of audiophiles assess or compare the sound of a device? Even if you have heard and compared 1,000 devices, you still cannot say on which device an acoustic guitar, cello, violin, or grand piano sounded "right".
And it is precisely these 80% who often jump on devices that somehow sound great and extraordinary but have nothing to do with realistic music reproduction. Such devices always sounded "wrong" to me, and some of them failed here on ASR with kettledrums and trumpets.

The word "audiophile" has become more and more of a dirty word in recent years. But that does not do justice to the original definition.
Sound fidelity is the ability of an ideal electro-acoustic transmission system to reproduce the recorded sound image in such a way that there is no audible difference between the original and the reproduction through loudspeakers. A corresponding music production is called audiophile.
 

richard12511

Major Contributor
Forum Donor
Joined
Jan 23, 2020
Messages
4,336
Likes
6,705
I don't know what "transparency" means. If it means adding nothing to the source, then frequency response and distortion measurements will suffice. For imaging, frequency response (amplitude and phase) and dispersion measurements (e.g. on- and off-axis response) will tell you, along with the in-room response, since room reflections contribute greatly. Don't know what "staging" means; to me, setting up for a play or musical, or a sequence of operations in a process. We can't measure how it sounds or feels to you, but if the terms are defined, we can almost always measure them. But taking and interpreting the measurements can take a lot of equipment, experience, and so forth. Probably easier to just listen, and since the effects of the room are so important, probably more logical.

Agreed on all points.

IMO the Klippel can measure everything that's needed to predict imaging/soundstage. The tricky part is the interpretation, but the data is certainly there (or it could be).
 

richard12511

Major Contributor
Forum Donor
Joined
Jan 23, 2020
Messages
4,336
Likes
6,705
With respect @DonH56 , I don't completely agree.

Imaging is a result of two speakers, not one. Imaging depends on a whole lot of things, not the least of which are the tolerances of the drivers and crossover components from speaker to speaker: an image that suddenly shifts, loses focus, or recesses during playback; a vocalist that isn't positionally stable on one pair of speakers but is on another. You know what I mean.

But this can be measured with the Klippel. Amir only measures one speaker, but I don't see this as the NFS being lacking.
 

richard12511

Major Contributor
Forum Donor
Joined
Jan 23, 2020
Messages
4,336
Likes
6,705
Are you aware of any peer-reviewed research which demonstrates the successful prediction of loudspeaker spatial qualities from measurements? I'm not saying it can't be done, but I'd be very interested in reading about it if it has been.

IME (with traditional monopoles), it's pretty easy to predict how a speaker will image based on measurements. The spinorama on its own is insufficient, but the off-axis curves that @MZKM posts for every review do a great job of showing how a speaker will image.
 