
sound stage in headphones

Doodski

Grand Contributor
Forum Donor
Joined
Dec 9, 2019
Messages
21,572
Likes
21,854
Location
Canada
Yeah thanks for the revisit to high school math :) Now the connection with your special effects in terms of headphone specific audibility?
With the application of capacitive (XC) and inductive (XL) reactance, leading and lagging current and voltage, and angular velocity, things can get a bit more complex, more so than that web page detailed. From looking at the waveforms used in special-effects boxes I can imagine there is method to the madness, and I was thinking that if one extended their imagination they could rationalize that complex numbers could explain the imaging. How to do that is beyond me, but it's a safe bet to think this would be the way. Lol @ headphone specific stuff...
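For anyone who wants to see the complex-number bookkeeping spelled out, here is a minimal sketch (plain Python, with made-up component values that do not represent any real headphone) of how reactance and phase angle fall out of a series R-L-C impedance:

```python
import cmath
import math

def series_rlc_impedance(f_hz, r_ohm, l_henry, c_farad):
    """Complex impedance Z = R + j(XL - XC) of a series R-L-C branch."""
    w = 2 * math.pi * f_hz      # angular velocity (rad/s)
    xl = w * l_henry            # inductive reactance XL
    xc = 1.0 / (w * c_farad)    # capacitive reactance XC
    return complex(r_ohm, xl - xc)

# Illustrative values only: 32 ohm resistance, 0.1 mH, 100 uF.
for f in (100, 1_000, 10_000):
    z = series_rlc_impedance(f, r_ohm=32.0, l_henry=1e-4, c_farad=100e-6)
    print(f"{f:>6} Hz: |Z| = {abs(z):5.1f} ohm, phase = {math.degrees(cmath.phase(z)):+6.1f} deg")
```

A positive phase angle means the current lags the voltage (inductive), a negative one means it leads (capacitive); whether such smooth, small phase shifts have anything to do with perceived imaging is exactly the open question.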
 

Chrispy

Master Contributor
Forum Donor
Joined
Feb 7, 2020
Messages
7,938
Likes
6,094
Location
PNW
LOL. I am often reminded of those claiming various amps drastically affect soundstage, too.....
 

Doodski

Grand Contributor
Forum Donor
Joined
Dec 9, 2019
Messages
21,572
Likes
21,854
Location
Canada
Some do. Tube amps and linear class A output amps can sound different because they stay more in phase than a less linear amp, and some IC amps sound different too. Those are the low-hanging fruit anyway.
 

Chrispy

Master Contributor
Forum Donor
Joined
Feb 7, 2020
Messages
7,938
Likes
6,094
Location
PNW
Soundstage? Or something else?

ps I wasn't thinking about those amp technologies, neither of which particularly appeal to me.
 
Last edited:

Axo1989

Major Contributor
Joined
Jan 9, 2022
Messages
2,893
Likes
2,941
Location
Sydney
You can only really get a proper soundstage & imaging with HP's if you process the sound and impose appropriate Head Related Transfer Functions (HRTF) - i.e. adjust for the fact that our ears are on top of our shoulders, attached to our head, and have all sorts of bits in them that change the sound based on where it is coming from ...

I'm in the 'soundstage in line with my ears' group (so a bit behind the centre of my head) when listening to normal stereo files.

I've made binaural recordings which work quite well. I enjoy spatial audio from Apple, and created a scanned HRTF profile (via iPhone LiDAR) when they added that feature. With head tracking on (using AirPods Max), regular stereo tracks often include sonic elements that sound more 'out front'. Every now and then I have to check that I'm not actually playing sound through the speakers (not wanting to irritate neighbours in the wee hours), but that usually happens when I've switched from speakers to headphones, or when watching AV material (meaning it's partly influenced by vision and memory, I assume).
 

dlaloum

Major Contributor
Joined
Oct 4, 2021
Messages
3,149
Likes
2,406
The only way to get a real soundstage experience when using headphones is playing recordings made with a dummy head. It works best with radio dramas, not so well with music (there is only an extremely small number of such recordings available at all).
Yes - that is binaural recording... a whole different kettle of fish.

And if you wanted to listen to a binaural recording on a traditional "stereo" (or surround) system, you would have to apply some sort of inverse processing, similar to HRTF processing but in the opposite direction....

Basically, by using a dummy head, the recording captures all of the adjustments our physiology imposes on the sound. Much like HRTF processing, it does not work perfectly for everyone, because everyone has their own unique ear shape (not to mention head size, height from shoulders, etc...), but the results are still far more effective than most traditional stereo recordings!
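To make the "imposing our physiology on the recording" idea concrete: HRTF-style rendering of a single source is essentially one convolution per ear with a measured head-related impulse response (HRIR). A minimal sketch, assuming numpy/scipy and hypothetical placeholder file names (any dry mono source plus an HRIR pair from a public HRTF database for one direction, all at the same sample rate):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Placeholder file names, not real assets: a dry mono source and a measured
# left/right head-related impulse response pair for one source direction.
fs, mono = wavfile.read("dry_mono_source.wav")
_, hrir_left = wavfile.read("hrir_left.wav")
_, hrir_right = wavfile.read("hrir_right.wav")

mono = mono.astype(np.float64)

# The convolutions apply the level, delay and spectral cues that a real
# head/ear (or a dummy head) would have imposed acoustically.
left = fftconvolve(mono, hrir_left.astype(np.float64))
right = fftconvolve(mono, hrir_right.astype(np.float64))

n = min(len(left), len(right))                    # trim to a common length
binaural = np.stack([left[:n], right[:n]], axis=1)
binaural /= np.max(np.abs(binaural))              # normalise to avoid clipping
wavfile.write("binaural_out.wav", fs, (binaural * 32767).astype(np.int16))
```

A dummy-head recording bakes exactly this kind of filtering into the recording itself, with the dummy's ears standing in for yours.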
 
OP
M

musica

Senior Member
Joined
Jun 29, 2022
Messages
405
Likes
96
What does the subjective perception of the soundstage in headphones depend on?
 

threni

Major Contributor
Joined
Oct 18, 2019
Messages
1,280
Likes
1,530
Location
/dev/null
In theory, should it be possible, when using earphones with minimal alteration to the sound between driver and eardrum, to entirely rule out physical ear/head characteristics and deliver a uniform, perfect (discuss!) soundstage, if the sound was processed via a reference dummy head of some sort? If there's a soundstage in real life, and it's possible to simulate one with the HD 800 S or with binaurally processed stereo on other headphones, then it's just a matter of 1) processing the sound and 2) delivering this to your eardrums, right?
 

dlaloum

Major Contributor
Joined
Oct 4, 2021
Messages
3,149
Likes
2,406
Well, the best HRTF systems (e.g. the Smyth Realiser) use miniature mics placed in the listener's ears, which are used to measure a specific listening space.

The characteristics of both the listening space AND the listener's ears are captured, and then used to simulate that listening space for that specific listener through their headphones.

It works really well.

To do the same with binaural recording, you would need to model the listener's head and ear shape to make the binaural head (perhaps a latex mould?). The results might well achieve near perfection for the person the head was modelled on, but would be less perfect for anyone else.

There might be some interesting work to be done in identifying how much of that is needed to achieve audible differences: is it just the ear shape, or do you need to get the head and shoulders just right as well?

Basic generic HRTF is something that every AVR has the processing power to do, and I find it disappointing that such a generic headphone decoder is not included as standard by Dolby or DTS....
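For what it's worth, a rough sketch of what such a personalised system does conceptually (the function and argument names below are illustrative only, not anything from Smyth; it assumes all the impulse responses were already measured with the in-ear mics and trimmed to matching lengths):

```python
import numpy as np
from scipy.signal import fftconvolve

def render_virtual_speakers(speaker_feeds, brirs, hp_inverse):
    """Render speaker channels for headphone playback using personalised
    impulse responses.

    speaker_feeds: dict of speaker name -> mono signal (original channel)
    brirs:         dict of speaker name -> (left_ir, right_ir), the binaural
                   room response measured at the listener's ears for that speaker
    hp_inverse:    (left_ir, right_ir) inverse filter of the headphone's own
                   response measured at the same in-ear mics
    """
    left, right = 0.0, 0.0
    for name, signal in speaker_feeds.items():
        ir_l, ir_r = brirs[name]
        # Each virtual speaker contributes "this channel heard through the
        # room and the listener's own ears" to each headphone channel.
        left = left + fftconvolve(signal, ir_l)
        right = right + fftconvolve(signal, ir_r)
    # Remove the headphone's own colouration so only the simulated room remains.
    left = fftconvolve(left, hp_inverse[0])
    right = fftconvolve(right, hp_inverse[1])
    n = min(len(left), len(right))
    return np.stack([left[:n], right[:n]], axis=1)
```

The "personal" part is entirely in the measured impulse responses; the processing itself is just convolution.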
 

threni

Major Contributor
Joined
Oct 18, 2019
Messages
1,280
Likes
1,530
Location
/dev/null
To clarify - my question was focused entirely on the concept of bypassing any given person's ears and going straight to the eardrum via an in-ear earphone, so that everyone can play the same processed source material.
 

dlaloum

Major Contributor
Joined
Oct 4, 2021
Messages
3,149
Likes
2,406
I get that - but the known question mark is the variance in ear shape. For the effect to be 100%, the recording ear needs to be the same shape as the listening ear...

Which isn't to say that 80% won't sound good - it might be perfectly adequate - but not enough research has been done in that area.
 

threni

Major Contributor
Joined
Oct 18, 2019
Messages
1,280
Likes
1,530
Location
/dev/null
I'm not so sure. Surely you only need one person's ears. Suppose you have very good hearing.

1) We put a mic in each ear, sit you in a concert hall, and see what your ear does to the audio that's played on the speakers in the hall, checking that you do get a good soundstage, of course. Then play back the recording through some in-ear earphones and confirm that you hear the same - or at least a very good - soundstage.
2) Then figure out what you have to do to the frequency/phase/whatever to get from the original to the in-ear recording (sketched below).
3) Then apply that transformation to the original recording and see how close that gets to what you originally heard from the speakers in the hall.

I'm not sure if your ears do anything to create the soundstage. Perhaps the above could be attempted with mics at the side of your head additionally and see what the difference is. Perhaps repeat with directional mics, or multiple mics.

Maybe there's some magic which happens at the closest point to your ears that can make a difference to the soundstage. Let's say it happens between your ears and your eardrums. That would explain the difference between mics outside your ears and mics deep inside. Perhaps some people happen to have just the right shape to get great soundstage. Golden ears! So, great, get a bunch of them and see if there's a consistent quality to their ears, as they'd make a perfect model for the reference "what is happening to the sound in all this meat which makes the soundstage great" calculations we want to apply to the source.

But absolutely none of what I've written above implies that we'd ever need to modify anything for a given user once we have the model. (Although the possibility exists for some sort of per-user in-ear recording to get a personal model, so that further transformations could be performed locally to enable listening on normal headphones, not earphones. One step at a time though, eh?)
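A very rough sketch of step 2 from the list above, per ear. The names and the regularised-division approach are just an illustration, not an established tool; it assumes you have the dry source signal and a time-aligned in-ear recording at the same sample rate:

```python
import numpy as np

def estimate_ear_transfer(source, in_ear, n_fft=1 << 17, eps=1e-8):
    """Crude frequency-domain estimate of the 'speakers-to-eardrum' transfer
    function from the dry source and the matching in-ear recording
    (one ear; both mono, time-aligned, same sample rate; signals longer than
    n_fft are truncated here - a real implementation would average blocks)."""
    X = np.fft.rfft(source, n_fft)
    Y = np.fft.rfft(in_ear, n_fft)
    # Regularised division so near-silent bins in the source don't explode.
    return (Y * np.conj(X)) / (np.abs(X) ** 2 + eps)

def apply_ear_transfer(signal, H, n_fft=1 << 17):
    """Step 3: apply the estimated transfer function to new material."""
    h = np.fft.irfft(H, n_fft)            # equivalent impulse response
    n_out = len(signal) + n_fft - 1       # full convolution length
    S = np.fft.rfft(signal, n_out)
    return np.fft.irfft(S * np.fft.rfft(h, n_out), n_out)
```

In practice you would want windowing, averaging over many excerpts and proper inverse-filter design, but the idea is the same.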
 
Last edited:
OP
M

musica

Senior Member
Joined
Jun 29, 2022
Messages
405
Likes
96
So, does it depend on the anatomy of our ears and not on the best electronics?
 

solderdude

Grand Contributor
Joined
Jul 21, 2018
Messages
16,021
Likes
36,335
Location
The Neitherlands
I suspect it is a 'brain' thing. Not so much anatomy or electronics.
Driver-ear distance, angle, ear shape, perhaps driver qualities, brain and recording are all variables.
Electronics can only affect frequency response and phase.
For phase variances to become audible they must vary a lot within a very narrow frequency band. This is something amplifiers simply do not do.
Well... some class D amps do, but just outside of the audible band.
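To put a rough number on that: the phase of a wideband first-order roll-off drifts only slowly across the audio band; there is nothing narrow-band about it. A quick sketch with made-up but plausible corner frequencies:

```python
import numpy as np

# Made-up but plausible corners for a wideband amplifier:
# AC-coupling high-pass at 2 Hz, bandwidth-limiting low-pass at 200 kHz.
f = np.array([20.0, 100.0, 1_000.0, 10_000.0, 20_000.0])   # audio band (Hz)
f_hp, f_lp = 2.0, 200_000.0

# Phase of a first-order high-pass is +atan(f_hp/f); of a low-pass, -atan(f/f_lp).
phase_deg = np.degrees(np.arctan(f_hp / f) - np.arctan(f / f_lp))
for fi, p in zip(f, phase_deg):
    print(f"{fi:8.0f} Hz: {p:+6.2f} deg")
```

That comes out to only a few degrees, changing smoothly from one end of the band to the other, nothing like a sharp, localised phase swing.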
 


richard51

Member
Joined
Jan 21, 2018
Messages
14
Likes
3
The fact that we can hear a 3-D "out of the head" soundfield with headphones, or a 3-D surrounding soundfield in a room "out of the plane of the 2 speakers", is not only a "brain" effect but a psycho-acoustic phenomenon related to specific physical acoustic conditions ....

I managed, through experiments, to create a surrounding 3-D soundfield in my room, not only with material treatment balancing reflection/absorption/diffusion but also by using Helmholtz resonators and diffusers to modify the distribution of pressure zones around the speakers and the room, much as AKG did in its own way inside the cups of the K340 by creating a dual acoustic chamber ....
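For reference, the resonant frequency of a Helmholtz resonator follows from its neck area, neck length and cavity volume. A quick sketch with made-up, roughly bottle-sized dimensions, purely to show the scale involved (the neck end correction is ignored for simplicity):

```python
import math

def helmholtz_frequency(neck_area_m2, neck_length_m, cavity_volume_m3, c=343.0):
    """Resonant frequency of a Helmholtz resonator:
    f = c / (2*pi) * sqrt(A / (V * L)).
    (L should really include an end correction; ignored here.)"""
    return (c / (2 * math.pi)) * math.sqrt(
        neck_area_m2 / (cavity_volume_m3 * neck_length_m)
    )

# ~3 cm^2 neck area, 5 cm neck, 1 litre cavity -> resonance around 130 Hz.
print(f"{helmholtz_frequency(3e-4, 0.05, 1e-3):.0f} Hz")
```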

For example, these AKG K340 (my best headphone, by the way) do indeed have some Helmholtz devices inside their shell cups that work to improve the soundfield in a realistic way, and they sometimes create a huge 3-D soundfield completely "out of the head", provided the mastering/recording process makes it possible to reproduce in the first place.... So this perception of a 3-D soundfield is not a pure illusion or a useless artefact; no, it is part of a vast informative mechanism related to the way the brain processes meaningful sound information and localization under certain room or shell/cup acoustic conditions ....

A rainbow is, like a 3-D soundfield, a useful, deeply informative and very real "illusion".... We can understand and explain a rainbow if we study it with many scientific fields together, physics and neurophysiology to begin with.... The same interdisciplinary approach explains the 3-D soundfield, and AKG used it to create its famous extraordinary headphone... The only one of its kind even today.... The best I've listened to.....
 
Last edited:
