
What’s Your Triangle in Stereo Speaker Listening?

Which triangle is your stereo speaker setup?


If what he is talking about varies by frequency, that's true.
I made just a Dirac pulse (flat).

View attachment 393495

Yes, it's flat, and time-aligned.
And I'll artificially change the frequency response as I like:
(PK Fc 4000 Hz Gain 10.00 dB Q 2.000
PK Fc 500.0 Hz Gain -10.00 dB Q 2.000)


View attachment 393498

Minimum phase means a change in frequency amplitude comes with a change in phase, so of course there's a phase shift. I think this is probably what he's talking about. Or maybe not, I don't know.

View attachment 393499

You can also see it in the group delay.


And to avoid this phase shift, we need to make it linear phase (everything delayed to match the slowest component, giving a symmetrical impulse).
But I'm not sure if that's real and if this is what leads to the comparison of the zero degree and the HRTF sum response at a certain angle we were talking about yesterday.

And I have some doubts, too.
If you want to observe a really pure HRTF, you should use MATLAB with a public SOFA file and put a pure signal through it.
The public dummy-head measurements posted by me (or Tim) in this thread are also not reliable at such minute delays, because the HRIR includes the influence of the speaker itself.
But the trouble is I don't know how to use MATLAB. lol :facepalm:
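For what it's worth, the minimum-phase EQ experiment above can be reproduced in a few lines of Python rather than MATLAB. This is not REW's implementation, just a sketch using a standard RBJ-cookbook peaking filter with the PK parameters quoted above (Fc 4000 Hz, +10 dB, Q 2; the 48 kHz sample rate is my assumption):

```python
import numpy as np
from scipy.signal import freqz, group_delay

def rbj_peaking(fc, gain_db, q, fs):
    """RBJ audio-EQ-cookbook peaking biquad; returns normalized (b, a)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * fc / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 48000
b, a = rbj_peaking(4000, 10.0, 2.0, fs)

# The magnitude hits the target gain exactly at Fc...
_, h = freqz(b, a, worN=[4000], fs=fs)
print(20 * np.log10(abs(h[0])))          # ~10.0 dB

# ...and, being minimum phase, the boost drags group delay along with it.
_, gd = group_delay((b, a), w=[100, 4000], fs=fs)
print(gd)  # delay in samples: larger near Fc than far below it
```

This shows the same thing as the group-delay plot in the post: the amplitude change alone forces a frequency-dependent delay in a minimum-phase filter.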

Whenever I see a flat frequency response, I know I cannot achieve it, because my speakers' output is not flat. My left and right ears' inputs are extremely different due to pinna, head shadow, and torso reflections, so I just don't concern myself with flat response. Although I find that in this digital era these tools are fun to use, and you can do all kinds of measurements, even create beautiful waterfall plots. :)

I don’t know what he is referring to.

I have nothing to add.

cheers!
 
STC, summing the responses doesn't work just as you criticize, if you think it is showing one speaker playing to two ears, which is logical, as that is how a dummy head is usually used for recording sound like that. But if you bend your thought so that it shows one ear and sound from two speakers, then the logic of summing works: the sounds cancel at (the one) ear as summed! :) I had to think about this for a moment, as at first I thought it doesn't work, like you.

I think the measurement can be done either way in the case of a dummy head, because it is symmetric: one source recorded by both ears, or two sources recorded with one ear, would likely give exactly the same graphs (assuming anechoic conditions or gating). It's easier to measure with both ears and a single source, just rotate the head to take the three samples, so I think that's what happened, and it should be valid. So all we need to do is bend our thought.
 
STC, summing the responses doesn't work just as you criticize, if you think it is showing one speaker playing to two ears, which is logical, as that is how a dummy head is usually used for recording sound like that. But if you bend your thought so that it shows one ear and sound from two speakers, then the logic of summing works: the sounds cancel at (the one) ear as summed! :) I had to think about this for a moment, as at first I thought it doesn't work, like you.

I think the measurement can be done either way in the case of a dummy head, because it is symmetric: one source recorded by both ears, or two sources recorded with one ear, would likely give exactly the same graphs (assuming anechoic conditions or gating). It's easier to measure with both ears and a single source, just rotate the head to take the three samples, so I think that's what happened, and it should be valid. So all we need to do is bend our thought.
Yes. That's what interaural crosstalk is. The corruption is that it corrupts the ITD value, or rather limits the ITD to the physical location of the speakers. I agree. But how does this affect the ITD being different for different frequencies, implying the speed of sound changes with frequency? And that is supposedly so basic that nobody bothered to publish it. And for two days I have been searching for the answer.

Edit: the above is for stereo reproduction via loudspeakers. Now with real sound, the sound from the source enters both ears. The two signals will not be identical even if the source is dead centre, because our pinnae are not identical. So again, how does HRTF affect the ITD at different frequencies?
 
As you say, the distance around the head changes with source position. When the source is in front, sound doesn't have to go around your head. At a 30° angle there is some extra path length to the other ear, and since the head is a 3D object, there are multiple paths all around the head. Some go around the face, some over the top, some even behind the back of the head. At a 60° angle these paths change: around the face gets longer, behind the head gets shorter. These all meet at the ear and interfere, and the amplitude likely changes a bit; for example, there is likely less high-frequency content diffracting around the back of the head (a bigger angle to get around). If you simulate an ideal point source on the surface of a sphere, you'll see some sound actually goes all the way around the sphere and appears in the off-axis frequency responses as a small interference ripple. It's not full-bandwidth ripple, though, but mostly wavelengths close to the sphere size. Do you mean this stuff or something else?

Some more observations from the images in #113: since the path around the head between the ears is roughly a full wavelength at about 2 kHz no matter what the angle, there is always, on average, constructive interference there at a few kilohertz. So all we have is difference below ~2 kHz and roughly above, say, ~6 kHz.
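The frequency-independent part of that path-length story has a classic closed form: the Woodworth rigid-sphere head model. A sketch (the 8.75 cm head radius is an assumed average; the frequency dependence discussed in this thread is exactly what this rigid-sphere simplification leaves out):

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth rigid-sphere ITD: (a/c) * (theta + sin(theta)),
    for a distant source at azimuth theta from the median plane."""
    theta = math.radians(azimuth_deg)
    return head_radius_m / c * (theta + math.sin(theta))

for az in (30, 60, 90):
    print(f"{az:3d} deg -> {woodworth_itd(az) * 1e6:6.0f} us")
```

At 90° this gives roughly 650 µs, in line with the commonly quoted maximum interaural delay for an average head.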
 
At 30deg angle there is some extra path length to the other ear, and since head is 3D object multiple paths all around the head.

You mean the sound wave snaking around the head and finding its way to the other ear? I thought sound waves are just waves generated by the oscillating molecules in air. As the wave comes in an expanding bubble form, it can well excite the molecules in front of and above the head to reach the other ear. Here is a measurement of a dummy head, binaural, at 45 degrees left.

So where is the peak? And, still going back to the original question, it still doesn't explain why you get different ITDs and how sound travels faster at different frequencies.

Sorry to the OP for hijacking your thread.

1726832928426.jpeg


This is somewhat similar to Møller's graphs, although here the dip at the immediate ear is around 9 kHz while Møller's is around 12 kHz. But I believe here the mic is deep at the eardrum position, so there could be some difference.

Now, where is the peak at 2 kHz? The only peak we get at 2 kHz is when you do cancellation, and the highest cancellation was around there, as per my own XTC and measurement video that I shared here. If you are one of the four viewers, you would have noticed that.

Also notice that at the opposite ear, the frequency around 9 kHz is actually louder than at the immediate ear!
 
I have tried to justify by giving examples in subsequent posts.
Oh, I meant a reference regarding Audio Note speaker setup and who the expert was. I actually wondered at first if you meant Audio Physic, but Joachim Gerhard's recommended subtended angle was towards the bottom of the 70-90 degree range, depending on speaker dispersion, so I figured that couldn't be it. Most of what I've read about Audio Note speaker setup was much more vague and more along the lines of how they were supposed to do pretty well proximate to boundaries.
 
You mean the sound wave snaking around the head and finding its way to the other ear? I thought sound waves are just waves generated by the oscillating molecules in air. As the wave comes in an expanding bubble form, it can well excite the molecules in front of and above the head to reach the other ear. Here is a measurement of a dummy head, binaural, at 45 degrees left.

So where is the peak? And, still going back to the original question, it still doesn't explain why you get different ITDs and how sound travels faster at different frequencies.

Sorry to the OP for hijacking your thread.

View attachment 393555

This is somewhat similar to Møller's graphs, although here the dip at the immediate ear is around 9 kHz while Møller's is around 12 kHz. But I believe here the mic is deep at the eardrum position, so there could be some difference.

Now, where is the peak at 2 kHz? The only peak we get at 2 kHz is when you do cancellation, and the highest cancellation was around there, as per my own XTC and measurement video that I shared here. If you are one of the four viewers, you would have noticed that.

Also notice that at the opposite ear, the frequency around 9 kHz is actually louder than at the immediate ear!
Hi, sorry, I'm not too familiar with all the material posted in the thread and I'm mixing things up. I need to look for it in order not to confuse things further. I was looking at the graph Tim Link posted in post #113.

I'm not sure where you see different ITDs happening? j_j mentioned it and explained basically the same thing: sound diffracts around the head, and different wavelengths have slightly varying path lengths due to the shape of the head. Sound expands like a bubble, as you say, but it diffracts around objects; that is why you hear sound in another room, it diffracts around corners like the door. The same happens with any waves: light, sea waves. Same with the head, sound goes all around it. Sound diffracting behind the head has a bit longer path length than sound diffracting over the top, or across the face, even under your chin, and a mic in the ear canal records the interference of all of these. The speed of sound stays the same, the path length varies; also, short wavelengths get attenuated more, and so on, basic sound propagation stuff.

Here some fancy sims visualizing diffraction if it helps: https://blog.soton.ac.uk/soundwaves/wave-interaction/4-diffraction/

Here is a crude simulation of a head: the nose points up, green and red probes stand in for the left and right ears, and a point source at top right, roughly at a 45° angle, sends out a pulse of sound. The sound diffracts around the "head" and hits the farther ear (green probe) both from in front and from behind the "head", and the two arrivals reach that ear at slightly different times. Sound is the black shading in the image.
1726855836802.png

Click the link below to see it happen; you can play with the positions of things and other settings. Doing this live, in real time, is much more illustrative than this still image.
Unfortunately, the simulator doesn't work on mobile, I think.
 
You mean the sound wave snaking around the head and finding its way to the other ear? I thought sound waves are just waves generated by the oscillating molecules in air. As the wave comes in an expanding bubble form, it can well excite the molecules in front of and above the head to reach the other ear. Here is a measurement of a dummy head, binaural, at 45 degrees left.

So where is the peak? And, still going back to the original question, it still doesn't explain why you get different ITDs and how sound travels faster at different frequencies.

For goodness sake, the frequency response of the HRTF is not flat, right? And it has a phase response, right? (Yes, it does.) You can remove the "constant delay" part, yes?

Once you've done that, you find that you have a non-zero, non-flat phase response.

PHASE SHIFT IS TIME DELAY. It's that simple. It's simply acoustics around the head; it is not so much "sound travels faster". Even though, in fact, it does, that is a teeny-tiny effect compared to the PHASE (and thus time) RESPONSE of the HRTFs. It's that simple.
 
Oh, I meant a reference regarding Audio Note speaker setup and who the expert was.
I think it was in the late 2000s. The "expert" was from the audiophile world, but with a little bit of Googling about audio shows in my country (Malaysia) you can find his name. Nothing to do with Audio Physic. The point about a wide spread of speakers is that it creates a weak centre, and as you already referenced the relevant standards from AES, ITU and EBU, the recommended angle is about 45 to 65 degrees.


Most of what I've read about Audio Note speaker setup was much more vague and more along the lines of how they were supposed to do pretty well proximate to boundaries.

Yeah. Some sort of placement closer to the room boundaries.

I'm not sure where you see different ITDs happening? j_j mentioned it and explained basically the same thing: sound diffracts around the head, and different wavelengths have slightly varying path lengths due to the shape of the head.

It travels longer to reach the other ear due to the distance. ITD and ILD are some of the cues used by the brain for localization. If you needed HRTF for stereo to work, then all stereo recordings made without HRTF shouldn't work. Mics such as XY pairs use only level differences. Recordings often use pan-potting to shift the image, and that is without any ITD cues.
Mics like
1726873819291.jpeg


do not have a head or torso but work pretty well for binaural.

The animation is great, and it shows how sound waves travel, how the head affects the ILD, and how the distance causes ITD. But it still doesn't explain this:
You can remove the "constant delay" part, yes?

Once you've done that, you find that you have a non-zero,non-flat phase response.

Would appreciate examples. Non-zero and non-flat mean not zero and not flat. But that's how it was before we removed the constant delay. (I don't know how to do that, nor what it is.)

PHASE SHIFT IS TIME DELAY. It's that simple

Yes. One cycle consists of a positive and a negative phase, and it will have delays due to distance. So what's your point?

It's simply acoustics around the head, it is not so much "sound travels faster", even though, in fact it does, that is a teeny-tiny minimal

That's what I have been asking from the beginning. Since ITD matters in µs for localization, how much is teeny-tiny? Because a µs is already teeny-tiny itself.

compared to the PHASE (and thus time) RESPONSE of the HRTF's

Yes, that's why ITD is only relevant up to about 1400 Hz. I still don't get the point, nor are you helping to share your knowledge by answering the direct questions raised to you.
 
That's what I have been asking from the beginning. Since ITD matters in µs for localization, how much is teeny-tiny? Because a µs is already teeny-tiny itself.
It matters a lot for spatial perception and sense of width and externalization.

Yes, that's why ITD is only relevant up to about 1400 Hz. I still don't get the point, nor are you helping to share your knowledge by answering the direct questions raised to you.

I'm afraid you're not understanding the answers, AND you're dead wrong about ITD only being relevant up to 1400 Hz. As I've said many times on this forum, and in many other places, interaural delay related to PHASE starts to matter less above 1 kHz and is mostly dead by 2 kHz. HOWEVER, at frequencies above 1 kHz, ENVELOPE ARRIVAL in a given ERB is very important. This means, for instance, that a continuous sine wave has little directional information (without head movement to localize via HRTF) at, say, 4 kHz, but if you have a signal with a pulse-like envelope centered at the same frequency, you will absolutely get directional information from it, without head movement. An example would be a glockenspiel attack.

This is, in fact, the source of the "glockenspiel problem" in which you strongly localize the attack at one place, but the ring notes at some other location in a 2-channel panpotted presentation, even though the source direction is constant.

At low frequencies it's phase, at high frequencies it's envelope onset, as far as ITD's are concerned, but it's still sensitivity to ITD, just in a different way and not for all signals.

You apparently have access to MATLAB and some source of head-related impulse responses. Take a great big, long FFT of that impulse response. Calculate the phase of the positive frequencies using atan2. Then "unwrap" the phase. Do a first-order linear fit (polyfit) to the phase; the slope of that linear fit, by the way, is exactly the overall time delay. Subtract the linear fit from the phase and take the difference. In real data I've seen, that will not be a straight line. The variation from the straight line can be converted directly to ± time of arrival across frequency. Remember phi = 2 * pi * f * t; you have phi and f, now calculate t.
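The recipe above ports from MATLAB to Python almost one-to-one (np.angle is atan2, plus np.unwrap and np.polyfit). A sketch on a synthetic "HRIR", here just an impulse delayed by 32 samples so the answer is known in advance; a real HRIR loaded from a WAV or SOFA file would drop straight in:

```python
import numpy as np

fs = 44100
N = 512
h = np.zeros(N)
h[32] = 1.0                      # synthetic "HRIR": a pure 32-sample delay

H = np.fft.rfft(h)
f = np.fft.rfftfreq(N, d=1 / fs)

# Phase of the positive frequencies (atan2), then unwrap.
phi = np.unwrap(np.angle(H[1:]))
f_pos = f[1:]

# First-order fit: the slope is -2*pi*t, i.e. the overall delay.
slope, intercept = np.polyfit(f_pos, phi, 1)
delay_s = -slope / (2 * np.pi)
print(delay_s * fs)              # → 32.0 samples

# Excess phase after removing the linear (constant-delay) part;
# per-frequency arrival-time deviation via t = phi / (2*pi*f).
excess = phi - (slope * f_pos + intercept)
t_dev = excess / (2 * np.pi * f_pos)
print(np.max(np.abs(t_dev)))     # ~0 for a pure delay; non-zero for a real HRIR
```

For this pure delay the residual is essentially zero; with a measured HRIR the residual is the frequency-dependent arrival time being discussed.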
 
It matters a lot for spatial perception and sense of width and externalization.

With respect, how is this relevant to what I asked? I am just asking a simple question: why and how is the ITD different for different frequencies?

I just finished watching your Auditory Mechanism and Spatial Hearing talk, and I saw a very detailed explanation of how frequencies are processed inside the ear before the signals reach the brain, but nothing so far on ITD.

Now you are mentioning externalization, and I don't even know why or how it is relevant to the question, which you are now claiming I don't understand the answers to.

Just to refresh your memory, here is the sequence of the questions.

James - For this, actual ITD's are important, because ITD isn't actually the same at every frequency for a given angle.



ST - You mean the speed of sound is frequency dependent? I only thought it was ILD that varied due to the HRTF between the ears.



James - While speed of sound varies SLIGHTLY, the irregular shape of the head, etc, causes delay to change a bit at different frequencies. The primary (mean) number is certainly due to the path around the head from the source direction, of course. But it's not the speed of sound that matters much here, it's the irregular shape of the head and the path from one side to the other.



ST - @j_j, do you have any reference for the difference in ITD at different frequencies?



[Then there was another point you raised which was addressed to Tim; that is omitted here.]



ST - cont/ Anyway, I would appreciate the reference or link where you stated that due to HRTF there are different speeds of sound for different frequencies.



James- Evaluate the phase of an HRTF. Remove the constant delay part. It's not flat, therefore time of arrival varies with frequency. I doubt anyone's bothered to publish that, it's rather obvious.



Then my reply as below.

I am still searching for the answers which you stated.

Your statement is very important to me, or to anyone interested in crosstalk cancellation, because this would give a far superior attenuation level.

Now going to the other point, where you said:


This is new again. So what is the phase of HRTF? Or, what is HRTF and what is its purpose?



Constant delay, as in the time difference between the left and right ears? You mean ITD. But you said it is constant and yet changes according to frequency. So where is the constant delay coming from?



What is not flat? The frequency response? I don't think anyone is disputing that. So what else is not flat here?



I hope at least you could provide a simple explanation, since this is so basic but still beyond my understanding. And since I am the only one asking, I am sure others understood your statement, and therefore they could be kind enough to share their valuable knowledge.

Then again, you said the following in a reply to another member.



James - For goodness sake, the frequency response of the HRTF is not flat, right? And it has a phase response, right? (Yes, it does.) You can remove the "constant delay" part, yes?

Once you've done that, you find that you have a non-zero,non-flat phase response.

ST - Would appreciate examples. Non-zero and non-flat mean not zero and not flat. But that's how it was before we removed the constant delay. (I don't know how to do that, nor what it is.)

James- PHASE SHIFT IS TIME DELAY. It's that simple.

ST - Yes. One cycle consists of a positive and a negative phase, and it will have delays due to distance. So what's your point?

James- It's simply acoustics around the head, it is not so much "sound travels faster", even though, in fact it does, that is a teeny-tiny minimal effect compared to the PHASE (and thus time) RESPONSE of the HRTF's. It's that simple.



ST- that’s what I have been asking from the beginning. Since ITD matters in μs for localization how much is teeny-tiny. Because μs itself already teeny-tiny.



James- compared to the PHASE (and thus time) RESPONSE of the HRTF's. It's that simple.



ST - Yes, that's why ITD is only relevant up to about 1400 Hz. I still don't get the point, nor are you helping to share your knowledge by answering the direct questions raised to you.


I'm afraid you're not understanding the answers, AND you're dead wrong about ITD only relevant up to 1400 Hz

I concede. I should have said most sensitive.


All sound waves whose half wavelength is longer than 17 cm (or whatever the head circumference is) will have the same phase at both ears. If my calculation is correct, that means all frequencies below about 1000 Hz, depending on the size/distance of the head/ears, will have the same phase at both ears. Is this correct?

And since you mentioned externalization: are you saying that if the pinnae were completely removed, externalization would still exist? Is that what you are saying, since you are bringing in ITD for externalization?
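The arithmetic behind that ~1000 Hz figure checks out in two lines (c = 343 m/s and a 17 cm ear-to-ear distance are the assumed values; whether "same phase" is the right conclusion is the point debated below):

```python
c = 343.0          # speed of sound in air, m/s
d = 0.17           # assumed ear-to-ear distance, m

# Frequency whose half wavelength equals d: lambda/2 = d  =>  f = c / (2*d)
f = c / (2 * d)
print(f)           # → ~1008.8 Hz
```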
 
Although I'm tempted to suggest that you just Google something, I wonder if you're at cross-purposes here
With respect, how is this relevant to what I asked? I am just asking a simple question: why and how is the ITD different for different frequencies?
Phase difference, or delay, depends on frequency:

"A phase difference φ corresponds to an interaural time difference (ITD) of Δt = ±φ/(2πf) for a tone with frequency f." https://www.cogsci.msu.edu/DSS/2019-2020/Hartmann/Hartmann_1999.pdf

Relationship between ITD and phase delay simplified but still demonstrating frequency dependence: https://web.pa.msu.edu/acoustics/koller.pdf
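Hartmann's relation is easy to sanity-check numerically; the sketch below just applies Δt = φ/(2πf) (the numbers are illustrative, not taken from the paper):

```python
import math

def itd_from_phase(phase_rad, freq_hz):
    """Interaural time difference implied by a phase difference at one frequency."""
    return phase_rad / (2 * math.pi * freq_hz)

# The same 90-degree interaural phase difference means very different ITDs:
print(itd_from_phase(math.pi / 2, 500) * 1e6)    # 500 us at 500 Hz
print(itd_from_phase(math.pi / 2, 2000) * 1e6)   # 125 us at 2 kHz
```

This is the frequency dependence in a nutshell: a fixed phase difference corresponds to a smaller and smaller time difference as frequency rises.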
James - While speed of sound varies SLIGHTLY, the irregular shape of the head, etc, causes delay to change a bit at different frequencies. The primary (mean) number is certainly due to the path around the head from the source direction, of course. But it's not the speed of sound that matters much here, it's the irregular shape of the head and the path from one side to the other.
I bolded delay here
James- PHASE SHIFT IS TIME DELAY. It's that simple.
Frequency dependence of that relationship as calculated above
James- It's simply acoustics around the head, it is not so much "sound travels faster", even though, in fact it does, that is a teeny-tiny minimal effect compared to the PHASE (and thus time) RESPONSE of the HRTF's. It's that simple.
I believe that you're thinking of ITD as a relative constant whereas @j_j is discussing time and phase as inextricably linked
All sound waves whose half wavelength is longer than 17 cm (or whatever the head circumference is) will have the same phase at both ears. If my calculation is correct, that means all frequencies below about 1000 Hz, depending on the size/distance of the head/ears, will have the same phase at both ears. Is this correct?
No, all frequencies will not have the same phase on both ears. See above. How could the phase for lower frequencies be the same at both ears when the time of arrival is different?
And since you mentioned externalization: are you saying that if the pinnae were completely removed, externalization would still exist? Is that what you are saying, since you are bringing in ITD for externalization?
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7488874/
 

No, all frequencies will not have the same phase on both ears. See above. How could the phase for lower frequencies be the same at both ears when the time of arrival is different?

I was referring to positive and negative phase, otherwise the rest of the comment would not make sense. The phase will be different, but all of it will be either positive or negative up to the half wavelength of your ear/head distance/circumference.
 
I was referring to positive and negative phase, otherwise the rest of the comment would not make sense. The phase will be different, but all of it will be either positive or negative up to the half wavelength of your ear/head distance/circumference.
Next time, measure it. If they are all "positive or negative" you didn't extract the mean delay component.
 
This thread is a great learning opportunity for me and I want to benefit from it as much as I can!

J J has assigned us some homework, and I am not going to reap the full benefits without doing the homework. I hope, by not ignoring his suggestions, I can also encourage J J to keep educating us. I am also aware that some members here are not familiar with the math, and hopefully this may help them too.

I am using the 30-year-old HRTF data from the MIT Media Lab, measured in MIT's anechoic chamber using a KEMAR HATS (head and torso simulator). The HRTF data (actually the time-domain equivalent, the HRIR, or head-related impulse response) I used is from the "full.zip" file. The IR data are in WAV files. The IR lengths are 512 samples; the sampling rate is 44100 Hz; values are signed int16.

I am starting with J J's post #97. Only the "left ear" data from the KEMAR are used. The HRIR files used are: L0e330a.wav (0 deg elevation, 330 deg azimuth = left speaker); L0e030a.wav (0 deg elevation, 030 deg azimuth = right speaker); and L0e000a.wav (0 deg elevation, 000 deg azimuth = center speaker). Below are the impulse responses and the FR magnitudes. The phantom center is the average of the left and right speakers. The IRs are manually time shifted so that the IR of the left speaker (the speaker closest to the left ear) starts at t ≈ 0. The ITDs can be observed from the IRs.
hrir.png

fr_mag.png

And here are the differences in the FR magnitudes between the phantom center and real center, displaying the phantom center dip J J mentioned.
fr_diff.png

Advancing to post #130. These are the phase response plots processed as described (as far as I understood) in the post. The Python Jupyter notebook is attached.
fr_phase.png
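The mechanism behind that phantom-center dip can also be illustrated without any HRTF data: average two copies of a signal offset by a plausible interaural delay, and a notch appears at f = 1/(2·ITD). A toy sketch (the ~250 µs ITD for a ±30° speaker is an assumed round number, not taken from the KEMAR files):

```python
import numpy as np

fs = 44100
itd_samples = 11                       # ~250 us at 44.1 kHz, assumed ITD
N = 4096

# Two "ear arrivals" of the phantom-center signal: direct and ITD-delayed,
# averaged as in the phantom-center sum above.
h = np.zeros(N)
h[0] = 0.5
h[itd_samples] = 0.5

f = np.fft.rfftfreq(N, d=1 / fs)
mag = np.abs(np.fft.rfft(h))

notch_hz = fs / (2 * itd_samples)      # first cancellation, ~2 kHz
print(notch_hz)
print(mag[np.argmin(np.abs(f - notch_hz))])   # close to 0 at the notch
```

The first null of this two-path comb lands near 2 kHz for a ~250 µs delay, which is why the dip in the real HRIR data sits where it does.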
 

Attachments

  • HRTF Analysis Homework Ver 1.zip
    2.3 KB · Views: 22
Next time, measure it. If they are all "positive or negative" you didn't extract the mean delay component.

The topic is about the stereo triangle. Stereo works because of 'interaural' differences in level and timing (although the timing difference is optional). Keep it simple. You need to analyze both ears' arrival time, phase and level to understand the stereo illusion. Stereo works pretty well without knowing what the ITD envelope is, or how the signals are interpreted and transmitted in the ear by the 3500 inner hair cells.

Read the title. Read the context of the answers. And if something is wrong, be generous in correcting it.
 
The topic is about the stereo triangle. Stereo works because of 'interaural' differences in level and timing (although the timing difference is optional). Keep it simple. You need to analyze both ears' arrival time, phase and level to understand the stereo illusion. Stereo works pretty well without knowing what the ITD envelope is, or how the signals are interpreted and transmitted in the ear by the 3500 inner hair cells.

Read the title. Read the context of the answers. And if something is wrong, be generous in correcting it.
You started out complaining about the 'difference in speed of sound', and now you're trying to argue about stereo images instead. What's more, yeah, you need to know the ITD as a function of frequency to understand why, for instance, the "glockenspiel problem" exists with a 2-channel panpotted signal. So, it seems to me that all you want to do is to change the subject here.
 
This thread is a great learning opportunity for me and I want to benefit from it as much as I can!
[...]

You get an A+ in my book. Very nicely done. Notice how that "dip" is smack in the ITD range, too, as well as the floor-bounce range. This is part of why the distance cues get squashed in a 2-channel setting.

Also notice that there are effectively two peaks in the "phantom center" HRIR sum, and that's part of what mucks up the senses of distance, as well, since it's in the "envelope sensitivity" area.

The plots with delay extracted are as clean as I've seen outside my own desk.
 
Understanding phase: to the ears, sound is simply about the molecules hitting the eardrum.

The ear only processes the excitation of the eardrum. There is no positive or negative phase; there is only compression and rarefaction, which is mathematically represented by sine waves.

1726966325350.jpeg
 
Understanding phase: to the ears, sound is simply about the molecules hitting the eardrum.

The ear only processes the excitation of the eardrum. There is no positive or negative phase; there is only compression and rarefaction, which is mathematically represented by sine waves.

Yeah, that would be wrong. You do understand that phi (phase in radians) = 2 pi F T, where F is frequency in Hz and T is time in seconds. Yes?

Ergo, if there is delay, there is phase shift. If the phase shift is not a straight line, there is also a time differential at different frequencies.

For further discussion you are referred to Norman Morrison's text on Fourier Analysis.

tl;dr: You're just wrong, so wrong it's hard to explain why.
 