
Locating bass <80Hz?

If you recap the numbers, the room under investigation doesn't allow for localization below 150Hz or so at any position.
Based on what? While the way a sound wave bounces around in a room is well understood and explained by physics, the way a human being can locate a sub in a room is not completely understood and is most likely a combination of things.
 
Based on what? While the way a sound wave bounces around in a room is well understood and explained by physics, the way a human being can locate a sub in a room is not completely understood and is most likely a combination of things.
What I can think of right now that could play a role:
What type of room? Shape, size, and acoustics.
How is the subwoofer placed?
Is it tested with tones, pink noise, or music?
With music: what music? With full-range music: how are the speakers and subwoofer placed?
What type of LP filter does the subwoofer have? Is the bass FR EQ'd or not? Subwoofer distortion. SPL level.
Full-range music: in that case, do the speakers have HP filters? Are sound levels measured between subwoofer and speakers to check that they are equal?
How trained or skilled is the listener? (Think of the difference between individuals in detecting distortion, for example.)
 
Has anyone ever tried tones, single-channel listening, and stereo listening, starting at 100 Hz and going down?
Obviously listening from the stereo triangle.
 
I tried to be a bit ironic, and that misfired due to the language barrier. I actually know what you and others are talking about. I understand your point(s) pretty well. I simply don't agree with the notion that the talk is relevant in the context given by the original posting.

Reiterated: at best you apparently want to align observation A (sub location) with observation B (Griesinger/Lund) because it's "subwoofer", without having a mental model of A or B respectively. Maybe you're trying to make up something in this regard, but for the time being there's, in my book, nothing that would remotely fit.

I was correcting you about envelopment and ASW and attitudes toward 'preference', and now you've shifted back to subwoofer localization.

You have no idea what I 'apparently want', and I'm finding your rhetorical antics tiresome, language barrier or no.
 
My personal opinion is that one could most likely achieve this without "ever more complicated, and costly tech" simply by listening in a larger room with more effective bass absorption,

I'll stop you there, because one has to balance 'ever more complicated and costly tech'* against the cost of a larger room and the cost of effective bass absorbers. Neither tends to be cheap, and fully assessing the objective improvement of either requires ...measurement tech.

I'll bow out now.
I hope not. Your contributions are always interesting.



*does adding subwoofers equate to 'ever more complicated and costly tech'?
 
But still it has nothing to do with the record creator's intentions.
I do not know the record creator's original intentions. It could simply be to sell records, or for one to enjoy the music. Or it could be that the record is meant to be listened to only at the stroke of midnight when a full moon is present, using brand ZYZ speakers, nearfield, naked.

I do, however, know the intentions of *my* system. And that is to enjoy the music and to try and get it closer to the impression I get when I listen to music performed live.

Adding manipulations to the bass to try and remove an artifact of reproduction in a small room is, in my eyes, no different than other forms of room EQ. For even greater horrors, I am taking the 2 channel source and processing it up to 17 channels, and 2 subs.
 
I was correcting you about envelopment and ASW and attitudes toward 'preference', and now you've shifted back to subwoofer localization.
So you're upset because I didn't pick up the gauntlet? I'm corrected.

I figure we can’t really stay on track here. The topic is being discussed by psychoacoustics experts, and it feels like we’re only catching the emotions, not the wisdom.
 
I found this study interesting. I'm pasting a screenshot of the conclusions. There is more interesting information in that study. Last words in conclusion:
it depends :)

View attachment 483502
View attachment 483503View attachment 483504


Incidentally, this "it depends"... has been raised quite a few times in this thread by ASR members. :)
Big emphasis on the size of the room in that study.
____
This study also seems interesting. Unfortunately I can't download the pdf at the moment:

View attachment 483507

Now I was able to get the pdf. Pasting the beginning of it. Seems interesting. I'll read that report. :)
View attachment 483522

Here is this same problem occurring again:

View attachment 483668


ITD has no application for low frequencies. And since ITD is transient detection, it will not improve with a longer window. This paper can not have been peer reviewed by someone who knows anything about signal theory.
 
ITD has no application for low frequencies. And since ITD is transient detection, it will not improve with a longer window. This paper can not have been peer reviewed by someone who knows anything about signal theory.
The problem is that your definition of ITD is incompatible with theirs. For low frequencies, ITD and what you call "IPD" are the same thing. The inner hair cells behave much like half-wave rectifiers and their firing rates spike at the zero crossings of the (filtered) waveform. The difference in timing of these spikes between the two ears constitutes the time difference; ITD has nothing to do with the envelope in the subwoofer range.
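The zero-crossing mechanism described above is easy to sketch numerically. Below is a small numpy illustration (not a model of the cochlea, just the arithmetic; the 60 Hz tone and 500 µs delay are arbitrary choices): two "ear" signals offset by a fixed time difference, with the ITD recovered purely from the timing of their upward zero crossings.

```python
import numpy as np

fs = 96_000          # sample rate, Hz
f = 60.0             # tone frequency, well inside the subwoofer range
itd = 500e-6         # simulated interaural time difference: 500 microseconds

t = np.arange(int(fs * 0.5)) / fs
left = np.sin(2 * np.pi * f * t)
right = np.sin(2 * np.pi * f * (t - itd))   # right-ear copy lags by the ITD

def upward_zero_crossings(x, fs):
    """Times where the signal crosses zero going upward, refined by linear interpolation."""
    idx = np.where((x[:-1] < 0) & (x[1:] >= 0))[0]
    frac = -x[idx] / (x[idx + 1] - x[idx])   # sub-sample position of the crossing
    return (idx + frac) / fs

zl = upward_zero_crossings(left, fs)
zr = upward_zero_crossings(right, fs)

# Pair each left-ear crossing with the nearest right-ear crossing; the median gap is the ITD.
estimated_itd = np.median([zr[np.argmin(np.abs(zr - z))] - z for z in zl])
```

The recovered `estimated_itd` comes out at essentially 500 µs, even though no envelope or onset was used at any point.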
 
No, that is not correct. You can not talk about phase and timing as if they were the same thing. And that is where the issue lies, there are papers on papers being published where there are these types of mistakes. The problem is not what words are used to describe one thing, but the fact that the physics does not work the way it is being described. If you alter ITD by 1ms you get a very precise sense of a fixed spatial position, but if you alter phase by 1ms you do not get anything close to the same effect, because it is not the same thing.
 
No, that is not correct. You can not talk about phase and timing as if they were the same thing.
ITD specifically refers to onset. IPD concerns the same phenomenon but expresses the difference in degrees over the period of the stimulus.
 
No, that is not correct. You can not talk about phase and timing as if they were the same thing.
If you consider how the auditory system works, you certainly can. For low frequencies, there is a transient produced by the cochlea at each zero crossing. Measure the time difference between the two ears and you have your ITD.
 
ITD specifically refers to onset. IPD concerns the same phenomenon but expresses the difference in degrees over the period of the stimulus.

If you consider how the auditory system works, you certainly can. For low frequencies, there is a transient produced by the cochlea at each zero crossing. Measure the time difference between the two ears and you have your ITD.


A transient is a set of frequencies with a set of phase information and in addition to that there is a macro level function that determines the time behavior of the frequencies. Lastly, that may be split into a set of macro level functions for each frequency, and this is where you get something that looks like a waterfall.

Phase is just the imaginary part of a sine wave. It is not, and will never be the same thing, and it is so far from being the same thing that this discussion simply makes no sense.

In addition to that, the ITD and IPD functions of the auditory system do not work the same way. We do not have precise localization of phase-distorted signals because they rarely occur naturally, so hearing has not evolved to be precise at this at all. It is like when we add something artificial to the sound: we can easily hear it, but it just sounds strange and we cannot tell what it is, because our ears are not programmed to handle it well.

And ITD is not about transients that may originate somewhere inside the ear. It is about transients coming from an outside source.
 
A transient is a set of frequencies with a set of phase information and in addition to that there is a macro level function that determines the time behavior of the frequencies. Lastly, that may be split into a set of macro level functions for each frequency, and this is where you get something that looks like a waterfall.

Phase is just the imaginary part of a sine wave. It is not, and will never be the same thing, and it is so far from being the same thing that this discussion simply makes no sense.

In addition to that, the ITD and IPD functions of the auditory system do not work the same way. We do not have precise localization of phase-distorted signals because they rarely occur naturally, so hearing has not evolved to be precise at this at all. It is like when we add something artificial to the sound: we can easily hear it, but it just sounds strange and we cannot tell what it is, because our ears are not programmed to handle it well.

And ITD is not about transients that may originate somewhere inside the ear. It is about transients coming from an outside source.
There is of course a clear difference between phase and timing. The phenomenon in question also depends on the physical point of reference. ITD is not only measured acoustically, but also through neural probes with animals and electro-encephalograms (EEGs) with both humans and animals.

To measure ITD for impulsive sounds, the envelope is used. To measure for periodic sounds, phase is used.
 
Here is this same problem occurring again:

View attachment 483668

ITD has no application for low frequencies. And since ITD is transient detection, it will not improve with a longer window. This paper can not have been peer reviewed by someone who knows anything about signal theory.
That may be the case. I don't have enough knowledge to comment on that. I'm following the discussion in the thread that followed after you wrote that post. :)
 
ITD has no application for low frequencies. And since ITD is transient detection, it will not improve with a longer window ...
The inner hair cells behave much like half-wave rectifiers and their firing rates spike at the zero crossings of the (filtered) waveform. The difference in timing of these spikes between the two ears constitutes the time difference ...
... the auditory system does not work the same way. We do not have precise localization of phase distorted signals because they rarely occur naturally, ...
There is of course a clear difference between phase and timing. ...

Does anyone here work in the field, or have some training in physics, maths, neuro and acoustics?

Look, combine two undulating "waves", one riding on top of the other, at different frequencies. Imagine one being as strong as the other; then imagine that not being the case. Pick up a pencil and depict the two situations on a fresh piece of real paper. What about the zero crossings? Righty right, they change depending on the relative amplitude!

Hearing sports 'critical bands'; it has bandpass filters built in. How wide are these? Are they fixed or floating, e.g. signal-dependent? Apply that to the zero-crossing feature above.

Apply the whole sermon to a comparison of the localization of an irregular composite signal (noise) with that of a set of steady sine waves (a musical chord, organ style) in non-reflecting space. Just for fun.

The Griesinger defenders might want to clarify by what superstitious means a standing wave would carry info on where the energy source is/was. For motivation: if there is one, the Nobel in physics is granted, as you will have solved the quantum mechanics riddle.

I already asked AI, out of desperation, where such a discussion would lead. It said, "Nice and nicer, it's a common pattern." And then it expounded on it a fair bit, particularly reflecting on audio discussions on the internet.

What I get here is banging the can, not exactly in musical ways. What is called for is a systematic approach, based on one's own research, not just theorizing without thorough background over random papers.

I'd like to remind us of the problem statement: is a single subwoofer localizable? I dare to extend it: would two solve the problem sufficiently to allow playback of regular recordings? It must be pretty clear that the "why" of localization will remain obscure anyway. We want a "what" to do against it. Next paper?
 
To measure ITD for impulsive sounds, the envelope is used. To measure for periodic sounds, phase is used.

I understand that this might come across as semantics, but please bear with me; I want to try to shed some light on the topic to make it intuitive.

The term "impulsive sounds" may refer to a transient. But let's first establish a crucial definition. If we look at any change in amplitude from zero to some kind of signal, it is a change to the steady state (which was zero, but has become non-zero). This change is called a transient. When a sine wave starts or stops, it is a transient. But if we play back a sine wave burst through a subwoofer, these start/stop events are mostly filtered out.

So if we take a look at Fourier's take, which is generally accepted as fact: in order to have a pure sine, you cannot have a start and a stop; it has to last forever. Even a slight change in gain would mean there is a very slight transient involved. This is what can be described as a pure single-frequency event. In a Fourier transformation (which is a transformation between the time domain and the frequency domain for a given signal) we can see that if this is infinitely narrow in the frequency domain, it is infinitely wide in the time domain.

On the other hand, if we make a perfect transient, it will be infinitely sharp and short lasting. This would be infinitely narrow in the time domain, and infinitely wide in the frequency domain, so in this case, we will need to have far more frequencies than the audio band.
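The two limiting cases in the paragraphs above can be checked with a few lines of numpy: a long steady sine needs only a handful of FFT bins to hold its energy, while a one-sample click spreads its energy over essentially all of them. (The 99% threshold, lengths, and frequencies here are arbitrary illustration choices, not anything from the thread.)

```python
import numpy as np

fs = 8000
n = 8192
t = np.arange(n) / fs

tone = np.sin(2 * np.pi * 100 * t)           # long-lasting sine: narrow in frequency
click = np.zeros(n)
click[n // 2] = 1.0                          # one-sample impulse: wide in frequency

def spectral_width(x):
    """Fraction of FFT bins needed to hold 99% of the signal's spectral energy."""
    p = np.abs(np.fft.rfft(x)) ** 2
    cum = np.cumsum(np.sort(p)[::-1]) / p.sum()
    return (np.searchsorted(cum, 0.99) + 1) / len(p)

tone_width = spectral_width(tone)    # a few percent of the bins at most
click_width = spectral_width(click)  # essentially all of the bins
```

The tone concentrates its energy in a tiny fraction of the spectrum (only window leakage spreads it at all), while the click's spectrum is flat, which is exactly the time/frequency tradeoff being described.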

But there is another property to this that is quite interesting. If we look at a sine wave being introduced gradually, it will be quite long lasting in the time domain, and it will be almost as narrow as a single frequency in the frequency domain, but not 100% pure. We can add that we also turn the gain down again afterwards, ending up with a smooth rise and fall of the sine wave. If we look at the macro level of this, it is pretty much the same as the gain setting we used. This is what is called the signal envelope of the signal. We do not follow the curve shape, just the amount of all the frequencies occurring in the signal.
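This "macro level" idea has a standard mathematical counterpart: the magnitude of the analytic signal (the FFT-based Hilbert construction, which is what `scipy.signal.hilbert` computes). A plain-numpy sketch, with an arbitrary 100 Hz carrier and 0.2 s ramps, showing that the extracted envelope tracks the gain curve rather than the waveform:

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs                       # one second of signal
# Gain curve: 0.2 s fade-in, hold, 0.2 s fade-out -- this *is* the intended envelope.
gain = np.clip(np.minimum(t / 0.2, (1.0 - t) / 0.2), 0.0, 1.0)
x = gain * np.sin(2 * np.pi * 100 * t)

def envelope(x):
    """Magnitude of the analytic signal (FFT-based Hilbert transform construction)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0                        # keep positive frequencies, doubled
    h[n // 2] = 1.0                          # n is even here, so keep the Nyquist bin
    return np.abs(np.fft.ifft(X * h))

env = envelope(x)
# Away from the edges, the recovered envelope matches the gain curve closely.
mid = slice(fs // 10, -fs // 10)
max_err = np.max(np.abs(env[mid] - gain[mid]))
```

The envelope ignores the 100 Hz oscillation entirely and reproduces the slow rise and fall, which is the distinction between "curve shape" and "amount of signal" made above.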

With a sine wave as pure as this, we may talk about a sine wave for practical reasons, and talking about signal envelope is also relevant. However, we have to remember that we have very little frequency content, so we do not have a leading edge. This means, if we look at the wave shape in the time domain, it is even hard to visually see where this signal really starts, and when it does, the SPL is probably way below the noise floor. This kind of signal envelope does not contain information that is useful for our ears in terms of detecting a timing reference, and therefore ITD does not apply.

We can also look at a case where we have a transient that is near perfect, provided that we ignore "mathematically perfect" for a while and focus only on the audio band. If we look at the frequency content, the duration of the transient will give us information on how low it goes in frequency. The relationship between the rise/fall and the peak amplitude will give us the upper frequency. This means that if we have a DC step (a sudden and permanent change in steady-state voltage from, say, zero to 1 volt) we even include 0Hz in the transient. We can look at a typical transient where all the frequencies are in phase, and we see that it represents a sudden rise and fall both when we look at the time-domain waveform and when we look at the signal envelope. So here the signal envelope and the waveform are actually the same.

Then we can add some phasing errors. If we introduce a couple of phase shifts, we will see that the signal waveform now suddenly consists of a mess of positive and negative peaks. This means the transient waveform looks very different, but the signal envelope is unchanged, so now the two have become very different.
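A quick numpy check of the waveform-vs-spectrum part of this (the 1024-sample impulse and the random seed are arbitrary): randomizing the phases leaves the magnitude spectrum, i.e. "the amount of each frequency", numerically untouched, while the time-domain waveform changes completely.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024
x = np.zeros(n)
x[10] = 1.0                               # clean impulse: all frequencies in phase

X = np.fft.rfft(x)
phases = rng.uniform(0.0, 2.0 * np.pi, len(X))
phases[0] = 0.0                           # DC bin must stay real
phases[-1] = 0.0                          # Nyquist bin must stay real (n is even)
y = np.fft.irfft(np.abs(X) * np.exp(1j * phases), n)

# Same magnitude spectrum, very different waveform.
spectrum_change = np.max(np.abs(np.abs(np.fft.rfft(y)) - np.abs(X)))
waveform_change = np.max(np.abs(y - x))
```

The sharp peak of the impulse is smeared into low-level noise spread over the whole buffer, yet every frequency is present in exactly the same amount as before.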

We can apply this to hearing by trying to compare these three cases:
1: A perfect transient.
2: A perfect transient with phase errors.
3: A filtered low frequency signal.

1: If we have perfect speakers and a perfect waveform, we will have super precise localization through our ears' ability to measure ITD precisely. There is not much more to say about this, other than that talking about signal envelope versus transient makes no practical difference, and phase does not play a role since this is all just a leading edge.

2: This is where things start to get interesting. Our ears evolved by nature to give us the ability to hear stuff that could help us survive, simply by killing those who could not locate the subtle sound of approaching danger. This noise would normally represent a transient, and our ability to locate the angle and determine the distance was key to our survival. Once we use this function of our ears but introduce some phase errors, we keep most of this ability, but we lose some. For determining distance by transients we rely mostly on frequencies above 1 kHz, and if we have a phase error above that, we tend to lose the ability to precisely determine distance. In some cases we are also able to disturb our angular perception, but then the phase errors have to be quite large. The beginning of the leading edge is relatively intact if the phase errors are kept below 1 kHz. But as I explained above, we have now messed up the transient, and if this occurs at high frequencies we do get a huge difference between the signal envelope and the transient waveform. The signal processing in our brain is not a very conscious process. We get used to such errors over time and will mostly filter them out as redundant information, but we cannot by default pick and choose very well what to do with this added information. In other words, here ITD and IPD will work against each other. And that is really the key with IPD: it is not a natural event that we have evolved to deal with. We do hear it quite easily, but it just sounds odd, and it does not represent a way of locating sound sources, since there are no common situations where it might give us anywhere near as much information as the timing information in a transient.

3: So if we make a perfect transient, filter it to <80Hz, and listen to it, we take away the high frequencies. This means we remove the leading edge completely and replace it with a relatively slow fade-in, as described above. We are now left with a signal envelope that looks like it could have some kind of timing information to it, but the leading edge is completely gone, so the timing information from the transient is no longer there. It is impossible to say conclusively when the signal actually starts and stops, since that is a very definition-dependent question where the goalpost is movable. So what happens if we run to the IPD rescue?
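The onset-smearing effect is easy to illustrate. Below, a numpy sketch: an ideal click is passed through a generic 80 Hz windowed-sinc low-pass (a stand-in crossover, not any specific subwoofer's filter; tap count and rates are arbitrary), and the distance from the first 10%-of-peak point to the peak is used as a crude "leading edge" length.

```python
import numpy as np

fs = 8000
n = 4096
x = np.zeros(n)
x[100] = 1.0                              # an ideal click at sample 100

# Generic 80 Hz low-pass: windowed-sinc FIR (linear phase, Hamming window).
fc = 80.0 / fs                            # normalized cutoff
taps = 1001
m = np.arange(taps) - (taps - 1) / 2
h = 2.0 * fc * np.sinc(2.0 * fc * m) * np.hamming(taps)
y = np.convolve(x, h)[:n]                 # the click as the subwoofer would render it

def rise_samples(sig):
    """Samples from the first 10%-of-peak point to the peak: a crude onset length."""
    peak = int(np.argmax(np.abs(sig)))
    first = int(np.where(np.abs(sig) >= 0.1 * np.abs(sig[peak]))[0][0])
    return peak - first

sharp = rise_samples(x)                   # 0: the click is its own leading edge
smeared = rise_samples(y)                 # over a hundred samples (> 12 ms at 8 kHz)
```

The unfiltered click reaches its peak instantly; the filtered one swells up over many milliseconds, so there is no longer a sharp event for an onset detector to latch onto.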

This is probably the most interesting part of this discussion. But there is a rather problematic issue that stands in our way. A <80Hz subwoofer asked to reproduce a transient with a relatively good amount of low frequency content in it, will typically excite two frequencies, or maybe three. We can imagine the phase being similarly distorted for all of these frequencies for one ear relative to the other one. However, if we look at how we hear single tone phase distortion, it is not very precise. It just changes slightly between hollow, maybe to one side a bit, to very hollow and all over the place, to distinctive mono again. This is because we do not have a great set of skills to sort out what to do with this information. We also need the phase difference to be significant to be able to determine that there is actually something going on with the phase to begin with. Now, if we introduce more tones, but with the same time difference, the phase difference will be different depending on which frequency we look at. The common argument here is that we can use ITD to resolve this, but ITD relies on a leading edge, which we do not have, because we have filtered it out.

I hope this helps to show why this confusion of terminology is important, and how it is the foundation to understanding what the important difference between impulses, tones and signal envelopes means for actually understanding hearing.
 
3: ... remove the leading edge completely, and replace it with a relatively slow fade-in ... It is impossible to say conclusively when the signal actually starts and stops since it is a very definition dependent question where the goal post is movable. So what happens if we run to the IPD-rescue?
The theoretical folks knew it all, and consider Fourier, even while shaving, on a daily basis.

A <80Hz subwoofer asked to reproduce a transient with a relatively good amount of low frequency content in it, will typically excite two frequencies, or maybe three.
Due to its nature as a filtered transient, it will show a continuous spectrum in the frequency domain.

We can imagine the phase being similarly distorted for all of these frequencies for one ear relative to the other one. However, if we look at how we hear single tone phase distortion, it is not very precise. ...
The hearing apparatus, combined with the remainders of my brain, isn't a Fourier analyser. I'm afraid that's not a special condition. I'm coming to like the idea of little rectifiers for the hair cells. The ear going digital, somehow. The Fourier transformation is done by binning the immediate impression, cell by cell, into (flexible, level-dependent, etc.) bands, prefiltered by the cochlea. The rest is pattern recognition. Patterns are trained and remembered all life long. Conclusion: listening in stereo is an acquired skill. Detecting a mono sub, maybe also.

From the 60s, Germany:
"Now You Listen Stereo"
"Sound Experience Stereophonics"

Remember, His Master's Voice's doggy listened in mono; is there any conclusive science on whether mammals or birds get fooled by a contemporary stereo recording?
 
This kind of signal envelope does not contain information that is useful for our ears in terms of detecting a timing reference, and therefore ITD does not apply.
As I said before, the hearing apparatus does not use the envelope to determine ITD at low frequencies. It uses the zero-crossing times after bandpass filtering by the cochlea. There is no problem detecting precise zero-crossing times for near-continuous periodic signals (or low-pass filtered impulses, for that matter).

Maybe this talk by j_j will clarify some things.
 
I understand that this might come across as semantics, but please bare with me, I want to try to shed some light on the topic to make it intuitive.

...

I hope this helps to show why this confusion of terminology is important, and how it is the foundation to understanding what the important difference between impulses, tones and signal envelopes means for actually understanding hearing.
Can we get something straight? You cannot reason your way into psychoacoustics from physics. It is an experimental field. "Experimental" means that the majority of findings (limits, thresholds, constants) have been established through controlled experiments, not because the underlying mechanism is known or can be derived from straightforward physical facts. No one wants to cut into humans, mess with the brain or inner ear, and ask the subject to report. So instead, working with cadavers and injured people, along with animal testing, has been the norm for a long time, but the majority of findings come from listening tests.

I'm typing this out because I don't have a digital copy. Quoting Brian Moore, who is one of the leading psychoacoustics researchers of the past 50 years, from An Introduction to the Psychology of Hearing, 6th edition, Department of Experimental Psychology, Cambridge, 2013. Chapter 7 on space perception.
  • ITDs range from 0 (for a sound straight ahead) to about 690us for a sound at 90 degrees azimuth (directly opposite one ear). The time difference can be calculated from the path difference between two ears... In practice, the ITD for a given azimuth does vary slightly with frequency... For a sinusoidal tone, an ITD is equivalent to a phase difference between the two ears, called an interaural phase difference (IPD). For example, for a 200Hz tone, with a period of 5000us, an ITD of 500us is equivalent to an IPD of 36 degrees (one-tenth of a cycle). For low-frequency tones, IPD provides effective and unambiguous information about the location of the sound. However, for higher-frequency sounds, IPD provides an ambiguous cue. For example, for a 4000Hz sinusoid, the period is only 250us, and an ITD of 500us would result in two whole cycles of IPD. For high frequency sounds, the auditory system has no way of determining which cycle in the left ear corresponds to a given cycle in the right ear. Ambiguities start to occur when the period of the sound is about twice the maximum possible ITD... In summary, for sinusoids, the physical cue of ILD should be most useful at high frequencies, while the cue of ITD should be most useful at low frequencies. The idea that sound localization is based on ILDs at high frequencies and ITDs at low frequencies has been called "duplex theory" and it dates back to Lord Rayleigh (1907). Although it appears to hold reasonably well for pure tones, it is not strictly accurate for complex sounds.
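The arithmetic in the quoted passage can be checked directly. A small sketch (the only numbers used are the ones Moore gives: 690 µs maximum ITD, 500 µs example ITD):

```python
MAX_ITD_S = 690e-6  # ~maximum human ITD, from the quoted passage

def ipd_cycles(f_hz, itd_s):
    """Interaural phase difference, in cycles, for a sinusoid."""
    return f_hz * itd_s

# Moore's worked examples:
print(ipd_cycles(200.0, 500e-6))   # 0.1 -> one-tenth of a cycle, 36 degrees
print(ipd_cycles(4000.0, 500e-6))  # 2.0 -> two whole cycles: ambiguous

# Ambiguity begins when the period is about twice the maximum ITD:
f_ambiguous_hz = 1.0 / (2.0 * MAX_ITD_S)
print(round(f_ambiguous_hz))       # ~725 Hz
```

So by Moore's own criterion, pure-tone IPD only becomes ambiguous above roughly 725 Hz, which is well above the subwoofer range discussed in this thread.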
What accounts for localization ability with complex sounds? Several additional mechanisms of the ear; it's not just one cue that's used.
  • Head movement to fix source location through multiple exposures and introducing additional, varied delays.
  • Binaural beats, where phase differences cause lateralization of low frequency images (the image inside the head shifts left or right).
  • Frequency-specific ILD and ITD cues, where inconsistencies are apparently discarded. Sounds should have consistent ITDs and ILDs across frequencies.
  • The above is related to echo suppression through masking, which allows the envelope to be distinguished.
  • Time-intensity trading, between ITD and ILD, although there is no direct equivalency in terms of microseconds per dB.
However, when Moore writes about low frequencies, he generally means 1kHz and below, down to around 250Hz. His research, like that of most psychoacousticians, concerns medicine and communication (being able to effectively process important informational sounds like speech), and low frequencies are not very relevant there. So unfortunately there is very limited testing in that range. The theory on its own does not take us very far and has little predictive power once we wander outside of observed findings.

When you write that low frequencies have no leading edge, and therefore no mechanism for localization, you aren't taking into account what is already known and experimentally established about hearing. Again, at low frequencies, measured neural pulse trains synchronize with the acoustic waveform. This is known as phase-locking. Localization does not rely solely on leading-edge detection, which is most relevant at high frequencies.
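An illustrative sketch of why no leading edge is needed (this is not a model of the auditory system, just a signal-level demonstration; the 80 Hz tone, 500 µs delay and sample rate are all made-up numbers): with a steady low-frequency tone, the ongoing waveform itself still carries interaural timing information, and a simple cross-correlation between the two "ear" signals recovers the delay with no onset present at all. This is the kind of ongoing-phase information that phase-locked firing makes available.

```python
import math

fs = 48000   # sample rate, Hz
f = 80.0     # steady tone frequency, Hz
itd = 500e-6 # true interchannel delay, seconds
n = 2400     # 50 ms of signal, onset region excluded from the search below

left = [math.sin(2 * math.pi * f * t / fs) for t in range(n)]
right = [math.sin(2 * math.pi * f * (t / fs - itd)) for t in range(n)]

# Search lags up to +/- half a period, where a pure tone is unambiguous
# (at 80 Hz the half period is 6.25 ms, far more than any human ITD).
half_period = int(fs / (2 * f))  # 300 samples
best_lag, best_score = 0, float("-inf")
for lag in range(-half_period, half_period + 1):
    score = sum(left[t] * right[t + lag]
                for t in range(half_period, n - half_period))
    if score > best_score:
        best_lag, best_score = lag, score

print(best_lag / fs)  # 0.0005 -> the 500 us delay, recovered from steady state
```

The point is only that the timing cue survives in the steady-state waveform; how the brain actually exploits it is the experimental question discussed above.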

There has been research on minimum audible angle (MAA). This is the standard chart used in textbooks on hearing, from A. W. Mills, "On the Minimum Audible Angle", Journal of the Acoustical Society of America, 1958.

[Attached chart: minimum audible angle vs. frequency, four reference azimuths (Mills, 1958)]


Top to bottom, the traces are the 0, 30, 60 and 75 degree reference directions. They show how far a sound source has to shift before the change in position is detected. The lack of data below 200Hz doesn't mean the ability collapses to nothing; it's just difficult to characterize. One could also ask questions about the experimental setup and whether it biased the higher-frequency results.

Albert Bregman, whose book Auditory Scene Analysis was published in 1990, showed that additional higher-level cognitive functions organize the perception of sounds. In his language, within an auditory scene (roughly, the soundstage, though he means the whole of what is heard, not just speaker/recording features) our hearing uses stream segregation to differentiate between events and group them into meaningful patterns. For spatial hearing this includes distinguishing reverberant from direct sound, finely located from enveloping sound, and proximate from distant sound. Low bass, with its specific range of effects, sits within this list of sound percepts.

The study posted by @DanielT is very useful because it is recent experimental confirmation that there is more to learn.
 