
Locating bass <80Hz?

My personal opinion is that one could most likely achieve this without "ever more complicated, and costly tech" simply by listening in a larger room with more effective bass absorption, at relatively closer proximity to transducers, listening position further from boundaries, and with two bass sources positioned closer to 90 degrees azimuth from median plane driven by non-mono signal. For musical content with correlated bass, one should get the benefits of mode cancellation, while musical content with relatively uncorrelated bass might provide possible perceptual benefit.
Dr. Griesinger took his work on this and added a processing option in Lexicon processors to try and increase the spaciousness of bass, called bass enhance. I have used this for many years in several different multichannel music systems. The effect is a little difficult to describe. In a system with stereo subwoofers to the sides of the listener, turning this on changed the feel of the space. The side walls seemed to melt away and the impression of the acoustic space increased. The bass seemed to leave your head too. Kind of like a subtler form of having 2 side and 2 rear channels fed mono vs them being fed decorrelated material.
 
We do have the ability to hear IID at low frequencies easily, but we need the difference to be significant. Headphones can do this. We have the same issue with phase differences. It does not give us any real angular resolution, especially not at low frequencies, but we can also easily determine phase differences with headphones. We can do both in cases where there are significant local nulls and peaks in a room, but it is not the same as directional sound.
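To put rough numbers on the point above, here is a minimal sketch using the common Woodworth spherical-head approximation (the head radius and azimuth here are illustrative assumptions, not measured values):

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Approximate interaural time difference via the Woodworth spherical-head model."""
    theta = math.radians(azimuth_deg)
    return head_radius_m / c * (theta + math.sin(theta))

# A source at 90 degrees azimuth gives roughly the maximum ITD:
itd = woodworth_itd(90)        # about 0.66 ms
# At 40 Hz, that ITD corresponds to only a small interaural phase shift:
phase_deg = 360 * 40 * itd     # under 10 degrees of phase
print(f"ITD: {itd*1e3:.2f} ms, phase at 40 Hz: {phase_deg:.1f} deg")
```

An ITD of roughly 0.66 ms is easily resolved over headphones, but at 40 Hz it amounts to under 10 degrees of interaural phase, which illustrates why phase differences give so little angular resolution at low frequencies.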
You talking about localization, or just the ability in general?

As noted below for the basic science. This is not the final word on the matter, since the neural pathways and signal transformations are still being mapped. We know the effects of IID and ITD are not zero below threshold, but that small contribution is hard to characterize.

[attached charts]
Edit: HRTF in the chart above relates to vertical localization.
 


And again, I can refer you to the summary work of Dr. Toole, where he notes research indicating that there are two basic types of listeners in this regard: ones who enjoy the spacious sound, and those -- typically in music production -- who prefer a drier, more detailed presentation.

Your sarcasm about 'preference' is quite misfired. No one is discounting its reality.
I tried to be a bit ironic, and that actually misfired due to the language barrier. I do know what you and others are talking about, and I understand your point(s) pretty well. I simply don't agree with the notion that the discussion is relevant in the context given by the original posting.

Reiterated: at best you apparently want to align some observation A (sub location) with observation B (Griesinger/Lund) because both involve "subwoofers", without having a mental model of either A or B. Maybe you're trying to make something up in this regard, but for the time being there's, in my book, nothing that would remotely fit.

Frankly, ... you know it yourselves. All this terminology on ILD, ITD, ABC, XYZ in the papers -- a read for weeks, and no land in sight. May I argue that we -- maybe you -- are laymen, just over-engaged amateurs desperately trying to make sense of "what one knows". That's how I read it.

I took an engineering stance (whilst not remotely being an engineer), and tried to solve the problem of the question given the technology at hand -- today. See: post

When I look at Griesinger's and Lund's work, there's still a gap to bridge before applicability, to say the least -- a gap as wide as the distance to the moon. Don't you agree? C'mon, it stares you right in the face. That's what my "brain" feels.
 
Dr. Griesinger took his work on this and added a processing option in Lexicon processors ... The side walls seemed to melt away ... a subtler form on having 2 side and 2 rear channels fed mono vs them being fed decorrelated material.
So it is a synthetic simulation of space using subwoofers in a multichannel setup. It has nothing -- nil -- to do with the original recording or the creator's intentions. Like an aural enhancer or a second-harmonic generator? Would you mind connecting this to the original question stated in the headline?

Btw: did you evaluate it blind? And may we evaluate measurements of yours -- you know, frequency response, the claimed decorrelation, etc.?
 
If you read Griesinger’s work, he points out that localization is difficult with sine waves but considerably easier with filtered noise, and gives the mechanism for why that is.

Which is exactly what the original post was about.
 
If you read Griesinger’s work, he points out that localization is difficult with sine waves but considerably easier with filtered noise, and gives the mechanism for why that is.

Which is exactly what the original post was about.
First, the posters here didn't refer to that directly, nor would that have been useful. You mean this:
=> note

Let me quote:
"In free field we detect the azimuth of sounds at low frequencies by detecting the time differences between the zero-crossings of the pressure waveform at the two ears. Human perception is particularly sensitive to sounds with sharp onsets."

Sharp onset equals high frequency content.

"The second reason is that standing waves preserve the original sound direction information if you average over enough of them. Low frequency noise or a pulse will excite many room modes all at once, and the time differences between the ears will correspond to the source azimuth."

a) Could you explain how that would be possible, technically? Reposition the speaker to the opposing wall: same pattern, different location.
b) Maximising the number of room modes -- well, that's basically equalizing the steady-state amplitude response over frequency.

"Room impression depends on the detection of sound motion in the reverberant component of a musical sound. We detect as room impression the beautiful shimmer to the reverberation once the direct sound has gone by."

Agreed, but that's amplitude variation over time, not exactly phase or time of arrival.

And so forth. I don't want to pick on him. What do you think?
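For what it's worth, the zero-crossing mechanism being debated can be sketched numerically. This toy example (sample rate, tone frequency, and delay are all arbitrary assumptions) delays one channel relative to the other and recovers the delay by comparing zero-crossing times:

```python
import numpy as np

fs = 48_000                 # sample rate (Hz), arbitrary choice
f = 47.0                    # low-frequency tone (Hz)
delay_s = 0.45e-3           # hypothetical interaural delay
t = np.arange(0, 0.2, 1 / fs)
left = np.sin(2 * np.pi * f * t)
right = np.sin(2 * np.pi * f * (t - delay_s))

def first_rising_zero(x):
    """Index of the first negative-to-positive zero crossing."""
    return np.where((x[:-1] < 0) & (x[1:] >= 0))[0][0]

# Compare corresponding zero crossings well after signal onset:
start = int(0.05 * fs)
est = (first_rising_zero(right[start:]) - first_rising_zero(left[start:])) / fs
print(f"estimated delay: {est*1e3:.2f} ms")   # close to the true 0.45 ms
```

With a steady sine this only works because we know which crossings correspond between the ears; with real signals it is the onset and envelope that resolve that ambiguity, which seems to be the point of the "sharp onsets" and "filtered noise" remarks.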
 
So it is a synthetic simulation of space using subwoofers in a multichannel setup. It has nothing -- nil -- to do with the original recording or the creator's intentions.
Recordings are completely artificial productions. We are to the recording what the recording is to the event. Audio at the moment is a series of compromises.

Griesinger's later work is specifically on concert music and characterization of halls. The bass enhancer is meant to bring one closer to the envelopment that is experienced in good halls, in the good seats. He even proposed the limit of localization distance (LLD) measurement, which he found valid even for small rooms, to do with the auditory sense of proximity. And by the way, all modern concert halls use very subtly integrated PA systems, which Griesinger also worked on, to ensure even coverage and reverberation.

Halls are large, acoustically speaking. This means that bass wavelengths fully extend into open air before hitting a wall, unlike small rooms. One of the key differences from small rooms this causes is temporary, shifting buildups of bass that pass through the hall. In small rooms, this does not occur and instead there are steady-state modes. Small rooms artificially limit VLF envelopment, in that sense.

The chair spinning of the OP simulates the interaural level and phase fluctuation that causes envelopment.
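The small-room/hall distinction can be put in rough numbers via the Schroeder frequency, below which discrete room modes dominate over statistical reverberation. The RT60 and volume figures here are illustrative guesses, not measurements:

```python
import math

def schroeder_frequency(rt60_s, volume_m3):
    """Approximate transition frequency between modal and statistical room behavior:
    f_s ~ 2000 * sqrt(RT60 / V)."""
    return 2000 * math.sqrt(rt60_s / volume_m3)

wavelength_40hz = 343 / 40                       # about 8.6 m at 40 Hz

# Small listening room vs. a concert hall (illustrative numbers):
small_room = schroeder_frequency(0.4, 56)        # roughly 170 Hz
hall = schroeder_frequency(2.0, 20_000)          # roughly 20 Hz
```

In the hall, 40 Hz sits at or above the transition, so bass largely propagates as traveling waves; in the small room it sits far below it, in steady-state modal territory, consistent with the description above.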
 
@ChrisG If you don't mind, can you take a few more measurements? An RTA of your room noise, save the trace, and then an RTA of music. Infinite averages and no smoothing in both cases. And then a tight, slow-moving MMM of your listening position with pink noise. Same settings. I would set the speakers to the usual volume where you find the sub annoying.
 
Griesinger's later work is specifically on concert music and characterization of halls. The bass enhancer is meant to bring one closer to the envelopment ...
But still it has nothing to do with the record creator's intentions. It's an addition very much like the aural enhancer and other tools to alter a record's sound for some 'better'. Finally, I doubt Griesinger's reasoning as discussed briefly in my post #126.
Sound is physical reality, and as such rules perception. In post #95 I gave some hints--that by the way I developed using own experimentation just that evening for the sake of being helpful--from an engineering perspective. You may want to theorize and generalize ad ultimo. Thin air.
 
But still it has nothing to do with the record creator's intentions. It's an addition very much like the aural enhancer and other tools to alter a record's sound for some 'better'. Finally, I doubt Griesinger's reasoning as discussed briefly in my post #126.
Sound is physical reality, and as such rules perception. In post #95 I gave some hints--that by the way I developed using own experimentation just that evening for the sake of being helpful--from an engineering perspective. You may want to theorize and generalize ad ultimo. Thin air.
About the creator's intention:
I'm sure no baritone wants his mouth to appear as if it was a cave the size of Son Doong.

Because that's exactly what sometimes happens when chasing other goals: proportions are lost.
Any mains-plus-subs combination with the subs far from the mains that I have listened to loses these proportions big time if crossed over high.

Instruments can't show this to the same extent as a voice, although in some setups it sometimes shows up with violins.
It's only common sense.
 
But still it has nothing to do with the record creator's intentions.
You are setting up an arbitrary personal standard. Admit that you have no insight into the intentions of some distant third party. What kind of speakers and frequency response fit the intentions? What playback level? What kind of room shape and acoustic signature? None of that is in our hands. All that's present and workable is the signal data of the medium.
"In free field we detect the azimuth of sounds at low frequencies by detecting the time differences between the zero-crossings of the pressure waveform at the two ears. Human perception is particularly sensitive to sounds with sharp onsets."

Sharp onset equals high frequency content.
The most common frustrating thing in audio is that even very qualified engineers do not open books on anatomy and perception.

The context is low frequency sounds. All pressure impacting the outer ear is converted into electrical impulses. The inner ear and auditory cortex are tonotopically organized. There are specific bundles of nerves and synapses that fire when the stimulus breaches an electrochemical threshold specific to them, and carry signals to where they are eventually binaurally integrated. The stimulus is received as a whole, split into different pathways, and eventually recomposed. This allows for attention-based manipulation, which is why ear training is so important.

The paper you quote is based on a presentation, not a journal article, and Griesinger expects the audience to follow along. All that statement really refers to is ITD.
"The second reason is that standing waves preserve the original sound direction information if you average over enough of them. Low frequency noise or a pulse will excite many room modes all at once, and the time differences between the ears will correspond to the source azimuth."

a) could you explain how that would be possible, technically? Reposition the speaker to the opposing wall, same pattern, different location.
b) maximising the number of room modes, well, that's basically equalizing the steady state amplitude over frequency response.
What exactly are you asking? Griesinger is describing the propagation of pressure gradients in rooms.
"Room impression depends on the detection of sound motion in the reverberant component of a musical sound. We detect as room impression the beautiful shimmer to the reverberation once the direct sound has gone by."

Agreed, but that's amplitude variation over time, not exactly phase or time of arrival.
Reverberation is entirely about the chaotically fluctuating amplitude, relative phase and time of arrival of reflected sounds. That specific pattern of density and decay is the "room impression".
Sound is physical reality, and as such rules perception.
Yes, but you can't talk only of the physics and skip psychophysics.
 
I found this study interesting. I'm pasting a screenshot of the conclusions. There is more interesting information in that study. Last words in conclusion:
it depends :)

[attached screenshots of the study's conclusions]


Incidentally, this it depends... has been raised quite a few times in this thread by ASR members.:)
Big emphasis on the size of the room in that study.
____
This study also seems interesting. Unfortunately I can't download the pdf at the moment:

[attached screenshot]

Now I was able to get the pdf. Pasting the beginning of it; seems interesting. I'll read that report. :)
[attached screenshot]
 
Yes, but you can't talk only of the physics and skip psychophysics.
You may wonder where I'm coming from -- not from the psychoacousticians, for sure. The physics of sound is a mostly settled topic. If there are claims that implicitly contradict basic math or physics, those should come with extraordinary evidence. Not even the Harman experiment, though, was conclusive. I won't dare to call the theory weird -- but didn't I just do so?

ps: on pressure gradient detection--what makes the wave move, even a standing wave? Correct.
pps: Andrew Jones was recently accused of having no clue because he was "only" a physicist -- he didn't understand the maths, the board concluded. Go figure.

Really off.

On topic:
Do you agree, coming from whatever weird theory, that two subs would help with localization -- with being bothered by it -- compared to a single sub? Whatever the mechanism and nature of the localization. Talking about regular program material, standard stereo, not Griesinger-style pseudo-enhanced stereo. Please, let's leave those funny things aside and solve real problems.
 
I tried this in practice, and here's what I found: when the subwoofer is running, it's often not noticeable as a separate source. Instead, the bass seems to come from the main speakers. You only really notice it as a separate source when the phase or level between the satellites and the subwoofer isn't quite right.
I've also had very good experiences with high-quality drivers, for example, the Dayton Audio EPIQUE with its patented Dual-Gap MMAG driver design, or drivers with XBL² technology. They deliver clean, precise bass that you can actually hear. Subwoofers with magnet force modulation rings in the gap make the bass tight and controlled across the entire frequency range.
One thing I've noticed: connecting drivers in parallel usually avoids phase summation issues. Filtered setups, on the other hand, if not properly designed, can cause audible localisation.
So yes, theory is all very well, but in practice, the biggest differences come down to how the system is implemented.
 
Do you agree, coming from whatever weird theory, that two subs would help with localization -- with being bothered by it -- compared to a single sub? Whatever the mechanism and nature of the localization. Talking about regular program material
@youngho summarized the approach:
My personal opinion is that one could most likely achieve this without "ever more complicated, and costly tech" simply by listening in a larger room with more effective bass absorption, at relatively closer proximity to transducers, listening position further from boundaries, and with two bass sources positioned closer to 90 degrees azimuth from median plane driven by non-mono signal. For musical content with correlated bass, one should get the benefits of mode cancellation, while musical content with relatively uncorrelated bass might provide possible perceptual benefit.
Where this isn't practical or possible for some listening spaces (like mine):
  • Have mains that extend as low as possible.
  • Follow Lund's advice to set the crossover to 40Hz, or as low as is practical. Consider having bandpassed subs under mains if mains cannot handle bass that low.
  • Follow Welti's or Geddes's advice below 40Hz (or whatever the lower limit) using multiple subs.
This will work for stereo and will likely fit the bass-management matrices of multichannel formats. I don't know the best approach for Dirac ART.
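To illustrate what a 40Hz handoff looks like on paper, here is the magnitude response of a 4th-order Linkwitz-Riley lowpass, a common subwoofer crossover alignment (the choice of LR4 here is my own assumption for the sketch, not something Lund specifies):

```python
import math

def lr4_lowpass_db(f, fc=40.0):
    """Magnitude (dB) of a 4th-order Linkwitz-Riley lowpass:
    two cascaded 2nd-order Butterworth sections, |H| = 1 / (1 + (f/fc)^4)."""
    return 20 * math.log10(1.0 / (1.0 + (f / fc) ** 4))

# An LR4 sits at -6 dB at the crossover frequency, so the summed
# lowpass + highpass outputs are flat through the crossover region:
at_fc = lr4_lowpass_db(40)       # about -6 dB
octave_up = lr4_lowpass_db(80)   # about -24.6 dB, one octave above
```

With the crossover at 40Hz, sub output an octave up (80Hz) is already some 25 dB down, which is part of why a low crossover makes the sub harder to localize.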
 
I found this study interesting. I'm pasting a screenshot of the conclusions. There is more interesting information in that study. Last words in conclusion:
it depends :)

[attached screenshots of the study's conclusions]


Incidentally, this it depends... has been raised quite a few times in this thread by ASR members.:)
Big emphasis on the size of the room in that study.
____
This study also seems interesting. Unfortunately I can't download the pdf at the moment:

[attached screenshot]

Now I was able to get the pdf. Pasting the beginning of it; seems interesting. I'll read that report. :)
[attached screenshot]
Where possible, please quote more precisely instead of full pages or paragraphs.
 
Where possible, please quote more precisely instead of full pages or paragraphs.
Now that you say it, I see it myself: huge amounts of text and whole paragraphs that could be summarized, or conclusions drawn from them, as the authors of the research report "Low-frequency sound source localization as a function of closed acoustic spaces" did. :)

[attached screenshot]
Overall, the theory of low-frequency localization in closed spaces is now better
understood. Where previous work makes gross generalizations (such as nothing
under 100 or 200 Hz can be localized in any space), this research has shown that
the minimum localizable frequency is a function of the configuration of a space
(dimensions, source/listener location, absorption).
 
Hi all,

Thought this might be an interesting discussion.

I'd like to start with an observation: I think I can reliably hear my subwoofer's location.
Context: 2x Genelec 8030C, 1x Genelec SE7261A, driven from an RME UCX-II. I've used the built-in RoomEQ to implement a 4th order 60Hz lowpass on the AES feed to the subwoofer. The frequency response is good and flat, extending down to just below 20Hz in-room. I've also used the sub's built-in lowpass at 85Hz to absolutely minimise any content in the >100Hz range, where people tend to be able to locate sources very easily.

The subwoofer is a little off to the side from the centre. During music playback, its location is very audible to me, to a point where I'll often leave it switched off to reduce the distraction of it being "wrong".

To expand on my observation a little, test tones sound as you'd expect: they fill the room with no obvious origin, and if I walk around, the level comes and goes.
Playing time-varying signals (music playback, repeated tone bursts, etc) with only the subwoofer switched on, I can point to the source of the sound reliably. If I spin on my chair, the source seems to spin - as would be expected.

I've tried moving the sub to the other side of the TV, re-did the crossover etc etc - same result.

This has been a problem for me in the past, but I'd typically attributed the issue to harmonic distortion from the (lower-quality) subwoofer(s) providing me with location cues via their higher-frequency output. The SE7261A, though, is rated for low distortion, and I'm running at very moderate levels. I believe the sub is working correctly, too: no nasty noises etc, even turned up really loud. It sounds clean and flat down to VLF. I can just hear where it is.


You're welcome to tell me I'm crazy. It's a possibility I haven't yet ruled out.

If anyone else has had similar experience, or would like me to do some more testing (I could take binaural measurements to see how much of a level/phase difference there is L-R), let me know.

For what it may be worth, here has been my experience with two different systems that I have:

In my office, for which the area and volume are approximately 15.1m^2 and 56m^3, respectively, I have a pair of KEF LS60 active speakers and a SVS SB-1000 Pro subwoofer. (In that small room the LS60s really don't need a subwoofer, but I inherited it.) The subwoofer is located about 0.5m to the right of the right speaker. Using the KEF's subwoofer output and the KEF mobile app's subwoofer control, I first tried crossing over at 80Hz, but I could hear the subwoofer's location and it degraded the imaging for me. I kept lowering the crossover frequency until I could no longer hear its location, ending up at 40Hz.

In my family room, for which the area and volume are approximately 60m^2 and 354m^3, respectively, and which is open to the kitchen and other areas of the house, I have a pair of Elac Uni-Fi Reference bookshelf speakers with active crossovers and an old Velodyne HGS-18 servo-controlled subwoofer. The subwoofer is located in a cabinet under the staircase in the front right corner, 1m to 3m from the right speaker depending on where I place the speakers (I set them up optimally for my listening chair for serious listening, but otherwise place them near the front wall centered on either side of the TV). Using a miniDSP HTx for DSP, at a 100Hz crossover frequency with 8th order slopes I cannot audibly detect the subwoofer's location. Recently, I experimented with 4th order crossover slopes at a 90Hz crossover frequency and still did not audibly notice its location.

The setups are quite different and there probably isn't one parameter that explains the difference in subwoofer detectability. The SVS SB-1000 Pro has higher THD than the HGS-18 below 40Hz, but I doubt that explains the difference. Perhaps the crossover slope the KEF app is using isn't steep enough for use with higher crossover frequencies. Perhaps placement of my subwoofer in the corner in my family room also helps, but I don't know.
 
Now that you say it, I see it myself: huge amounts of text and whole paragraphs that could be summarized, or conclusions drawn from them, as the authors of the research report "Low-frequency sound source localization as a function of closed acoustic spaces" did. :)

[attached screenshot]
Overall, the theory of low-frequency localization in closed spaces is now better
understood. Where previous work makes gross generalizations (such as nothing
under 100 or 200 Hz can be localized in any space), this research has shown that
the minimum localizable frequency is a function of the configuration of a space
(dimensions, source/listener location, absorption).
I read it yesterday, quickly. It contradicts Griesinger: room reflections are deemed destructive to localization, more so with standing waves, aka room resonances. Griesinger says those resonances carry exactly the localization cues.

They say localization needs about one and a half wavelengths; Griesinger focuses on zero crossings of the pressure waveform, three of which fall within 1.5 wavelengths.

If you run the numbers, the room under investigation doesn't allow for localization below 150Hz or so at any position.
 