
Bass and subwoofers

tested, and of those, only 5-6 turned out to be any good. David told me the problem was the microphone technique they used. If I were to give a subjective impression of the sound of those recordings, the ones that turned out to be "good" were the ones with the lowest ratio of direct to reflected sound, the ones that captured the reverberant sound of the space the most. One of those recordings I had heard when it was being made: a saxophone and organ in a church. What I heard in real life and what I heard in the final recording never jibed. I would guess that our brain's ability to ignore these reflections (the precedence effect) made it sound more "direct" in real life than in the recording, where such cues are obfuscated.
Interesting notes on the types of recordings--I wonder if certain techniques like Blumlein or the typical approach of certain labels like Yarlung might be more conducive to this effect.

Yes, there's certainly the visual aspect in directing attention, but also the problem of presenting diffuse field reverberation from the frontal direction...
 
We found that virtually no current music had stereo bass of any kind, meaning there is no phase variation in the low frequencies. You can point to classical or live natural music, but that isn't what most people listen to, and it is not the majority of music right now. So ultimately I found it not even useful for new music. With older music, it was a mixed bag: sometimes it was there and sometimes it wasn't. With the music halls (which, again, are a very small portion of the music people listen to now), it was only ever audible with music supplied to me by David, and it was extremely subtle, easily disturbed, and not obviously better.
I run a 2.4 setup with stereo subwoofer pairs in my office (one front and one rear sub per side). I also run a display with 2 spectrum analyzers on it. The top is L/R split full range, and the bottom is L/R split over the subwoofer range specifically. It makes it immediately visible whether the music has stereo bass or mono bass (clearly stereo bass in the pic below).

I listen to mostly Jazz, Jazz fusion stuff, downtempo electronica, and a smattering of Americana/folk. Most of the music that goes through my system visibly has stereo bass. I admittedly do not listen to pop music, but when people say almost nothing has stereo bass I am flabbergasted, because I can see it in almost everything I listen to.

Note: the red outline and descriptive text are not part of the display; they were just added to help explain the display layout/use

cava_display.jpeg
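For anyone without such a display, here is a rough offline sketch of the same basic check. This is not the software behind the display in the screenshot; it is my own illustration, assuming a stereo file readable by the soundfile library and an arbitrary 120 Hz crossover. It band-limits both channels to the sub range and compares side (L-R) energy to mid (L+R) energy: a side level many tens of dB below the mid level means the bass is effectively mono.

```python
import numpy as np
import soundfile as sf                      # assumed dependency
from scipy.signal import butter, sosfiltfilt

def sub_bass_side_to_mid_db(path, crossover_hz=120.0):
    """Return the side-to-mid level ratio (dB) below the assumed crossover."""
    x, fs = sf.read(path)                   # x: (samples, channels)
    if x.ndim != 2 or x.shape[1] != 2:
        raise ValueError("expected a 2-channel (stereo) file")
    sos = butter(4, crossover_hz, btype="low", fs=fs, output="sos")
    low = sosfiltfilt(sos, x, axis=0)       # keep only the subwoofer band
    mid = low[:, 0] + low[:, 1]             # L+R (what a mono sub feed sees)
    side = low[:, 0] - low[:, 1]            # L-R (the "stereo-ness" of the bass)
    rms = lambda s: np.sqrt(np.mean(s ** 2)) + 1e-12
    return 20.0 * np.log10(rms(side) / rms(mid))

# e.g. sub_bass_side_to_mid_db("some_track.flac")
# values around -40 dB or lower indicate essentially mono bass
```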
 
It is an older article, so of course it references older material. It was inspired by a year's worth of experimentation with David specifically to hear this effect, as I had been unfamiliar with the concept before talking with him. However, I walked away more unimpressed than anything.
Thanks for chiming in and adding context. Makes me glad I posted your article, which I did because I wanted to help weave together this story as it's unfolded in various places on the internet.
 
1) Welti's approach requires sending a mono signal to all subwoofers. Yes, a mid-wall placement is consistent with one of his setups, but it must receive a mono signal to work. Sound Field Management is contingent on the input to the DSP being mono. Same for MSO. Same for the Geddes approach. So they all take this stereo bass signal and sum it to mono.
2) My comment is specific to current as in pop music, not classical or audiophile music.

Good for you. Now, since you have claimed that I do not understand either of the two approaches that I've rejected, perhaps you can actually tell me what my position is?

Can you do that?
 
Yes, there's certainly the visual aspect in directing attention, but also the problem of presenting diffuse field reverberation from the frontal direction...

*ding* give that man a cigar! :) People confuse phase shift with diffusion all the time, it seems.
 
AE is undoubtedly a fundamental human percept, recognisable even by very young and very old naive listeners, discussed e.g. at the Tonmeistertagung conference last November.

Expectations about reproduction of potential envelopment appear to be a topic where monitoring and recreational listening differ markedly. Recreational sound seems generally to take the pragmatic approach that AE is difficult, if not impossible, to control in a domestic setting. Maybe that is why most effort is directed at an even LF frequency response across the listening space, thereby at least ensuring some other form of elation.

Many studies on LF reproduction, and on the use of subwoofers, were based on old “stereo” content where LF collapse was more prevalent, especially in pop music. New pop collapsers may still be found, despite there no longer being any technical reason for doing so. It is sad, however, if such practice is continued merely to increase the loudness a bit more, or because of fora like this where old dogma is uncritically repeated.

Do yourself a favour and listen under controlled conditions; see earlier posts about the requirements, including test tracks and E/L magnitude vs. frequency graphs. Chances are, it will be an auditory revelation that follows you literally to the end.
 
Thanks. I am using Auditory Envelopment (AE) solely for the sensation elicited by low-frequency inter-aural fluctuations, so it is an elementary, perceptual definition, narrower than the Listener Envelopment (LEV) used in acoustical engineering. Over decades of research into reverb and spatialisation, we found AE to be one of the most important percepts in audio production and reproduction; to the extent that listening testers started craving that particular quality.

“Distortion” in this context is therefore any influence during distribution or reproduction that changes potential AE, i.e. prevents it from reaching a listener. Overall frequency response is not so important, provided it is reasonably smooth and extends down to 50 Hz, individually per channel. In a small room, LF stasis between 40 and 700 Hz caused by room modes (see earlier comments on movement) is generally the main enemy. A large room, however, may blur or even replace potential AE with a certain time-domain signature of its own.
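For context on where those modes fall, the standing-wave frequencies of a rectangular room follow directly from its dimensions. As a rough worked example (my own illustration, assuming c ≈ 343 m/s and a hypothetical 4.3 m room dimension), the first axial mode lands right at the 40 Hz lower edge mentioned above:

```latex
f_{n_x,n_y,n_z} = \frac{c}{2}\sqrt{\left(\frac{n_x}{L_x}\right)^{2}
  + \left(\frac{n_y}{L_y}\right)^{2}
  + \left(\frac{n_z}{L_z}\right)^{2}},
\qquad
f_{1,0,0} = \frac{343\ \mathrm{m/s}}{2 \times 4.3\ \mathrm{m}} \approx 40\ \mathrm{Hz}
```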

Considering reproduction, deliberate acoustical treatment can help reduce AE-destructive modes of a (small) room. It can also be effective when the listening room is imposing its own time-domain signature, washing out AE contrasts in the content. Control of loudspeaker directivity below 700 Hz, or moving closer to the loudspeakers when monitoring, may also help.
Is there any specific published research on your comments and recommendations?

Why would being closer to the loudspeakers help given the wavelengths at play? What wavelengths are even of interest in AE?

You've also frequently said that humans can localize down to DC. Not just perceive (the infrasound research I've seen confirms this), but localize. Can you provide a paper?

Many studies on LF reproduction, and on the use of subwoofers, were based on old “stereo” content where LF collapse was more prevalent, especially in pop music. New pop collapsers may still be found, despite there no longer being any technical reason for doing so. It is sad, however, if such practice is continued merely to increase the loudness a bit more, or because of fora like this where old dogma is uncritically repeated.
By "technical reasons" for "LF collapse" do you mean mastering techniques for bass in vinyl?

What kind of production environment are you thinking of in your comments? A control room? If so, are your comments relevant for 2-channel stereo, or do you have immersive or multichannel in mind?

This really is a crucial question for production that I really want to understand.

Thanks.
 
Is there any specific published research on your comments and recommendations?

Why would being closer to the loudspeakers help given the wavelengths at play? What wavelengths are even of interest in AE?

You've also frequently said that humans can localize down to DC. Not just perceive (the infrasound research I've seen confirms this), but localize. Can you provide a paper?

By "technical reasons" for "LF collapse" do you mean mastering techniques for bass in vinyl?

What kind of production environment are you thinking of in your comments? A control room? If so, are your comments relevant for 2-channel stereo, or do you have immersive or multichannel in mind?

Good questions. An explosion, thunder or other natural sources can provide outdoor opportunities to notice direction of even infrasound; or a subwoofer in an open field may be used in controlled experiments. Either way, the azimuth of a VLF sound source is easy to tell, also for a child, and that superpower is not lost when we step into a room.

I can't share findings related to reverb research, but a paper covering most of your AE questions is included in a proceedings journal, with publication expected last month. Must be close now.

Wavelengths conducive to the sensation of AE generally range from 0.5 to 8.5 m in air. The reason why nearfield or even ultra-nearfield listening may be relevant when monitoring for potential AE (i.e. AE latent in the content) is to reduce LF interference from the listening room. In a fine room, however, potential AE can be experienced at a distance of 3 m or longer. Stereo is capable of this, but *good* 3D reproduction even more so.
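As a rough cross-check of that range, converting wavelength to frequency with an assumed speed of sound of about 343 m/s gives essentially the 40-700 Hz band discussed earlier:

```latex
f = \frac{c}{\lambda}, \qquad
\frac{343\ \mathrm{m/s}}{8.5\ \mathrm{m}} \approx 40\ \mathrm{Hz}, \qquad
\frac{343\ \mathrm{m/s}}{0.5\ \mathrm{m}} \approx 686\ \mathrm{Hz}
```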

You are right that vinyl is an example of calculated LF collapse, though pick-up rumble itself actually is not in phase. Collapse can also happen in distribution (lossy codec, format conversion etc.) or in reproduction (bass management, listening room LF time-domain distortion etc.)
 
An explosion, thunder or other natural sources can provide outdoor opportunities to notice direction of even infrasound; or a subwoofer in an open field may be used in controlled experiments. Either way, the azimuth of a VLF sound source is easy to tell, also for a child, and that superpower is not lost when we step into a room.
It is confounded by the room, but you allude to that later:
Wavelengths conducive to the sensation of AE generally range from 0.5 to 8.5 m in air. The reason why nearfield or even ultra-nearfield listening may be relevant when monitoring for potential AE (i.e. AE latent in the content) is to reduce LF interference from the listening room.
 
Good questions. An explosion, thunder or other natural sources can provide outdoor opportunities to notice direction of even infrasound; or a subwoofer in an open field may be used in controlled experiments. Either way, the azimuth of a VLF sound source is easy to tell, also for a child, and that superpower is not lost when we step into a room.

I can't share findings related to reverb research, but a paper covering most of your AE questions is included in a proceedings journal, with publication expected last month. Must be close now.

Wavelengths conducive to the sensation of AE generally range from 0.5 to 8.5 m in air. The reason why nearfield or even ultra-nearfield listening may be relevant when monitoring for potential AE (i.e. AE latent in the content) is to reduce LF interference from the listening room. In a fine room, however, potential AE can be experienced at a distance of 3 m or longer. Stereo is capable of this, but *good* 3D reproduction even more so.

You are right that vinyl is an example of calculated LF collapse, though pick-up rumble itself actually is not in phase. Collapse can also happen in distribution (lossy codec, format conversion etc.) or in reproduction (bass management, listening room LF time-domain distortion etc.)
Is the paper you refer to in this post published yet?
 
Is the paper you refer to in this post published yet?
Surprisingly, no papers have yet been published from Tonmeistertagung 2023, which is unlike previous experience with this otherwise well-organized conference.

I am writing this on the way home from the AES convention in Madrid, where the topic was also extensively debated and exemplified with respect to monitoring. However, considering how LF time-domain coherency in reproduction has been neglected for 35+ years because of flawed papers from the '80s centered around “one sub is enough”, waiting a few more months for results on LF qualities other than punch would seem of little consequence. For now, if in doubt, please use your ears. Even a young child can appreciate this quale.
 
Thank you @youngho for gathering and distilling this information!

Many years ago when I was investigating subwoofer setup strategies for conveying the sensation of spaciousness with a wide variety of recordings, I read several of David Griesinger's papers that touched on the subject. @youngho's post presents additional sources of arguably relevant information; the bolding for emphasis is mine:

Connecting the dots, so to speak, resulted in a subwoofer setup strategy that includes placing subwoofers to the left and right of the listening area. Given that relatively few recordings have true stereo separation well down into the subwoofer region (which would be ideal if done well), and given that an interaural phase difference and/or lateral pressure gradient is a contributor to the sensation of spaciousness, I suggest synthesizing this interaural phase difference/lateral pressure gradient by introducing a 90 degree phase differential between the subs on the left and right sides of the room. The only special equipment called for is a sufficient range of phase adjustability on the subwoofers themselves.
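For what it's worth, the effect of that 90-degree offset can be sketched offline before touching any hardware. This is only an illustration of the idea; the 80 Hz crossover and the use of a Hilbert transform as an ideal broadband 90-degree shifter are my assumptions, not the actual setup described above, which relies on nothing more than the subs' own phase controls.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def quadrature_sub_feeds(left, right, fs, crossover_hz=80.0):
    """Derive left/right sub feeds with a ~90-degree phase offset between them."""
    sos = butter(4, crossover_hz, btype="low", fs=fs, output="sos")
    sub_l = sosfiltfilt(sos, left)
    sub_r = sosfiltfilt(sos, right)
    # The imaginary part of the analytic signal is the input with every
    # frequency component shifted by 90 degrees, so even mono bass yields
    # two decorrelated sub feeds.
    sub_r_shifted = np.imag(hilbert(sub_r))
    return sub_l, sub_r_shifted
```

Even when the source bass is mono (left and right identical below the crossover), the two feeds now differ in phase by roughly 90 degrees across the sub band, which is the interaural difference the strategy is trying to synthesize.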
If you crank some amount of fixed phase difference into the setup, are you not making a deliberate alteration to what is inherently in the recording you are listening to? And doesn't that take a step back from trying to honor the performance and the way it was recorded? It seems to me that this takes our listening setup in the wrong direction. Or am I missing something here?
 
If you crank some amount of fixed phase difference into the setup, are you not making a deliberate alteration to what is inherently in the recording you are listening to? And doesn't that take a step back from trying to honor the performance and the way it was recorded? It seems to me that this takes our listening setup in the wrong direction. Or am I missing something here?

Except that almost all recordings mono the bass, not keeping it "as it was" to start with. Now, some variation in phase at low frequencies might be a good idea, or not, but you DO NOT KNOW if there was any in the original master, because most often it's been removed, and the bass is mono.

There are many issues involved in why this started to happen (LP cutting), continued (woofer excursions, etc), and just lives on. The discussion and various ideas remain, with much unhelpful name calling (don't mean here!) and tossing about of metrics that "do not mean what you think they mean". Yeah, it's a bit frustrating.
 
Except that almost all recordings mono the bass, not keeping it "as it was" to start with. Now, some variation in phase at low frequencies might be a good idea, or not, but you DO NOT KNOW if there was any in the original master, because most often it's been removed, and the bass is mono.
I just can't agree with this, unless you are talking about popular (as in pop/country) music genres. Do you have a citation of any sort? Almost everything I listen to (jazz from every era, downtempo, folk/Americana) has stereo bass that is visible on my audio display and has definitely not been mixed down to mono. I hear people say this, but everything I listen to is clearly otherwise (and that's not even relying on my ears).
 
If you crank some amount of fixed phase difference into the setup, are you not making a deliberate alteration to what is inherently in the recording you are listening to? And doesn't that take a step back from trying to honor the performance and the way it was recorded? It seems to me that this takes our listening setup in the wrong direction. Or am I missing something here?

It is an alteration to the playback of the recording, but it may result in a perceptually closer approximation of the performance. To the extent that introducing a phase differential in the subwoofer region reduces the small room signature of the playback room, it may "unmask" the immersion/envelopment signature on the recording. Or it may synthesize the illusion of being in a larger space than the playback room, which could arguably be a step in the right direction, assuming the "right direction" is the performance rather than the recording when there is a discrepancy between the two. I have heard setups where imo the phase quadrature configuration "worked", and I have heard setups where imo it did not. The good news is, it's easily reversible if it makes no worthwhile net improvement and/or if it goes against one's priorities.

And I can understand a philosophical reluctance to deliberately alter what's on the recording, but as @j_j points out the recording is probably not faithful to the performance in this area to begin with. And then we have informed listeners (like Floyd Toole) who use "tasteful upmixing" to enhance the spatial quality of two-channel playback by making a deliberate alteration to the in-room reflection field over much more of the spectrum.

My understanding is that David Griesinger's original Lexicon processor made a lot more (and a lot more sophisticated) deliberate alterations in the bass region than this simplistic technique does, and by all accounts that I am aware of the results were superb.
 
I just can't agree with this, unless you are talking about popular (as in pop/country) music genres. Do you have a citation of any sort? Almost everything I listen to (jazz from every era, downtempo, folk/Americana) has stereo bass that is visible on my audio display and has definitely not been mixed down to mono. I hear people say this, but everything I listen to is clearly otherwise (and that's not even relying on my ears).
Well, I've measured a lot of recordings. I won't say "all", but most are either mono bass or sometimes contain one-sided bass. Having independent sound below 100 Hz is rare, for several reasons.

Quadrature bass is not the thing I would do, either. Something more complex is called for. Exactly what is not always clear; there are so few bands to measure. Cross-correlation alone is not often done.
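As an illustration only (my guess at the kind of measurement being described here, not necessarily how those recordings were actually analyzed), a sketch like the one below reports the normalized L/R correlation coefficient in a few low-frequency octave bands. Values pinned near +1.0 suggest mono bass; a markedly lower value in a band suggests genuine interchannel differences (or one-sided bass).

```python
import numpy as np
import soundfile as sf                      # assumed dependency
from scipy.signal import butter, sosfiltfilt

def low_band_correlations(path, centers_hz=(31.5, 63.0, 125.0)):
    """L/R correlation coefficient in one-octave bands around the given centers."""
    x, fs = sf.read(path)                   # x: (samples, 2)
    results = {}
    for fc in centers_hz:
        edges = (fc / np.sqrt(2.0), fc * np.sqrt(2.0))   # one-octave band edges
        sos = butter(2, edges, btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x, axis=0)
        results[fc] = float(np.corrcoef(band[:, 0], band[:, 1])[0, 1])
    return results                          # {band center in Hz: correlation}
```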
 
Quadrature bass is not the thing I would do, either. Something more complex is called for. Exactly what is not always clear; there are so few bands to measure. Cross-correlation alone is not often done.

Is there a configuration you would suggest that does not require processing beyond the controls normally available with subwoofers (level, lowpass frequency, phase, hopefully some EQ)?
 
Is there a configuration you would suggest that does not require processing beyond the controls normally available with subwoofers (level, lowpass frequency, phase, hopefully some EQ)?
Regrettably, nothing close to that. It's hard to even explain unless you speak serious math.
 
Regrettably, nothing close to that. It's hard to even explain unless you speak serious math.

Thanks. It took me three tries to pass introductory Calculus, and that was forty-five years ago so... nope!
 
It is an alteration to the playback of the recording, but it may result in a perceptually closer approximation of the performance. To the extent that introducing a phase differential in the subwoofer region reduces the small room signature of the playback room, it may "unmask" the immersion/envelopment signature on the recording. Or it may synthesize the illusion of being in a larger space than the playback room, which could arguably be a step in the right direction, assuming the "right direction" is the performance rather than the recording when there is a discrepancy between the two. I have heard setups where imo the phase quadrature configuration "worked", and I have heard setups where imo it did not. The good news is, it's easily reversible if it makes no worthwhile net improvement and/or if it goes against one's priorities.

And I can understand a philosophical reluctance to deliberately alter what's on the recording, but as @j_j points out the recording is probably not faithful to the performance in this area to begin with. And then we have informed listeners (like Floyd Toole) who use "tasteful upmixing" to enhance the spatial quality of two-channel playback by making a deliberate alteration to the in-room reflection field over much more of the spectrum.

My understanding is that David Griesinger's original Lexicon processor made a lot more (and a lot more sophisticated) deliberate alterations in the bass region than this simplistic technique does, and by all accounts that I am aware of the results were superb.
I understand what you are saying... this adjustment is intended to correct the coloring caused by the listening environment. Fixing the listening room is, in my experience, often difficult, and from a time-invested-versus-value-returned standpoint it gets into that slippery slope of diminishing returns. But if you can correct for a room problem by turning a couple of knobs, that's a time investment worth trying for sure. The whole exercise is complicated by SAF problems as well, so an electronic solution to these subtle things seems worth trying, especially since it doesn't contribute to spousal discord. When I get a pair of subs that have infinitely adjustable phase knobs I will surely give it a try.
 