
Bass and subwoofers

I haven't seen this presentation by David Griesinger posted in this thread; apologies if I've overlooked it. It was uploaded on 10 November 2018, so it isn't new and may not include the latest research, but it may be useful to some. I'm not sure where the event was.
Ironically the sound is not great. There are some listening examples so you may find it worthwhile using something better than laptop speakers.
There are a couple of points where apparent audience reactions show how hard it seems to be to consistently replicate the LF effect.


In the description under the YouTube video there is a Dropbox link with various mono-to-stereo audio comparison tracks, which may help demonstrate what David is pointing towards in the presentation.

 
The discussion of a low frequency source in one channel is relevant from an artistic point of view (Bowie), but not really regarding baked-in AE.

Perceptually, AE is a declarer of (auditory) life or death. Do we experience LF movement or stasis in reproduction, and is the outcome decided by the content; and/or by something else (e.g. audio format, lossy codec, listening room, speaker setup, listener location, bass mismanagement)?

The human auditory system is extremely sensitive to even random inter-aural change (LF noise), so take the SMPTE paper finding of a low frequency limit to sensing AE (“D” tests) with a large grain of salt. Those tests only had about an octave of random change for the auditory system to lock on to, at the lowest frequencies, while fine natural recordings of music or atmosphere present LF patterns of inter-aural change.

With music, the sensation of AE quite possibly extends as low in frequency as the reproduction setup is able to support it.

For the time being, AE noise tests should therefore be interpreted like this: With a given audio format, room and system, if you are able to hear AE contrasts with LF noise samples, you certainly also are with music.
 
With music, the sensation of AE quite possibly extends as low in frequency as the reproduction setup is able to support it.


In this presentation you briefly touch on the subject of very low frequency sensing through the vagus nerve. If there's decorrelation at these frequencies, how does the auditory system integrate information that comes both from the body sensation and the two ears?

Are we particularly sensitive to time domain performance when differentiating between mono and stereo components?
 

Interesting presentation. I'm only 11 minutes into the video, but I already have a question for @Thomas Lund.

You mentioned that a microphone spacing of about 2.5 meters is required to pick up decorrelation at low frequencies. I take it this is a limitation of the microphones, and that our hearing would be able to pick it up if we were listening to the performance in person in the concert hall. For other aspects, though, beyond the sensation of "decorrelation in the low frequencies", I have often found recordings made with microphones in an X-Y stereo configuration (or single stereo microphones in OneMic recordings found on YouTube), where I guess the distance between the capsules is meant to simulate the distance between the ears, to sound more convincing when it comes to overall perceived three-dimensionality.

So to my main question:
When someone records a performance using both an X-Y stereo pair and an additional spaced pair of microphones, will our hearing still be able to take in all the information and combine the more convincing three-dimensionality of the X-Y pair with the quality of the 2.5-meter spaced pair, without the phase differences between the multiple microphones destroying or masking one quality or the other?
In other words, can we "have it all" by combining many types of microphone configurations at the same time?
 
In this presentation you briefly touch on the subject of very low frequency sensing through the vagus nerve. If there's decorrelation at these frequencies, how does the auditory system integrate information that comes both from the body sensation and the two ears?
Good question. In all likelihood, it doesn’t, considering AE [ref “C” tests in the recent study]. VLF localization, however, might be (partly) based on other pathways. Physiologically, many mechanoreceptors have been identified, of which the Pacinian corpuscle is one. The sensory system may integrate (conjecture) afferents from receptors throughout the body in VLF localization.

The (broad) Mikroforum talk is from 2018, where we were more open to the possibility of receptors (e.g. in the mesentery) driving afferent pathways of the Vagal nerves at high enough rates to be integrated with auditory inputs in the brainstem, in the sensation of AE.

Are we particularly sensitive to time domain performance when differentiating between mono and stereo component?
The auditory system is super-sensitive to inter-aural temporal changes at LF. Stereo stands a better chance than mono of generating such. 3D audio even more so, in a room.

You mentioned that a microphone spacing of about 2.5 meters is required to pick up decorrelation at low frequencies. I take it this is a limitation of the microphones, and that our hearing would be able to pick it up if we were listening to the performance in person in the concert hall.
Yes, a limitation of in-room pickup and in-room reproduction.

When someone records a performance using both an X-Y stereo pair and an additional spaced pair of microphones, will our hearing still be able to take in all the information and combine the more convincing three-dimensionality of the X-Y pair with the quality of the 2.5-meter spaced pair, without the phase differences between the multiple microphones destroying or masking one quality or the other? In other words, can we "have it all" by combining many types of microphone configurations at the same time?
I am primarily studying AE from physiological and reproduction angles. Excellent recording artists such as Jack Renner, Leslie Ann Jones, Kimio Hamasaki, Morten Lindberg, Florian Camerer etc. have discovered reliable methods of picking up and prioritising AE, imaging and localization. Check out their techniques.

2.5 meter distance between mics in acoustic recording can give a decent pressure difference at 35 Hz (1/4 wavelength), but there are valid reasons for choosing shorter distances; and even for creating stereo phantom images (deliberate correlation). In pop music, a main purpose of LF sound is “punch", and that is part of the balancing act, too.
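The 2.5 m figure follows directly from the quarter-wavelength rule of thumb. A quick sketch to check the arithmetic, assuming a speed of sound of about 343 m/s at room temperature:

```python
# Quarter-wavelength spacing for a given frequency, assuming a
# speed of sound of ~343 m/s at room temperature.
C = 343.0  # m/s

def quarter_wavelength_m(freq_hz: float) -> float:
    """One quarter of the acoustic wavelength at freq_hz, in meters."""
    return C / freq_hz / 4.0

# At 35 Hz this gives 2.45 m, matching the ~2.5 m spacing quoted above.
for f in (20, 35, 60, 100):
    print(f"{f:>3} Hz -> {quarter_wavelength_m(f):.2f} m")
```

Note how quickly the required spacing grows toward the lowest octave: at 20 Hz the quarter wavelength is already over 4 m.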
 
The auditory system is super-sensitive to inter-aural temporal changes at LF. Stereo stands a better chance than mono of generating such. 3D audio even more so, in a room.

Thank you. This matches my experience, and also my efforts to understand why, given the complexity of auditory inputs and my lack of confidence that steady-state in-room measurements tell the whole story.
 
I am primarily studying AE from physiological and reproduction angles. Excellent recording artists such as Jack Renner, Leslie Ann Jones, Kimio Hamasaki, Morten Lindberg, Florian Camerer etc. have discovered reliable methods of picking up and prioritising AE, imaging and localization. Check out their techniques.

Thank you, I will look it up!

Do you know of any specific recordings by those engineers that are exceptionally good when it comes to AE?
 
Featured in Stereophile.

 
Do you know of any specific recordings by those engineers that are exceptionally good when it comes to AE?

In most of their work, those recording artists have managed to capture AE. A good location with spaced omnis, feeding just one channel each, is often at the core, be it stereo, surround or 3D. Stereo examples of recorded AE:

Mahler’s Third, Philharmonia Orchestra conducted by Benjamin Zander (2004)
https://www.benjaminzander.org/library/mahler-symphony-no-3-2/
Whatever encoding is used in that link doesn’t destroy AE, picked up so well in the recording.

Excellent-sounding pop albums often have AE contrasts, for example DSOM, Avalon, Brothers in Arms, Buena Vista Social Club etc. Newer artists like Beyoncé and Billie Eilish have also discovered production is not only about squashing and loudness.

Regarding 3D sound, listen to a well-produced movie in a fine cinema. Room acoustics and speaker systems are controlled, and theatrical Atmos is not AE-compromised like consumer Atmos. Great film scores - like Leslie's - or atmospheres can provide an almost supernatural experience.

Or consider attending a pro audio conference where companies like Harman, Neumann, PMC and Genelec do their best to establish transparent playback of linear 3D content. It makes a huge difference.
 
In most of their work, those recording artists have managed to capture AE. A good location with spaced omnis, feeding just one channel each, is often at the core, be it stereo, surround or 3D. Stereo examples of recorded AE:

Mahler’s Third, Philharmonia Orchestra conducted by Benjamin Zander (2004)
https://www.benjaminzander.org/library/mahler-symphony-no-3-2/
Whatever encoding is used in that link doesn’t destroy AE, picked up so well in the recording.

Excellent-sounding pop albums often have AE contrasts, for example DSOM, Avalon, Brothers in Arms, Buena Vista Social Club etc. Newer artists like Beyoncé and Billie Eilish have also discovered production is not only about squashing and loudness.

..

That's true. I find there’s a lot of well-produced music available on streaming services today, covering a wide range of genres like country, indie-folk-pop, EDM, and electronic. You just have to explore a bit and not just click on the stuff that flashes on the front page of the app.
 
Whatever encoding is used in that link doesn’t destroy AE, picked up so well in the recording.
Unless different files are served to different browsers/OSs, it's 192kb/s MP3. I'd guess that encoder settings play a role in how well AE is preserved, but I know very little about lossy audio compression.

I've listened to several of the Telarc recordings available on that site (as MP3s) and they all seem quite good in terms of AE. Observation on an XY display confirms that stereo separation is preserved at low frequencies.
 

You don't want to support the modes in the listening room; you DO want to support the modes in the intended RECORDING space. There are so many things mixed together in that article that I hardly know where to start.
Like a manga book… read it back to front.
 
@Thomas Lund - what are your thoughts on a speaker's step function or impulse response?
And phase in general?

Your talk mentioned <1kHz as being time (or phase) based.
I am assuming it is really phase, but maybe those 700 synapses are really time-based?
It would be easier if the two ears were semi-connected like an interference grid.

And those questions percolate down to whether the speaker's emitted wavefront (in the time domain) should replicate what the microphone recorded as SPL in the time domain?
 
Anyway, now that I have come up with a method to check for stereo bass, I analysed some of my other CD rips. I found that none of them nulled below 100 Hz. In other words, every recording I looked at contained stereo bass.
Funny and satisfying to see this, as it's the same thing I thought to do a few years back when I had a similar disagreement with a few audio old-timers on a different forum. I took a random sample of 20 songs from my collection, mostly newer, from every genre I could think of including pop and electronica. I couldn't find more than one with an L-R sub-bass difference signal lower than -20 dBFS; the other 19 samples were all higher. And -24 is still significant, so I could say I got the same 100% result as you. :)

[Attached image: StereoBassEverywhere.jpg]
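The null test described here can be sketched in a few lines. This is a minimal illustration, not anyone's actual tooling: it low-passes both channels at 100 Hz and reports the RMS level of the L-R difference in dB relative to full scale. The synthetic sine signals stand in for real channel data:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def lf_difference_dbfs(left, right, fs, cutoff=100.0):
    """RMS level (dB re full scale = 1.0) of the L-R difference
    signal after low-passing both channels at `cutoff` Hz."""
    sos = butter(4, cutoff, btype="low", fs=fs, output="sos")
    diff = sosfilt(sos, np.asarray(left) - np.asarray(right))
    rms = np.sqrt(np.mean(diff ** 2))
    return 20 * np.log10(max(rms, 1e-12))

# Synthetic check: identical (mono) bass nulls completely, while a
# small inter-channel phase offset leaves a clear residue.
fs = 44100
t = np.arange(2 * fs) / fs
mono = 0.5 * np.sin(2 * np.pi * 50 * t)
shifted = 0.5 * np.sin(2 * np.pi * 50 * t + 0.2)  # 0.2 rad offset
print(lf_difference_dbfs(mono, mono, fs))     # deep null (mono bass)
print(lf_difference_dbfs(mono, shifted, fs))  # roughly -23 dBFS
```

Tools like DeltaWave do far more (drift correction, alignment, windowed analysis); this only illustrates the principle behind the L-R null.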
 
I looked at 26 randomly chosen songs from all types of genres. 20 of them contained stereo bass, and many of them had pretty large deviations between the channels when looking at the waveforms in real time. I was a bit surprised by this, as I was expecting most of the tracks to have mono bass, since many of them were common pop and rock songs.
 
I looked at 26 randomly chosen songs from all types of genres. 20 of them contained stereo bass, and many of them had pretty large deviations between the channels when looking at the waveforms in real time. I was a bit surprised by this, as I was expecting most of the tracks to have mono bass, since many of them were common pop and rock songs.

It's good this is happening. I did quite a bit of analysis on a lot of early CD tracks, and a surprising number were mono'ed bass.
 
It's good this is happening. I did quite a bit of analysis on a lot of early CD tracks, and a surprising number were mono'ed bass.

These are the tracks I tested. Some tracks had up to 20 dB deviations between the left and right channels.

Bass in mono (under 80 Hz):
Michael Jackson - Bad
Adele - Oh My God
Daft Punk - Get Lucky
Maroon 5 - Hands All Over
Norah Jones - Running
The Doors - Roadhouse Blues

Bass in stereo (under 80 Hz):
Adrianne Lenker - Ruined
AC/DC - Back In Black
Bob Dylan - Isis
Bob Dylan - Man in the Long Black Coat
Bruce Springsteen - Born in the U.S.A.
Dave Brubeck - Take Five
AC/DC - Dirty Deeds Done Dirt Cheap
Dominique Fils-Aime - Home
Fugazi - Forensic Scene
III. Allegro - Ben Marcato
Lady Blackbird - It's Not That Easy
Lorde - Green Light
Midnight Oil - Beds Are Burning
Neil Young - Crime in the City
Nirvana - Serve The Servants
Norah Jones - Staring at the Wall
Norah Jones - Don't Know Why
PJ Harvey - Rub 'Til It Bleeds
Simple Minds - Woman
Steely Dan - Aja


Dominique Fils-Aime - Home
[attached screenshot]

Steely Dan - Aja
[attached screenshot]
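The "up to 20 dB deviations between the left and right channels" can be estimated with a band-limited level comparison. A hedged sketch of that kind of check (the `bass` signal is synthetic; real use would load the two channels of a track):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def lf_channel_deviation_db(left, right, fs, cutoff=80.0):
    """Level difference in dB between the sub-`cutoff` content of the
    two channels (positive means the left channel is louder)."""
    sos = butter(4, cutoff, btype="low", fs=fs, output="sos")
    rms = lambda x: np.sqrt(np.mean(sosfilt(sos, x) ** 2)) + 1e-12
    return 20 * np.log10(rms(left) / rms(right))

# Synthetic check: 40 Hz bass panned hard toward the left channel
# shows ~20 dB deviation; identical channels show none.
fs = 44100
t = np.arange(fs) / fs
bass = np.sin(2 * np.pi * 40 * t)
print(lf_channel_deviation_db(bass, 0.1 * bass, fs))  # ~ +20 dB
print(lf_channel_deviation_db(bass, bass, fs))        # ~ 0 dB
```

Note this measures only level deviation; two channels can measure 0 dB apart here and still differ completely in phase, which is the separate effect noted in the next post.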
 
I have now tested more than a hundred tracks with DeltaWave, mostly classical. I found about ten to be mono; the rest have stereo bass, and a lot of them are old recordings.

But even the ones that appear mono in frequency response sometimes have big phase differences. I don't know what is causing that.
 
An example from the '50s: Scheherazade, Op. 35 - "The Sea and Sinbad's Ship", from a 1996 RCA/Victor CD remastered from the 1956-1963 Reiner/CSO performances:

[Screenshot: original spectrum]

[Screenshot: delta of spectra]

[Screenshot: delta of phase down low]
 
I just ran about 30 tracks through MATLAB. Clunky, slow. I filtered L and R to 60 Hz, with a filter down 80 dB at 120 Hz (a long constant-delay FIR), and then rudely calculated the normalized cross-correlation between the two channels (meaning I ignored level differences, but captured all changes due to phase, frequency, etc.).

This is a number that can vary between -1 and 1. -1 means that the two signals are precisely out of phase and identical (except for level). 1 means they are identical. 0 means they are uncorrelated.

Lowest number I saw was 0.2, on an acoustic recording with a multi-mike panpotted arrangement with widely spaced mikes. Unquestionably this did not do any "mix to mono". A bunch of pop recordings came in very precisely at 0.95 ± 0.02. A LOT of them. That's actually kind of weird, because they are "almost mono bass". I'd say 50% were above 0.9 and about 10% under 0.5. I didn't find any track overall that was under 0. Out of about 30 tracks, only ONE hit '1'. No, it wasn't a mono recording.

Now, this is a very, very rough, broad approximation, it's 2AM, I was bored, and I did the fastest (by code writing) calculation I could imagine. I will go back and do more like I did long ago, calculating both level and phase mismatch, zero out quiet parts, and get a histogram of the actual "sameness" <or lack thereof>. But not tonight. Since this very broad measure would tend to hide differences in level, I'll have to say that there is much less mono bass in modern recordings.
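The MATLAB code isn't shown, so here is a rough Python equivalent of the calculation as described: a long linear-phase FIR low-pass, then the zero-lag normalized cross-correlation. The filter length and cutoff are my guesses, not the poster's actual parameters:

```python
import numpy as np
from scipy.signal import firwin, lfilter

def lf_correlation(left, right, fs, cutoff=60.0, numtaps=4001):
    """Zero-lag normalized cross-correlation of the low-passed
    channels: +1 = identical, -1 = polarity-inverted, 0 = uncorrelated.
    Level differences are normalized away, as described in the post."""
    taps = firwin(numtaps, cutoff, fs=fs)  # linear-phase FIR low-pass
    lo_l = lfilter(taps, 1.0, left)
    lo_r = lfilter(taps, 1.0, right)
    denom = np.linalg.norm(lo_l) * np.linalg.norm(lo_r) + 1e-12
    return float(np.dot(lo_l, lo_r) / denom)

# Synthetic check with a 40 Hz tone.
fs = 44100
t = np.arange(fs) / fs
bass = np.sin(2 * np.pi * 40 * t)
print(lf_correlation(bass, bass, fs))        # ~ +1 (identical)
print(lf_correlation(bass, -bass, fs))       # ~ -1 (out of phase)
print(lf_correlation(bass, 0.5 * bass, fs))  # ~ +1 (level ignored)
```

Because the level normalization cancels gain differences (last example), a value near 1 here really does mean "almost mono bass" in shape, which is why the cluster around 0.95 stands out.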
 