
Mid/Side method for recording, mixing and mastering, and playback.

Tim Link

Addicted to Fun and Learning
Joined
Apr 10, 2020
Messages
813
Likes
692
Location
Eugene, OR
I've been using a stereo playback method for about a year now that involves up-mixing 2-channel stereo into 3 channels: L-R, L+R, and R-L.
A thread about it here: https://www.audiosciencereview.com/...k-elimination-reduction-par-excellence.39157/

This video explains that what I'm doing is a version of the Mid/Side method that is sometimes done with microphone setup and mixing and mastering:

I've been continuing to refine this system, and in working with it and comparing it to regular 2-speaker stereo playback I've become more aware of tonal differences between sounds panned to the sides and center-panned sounds, in both my setup and regular 2-speaker stereo. The lack of tonal consistency across the sound stage is, in my opinion, a serious detractor from sound quality, making it difficult or impossible to get the tone set correctly for everything in the recording.

My 3-speaker upmix tends to cancel out bass in the side channels: bass tends to be more mono in a lot of recordings, so the signal differencing attenuates or completely eliminates it. To me this sounds like a brighter, thinner tone at the sides. If I try to correct that with overall EQ, the center sounds get too thick and heavy. To solve that, I can EQ the side information independently of the center. When done correctly, the resulting consistency of tone across the sound stage is kind of startling. It's just not something I've heard that often.
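As a quick illustration of why mono bass thins out in the difference feeds, here's a minimal sketch (NumPy assumed; the tone frequencies and pan positions are made up for illustration):

```python
import numpy as np

fs = 48_000
t = np.arange(fs) / fs

# Hypothetical 2-channel source: a mono (center-panned) 60 Hz bass tone
# plus a 1 kHz tone panned hard left.
bass = 0.5 * np.sin(2 * np.pi * 60 * t)      # identical in L and R
tone = 0.3 * np.sin(2 * np.pi * 1000 * t)    # left channel only
L = bass + tone
R = bass.copy()

# The three upmixed feeds described above.
side_left = L - R    # L-R: the mono bass cancels completely
mid = L + R          # L+R: the mono bass is reinforced
side_right = R - L   # R-L

print(np.max(np.abs(side_left - tone)))  # ~0: only the left-panned tone survives
```

Since the mono bass nulls in the side feeds, any EQ applied there is what restores the thinner side tone without thickening the center.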

This can also be useful for headphone listening. Take a 2-channel stereo track and create the same three-channel mix: L-R, L+R, and R-L. Then mix
L-R and L+R into the left ear, and
R-L and L+R into the right ear.

You're back to where you started, L in left ear and R in right ear, so what's the point?

The point is that before you mix them back together you can EQ the L-R and R-L without affecting the center as much, and EQ the center without affecting the sides as much.
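The round trip is easy to verify numerically; a sketch (NumPy assumed, random noise standing in for real program material):

```python
import numpy as np

rng = np.random.default_rng(0)
L = rng.standard_normal(1000)   # placeholder left channel
R = rng.standard_normal(1000)   # placeholder right channel

mid = L + R
side_l = L - R
side_r = R - L

# With no EQ applied, recombining is lossless up to a factor of 2:
left_ear = side_l + mid    # (L-R) + (L+R) = 2L
right_ear = side_r + mid   # (R-L) + (L+R) = 2R

assert np.allclose(left_ear, 2 * L)
assert np.allclose(right_ear, 2 * R)
```

Any EQ inserted on the mid or on the side pair before the final sum then lands mostly on center-panned or side-panned material respectively, which is the whole point.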

This gives you a considerably more powerful tone control. So far with headphones I've been intrigued again with the results of boosting the mids and bass on just the side channels. For whatever reason, the side panned sounds can seem slightly thin and bright to me compared to center panned images.

Another interesting thing you can do is adjust the center level up or down. For my main speaker system, I'm finding that about 1.5 or 2 dB of reduction in the mid (center) level has an overall desirable effect. It helps keep the center speaker from dominating: as the only truly coherent sound source in the room, it makes center-panned singers just a bit too strong, dominating panned instruments and their own stereo ambience.
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
21,240
Likes
38,629
I assume you are aware of Alan Blumlein's shuffler idea. The side signals receive a boost starting at 700 Hz, shelving to 3-5 dB higher around 200 Hz. I also think with what you are doing the center channel would be reduced by 3 dB, though you've indicated 2 dB helps.
 

Tim Link

Addicted to Fun and Learning
Joined
Apr 10, 2020
Messages
813
Likes
692
Location
Eugene, OR
That's interesting. I had heard of the shuffler idea but didn't know what it was. Makes sense. After trying to run the center at full level for quite a while I've finally come back to my senses and started experimenting with turning it down again. I tried -3dB and that was a potent effect. My system was not well equalized at the time, with excess treble, so I thought it sounded like too much, especially in the treble. I should try it again now that I'm tracking the Harman curve pretty well. I'm fairly happy with -2dB so far. I think it's about right because center vocalists still sound present and clear, but some side panned sounds have gained significant prominence and I don't think they need any more. I'll try it again anyway.
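For anyone wanting to experiment with a shuffler-style side boost in software, here's a sketch using the standard RBJ-cookbook low-shelf biquad (SciPy assumed; the +4 dB gain and 450 Hz corner are placeholder numbers for illustration, not Blumlein's exact values):

```python
import numpy as np
from scipy.signal import freqz

def low_shelf(gain_db, f0, fs):
    """RBJ Audio EQ Cookbook low-shelf biquad coefficients (S = 1)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / 2 * np.sqrt(2)
    cw = np.cos(w0)
    b = np.array([
        A * ((A + 1) - (A - 1) * cw + 2 * np.sqrt(A) * alpha),
        2 * A * ((A - 1) - (A + 1) * cw),
        A * ((A + 1) - (A - 1) * cw - 2 * np.sqrt(A) * alpha),
    ])
    a = np.array([
        (A + 1) + (A - 1) * cw + 2 * np.sqrt(A) * alpha,
        -2 * ((A - 1) + (A + 1) * cw),
        (A + 1) + (A - 1) * cw - 2 * np.sqrt(A) * alpha,
    ])
    return b / a[0], a / a[0]

fs = 48_000
b, a = low_shelf(gain_db=4.0, f0=450.0, fs=fs)  # corner roughly between 200 and 700 Hz

# Gain near DC should be ~+4 dB, at high frequencies ~0 dB.
w, h = freqz(b, a, worN=[1.0, 20_000.0], fs=fs)
print(20 * np.log10(np.abs(h)))
```

The filter would be applied to the side (L-R and R-L) signals only, leaving the mid untouched.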

One complication for me with turning down the center is that I'm running the bass in regular stereo, so I was struggling to get the center bass to go down without the side bass being diminished. Turns out that is super simple. For -3dB in the center, just add a -3dB inverted signal from the opposite channel. Anything panned center will be -3dB, side panned bass will remain unchanged. This makes the bass seem quite a bit different, and might be a useful signal process for any 2 speaker setup. I need to listen more to it to describe the perception, but needless to say it's different. You can't get the same effect by just adjusting the overall bass level.
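If I've understood the trick correctly, the math sketches out as below (NumPy assumed; deriving the cross-feed gain from the target attenuation gives a factor of about 0.29 rather than a literal -3 dB copy, so treat this as an interpretation of the recipe, not the poster's exact numbers):

```python
import numpy as np

# Goal: drop center-panned (mono) content by 3 dB while leaving content
# that lives in only one channel untouched in that channel.
target_db = -3.0
k = 1 - 10 ** (target_db / 20)   # ~0.292 (assumed derivation, see above)

def reduce_center(L, R):
    # Subtract a scaled, effectively inverted copy of the opposite channel.
    return L - k * R, R - k * L

# Mono content (identical in both channels) is scaled by (1 - k):
Lc, Rc = reduce_center(np.ones(4), np.ones(4))
print(20 * np.log10(Lc[0]))      # -3.0 dB

# Hard-left content (R = 0) stays at full level in the left channel,
# though a small inverted copy (level k) bleeds into the right channel.
Lh, Rh = reduce_center(np.ones(4), np.zeros(4))
print(Lh[0], Rh[0])
```

That bleed of an inverted copy into the opposite channel may be part of why the bass sounds different rather than just quieter.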
 

kemmler3D

Major Contributor
Forum Donor
Joined
Aug 25, 2022
Messages
3,952
Likes
7,999
Location
San Francisco
Am I correct in that you're listening to M/S sound with the side played on left and right, and mid on a center channel?
 

Tim Link

Addicted to Fun and Learning
Joined
Apr 10, 2020
Messages
813
Likes
692
Location
Eugene, OR
Yes. The surprising result came from having the 3 speakers fairly close together, which produces a very wide and clear sound stage at the sweet spot.
 

kemmler3D

Major Contributor
Forum Donor
Joined
Aug 25, 2022
Messages
3,952
Likes
7,999
Location
San Francisco
Huh, interesting. But isn't the Left / Right information lost this way?
 

Tim Link

Addicted to Fun and Learning
Joined
Apr 10, 2020
Messages
813
Likes
692
Location
Eugene, OR
If you get far enough away and the speakers are too close together, then yes. I've found that for frequencies above about 1.2 kHz there is excellent stereo separation at a listening distance of 8 feet with the speakers spaced 1 foot center to center. Below 1.2 kHz down to about 300 or 400 Hz, a center-to-center distance of 2 feet works. That gives a stereo sound stage of about 120 degrees when I test it by panning hard left and right. Below 300 Hz I go back to regular stereo speakers, spaced widely apart, maybe with some inverse opposite channel added in. I'm just starting to experiment with that. It does seem to make the bass more spacey. Maybe that's good, maybe not.
 

kemmler3D

Major Contributor
Forum Donor
Joined
Aug 25, 2022
Messages
3,952
Likes
7,999
Location
San Francisco
I think I get it. You have two different difference channels on the left and right, and mixing back to L/R happens acoustically when they null in the air?
 

Tim Link

Addicted to Fun and Learning
Joined
Apr 10, 2020
Messages
813
Likes
692
Location
Eugene, OR
Yes. When I first thought of trying this I assumed the drivers would need to be ear-distance apart, so that each ear would be equidistant between the center speaker and the corresponding side speaker. It turns out they don't have to be that close. Interference patterns emerge that create a loud lobe on the side the sound is panned to, and a quiet lobe on the opposite side. For higher frequencies it's better if the drivers are closer together, because the lobes become fatter and it's easier to keep your head positioned within them. As the frequency goes down you have to move the drivers further apart or no interference patterns will emerge. Interestingly, you still perceive a strong stereo effect even when the interference patterns lessen below 500 Hz, because a phase shift is still maintained across the head. At lower midrange frequencies the phase difference at each ear becomes more important than the amplitude difference, because the waves just wrap around our heads with little attenuation. In the real world, if there's ever a significant volume difference between the left and right ear at bass frequencies, it means something is making bass noises very close to one of your ears.
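A rough sanity check of the lobe behavior can be done with a simple free-field model, summing the three speakers as point sources (NumPy assumed; the geometry — 1 ft spacing, 8 ft listening distance, ~18 cm ear spacing — and the 2 kHz test frequency are assumptions, and real rooms add reflections this ignores):

```python
import numpy as np

c = 343.0                    # speed of sound, m/s
f = 2000.0                   # test frequency, Hz
k = 2 * np.pi * f / c        # wavenumber

spk_x = np.array([-0.3048, 0.0, 0.3048])  # left, center, right speaker positions
feeds = np.array([1.0, 1.0, -1.0])        # hard-left pan (L=1, R=0): L-R, L+R, R-L
dist = 2.44                                # listening distance, m (~8 ft)

def pressure(ear_x):
    # Complex sum of the three speakers as point sources at one ear.
    r = np.sqrt((ear_x - spk_x) ** 2 + dist ** 2)
    return np.sum(feeds * np.exp(-1j * k * r) / r)

# Ears roughly 18 cm apart.
ild = 20 * np.log10(abs(pressure(-0.09)) / abs(pressure(0.09)))
print(f"interaural level difference: {ild:.1f} dB")
```

In this toy model a hard-left pan yields a level difference of several dB in favor of the left ear, consistent with the lobes described above.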
 

Tim Link

Addicted to Fun and Learning
Joined
Apr 10, 2020
Messages
813
Likes
692
Location
Eugene, OR
After a few days of listening I've decided that mixing a little of the opposite channel into regular stereo bass from 400 Hz down does more harm than good on most recordings. Minus 3 dB on the center channel is too much; -2 dB is about right.
 

BenB

Active Member
Joined
Apr 18, 2020
Messages
285
Likes
447
Location
Virginia
I coded up a method to use adaptive signal processing to separate stereo tracks into 4 channels: One for things that are primarily in the left, one for things that are primarily in the right, one for things that are about the same level and phase in both channels, and one for things that are about the same level, but opposite phase. I'd be happy to perform this separation on audio of your choice and provide the results for you to play around with. The reason I wanted to do this was in order to remix the separated channels to allow me to control the soundstage width. I haven't really had the need for that much, but maybe you'll like the way the adaptive signal processing separates the channels.
 

Tim Link

Addicted to Fun and Learning
Joined
Apr 10, 2020
Messages
813
Likes
692
Location
Eugene, OR
Sounds interesting. How did you isolate things that are primarily in the right from things primarily in the left?
 

BenB

Active Member
Joined
Apr 18, 2020
Messages
285
Likes
447
Location
Virginia
I started with MVDR, then added some extra robustness constraints to get it to behave and do what I want. Adaptive signal processing can sometimes do weird and outlandish things if you don't lock it down a bit.

In MVDR beamforming, you create a "cross power spectral matrix", which represents the covariance between your channels. You can then use that to create an optimum set of weights to filter out everything that doesn't match your model vector (d), without distorting or attenuating the things that do match it. So for a left channel, your model vector looks like [1 0], and for a right channel, it looks like [0 1]. For the center channel (mid), your model vector would be [1 1], and for the surround channel (in Pro Logic surround), it would be [1 -1].
If you do this, there's no guarantee that your four channels sum back to the original, because content not matching these model vectors has an unpredictable level of attenuation. Therefore, I chose not to use an MVDR solution to create my "mid" channel, but instead calculate it as [1 1]-WL-WR, where WL is the weight solution for my left channel, and WR is the weight solution for my right channel.
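For concreteness, here is a toy broadband version of that weight calculation (NumPy assumed; the real implementation described above presumably works per STFT frequency bin and adds the robustness constraints mentioned):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 2-channel mix: a hard-left source plus a mono (center) source.
n = 20_000
left_src = rng.standard_normal(n)
center_src = rng.standard_normal(n)
X = np.vstack([left_src + center_src,   # L channel
               center_src])             # R channel

# Cross power spectral matrix (here just the broadband covariance).
Rxx = X @ X.T / n

d = np.array([1.0, 0.0])                # model vector for "primarily left"
Ri_d = np.linalg.solve(Rxx, d)
w = Ri_d / (d @ Ri_d)                   # MVDR: minimize power subject to w.d = 1

extracted = w @ X                       # output suppresses the center source
print(w)                                # close to [1, -1] for this mix
```

The constraint w.d = 1 is the "distortionless" part: content matching the model vector passes at unity gain while everything else is minimized.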
 

Tim Link

Addicted to Fun and Learning
Joined
Apr 10, 2020
Messages
813
Likes
692
Location
Eugene, OR
Thanks. How does this compare to something like the Dolby Pro Logic that comes with AVRs? And if I give you a sound file, what multi-channel format will you put it in? I just realized that I've freed up all the components of my 5.1 system, so I can get that running again for multi-channel experiments. My main rig with its 3-speaker system is only set up to accept 2-channel recordings.
 

Tim Link

Addicted to Fun and Learning
Joined
Apr 10, 2020
Messages
813
Likes
692
Location
Eugene, OR
for real center extraction, I recommend this
That's interesting. The name makes sense. I don't really have a true mid as I'm not filtering out the side information from the middle channel. That kind of processing is over my head.
 

BenB

Active Member
Joined
Apr 18, 2020
Messages
285
Likes
447
Location
Virginia
My MATLAB script outputs 4 separate .wav files, which I usually load into Audacity to try different level adjustments during the recombination process. I assume you have some type of DAW software, right? I believe Pro Logic decoders work very differently from what I've described, but I don't know that much about it. I know the original mono surround channel was coded as out of phase in the L and R channels. Perhaps the decode process is proprietary? I'm not certain.
 

Tim Link

Addicted to Fun and Learning
Joined
Apr 10, 2020
Messages
813
Likes
692
Location
Eugene, OR
I don't have a DAW, but I can use Audacity to combine the wav files and output a multi-channel file that my receiver will understand.
 

BenB

Active Member
Joined
Apr 18, 2020
Messages
285
Likes
447
Location
Virginia
Sounds like we're all set. Would you like to provide a piece of music to separate, or would you like me to pick something? I don't know what kind of music you might have been focusing on... I've been focusing on orchestral recordings.
 

Tim Link

Addicted to Fun and Learning
Joined
Apr 10, 2020
Messages
813
Likes
692
Location
Eugene, OR
I've got to figure out where I'm going to set up my second system that can actually play back a multi-track recording. I'll think of some good tracks from various genres. Is this something that could potentially be done on the fly with minimal latency? Or does it require a lot of horsepower to process in advance of playback? I ask because anything I'm actually going to consider employing will have to work on the fly.
 