
Soundstage

Theo

Active Member
Joined
Mar 31, 2018
Messages
288
Likes
182
In the latest post on his blog, Archimago raises the question of the stereo "soundstage" and mentions a solution proposed by John Atkinson (among others, I'm sure) to assess the quality of a system by means of a "...controlled "standard" signal like the mono pink noise to check for the ability of a hi-fi system to allow faithful reproduction of a synthetic signal that is intended to be "staged" right at the center between stereo speakers when heard...".

This indeed seems a good way to verify that the sound so produced is actually centered between the two speakers, suggesting that the system should produce an accurate soundstage. The method obviously includes the room in the experiment, so this looks like a good tool.
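For anyone who wants to try it, here is a minimal sketch (Python with numpy; the duration and file name are just examples) that writes a dual-mono pink-noise WAV. Since both channels carry the identical signal, any image shift you hear comes from the speakers or the room, not from the test file.

```python
import numpy as np
import wave

fs, seconds = 48000, 20
n = fs * seconds

# Shape white noise to a 1/f (pink) power spectrum in the frequency domain
white = np.fft.rfft(np.random.randn(n))
freqs = np.fft.rfftfreq(n, 1 / fs)
scale = np.ones_like(freqs)
scale[1:] = 1 / np.sqrt(freqs[1:])           # amplitude ~ 1/sqrt(f) -> power ~ 1/f
pink = np.fft.irfft(white * scale, n)
pink /= np.max(np.abs(pink))                  # normalise to full scale

# Interleave identical samples for left and right (dual mono), write 16-bit WAV
stereo = np.repeat((pink * 0.5 * 32767).astype(np.int16), 2)
with wave.open("mono_pink_noise.wav", "wb") as f:
    f.setnchannels(2)
    f.setsampwidth(2)
    f.setframerate(fs)
    f.writeframes(stereo.tobytes())
```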

However, this is a sighted listening test, with all the subjectivity that can be attached to it. Of course, recording an impulse response and checking waterfall, distortion or other plots may represent a reliable measure of the accuracy of the system, provided we find the right spot to put the microphone. The impulse responses I have found in most publications are far, far from perfect and often show THD above 1%. So, how do we discriminate between a good and a bad response in terms of soundstage? I've heard small speakers with bad rendering of the timbre of instruments that actually provide reasonable focus. This seems to have more to do with phase consistency than with the frequency spectrum, and even harmonic distortion does not seem to matter that much as long as it is identical in both channels.

Archimago defines what he calls "soundstage" as the "...result of the placement of "sound objects" be they voices, instruments, noises as captured by the microphone in whatever configuration, processed by the audio engineer in the studio, and then laid down in the 2-channel carrier whether as physical media or virtual files....". Soundstage is a notion often used in audiofool reviews, together with the "air", "veil lifted", "density"... that populate hardcore audio addicts' dreams. It is then usually dismissed by some objectivists as BS. I personally don't think so, and I find that good recordings (not the heavily compressed ones) do offer some sort of image which I like to listen to. Do you have the same feeling?

Are you okay with the above definition of "soundstage"? How do we measure the "soundstage" accuracy?
 

sergeauckland

Major Contributor
Forum Donor
Joined
Mar 16, 2016
Messages
3,460
Likes
9,158
Location
Suffolk UK
'Soundstage' relies on the creation of phantom images between the loudspeakers. As such, these phantom images are created in the brain of the listener and don't have an objective reality that can be measured.

What using a mono noise signal offers is a signal with a wide bandwidth, more or less random, but identical in both channels. If the loudspeakers are well matched, and the room benign, then a very narrow central phantom image will be created. Any mismatch in the frequency response of the loudspeakers, or indeed room reflections, will result in the image shifting to one side or the other at certain frequencies. As all frequencies occur essentially together, this widens the phantom image such that instead of a sharp single virtual source of noise, it apparently comes from a more diffuse, less precisely defined source.

That's why, for accurate stereo soundstaging, loudspeakers have to be closely matched. The actual frequency response is less important, as that defines the character of the sound, but close pair matching is required for accurate soundstaging. Measuring frequency response is relatively easy, but as mentioned above, measuring soundstaging isn't objectively possible.
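To put a number on pair matching, here is a rough sketch (Python with numpy/scipy; the file names are hypothetical) that compares two impulse responses captured at the same microphone position and reports the interchannel level difference per octave band - the quantity that would pull the phantom image sideways at those frequencies.

```python
import numpy as np
from scipy.io import wavfile

# Hypothetical measurement files: left/right impulse responses taken at the
# same microphone position at the listening spot
fs_l, left = wavfile.read("left_ir.wav")
fs_r, right = wavfile.read("right_ir.wav")
assert fs_l == fs_r

n = max(len(left), len(right))
L = np.fft.rfft(left, n)
R = np.fft.rfft(right, n)
freqs = np.fft.rfftfreq(n, 1 / fs_l)

# Interchannel level difference per octave band, 125 Hz to 16 kHz
edges = 125 * 2.0 ** np.arange(8)
for lo, hi in zip(edges[:-1], edges[1:]):
    band = (freqs >= lo) & (freqs < hi)
    diff = 20 * np.log10(np.mean(np.abs(L[band])) / np.mean(np.abs(R[band])))
    print(f"{lo:6.0f}-{hi:6.0f} Hz  L-R level difference: {diff:+.2f} dB")
```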

S
 

andreasmaaan

Master Contributor
Forum Donor
Joined
Jun 19, 2018
Messages
6,652
Likes
9,406
I find Archimago's definition interesting, as I'd always considered soundstage to also be about the size of the sonic scene and the images within it. For the placement of these objects specifically, I'd prefer the term "imaging". But perhaps my way of looking at it is idiosyncratic - these words get thrown around a lot without any specific definition given.

This seems to have more to do with phase consistency than with the frequency spectrum, and even harmonic distortion does not seem to matter that much as long as it is identical in both channels.

I think you're correct, except to add that both phase and amplitude consistency are important, i.e. the frequency responses of the two channels need to be closely matched. Overall accuracy of the speaker also helps IME, but I agree that decent imaging can be achieved without it, so long as the two channels closely match and are placed symmetrically in the room.
 

Krunok

Major Contributor
Joined
Mar 25, 2018
Messages
4,600
Likes
3,067
Location
Zg, Cro
The impulse responses I have found in most publications are far, far from perfect and often show THD above 1%. So, how do we discriminate between a good and a bad response in terms of soundstage? I've heard small speakers with bad rendering of the timbre of instruments that actually provide reasonable focus. This seems to have more to do with phase consistency than with the frequency spectrum, and even harmonic distortion does not seem to matter that much as long as it is identical in both channels.

I agree with @andreasmaaan - I think it is both phase and frequency linearity that make a good soundstage. Distortion is more related to the correct rendering of the timbre of instruments and vocals.

Btw, from what I have seen with various THD measurements, speakers should be measured with the mic at 1 m or closer, otherwise the room will affect the measurement. Looking at various measurements, one can tell that speakers that manage to stay below 1% THD in the 100 Hz-10 kHz range are very good, and those that manage to stay below 0.5% in that range are excellent.
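As an illustration of how such a figure can be obtained, here is a minimal sketch (Python with numpy; the test tone is synthetic) that estimates THD from a recording of a single tone by comparing harmonic energy to the fundamental.

```python
import numpy as np

def thd_percent(signal, fs, f0, n_harmonics=5, search_hz=5.0):
    """THD of a recorded tone near f0: harmonic energy relative to the fundamental."""
    n = len(signal)
    spec = np.abs(np.fft.rfft(signal * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, 1 / fs)

    def peak(f):
        band = (freqs > f - search_hz) & (freqs < f + search_hz)
        return spec[band].max() if band.any() else 0.0

    fund = peak(f0)
    harmonics = [peak(k * f0) for k in range(2, 2 + n_harmonics)]
    return 100 * np.sqrt(sum(h ** 2 for h in harmonics)) / fund

# Sanity check with a synthetic 1 kHz tone plus 0.5 % third harmonic
fs = 48000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1000 * t) + 0.005 * np.sin(2 * np.pi * 3000 * t)
print(f"THD = {thd_percent(x, fs, 1000):.2f} %")   # ~0.50 %
```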
 

daftcombo

Major Contributor
Forum Donor
Joined
Feb 5, 2019
Messages
3,688
Likes
4,069
Isn't channel separation the most important parameter for soundstage?
 

andreasmaaan

Master Contributor
Forum Donor
Joined
Jun 19, 2018
Messages
6,652
Likes
9,406
Isn't channel separation the most important parameter for soundstage?

Not in most cases, since channel separation of even average-quality electronics (other than vinyl, arguably) generally exceeds the limits of audibility.

Compare that, for example, to the audibility threshold for interchannel timing differences, which can be as little as 5 µs.
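For a sense of scale, a quick back-of-envelope calculation (plain Python) of what 5 µs corresponds to:

```python
c = 343.0      # speed of sound in air, m/s
fs = 48000     # a common sample rate, Hz
dt = 5e-6      # interchannel timing threshold, s

print(f"equivalent path-length difference: {c * dt * 1000:.2f} mm")   # ~1.7 mm
print(f"fraction of a sample at 48 kHz:     {dt * fs:.2f}")           # ~0.24
```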
 

daftcombo

Major Contributor
Forum Donor
Joined
Feb 5, 2019
Messages
3,688
Likes
4,069
Thanks. Would you say that, with a decent enough stereo separation, a perfectly flat phase & regularly descending frequency response curve at the listening spot would give the best soundstage possible?
 

Krunok

Major Contributor
Joined
Mar 25, 2018
Messages
4,600
Likes
3,067
Location
Zg, Cro
Thanks. Would you say that, with a decent enough stereo separation, a perfectly flat phase & regularly descending frequency response curve at the listening spot would give the best soundstage possible?

Yes. It would also help if your listening position were at the third vertex of an equilateral triangle formed with your two speakers.
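As a small worked example (plain Python; the 2.5 m spacing is arbitrary), the equilateral setup puts the listener the same distance from each speaker, i.e. about 0.87 times the speaker spacing back from the line between them, with each speaker 30° off the centre axis:

```python
import math

spacing = 2.5                                 # metres between the speakers (example value)
distance_back = spacing * math.sqrt(3) / 2    # listener's distance from the speaker baseline
print(f"sit {distance_back:.2f} m back from the line between the speakers")
print(f"each speaker is then {math.degrees(math.asin(0.5)):.0f} degrees off the centre axis")
```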
 

andreasmaaan

Master Contributor
Forum Donor
Joined
Jun 19, 2018
Messages
6,652
Likes
9,406
Thanks. Would you say that, with a decent enough stereo separation, a perfectly flat phase & regularly descending frequency response curve at the listening spot would give the best soundstage possible?

I'd partially agree with that.

Re: phase response, it needn't necessarily be flat (the audibility thresholds for phase distortion are much more generous - and complicated - than the audibility thresholds for interchannel phase differences), although a flat phase response matched in both channels would of course be the optimum condition theoretically.

Re: the frequency response curve at the listening position, not really IMHO. The key thing is that the frequency responses of the two channels match. This will occur (1) if the anechoic responses of the speakers match and (2) if the speakers are placed symmetrically in a (reasonably) symmetrical room. But I can't see why a "regularly descending" response should give better imaging than other possible in-room responses.

IME a flat on-axis response and a smooth off-axis response should also help, which I think is what you're getting at - but this may not necessarily result in a regularly descending response in a given room.
 

Krunok

Major Contributor
Joined
Mar 25, 2018
Messages
4,600
Likes
3,067
Location
Zg, Cro
Well, now that I've noticed you said "regularly descending" - I don't think it has anything to do with soundstage. IMHO such curves have more to do with your personal preference than with correct soundstage. A flat curve would do the soundstage imaging job just as well as a regularly descending one, or any other, as long as the other conditions @andreasmaaan just mentioned are met.
 

Cosmik

Major Contributor
Joined
Apr 24, 2016
Messages
3,075
Likes
2,180
Location
UK
I think it is both, phase and frequency linearity, that will make good soundstage. Distortion is more related to correct rendering of the timbre of instruments and vocals.
But if your channels are playing different signals (which they will be, more often than not), then you will have different distortion - i.e. different intermodulation between the acoustic sources - in each channel, and therefore no longer a match between the channels. For your ears to create the phantom image, you need each acoustic source to match in both channels.

Again, the frequency-domain view of audio hides a problem: distortion isn't 'harmonic' except for steady tones, and it doesn't produce the same amount and type of distortion at different signal levels (because it's really just a bent transfer function, not a high-tech harmonic analysis and synthesis device). The frequency-domain simplification of audio might mislead you into believing that as long as both channels have identical hardware, you will get a match between the channels. You won't, because of intermodulation between the sources in hardware that distorts, and because the source levels differ between the channels.

So low distortion is essential for good stereo imaging and/or soundstage - whatever the distinction is.
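To make that concrete, here is a minimal sketch (Python with numpy; the tones and the tanh curve are just stand-ins for "a bent transfer function") showing that the same static nonlinearity applied to different signals in each channel spawns intermodulation products in one channel that simply don't exist in the other:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
bend = lambda x: np.tanh(1.5 * x) / np.tanh(1.5)   # a soft-clipping "bent transfer function"

left_in  = 0.8 * np.sin(2 * np.pi * 440 * t)                          # left: a single 440 Hz tone
right_in = 0.5 * np.sin(2 * np.pi * 440 * t) \
         + 0.5 * np.sin(2 * np.pi * 660 * t)                          # right: a different mix

def spectrum_db(x):
    s = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    return 20 * np.log10(s / s.max() + 1e-12)

L, R = spectrum_db(bend(left_in)), spectrum_db(bend(right_in))
freqs = np.fft.rfftfreq(len(t), 1 / fs)

# Third-order intermodulation products (2*440-660, 2*660-440, 2*440+660) appear
# in the right channel but have no counterpart in the left channel
for f in (220, 880, 1540):
    print(f"{f:5d} Hz   L: {L[freqs == f][0]:7.1f} dB   R: {R[freqs == f][0]:7.1f} dB")
```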
 

Krunok

Major Contributor
Joined
Mar 25, 2018
Messages
4,600
Likes
3,067
Location
Zg, Cro
But if your channels are playing different signals (which they will be, more often than not), then you will have different distortion - i.e. different intermodulation between the acoustic sources - in each channel, and therefore no longer a match between the channels. For your ears to create the phantom image, you need each acoustic source to match in both channels.

Or you will have a good soundstage with bad timbre because of distortion? :)
IMHO, as @andreasmaaan pointed out, as long as both speakers have a decently flat response and an excess-phase curve that is decently flat and close to zero, I think you will have a decent soundstage. But not necessarily a good sound, as that depends on distortion.

Again, the frequency-domain view of audio hides a problem: distortion isn't 'harmonic' except for steady tones, and it doesn't produce the same amount and type of distortion at different signal levels (because it's really just a bent transfer function, not a high-tech harmonic analysis and synthesis device). The frequency-domain simplification of audio might mislead you into believing that as long as both channels have identical hardware, you will get a match between the channels. You won't, because of intermodulation between the sources in hardware that distorts.

Flat excess phase close to zero covers the time domain as well. Although not many folks believe that having it is crucial for the sound, it certainly doesn't hurt, as it can only help with imaging.
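For anyone who wants to check this on a measurement, here is a sketch (Python with numpy; the impulse response array is assumed to come from your own measurement) of the usual way to estimate excess phase: derive the minimum-phase counterpart from the magnitude via the real cepstrum and subtract it.

```python
import numpy as np

def excess_phase(ir):
    """Excess phase of an impulse response: measured phase minus the
    minimum phase derived from the magnitude via the real cepstrum."""
    n = len(ir)
    H = np.fft.fft(ir, n)
    log_mag = np.log(np.maximum(np.abs(H), 1e-12))

    # Fold the real cepstrum to construct the minimum-phase counterpart
    cep = np.fft.ifft(log_mag).real
    w = np.zeros(n)
    w[0] = 1.0
    w[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        w[n // 2] = 1.0
    H_min = np.exp(np.fft.fft(w * cep))

    # Any bulk time-of-flight delay shows up here as a linear phase slope
    return np.unwrap(np.angle(H)) - np.unwrap(np.angle(H_min))

# `ir` is assumed to be your own measured impulse response (1-D numpy array):
# phase = excess_phase(ir)
```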
 

Cosmik

Major Contributor
Joined
Apr 24, 2016
Messages
3,075
Likes
2,180
Location
UK
Or you will have a good soundstage with bad timbre because of distortion? :)
No. Imaging requires 'the object' that you wish to place in the stereo scene to match in both channels (identical but with different levels or, with some mic techniques, delayed relative to each other). If you were to distort 'the object' electronically with an effect prior to recording it, then no problem: it would appear as the object with modified timbre as you suggest.

However, if you are applying composite distortion to each channel, and the distortion varies with level and signal content, and both channels comprise different signals, you will get mismatching distortion products spawned from 'the object' in the two channels. Your ears will have to try to track the distortion products, and how they relate to 'the object', as they come and go in the two channels separately. There are effectively new objects being generated that are moving around in the stereo scene. Blurred imaging, fatigue and sound 'sticking to the speakers' must be the result.
 

Sergei

Senior Member
Forum Donor
Joined
Nov 20, 2018
Messages
361
Likes
272
Location
Palo Alto, CA, USA
Or you will have a good soundstage with bad timbre because of distortion?

Mammals track the direction and distance of sound sources using sophisticated "digital signal processing" running on massively parallel "wetware". Its function had mostly been deciphered and reproduced by the mid-1990s. Simplified versions are implemented in some contemporary hearing aids, which amplify a predetermined number of the strongest sources that can be tracked and suppress the rest.

A sound source, such as a person speaking in a noisy room or an instrument playing a part in a complex music arrangement, is identified by its slowly evolving time-frequency signature, which includes the timbral envelope. If the distortions are high enough to affect the perceived timbre of a sound source, the accuracy of the sound source identifications and subsequent tracking can be affected too.

Perceptually, such identification and tracking errors may manifest as a larger number of apparent sound sources quickly appearing, disappearing, moving around, shrinking, growing, splitting, merging, and colliding while the signal is reproduced through a lower-fidelity system, whereas the original intent of the music creators was to present a limited number of stable, well-separated, and slowly moving sound sources.

Such unintended apparent activity of perceived sound sources may transform a well-composed and well-performed sophisticated symphony into an undecipherable mess. On the other hand, these effects may greatly enliven an otherwise boringly simple, static, and repetitive composition. My personal preference is accurate sound reproduction, conducive to perceiving the original intent of the music creators. YMMV.
 

Krunok

Major Contributor
Joined
Mar 25, 2018
Messages
4,600
Likes
3,067
Location
Zg, Cro
If the distortions are high enough to affect the perceived timbre of a sound source, the accuracy of the sound source identifications and subsequent tracking can be affected too.

This is really interesting information, thank you!
 

RayDunzl

Grand Contributor
Central Scrutinizer
Joined
Mar 9, 2016
Messages
13,250
Likes
17,186
Location
Riverview FL
You hear what you heard...


And for a while, you can't un-hear it. Play it again.

But, it fades. Come back in a couple of weeks and try again.
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,748
Likes
37,572
You hear what you heard...


And for a while, you can't un-hear it. Play it again.

But, it fades. Come back in a couple of weeks and try again.
I could hear it, and I've not seen this in a few months. My, aren't I special. But I do remember trying it before, and the second time it was just as unintelligible as the first time. Maybe the third time is the charm.
 

daftcombo

Major Contributor
Forum Donor
Joined
Feb 5, 2019
Messages
3,688
Likes
4,069
If the distortions are high enough to affect the perceived timbre of a sound source, the accuracy of the sound source identifications and subsequent tracking can be affected too.

Do you have any hint of what "high enough" could mean in terms of measurements?
 
OP

Theo

Active Member
Joined
Mar 31, 2018
Messages
288
Likes
182
Isn't channel separation the most important parameter for soundstage?
Channel separation is perfect when using headphones; however, IME the imaging is terrible. So it is more complicated than that. It seems that we need both ears to hear both signals through the filter created by the geometry of the head and ear pinnae, as our brain has learned to process sound that way.
Still, both channels being perfectly paired does help. Good speaker manufacturers check the pairing and sell their products in matched pairs.
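Coming back to the headphone point: the classic "crossfeed" trick illustrates what headphones are missing. With speakers, each ear also receives the opposite channel slightly delayed, attenuated and low-pass filtered by the head. Here is a minimal sketch (Python with numpy/scipy; the delay, attenuation and cutoff values are only ballpark figures):

```python
import numpy as np
from scipy.signal import butter, lfilter

def crossfeed(left, right, fs, delay_s=0.0003, atten_db=-8.0, cutoff_hz=700.0):
    """Mix a delayed, attenuated, low-passed copy of each channel into the other
    (a crude stand-in for the acoustic crosstalk the head provides with speakers)."""
    d = int(round(delay_s * fs))
    g = 10 ** (atten_db / 20)
    b, a = butter(2, cutoff_hz / (fs / 2))               # rough head-shadow low-pass
    leak_l = lfilter(b, a, np.concatenate([np.zeros(d), left]))[:len(left)] * g
    leak_r = lfilter(b, a, np.concatenate([np.zeros(d), right]))[:len(right)] * g
    return left + leak_r, right + leak_l
```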
 