
Soundstage

HuskerDu

Member
Joined
Mar 27, 2019
Messages
62
Likes
44
Location
Houston
An example of what audio engineers regard as "one of the best mixes of all time" is Grenade by Bruno Mars. Not only does it masterfully convey the intended emotion, it also sounds very nearly identical on every sound system I have listened to it through - some very sophisticated DSP went into that.

If sound engineers (recording, mixing, mastering) are asked to create a masterpiece that may only sound good on a high-end system, they do precisely that. An example is Fields of Gold by Sting. It sounds "meh" on cheap systems. It is drop-dead gorgeous in a mastering studio.

Wow! There is no substitute for experience. Once you explain it, it seems obvious, but before you explained it, I hadn't really thought about what all those sliding sliders actually DO! Thanks!
 

HuskerDu

Member
Joined
Mar 27, 2019
Messages
62
Likes
44
Location
Houston
If I were to pick the workhorse most often encountered around here, it would be large 3-way ATC monitors, such as the SCM100ASL.

Wow!

(Sorry about the huge screen cap, earlier. Looks like I have a lot more control with a real computer, as opposed to my phone.)
 
Last edited:

Duke

Major Contributor
Audio Company
Forum Donor
Joined
Apr 22, 2016
Messages
1,558
Likes
3,865
Location
Princeton, Texas
Thank you very much Cosmik for taking the time to write an in-depth reply.

Basically to be a single acoustic source with uniform directivity at all frequencies, reproducing the signal as it is recorded in the time domain not just 'spectrally correct' - as much as is possible. (See my user profile for the full version!).

I looked at your profile. I share some of your skepticism, notably about in-room measurements and "room correction". Excellent analysis of the flaws of "room correction".

Apparently you have a very comprehensive set of ideas about what is right and what is wrong when it comes to loudspeaker design.

Could you clarify your concluding statement? "In audio, it is better to design things based on feedforward logic and justified assumptions, rather than experimental feedback which is, ultimately, based on unjustified assumptions." What do you mean by "feedforward" logic? The implication seems to be that you reject error-correction mechanisms, so what happens if your initial assumptions include mistakes? How does an assumption become "justified?" Why is experimental feedback ultimately based on "unjustified assumptions?" (Yes I read your paragraph about someone knowing they are taking part in a listening test, and that didn't convince me.)

On a more nuts-n-bolts level, how do you "compensate for directivity characteristics" using "calculated in-room EQ"? That is not something I would have thought of, so it sounds very interesting to me, whatever it is.

"1. [Dipoles] create reflections that do not resemble the direct sound and are therefore breaking mechanisms within human hearing that rely on similarities of envelope shapes and polarities of wavefronts to be perceived as reflections and not new sound sources - essential for correct imaging and soundstage.

So, if the ear/brain system uses the envelope shape (spectral balance) to identify reflections, then a dipole's reflections will be correctly classified as such. And if the ear/brain system uses the polarity of the reflection's wavefront, then a dipole's reflections will be mis-identified as new sounds and the imaging and soundstaging will be a disaster.

Am I understanding you correctly?

2. [Dipoles] create direct comb filtering that reflects from surfaces around the speaker."

It depends. Most dipoles have narrow vertical dispersion, so they generate fewer floor and ceiling reflections. Many dipoles have fairly narrow horizontal dispersion and correspondingly generate fewer sidewall reflections (not to mention their nulls to the side). The backwave will generate comb-filter effects that look like a total disaster in a frequency response measurement but are benign to the ear, just as any live instrument in any room produces comb-filter effects from room reflections that look horrible on paper but are benign to the ear.
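The comb filtering being discussed here can be sketched numerically. A minimal illustration (my own numbers and simplification, not from the thread): the magnitude response of a direct sound summed with one delayed, phase-inverted backwave reflection.

```python
import numpy as np

def comb_response(freqs_hz, delay_s, reflection_gain=0.5, inverted=True):
    """Magnitude (dB) of direct sound plus one delayed reflection.

    A dipole's backwave is phase-inverted, so the reflected
    contribution is -gain * exp(-j*2*pi*f*delay).
    """
    sign = -1.0 if inverted else 1.0
    h = 1.0 + sign * reflection_gain * np.exp(-2j * np.pi * freqs_hz * delay_s)
    return 20 * np.log10(np.abs(h))

# Example: ~2 m of extra backwave path -> ~5.8 ms delay
c = 343.0  # speed of sound, m/s
delay = 2.0 / c
freqs = np.linspace(20, 2000, 5)
print(comb_response(freqs, delay))
```

The dips and peaks this produces are the "fur" seen in unsmoothed in-room measurements; whether they are audible is exactly what the thread is arguing about.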

3. [Dipoles] create egregious comb filtering that changes dynamically as the listener moves, heard as 'phasiness' and in-head effects.

If you are talking about dipoles with side-by-side line source drivers, then I agree, that's a source of comb filtering which changes the sound as the listener moves. Not the best choice for playing air guitar!!

4. [Dipoles] create indirect comb filtering from interactions of the positive and negative reflections.

The reverberant field will be decorrelated at medium and high frequencies whether the source is a dipole or a monopole. At low frequencies, the reverberant field transitions to being fairly well correlated, as the wavelengths become long relative to the room's dimensions. I think dipoles generally produce a more decorrelated low frequency soundfield than monopoles do, which would contribute to their in-room bass smoothness, and imo that's a good thing.

I don't see any obvious downsides to dipoles from "interactions of their positive and negative reflections."

5. In order to reduce (but not eliminate) some of these negative effects, [dipoles] become more difficult to position in the room.

Dipoles need sufficient space behind them to give a fairly long path-length-induced time delay to the backwave energy. I have explained why that matters, but can do so again if you would like. Horizontal spacing and toe-in angles are chosen using the same criteria as any other speaker, except that dipoles are not going to get boomy if placed right up against the side walls because they have nulls to the side.
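The path-length-induced delay mentioned here is easy to estimate. A rough sketch (the distances are my own examples, not Duke's): a dipole placed d metres in front of the wall behind it adds roughly 2d of path, so the backwave reflection arrives about 2d/c seconds after the direct sound.

```python
# Speed of sound in air, m/s
C = 343.0

def backwave_delay_ms(distance_to_wall_m):
    """Approximate extra arrival time of the front-wall backwave reflection."""
    return 1000.0 * 2.0 * distance_to_wall_m / C

for d in (0.5, 1.0, 1.5):
    print(f"{d} m from wall -> ~{backwave_delay_ms(d):.1f} ms delay")
```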

I can't think of any other placement difficulties imposed by the "negative effects" you listed. Can you?

As I was careful to say earlier, I was replying to the whole open baffle 'school', not anyone in particular.

Really??

Here is the entire sentence, cut and pasted. These are your exact words:

"If you wanted to define why a speaker should be a dipole, then I would be interested - rather than just saying that existing dipoles are often adequate and sometimes generate a really nice effect.”

The first half of the sentence is clearly addressed to one person, me. Did you REALLY transition to “replying to the whole open baffle 'school', not anyone in particular” halfway through the sentence?

I think you're much too good a writer to carelessly switch from second person to third person in the middle of a sentence.

* * * *

I guess I was hoping to find more common ground. Well, I do agree with you about this part: "Uniform directivity at all frequencies, reproducing the signal as it is recorded in the time domain not just 'spectrally correct' - as much as is possible" would be very nice to have.

(Ironically, some dipoles come pretty darn close to this ideal. For example, the classic SoundLab electrostat is a single fullrange driver whose radiation pattern is ninety degrees front-and-back at frequencies where the panel's directivity dominates, transitioning to the classic dipole figure-8 as the wavelengths become long relative to the panel width.)
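For reference, the classic dipole figure-8 mentioned here is just a |cos θ| pressure pattern with nulls at 90 degrees off-axis. A minimal, idealized sketch (not specific to SoundLab):

```python
import math

def dipole_gain_db(theta_deg):
    """Ideal dipole off-axis gain: pressure falls as |cos(theta)|,
    with nulls to the side (90 degrees off-axis)."""
    g = abs(math.cos(math.radians(theta_deg)))
    if g < 1e-9:
        return float("-inf")  # the side null
    return 20 * math.log10(g)

for angle in (0, 45, 90):
    print(angle, "deg ->", dipole_gain_db(angle), "dB")
```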

Do you have a particular radiation pattern width or shape in mind for your uniform directivity?
 
Last edited:

Juhazi

Major Contributor
Joined
Sep 15, 2018
Messages
1,723
Likes
2,908
Location
Finland
We must remember that many "dipole" speakers are dipoles only below 1-2 kHz. This is because the radiator's width makes it directional, and the back side of the radiator is not symmetrical with the front. Large panels still radiate high frequencies backwards, but in a haphazard horizontal pattern. Cone drivers suffer from the motor structure, and tweeters often have a closed back. A rear-mounted auxiliary tweeter helps, but it is always out of phase.

Cosmik seems to stick with the negative antiphase idea. That is a totally wrong idea when considering reflections in a room! Rear-side radiation starts with reversed phase, but because of the path-length difference between the direct and reflected sound, there will always be a phase mismatch at the listener's ear/microphone - even with omnipoles. With dipoles, the nulls are just at different frequencies.

On the previous page Ray showed measurements of monopole vs. dipole, and the dipole had fewer reflections! That is just one case, but I have noticed the same effect in many of my own measurements.
 

Duke

Major Contributor
Audio Company
Forum Donor
Joined
Apr 22, 2016
Messages
1,558
Likes
3,865
Location
Princeton, Texas
Cosmik seems to stick with the negative antiphase idea. That is a totally wrong idea when considering reflections in a room! Rear-side radiation starts with reversed phase, but because of the path-length difference between the direct and reflected sound, there will always be a phase mismatch at the listener's ear/microphone - even with omnipoles. With dipoles, the nulls are just at different frequencies.

Excellent point!

Some of the wording on Cosmik's profile page seems to me to be saying that he approaches audio design by giving such complete authority to certain assumptions that one need only follow them, as there is no need for observation of results and possible course-correction. This is a very interesting audio philosophy, presumably derived from an equally interesting general life philosophy, and the only theoretical weakness that comes to my mind is the implied assumption that his assumptions are infallible.

(Cosmik, if I have mischaracterized the essay on your profile page, please correct me.)

On a different subject, the speaker in your profile picture... I don't recognize it, but Jorma Salmi comes to mind. Is it one of his designs, or inspired by his work?
 
Last edited:

Cosmik

Major Contributor
Joined
Apr 24, 2016
Messages
3,075
Likes
2,180
Location
UK
We must remember that many "dipole" speakers are dipoles only below 1-2 kHz. This is because the radiator's width makes it directional, and the back side of the radiator is not symmetrical with the front. Large panels still radiate high frequencies backwards, but in a haphazard horizontal pattern. Cone drivers suffer from the motor structure, and tweeters often have a closed back. A rear-mounted auxiliary tweeter helps, but it is always out of phase.

Cosmik seems to stick with the negative antiphase idea. That is a totally wrong idea when considering reflections in a room! Rear-side radiation starts with reversed phase, but because of the path-length difference between the direct and reflected sound, there will always be a phase mismatch at the listener's ear/microphone - even with omnipoles. With dipoles, the nulls are just at different frequencies.

On the previous page Ray showed measurements of monopole vs. dipole, and the dipole had fewer reflections! That is just one case, but I have noticed the same effect in many of my own measurements.
I was deliberately not mentioning large panels versus small panels. For sure, the large panels introduce a new factor: 'beaming' at higher frequencies, or in general producing a different shape of wavefront. This may show up in the measurements (and the sound) of course as reduced influence of reflections as frequency goes up, possibly as 'clarity' as though sitting closer to a conventional pair of speakers. Just as a phased array of monopoles can 'beam' a signal to the listener's ears if they are sitting in the right place. And yes, a dipole in general has its own 'phased array' dimension that alters its directivity in a radical way.

So if we want to eliminate the sound of the room, there are several ways to do it quite deliberately, and also accidentally as a side effect of a certain type of transducer.

But do we want to eliminate the sound of the room? I don't: I want the room to add some 'non-static' acoustics to the recording, and also to be a natural environment to be in, with my own voice blending naturally with the recording and those of any companions if we are talking. A reasonably broad, non-radical directivity characteristic from the speaker will be conducive to this.

The word "decorrelated" is bandied about as though it is something real, a binary distinction compared to "correlated". Wire a speaker backwards and the nausea-inducing effect is only an issue because the sounds are "correlated". Add a bit of delay or some multiple reflections and they are no longer "correlated". And in Ray's image earlier, the frequency response 'fur' is ignored (for sure it's an unsmoothed plot in this case, but the implication then is that we should squint and see past the 'fur').

If we take an arrangement with two monopole speakers back to back in a room, and play a sine wave through them, and we measure the level of what reaches a microphone at the listener's position, we will get a certain measured magnitude. If we flick a switch and reverse the polarity of what comes out of the back, the reflections at the rear will have their phase reversed, bounce back towards us and the mic, and reverberate around the room mixing with the 'positive' direct sound and reflections, and we will measure (and hear) a changed level - which could be lower or higher.

The reflections are not "decorrelated" at all. They are the 'fur' in the in-room measurement, and they are real, and they change dynamically as the listener moves. They should tell the human listener something about the room dimensions and location of the source, but by reversing the rear radiation this mechanism is to some extent (that can no doubt be written off as negligible by open baffle enthusiasts) confounded.

For sure the fur is too complex to understand visually on the laptop screen, hence the (misguided) idea that human hearing can't possibly be doing anything with it, but this is just based on an assumed single mechanism for human hearing. My 'philosophy' (that I see has just been called into question!) is that the path to audio nirvana is not to second guess how hearing works. The behaviour of the audio system should be as 'straight' as possible. I am not in the business of testing to see how many arbitrary changes to the signal can be piled onto each other without the average person consciously noticing.

(The above experiment calls the bluff of the listening test proponents who say that difference is a more objective use of listening tests than preference: the change in a single tone *will* be heard clearly; I would guess that the change in noise would also be heard clearly, certainly at the moment of changeover. In certain types of music it would therefore be heard clearly. Slowly fading in and out with a gap in between would be a way to disguise the difference, but is that then a proper listening test?)
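The back-to-back polarity-flip experiment described above can be sketched numerically. A simplified free-field model (my own distances; a single wall reflection of the rear radiation, ignoring all other room paths):

```python
import numpy as np

C = 343.0  # speed of sound, m/s

def mic_level_db(freq_hz, direct_m, reflected_m, rear_inverted):
    """Level at the mic of the front driver's direct sound plus one
    wall reflection of the rear driver, with 1/r amplitude decay."""
    k = 2 * np.pi * freq_hz / C
    direct = np.exp(-1j * k * direct_m) / direct_m               # front driver, direct path
    sign = -1.0 if rear_inverted else 1.0
    refl = sign * np.exp(-1j * k * reflected_m) / reflected_m    # rear driver, via the wall
    return 20 * np.log10(abs(direct + refl))

f = 500.0
print("rear in phase:  ", mic_level_db(f, 3.0, 5.0, rear_inverted=False))
print("rear inverted:  ", mic_level_db(f, 3.0, 5.0, rear_inverted=True))
```

Flipping the rear polarity changes the summed level at the mic, higher or lower depending on frequency and path lengths - which is the point of the experiment.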
 

Juhazi

Major Contributor
Joined
Sep 15, 2018
Messages
1,723
Likes
2,908
Location
Finland

svart-hvitt

Major Contributor
Joined
Aug 31, 2017
Messages
2,375
Likes
1,253
Just one question: Is soundstage a psychoacoustic phenomenon or a physical one that can be formulated using mathematics?
 

Cosmik

Major Contributor
Joined
Apr 24, 2016
Messages
3,075
Likes
2,180
Location
UK
Just one question: Is soundstage a psychoacoustic phenomenon or a physical one that can be formulated using mathematics?
If the system is 'straight', I think the answer is both. The stereo image can be modelled mathematically, but the model needs to assume something about what the human ears/brain is doing with the information.

If the system is not straight, the soundstage is anybody's guess. 'Antiphase' is the stuff that gives rise to an image that comes 'from nowhere and/or everywhere' (and can be used judiciously in recordings deliberately to do that e.g. QSound). Sprinkle a bit of arbitrary antiphase into your stereo system, and it may produce amazing effects.
 

svart-hvitt

Major Contributor
Joined
Aug 31, 2017
Messages
2,375
Likes
1,253
If the system is 'straight', I think the answer is both. The stereo image can be modelled mathematically, but the model needs to assume something about what the human ears/brain is doing with the information.

If the system is not straight, the soundstage is anybody's guess. 'Antiphase' is the stuff that gives rise to an image that comes 'from nowhere and everywhere' (and can be used judiciously in recordings deliberately to do that e.g. QSound). Sprinkle a bit of antiphase into your stereo system, and it may produce amazing effects.

Ok. So what’s the difference between stereo as a field of scientific inquiry and digital room compensation as a field of scientific inquiry? Why not let psychoacoustic tests, blind tests, decide if drc works or not?

I don’t expect you to answer? ;)

It was just my two cents worth. The human mind can be led to believe things and I wonder why not use drc to enhance psychoacoustic effects too?
 

Juhazi

Major Contributor
Joined
Sep 15, 2018
Messages
1,723
Likes
2,908
Location
Finland
Imaging, soundstage and envelopment are perceptual, psychoacoustic phenomena.

More about these is explained by David Griesinger:

http://www.davidgriesinger.com/Acoustics_Today/AES_preprint_2012_2.pdf
"ABSTRACT
Standard models for both timbre detection and sound localization do not account for our acuity of localization in reverberant environments or when there are several simultaneous sound sources. They also do not account for our near instant ability to determine whether a sound is near or far. This paper presents data on how both semantic content and localization information is encoded in the harmonics of complex tones, and the method by which the brain separates this data from multiple sources and from noise and reverberation. Much of the information in these harmonics is lost when a sound field is recorded and reproduced, leading to a sound image which may be plausible, but is not remotely as clear as the original sound field. "



http://www.davidgriesinger.com/spac4.pdf
"Abstract:
Conventional wisdom holds that spaciousness and envelopment are caused by lateral sound energy in rooms, and that it is the early arriving lateral energy which is most responsible. However small rooms often have many early lateral reflections, but by common definition small rooms are not spacious. This paper (briefly) describes a series of experiments into the perception of spaciousness and envelopment. The perceptions are found to be related most commonly to the lateral (diffuse) energy in halls at least 50ms after the ends of notes (the background reverberation) and less often but importantly to the properties of the sound field as the notes are held. Experiments with orchestral music at high reverberant level indicate that it is the very late >300ms reflected energy which is most responsible for spaciousness. A measure for spaciousness - Lateral Early Decay Time (LEDT) is suggested, and results of this measure in several halls are given. A good match between the new measure and subjective impressions of the halls is found. "
 
Last edited:

Cosmik

Major Contributor
Joined
Apr 24, 2016
Messages
3,075
Likes
2,180
Location
UK
Ok. So what’s the difference between stereo as a field of scientific inquiry and digital room compensation as a field of scientific inquiry? Why not let psychoacoustic tests, blind tests, decide if drc works or not?

I don’t expect you to answer? ;)

It was just my two cents worth. The human mind can be led to believe things and I wonder why not use drc to enhance psychoacoustic effects too?
Why would you not expect me to answer? :)

I think there's a difference between having a theory and just 'try-it-and-see'. It can also be viewed in terms of 'dimension explosion'. Every arbitrary dimension that you allow into your system is something that you would have to include in your experiments in order to eliminate it from your enquiries. If you find that dipole panel speakers produce a pleasing effect, is it the fact they are (a) 3'x2' and beaming at high frequencies, (b) dipoles and spraying antiphase out of the back, (c) 7'6" from the front wall, (d) toed in at 37 degrees, (e) crossed over to a subwoofer with a 24dB/octave filter response with phase errors, (f) only being used with highly 'ambient' recordings, etc. etc. Without a huge number of tests you can never pronounce "Dipoles give great soundstage" because you can't know whether that's true, mainly because you don't know *why* it is true.

I am not claiming that my system gives great soundstage; my 'figure of merit' is how 'straight' it is. It will then give whatever soundstage it gives.

I am not primarily a try-it-and-see person. I have 'theories' and to the best of my ability, I am going to put them into practice. And then I find that they produce such a stable, benign, repeatable, result, that I feel I must be onto something. Other people seem to have great difficulty making their systems pleasing to their ears - and indeed I hear the results and share their pain sometimes.

For example, with my software I may be the only person who can demonstrate that if you take a (sealed, three-way, box) multi-way speaker that has been set up more-or-less properly, and switch between radically different crossover shapes, slopes and frequencies while listening, you hear virtually no difference (but definitely not if you listen to the individual drivers) - a demonstration I have done at a couple of shows and which seemed surprising to people. This goes counter to the received orthodoxy that tells us that it is all super-critical.

Why is it super-critical to them? Because so many arbitrary inconsistencies are built into their systems that they cannot change crossover slopes and frequencies in isolation; they are also inevitably changing phase shifts, time alignment, proportion of antiphase versus in-phase, and so on. And they are never hearing 'straight' audio with any setup.
 

Duke

Major Contributor
Audio Company
Forum Donor
Joined
Apr 22, 2016
Messages
1,558
Likes
3,865
Location
Princeton, Texas
For example, with my software I may be the only person who can demonstrate that if you take a (sealed, three-way, box) multi-way speaker that has been set up more-or-less properly, and switch between radically different crossover shapes, slopes and frequencies while listening, you hear virtually no difference (but definitely not if you listen to the individual drivers) - a demonstration I have done at a couple of shows and which seemed surprising to people. This goes counter to the received orthodoxy that tells us that it is all super-critical.

I totally believe you! The things that matter most to the ears are much bigger than crossover details (assuming the different crossovers are well executed).

The "received orthodoxy" is imo largely driven by marketing departments, whose jobs include trumpeting high-tech solutions to problems that are perceptually minor. (A fairly well-known audio industry professional who is also a professor in one of the best math departments in the US once remarked to me, "Speaker designers are getting better and better at solving the wrong problems.")

But do we want to eliminate the sound of the room? I don't: I want the room to add some 'non-static' acoustics to the recording... A reasonably broad, non-radical directivity characteristic from the speaker will be conducive to this.

Suppose we have found (whether by logical extrapolation of justified assumptions, or by listening tests) that a uniform pattern about 150 degrees wide seems to be optimum. (The exact number doesn't matter, this is just a thought experiment.)

And suppose it occurs to us that reducing the amount of comb filtering from early reflections would be desirable. We could make the pattern narrower but then we'd correspondingly lose some of the sound of the room, which we previously found to be beneficial. Hmmm.

Here's a somewhat unorthodox alternative: Divide that 150 degree uniform pattern into a 75-degree forward-facing pattern and a 75-degree rearward-facing pattern. In other words, deliberately manipulate the default, orthodox radiation pattern to reduce undesirable comb filtering from early reflections while retaining the same (desirable) amount of energy going out into the room.

So one pertinent question is, would decreasing the amount of early-onset room sound and proportionally increasing the amount of late-onset room sound be audibly beneficial?

If so, a follow-up question is, would it be worth the trouble?

What do your assumptions say? (<- That's not intended to be a loaded question; I'm not trying to trap you, but I am interested in understanding your thought process so I'm using a hypothetical that is somewhat familiar to me. There's a certain, I dunno, integrity shall we say, about following one's principles that catches my attention, and I'd like to see what that looks like in this context.)
 
Last edited:

andreasmaaan

Master Contributor
Forum Donor
Joined
Jun 19, 2018
Messages
6,652
Likes
9,403
Regarding the concerns raised by @Cosmik and others about the rearward wave from a dipole being in opposite phase to the front wave, I can't help noting that the path length of the first front-wall reflection from a pair of stereo speakers will inevitably be different from the path length of the first front-wall reflection of an acoustic source in the location of the phantom image.

To illustrate, the actual path length of the first front wall reflection from the right speaker is shown in red, while the path length of an imagined source in the location of a phantom image is shown in pink (speakers are black circles, phantom "source" is a grey circle, and listener is a blue circle):

1554838029211.png


In other words, there is a mismatch between the path length of the first front wall reflection from the speaker and the path length of the first front wall reflection that a real object in the location of the phantom centre would have created.

As a result, the relative phase of the direct sound and the reflected sound will be different from a pair of stereo speakers than it would have been from a sound source in the location of any phantom image.

How different it is will depend on:
  • the location of the phantom image
  • the distances between each speaker and the front wall, the other speaker, and the listener
and it will moreover vary with frequency.
Nevertheless, we interpret sound objects as clearly coming from the phantom centre (and other phantom locations between the speakers).

My point is that having a speaker that produces in-phase output both forward and rearward will never (in stereo) result in the phase of the front wall reflection matching the phase information of a hypothetical sound source in the location of the phantom image - at all or even at most frequencies.

(Of course, the same goes for all other reflections too, but we are discussing dipoles here so I'm limiting this comment to front wall reflections in this case.)

In fact, it is conceivable that dipoles in a certain position relative to each other and the listener, and at certain distances from the walls, will produce a front wall reflection that, at certain frequencies, better matches the front wall reflection that a phantom image would have produced. Indeed, at certain frequencies it's almost certain that this will be the case.

Given this, perhaps it's unsurprising that dipoles don't sound as weird as one might imagine they should.
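The path-length mismatch described above can be computed with the mirror-image construction: the first front-wall reflection's path length equals the distance from the listener to the source's mirror image in the wall. A sketch (my own coordinates, purely illustrative):

```python
import math

def reflection_path_m(source, listener):
    """Length of the first front-wall reflection path, with the
    front wall along y = 0, via the mirror-image source."""
    sx, sy = source
    image = (sx, -sy)  # mirror the source in the front wall
    return math.dist(image, listener)

listener = (0.0, 3.5)
right_speaker = (1.0, 1.0)
phantom_center = (0.0, 1.0)  # imagined source at the phantom image

print(reflection_path_m(right_speaker, listener))    # the real reflection
print(reflection_path_m(phantom_center, listener))   # what a real centre source would give
```

The two lengths differ, so the relative phase of direct and reflected sound differs too - which is the mismatch argued above.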
 
Last edited:

Duke

Major Contributor
Audio Company
Forum Donor
Joined
Apr 22, 2016
Messages
1,558
Likes
3,865
Location
Princeton, Texas
My point is that having a speaker that produces in-phase output both forward and rearward will never (in stereo) result in the phase of the front wall reflection matching the phase information of a hypothetical sound source in the location of the phantom image...

Given this, perhaps it's unsurprising that dipoles don't sound as weird as one might imagine they should.

My understanding is that the ear looks primarily at the spectrum of an incoming sound to determine whether it is a reflection or a new sound. Far as I know the ear does not look at the absolute phase of the reflection to determine whether or not it is a reflection, but the relative phase within the reflection itself does affect clarity, which plays a secondary role in soundstaging (secondary to localization and perception of spaciousness). Anyway this conceptual model allows dipoles to have good clarity and good imaging & soundstaging with proper setup, for which there is abundant anecdotal evidence.
 

andreasmaaan

Master Contributor
Forum Donor
Joined
Jun 19, 2018
Messages
6,652
Likes
9,403
...but the relative phase within the reflection itself does affect clarity, which plays a secondary role in soundstaging (secondary to localization and perception of spaciousness).

Perhaps you could elaborate on this? It's not an idea that I've seen investigated before.
 

Duke

Major Contributor
Audio Company
Forum Donor
Joined
Apr 22, 2016
Messages
1,558
Likes
3,865
Location
Princeton, Texas
Perhaps you could elaborate on this? It's not an idea that I've seen investigated before.

In an overly reverberant environment, like sitting in the back of a big college lecture hall, the phase of the reverberant energy - most notably the upper harmonics - is scrambled by the overabundance of reflections. This contributes greatly to the loss of clarity. I do not know to what extent this phenomenon occurs in a home audio listening room, but presumably the same principle applies. That being said, I don't think it's a big issue in a home audio setting (we'll come back to this in a minute).

Clarity is necessary for good soundstaging because the ear/brain system must be able to differentiate between the direct sound and the reverberant sound. The direct sound is where we get our localization cues from, and the reverberant sound is where we get spaciousness (envelopment, immersion) from.

My understanding is that, in concert halls at least, early reflections are more detrimental to clarity than late reflections are. If there are too many early reflections, then the ear/brain system cannot do a very good job of differentiating between the direct sound and the reverberant sound, so envelopment & immersion also suffer. The earlier a reflection arrives, the more detrimental it is to clarity.

Now presumably the late reflections are the ones with the phase of their harmonics scrambled the worst by interaction with many many other reflections, yet it is the early reflections which are the most detrimental to clarity in a concert hall, and I expect this to hold true at home as well. So in a home audio setting, where the reflection path lengths are much shorter, I would be more concerned about the arrival times of the reflections than about their phase.

Incidentally the ear/brain system is able to extract the ambience and timbral information for each note of each instrument from the reverberant energy bouncing around the listening room because it recognizes the harmonic patterns of each sound even amidst all the background clutter and stitches together all the little pieces that come from the same sound. So we want to strike a balance where we have enough reverberant energy to transport us into the acoustic space where the recording was made (or its engineered counterpart), assuming a suitable recording, but not so much (particularly of the early reflections) that clarity is degraded.

Imo.
 
Last edited:

kevinh

Senior Member
Joined
Apr 1, 2019
Messages
338
Likes
275
The effect for dipoles, given sufficient distance from the rear wall, is the Haas effect, from Helmut Haas's PhD thesis in 1949.

https://en.wikipedia.org/wiki/Precedence_effect

from that link:

Ambience extraction
The precedence effect can be employed to increase the perception of ambience during the playback of stereo recordings.[11] If two speakers are placed to the left and right of the listener (in addition to the main speakers), and fed with the program material delayed by 10 to 20 milliseconds, the random-phase ambience components of the sound will become sufficiently decorrelated that they cannot be localized. This effectively extracts the recording's existing ambience, while leaving its foreground "direct" sounds still appearing to come from the front.[12][13]


So a faceted panel like the Sound Lab electrostatics Duke refers to can do a great job of letting the recording's ambience come through: diffusion that enhances the 'random-phase ambience components' of the original recording.

A large panel speaker becomes a controlled-directivity line source. The effect is very nice; the issue is having a room large enough to accommodate the panels.
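The delayed-feed idea in the quoted Wikipedia passage can be sketched in a few lines. This is a toy illustration only, assuming a 48 kHz sample rate and ignoring the level attenuation a real ambience-channel implementation would also apply:

```python
def delay_samples(ms: float, sample_rate: int = 48000) -> int:
    """Convert a delay in milliseconds to a whole number of samples."""
    return round(ms * sample_rate / 1000)

def delayed_feed(signal: list[float], delay_ms: float,
                 sample_rate: int = 48000) -> list[float]:
    """Feed for an ambience speaker: the program material delayed by
    delay_ms, padded with leading silence so it stays time-aligned
    with the undelayed main-speaker feed."""
    n = delay_samples(delay_ms, sample_rate)
    return [0.0] * n + list(signal)

main = [1.0, 0.5, -0.5]          # toy program material for the main speakers
side = delayed_feed(main, 15.0)  # 15 ms delay = 720 samples of lead-in silence
```

With the delay chosen in the 10-20 ms window, the precedence effect keeps the foreground imaging anchored to the front speakers while the decorrelated late copy is heard as ambience rather than as a discrete echo.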
 

Juhazi

Major Contributor
Joined
Sep 15, 2018
Messages
1,723
Likes
2,908
Location
Finland
This thread covers many issues that have also been discussed lately in threads about imaging and room correction, mostly by the same people. "Dipole sound" at its best and multichannel/multispeaker effects are much alike: both add reflections in the late, multiply-reflected "ambience" range, 10-20 ms after the direct signal.

andreasmaaan showed dipoles in post #134, and I must add that it would be even better if the speakers were angled so that the backside energy is directed toward the front corners of a wide room (speakers on the long wall). Dipoles are very much room-dependent: too low or too high an RT (reverberation time) doesn't work. Multichannel effects like Dolby Pro Logic IIx Music are adjustable, to get a better match in different rooms.
 

Attachments

  • wide room reflections dip vs mono.png (47.8 KB)
Last edited:

andreasmaaan

Master Contributor
Forum Donor
Joined
Jun 19, 2018
Messages
6,652
Likes
9,403
In an overly reverberant environment, like sitting in the back of a big college lecture hall, the phase of the reverberant energy - most notably the upper harmonics - is scrambled by the overabundance of reflections. This contributes greatly to the loss of clarity. I do not know to what extent this phenomenon occurs in a home audio listening room, but presumably the same principle applies. That being said, I don't think it's a big issue in a home audio setting (we'll come back to this in a minute).

Clarity is necessary for good soundstaging because the ear/brain system must be able to differentiate between the direct sound and the reverberant sound. The direct sound is where we get our localization cues from, and the reverberant sound is where we get spaciousness (envelopment, immersion) from.


OK, I see what you mean. I'd actually misunderstood: I thought you meant there was a problem if the phase within a single reflection was scrambled, which is not something that would normally happen, as phase is maintained when sound reflects off a hard surface. That clarifies it.

OTOH, contrary (I think?) to what you're suggesting, research suggests that strong early reflections actually improve speech intelligibility (clarity). Have a look here, in particular at Section 3.3:

"It has long been recognized that early reflections improve speech intelligibility, so long as they arrive within the “integration interval” for speech, about 30 ms [45]. More recent investigations found that intelligibility improves progressively as the delay of a single reflection is reduced, although the subjective effect is less than would be predicted by a perfect energy summation of direct and reflected sounds."
 