
Paul McGowan - getting speaker close to wall changes tonal balance?

pozz

Слава Україні
Forum Donor
Editor
Joined
May 21, 2019
Messages
4,036
Likes
6,827
Also, in theory, shouldn't the fusion time for the precedence effect be affected by the severity of the distortion of the reflection?
Why would the fusion time (interval?) change? Not following the reasoning with regard to spectral differences of reflections.
 

audio2design

Major Contributor
Joined
Nov 29, 2020
Messages
1,769
Likes
1,832
We, as humans, don't hear like a microphone at all once the path from the sound source (loudspeaker) to the ear corresponds to more than 5 ms of travel time (about 1.7 metres). The brain locks onto the first-arriving sound, often the direct sound from the loudspeaker, and attenuates all other sounds by about 10 dB.
The brain starts to select sounds, but a microphone picks up all of the sound.

This is frequency dependent, which makes both this statement and the Haas effect right and wrong.
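As a side note, the quoted 5 ms / 1.7 m figures are two expressions of the same thing at the speed of sound. A quick sketch of the arithmetic (assuming c ≈ 343 m/s at room temperature; the function is mine, purely for illustration):

```python
# Convert an extra path length (reflection path minus direct path)
# into an arrival delay, assuming sound travels at ~343 m/s in air.
SPEED_OF_SOUND = 343.0  # m/s

def delay_ms(extra_path_m: float) -> float:
    """Arrival delay in milliseconds for a given extra path length in metres."""
    return extra_path_m / SPEED_OF_SOUND * 1000.0

print(delay_ms(1.7))  # ~4.96 ms, matching the quoted "5 ms (1.7 metres)"
```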
 

youngho

Senior Member
Joined
Apr 21, 2019
Messages
487
Likes
802
Why would the fusion time (interval?) change? Not following the reasoning with regard to spectral differences of reflections.
This is speculation on my part. In Toole's book, he quotes Haas: "The human auditory system combines the information contained in a set of reduplicated sound sequences and hears them as though they were a single entity, provided (a) that these sequences are reasonably similar in their spectral and temporal patterns and (b) that most of them arrive within a time interval of about 40 ms following the arrival of the first member of the set."

Also: "Within the precedence effect fusion interval, there is no masking—all of the reflected (delayed) sounds are audible, making their contributions to timbre and loudness, but the early reflections simply are not heard as spatially separate events. They are perceived as coming from the direction of the first sound; this, and only this, is the essence of the “fusion.” The widely held belief that there is a “Haas fusion zone,” approximately the first 20 ms after the direct sound, within which everything gets innocently combined, is simply untrue."

He then goes on to show a series of threshold curves, with the level of a simulated reflection on the Y-axis and its relative delay on the X-axis, suggesting that lower-level reflections arriving later may not be detectable, or may contribute to image shift or spreading, but are still not heard as separate events—hence fusion is still occurring. Once delayed reflections exceeded a certain threshold level, the precedence effect broke down, and these individual reflections were heard as separate events or "images."
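To make the two axes of those curves concrete, here is a minimal sketch of how a single simulated reflection can be built: a delayed, level-adjusted copy of the direct signal mixed back in. This only illustrates the stimulus geometry, not Toole's actual apparatus; the sample rate, test signal, and names are my assumptions:

```python
import numpy as np

FS = 48_000  # sample rate in Hz (an assumption, not from the experiment)

def add_reflection(direct: np.ndarray, delay_ms: float, level_db: float) -> np.ndarray:
    """Mix a delayed, attenuated copy of `direct` back into it,
    mimicking a single simulated reflection."""
    delay_samples = int(round(FS * delay_ms / 1000.0))
    gain = 10.0 ** (level_db / 20.0)  # relative level in dB -> linear gain
    out = np.concatenate([direct, np.zeros(delay_samples)])
    out[delay_samples:] += gain * direct
    return out

# One point on such a curve: a reflection 20 ms after the direct sound,
# 10 dB below it.
rng = np.random.default_rng(0)
direct = rng.standard_normal(FS // 2)  # 0.5 s of noise as a stand-in signal
stimulus = add_reflection(direct, delay_ms=20.0, level_db=-10.0)
```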

Here are two such graphs:

[Attachments: two threshold curves from Toole's book, reflection level (dB) vs. delay (ms)]

Towards the end of that chapter, he references an experiment that he and Olive did in which they low-passed the simulated reflection below 500 Hz, which changed the level at which it was detected:

[Attachment: threshold curves comparing the low-passed and broadband reflections]

He writes "The amplitudes are rather similar, although the low-pass filtered version is a little higher, which seems to make sense considering
that slightly over 5 octaves of the audible spectrum have been removed from the signal. Recall that these signals have been adjusted to produce the same subjective effect—a threshold detection—and it would be logical for a reduced bandwidth signal to be higher in level...The message is that we need to know the spectrum level of reflections to be able to gauge their relative audible effects."
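A crude way to see why the band-limited version ends up higher in level: low-passing at 500 Hz strips most of a broadband signal's energy, so it must be turned up just to restore the same overall level, before any perceptual effects enter. A sketch using white noise (my stand-in signal, not necessarily what Toole and Olive used):

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 48_000  # sample rate in Hz (assumed)
rng = np.random.default_rng(0)
broadband = rng.standard_normal(FS)  # 1 s of white noise, flat to FS/2

# Low-pass the "reflection" below 500 Hz, as in the experiment described.
sos = butter(4, 500.0, btype="low", fs=FS, output="sos")
lowpassed = sosfilt(sos, broadband)

def rms_db(x: np.ndarray) -> float:
    """RMS level in dB (arbitrary reference)."""
    return 20.0 * np.log10(np.sqrt(np.mean(x ** 2)))

# White noise low-passed at 500 Hz keeps roughly 500/24000 of its power,
# i.e. the filtered copy sits about 10*log10(48) ~ 17 dB lower in RMS.
print(rms_db(broadband) - rms_db(lowpassed))
```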

I thought this might suggest that the difference in detection threshold, in terms of relative level, for a distorted reflection might be reflected (sorry, no pun intended) in the precedence-effect fusion interval—but I was asking it as a question.
 

pozz

Слава Україні
Forum Donor
Editor
Joined
May 21, 2019
Messages
4,036
Likes
6,827
youngho said: This is speculation on my part. In Toole's book, he quotes Haas... (quoted in full above)
So I'm familiar with the background and was wondering what you meant by your question above. I don't think we can interpret what we hear happening in line arrays vs. other speakers in terms of the fusion interval (or really any psychoacoustic feature) right now. A lot of the mechanics you've mentioned apply, but the more pressing question is whether we have good measurements.

The main thing I'm curious about is lobing, beaming, and off-axis performance. I've heard speakers using arrays plenty of times, but nothing as specific as a floor-to-ceiling array using identical drivers.

Good flush-mounted speakers on angled walls certainly have their merits, and aren't altogether different from good speakers in a room (I have a paper somewhere with full-range anechoic measurements of a flush-mounted speaker out to 90 degrees—the end result is largely similar to speakers with good directivity). I'm wondering how much of a difference this kind of line array really represents. Likely nothing spectacular, since its solutions probably try to address the same problems as other speakers: what you hear at the listening position for direct vs. reflected sound.
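For a feel for what lobing and beaming do in an idealized floor-to-ceiling array, here is a minimal far-field sketch of equally driven point sources. The driver count and spacing are invented for illustration, not taken from any particular product:

```python
import numpy as np

C = 343.0  # speed of sound in air, m/s

def line_array_response(n_drivers: int, spacing_m: float,
                        freq_hz: float, angle_deg: float) -> float:
    """Far-field magnitude response (0..1, relative to on-axis) of n equally
    spaced, equally driven ideal point sources along a vertical line."""
    k = 2.0 * np.pi * freq_hz / C                       # wavenumber
    phase = k * spacing_m * np.sin(np.radians(angle_deg))
    positions = np.arange(n_drivers)
    return abs(np.sum(np.exp(1j * positions * phase))) / n_drivers

# At 8 kHz a 10 cm driver pitch is more than two wavelengths, so the
# vertical response breaks into deep nulls and grating lobes off axis
# instead of falling away smoothly.
for angle in (0, 15, 30, 45):
    r = line_array_response(n_drivers=24, spacing_m=0.10,
                            freq_hz=8000.0, angle_deg=angle)
    print(f"{angle:2d} deg: {20 * np.log10(max(r, 1e-9)):6.1f} dB")
```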
 

youngho

Senior Member
Joined
Apr 21, 2019
Messages
487
Likes
802
pozz said: So I'm familiar with the background and was wondering what you meant by your question above... (quoted in full above)
Oh, sorry, I was only responding to the regular speaker part of the post. My only array experience is the Epique CBT24.
 