
Importance of impulse response

RayDunzl

Grand Contributor
Central Scrutinizer
Joined
Mar 9, 2016
Messages
13,250
Likes
17,200
Location
Riverview FL
Pardon the interruption, but...

Long ago it bothered me that software could take a sine sweep (no edges) and create a measurement (impulse or step) that does have "edges".

So I set out to find the difference between "real" and "calculated" values.

Real:

Single Full Scale Bit sent through speakers, playback recorded in Audacity via UMIK-1:

[image: recorded impulse]


Ok, that looks good. A recording of an "impulse".

Now let's see what a sine sweep reveals after the mathematicians have had a whack at it:

Impulse response calculated from a ten-second 10-24kHz sweep tone as played through the speakers and analyzed in REW:

[image: impulse response calculated from the sweep]



Test repeated for step response:

In-room recording of a step - using a 10Hz square wave played through the speakers as the excitation:

[image: recorded step response]


And the calculation of the step response from a sine wave sweep played through the speakers:

[image: step response calculated from the sweep]


At that point I decided that the tone sweep was sufficient to characterize what the speakers were doing in the room.

No need to examine real impulses or steps - the math could accurately (somehow) pull that data out of a tone sweep.
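For anyone else bothered by the same thing: what REW does belongs to a well-known family of techniques, log-sweep deconvolution (Farina's method). Below is a minimal sketch of the idea; the sample rate, sweep range and variable names are my illustrative assumptions, not REW's internals.

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 48000                      # sample rate, Hz (illustrative)
T = 10.0                        # sweep length, s (matches the ten-second sweep above)
f1, f2 = 10.0, 24000.0          # sweep range, Hz (assumed)

t = np.arange(int(T * fs)) / fs
R = np.log(f2 / f1)

# Logarithmic sine sweep: the edge-free excitation
sweep = np.sin(2 * np.pi * f1 * T / R * (np.exp(t * R / T) - 1))

# Inverse filter: the time-reversed sweep, with an amplitude ramp that
# undoes the 1/f energy weighting a log sweep has
inv = sweep[::-1] * np.exp(-t * R / T)

# Stand-in "recording": the sweep itself. With a real measurement, use
# the microphone capture of the sweep played through the speakers.
recorded = sweep

# Convolving the recording with the inverse filter collapses the sweep
# into an impulse response - the "edges" appear even though the
# excitation never had any.
ir = fftconvolve(recorded, inv)
ir /= np.abs(ir).max()
step = np.cumsum(ir)            # step response = running integral of the IR
print("impulse peak at sample", np.argmax(np.abs(ir)))
```

The step response then comes for free as the running integral (cumulative sum) of the impulse response, which is why one sweep yields both of the "calculated" plots above.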
 

fluid

Addicted to Fun and Learning
Joined
Apr 19, 2021
Messages
694
Likes
1,198
What I am getting to is that, rather than weighing down this thread with the basics, just for me, do you know a good resource on understanding the subject for someone starting at close to ground zero? My goal is to understand why, whenever someone proposes that it would be helpful for ASR (or someone) to measure the response of speakers and headphones that subjectively are "crisp", "fast" or "resolving", as distinguished from those that are "soft", "slow" or "congested", the ensuing discussions often seem to go 'round in circles. What are the actual phenomena that account for these attributes, and how can they be measured or explained, if at all?
These are murky waters where there are many thought experiments trotted out to explain real audible differences that people genuinely hear.

Here is a good study with a Genelec connection aiming to test the audibility of group delay. Basically, impulses representing different speakers were listened to forwards and backwards to see if they sounded different. This is a pretty high bar.

https://research.aalto.fi/en/publications/audibility-of-loudspeaker-group-delay-characteristics

This is the TL;DR conclusion: keep group delay below 1ms down to 300Hz (it can increase significantly below that point); let it get between 1ms and 2ms above 300Hz and it can sometimes become audible; let it go wild between 300Hz and 1kHz and it is likely to be audible. They do point out that audible does not mean "bad".

[image: group delay audibility thresholds]


Excess group delay is another figure to look at; kimmosto suggests keeping it under 2ms above 100Hz, based on the conclusions of older studies.
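For anyone who wants to check their own speaker against these numbers: group delay falls straight out of the phase of a measurement's FFT. A minimal sketch, with a synthetic impulse response standing in for a REW export:

```python
import numpy as np

fs = 48000
ir = np.zeros(8192)
ir[100] = 1.0                   # stand-in: a pure delay of 100 samples

H = np.fft.rfft(ir)
f = np.fft.rfftfreq(len(ir), 1 / fs)
phase = np.unwrap(np.angle(H))

# Group delay = negative derivative of phase w.r.t. angular frequency
gd = -np.diff(phase) / np.diff(2 * np.pi * f)   # seconds

# For this stand-in (a pure delay, flat magnitude) the whole
# 100 / fs ~ 2.08 ms is excess group delay, at every frequency.
print("group delay near 300 Hz: %.2f ms" % (gd[np.searchsorted(f, 300)] * 1e3))
```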

There is some other information from him that you might find helpful; "hammer hit" is a term he uses:

https://www.diyaudio.com/community/threads/vituixcad.307910/post-7029770

https://www.diyaudio.com/community/threads/vituixcad.307910/post-7029358

https://www.diyaudio.com/community/threads/vituixcad.307910/post-6931717

Another thing that is often overlooked in this discussion of timing audibility is the effect that the room has on the perception of low frequencies. It's no good to have an anechoically perfect phase response only to have the room mess that up.

In my own system I use a correction to make the response at the listening position resemble the minimum phase response, i.e. to have the phase follow the frequency response. This is really only in effect from 1kHz on down, and it absolutely sounds different to a filter that uses a minimum phase version of the same thing. Sounding different doesn't always mean better, but I generally prefer the correction in place.

I haven't come across much in the way of research that has ever studied this aspect.
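As an aside, the minimum-phase target mentioned above is computable from the measurement itself: the magnitude response pins down a unique minimum-phase counterpart via the real cepstrum. A sketch of that idea (the helper name is mine, not from REW or any particular package):

```python
import numpy as np

def minimum_phase_ir(ir, nfft=None):
    """Minimum-phase counterpart of an impulse response, via the real
    cepstrum: same magnitude response, all excess phase discarded.
    (Illustrative helper, not from any specific tool.)"""
    n = nfft or len(ir)
    log_mag = np.log(np.abs(np.fft.fft(ir, n)) + 1e-12)   # avoid log(0)
    cep = np.fft.ifft(log_mag).real                        # real cepstrum
    # Fold the cepstrum: keep quefrency 0, double the positive side,
    # zero the negative side. This enforces minimum phase.
    w = np.zeros(n)
    w[0] = 1.0
    w[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        w[n // 2] = 1.0
    return np.fft.ifft(np.exp(np.fft.fft(cep * w))).real
```

Subtracting the phase of the minimum-phase version from the measured phase then gives the excess phase that such a correction works on.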
 

kimmosto

Active Member
Joined
Sep 12, 2019
Messages
217
Likes
516
Genelec's study was done with headphones (HD-650). I think the result is quite valid for speaker reproduction at mid...high as well, but not so much at bass and low mid, where the whole body is better than the ears at sensing pressure hits, i.e. force impacts caused by the wavefront of wide-range transients. My experience is that a certain method to get dynamic sound reproduction is to have (or design):
* A closed system with a low (2nd) order HP slope at LF, to minimize normal group delay at LF and to keep excess GD caused by the LF radiator at 0 ms.
* Excess group delay due to the (IIR) crossover kept below ca. 2 ms @100Hz. Smaller is better, but it's possible to compensate reasonably low excess group delay with dynamic drivers having low thermal compression and/or adequate radiating area, or some other unidirectional directivity method having low compression, for example an LF horn.
* The LF radiator in the front baffle, or a half-space (flush-mounted) design, to minimize possible delay in the acoustic total.

A speaker doesn't have to be big, with large and dynamic woofers, for you to sense for example the piano hammer on left-hand notes as well. A simple 2-way with a 5.25" woofer is quite adequate - especially as a half-space design.
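To put a number on the first point: at LF a closed box behaves, to first order, like a 2nd-order analog high-pass, so its "normal" group delay can be computed directly. A sketch with assumed alignment values (fc and Qtc below are illustrative, not from the post above):

```python
import numpy as np
from scipy.signal import TransferFunction, freqresp

fc, Qtc = 50.0, 0.707            # assumed sealed-box alignment
w0 = 2 * np.pi * fc

# Closed box at LF ~ 2nd-order high-pass: H(s) = s^2 / (s^2 + (w0/Qtc)s + w0^2)
box = TransferFunction([1, 0, 0], [1, w0 / Qtc, w0 ** 2])

w = 2 * np.pi * np.logspace(1, 3, 500)          # 10 Hz .. 1 kHz
_, H = freqresp(box, w)
gd_ms = -np.gradient(np.unwrap(np.angle(H)), w) * 1e3

f = w / (2 * np.pi)
for probe in (50, 100, 300):
    print(f"group delay at {probe:>3} Hz: {gd_ms[np.searchsorted(f, probe)]:.2f} ms")
```

A vented box adds a steeper (4th-order) high-pass and correspondingly more group delay around and below tuning, which is the trade-off the closed-box recommendation avoids.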
 
Last edited:

JanesJr1

Addicted to Fun and Learning
Joined
Jun 9, 2021
Messages
505
Likes
450
Location
MA
These are murky waters where there are many thought experiments trotted out to explain real audible differences that people genuinely hear. ...
Fluid, I appreciate it so much that you took the time to put these references together for me! I hoped for a web link or two and got so much more from the ASR community.
 

Multicore

Major Contributor
Joined
Dec 6, 2021
Messages
1,789
Likes
1,964
Pardon the interruption, but... Long ago it bothered me that software could take a sine sweep (no edges) and create a measurement (impulse or step) that does have "edges". ...
Tip o th' hat to @RayDunzl once again. Nice work.

Couple of questions: 1. Do I read it right that's around 10 or 11 milliseconds on the X time axis? 2. What are the "noisy" looking fluctuations right of the impulse? Room reflections?
 

Trdat

Addicted to Fun and Learning
Forum Donor
Joined
Sep 6, 2019
Messages
968
Likes
397
Location
Yerevan "Sydney Born"
The "unfiltered" IR graph view is heavily HF-biased, so you won't glean enough from looking at it by itself. But what you're showing (very much zoomed in on time) seems perhaps somewhat weirdly mangled (?) at a glance. It might be helpful to look at some example IR plots posted in Amir's speaker reviews.

If it's a multiway system, you could look at the IR overlays view for individual drivers and see how closely they overlap each other (time reference offsets have to be correctly set), then compare those to their resulting sum. The same type of superimposed overlays view should also be checked for the phase and step response.
Struggling to understand what is a good step response and what isn't. But I appreciate your help; I will try and look at it a little more. Actually, I always appreciate your responses, hands-on approach, detailed graphs/examples and simple explanations, Ernest.

Is there a way for the graph to look like kimmosto's graph of ETC and step response in REW? Is it as simple as choosing ETC and Step and that's it?
 
Joined
Apr 14, 2021
Messages
34
Likes
33
Struggling to understand what is a good step response and what isn't. ...
Are you building your own speakers? If not, then I wouldn't worry about it. Focus on your room setup and speaker placement; that's exponentially more important.
 

Multicore

Major Contributor
Joined
Dec 6, 2021
Messages
1,789
Likes
1,964
... My goal is to understand why, whenever someone proposes that it would be helpful for ASR (or someone) to measure the response of speakers and headphones that subjectively are "crisp", "fast" or "resolving", as distinguished from those that are "soft", "slow" or "congested", the ensuing discussions often seem to go 'round in circles. ...
To be perfectly honest and I mean this in a 100% constructive educational way: It's a fool's errand. Don't try to relate these words to measurements.

I simply don't understand what they mean and I never have, despite trying. I've been spending disposable income on "superior" home audio gear since the Quad 405 power amp and Linn LP12 were talk of the town (I grew up in Glasgow so the LP12 was a matter of pride to us). To this day I still don't understand what people mean when they talk about sound-stage. I tried asking about it here and got answers that suggested about 3 distinct perceptual phenomena. I gave up.

My advice: when you encounter these words, just skip it, don't challenge or inquire. My heuristic is: these words are very unlikely to carry information useful to me. As you move on eventually you'll encounter information that is useful and you'll recognize it as such. And that recognition will update your heuristic (aka BS detector) and you'll find yourself learning about both the technical subject matter and the para-technical sociological/political/commercial games that build on the grey areas around understanding.
 
Last edited:

RayDunzl

Grand Contributor
Central Scrutinizer
Joined
Mar 9, 2016
Messages
13,250
Likes
17,200
Location
Riverview FL
Couple of questions: 1. Do I read it right that's around 10 or 11 milliseconds on the X time axis? 2. What are the "noisy" looking fluctuations right of the impulse? Room reflections?

Room reflections would be my guess.

Wide (JBL LSR 308) and narrow (MartinLogan reQuest) dispersion, impulses overlaid.

[image: overlaid impulse responses]


Black - ML
1ms - bounce off base of mic stand, or top of couch
4ms - maybe bounce off seat of couch
7ms - bounce from wall behind speakers (electrostatic dipole)
27ms - speaker to wall behind listening position to wall behind speakers to listening position - triple room length bounce

Red - JBL LSR 308
No specific analysis; presumed bounces from walls, ceiling, floor, etc.
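As a rule of thumb for labelling these arrivals: sound travels roughly 34 cm per millisecond, so each peak's delay maps directly to extra path length relative to the direct sound. A quick check of the delays listed above:

```python
# Back-of-envelope: delay after the direct sound -> extra path length
SPEED_OF_SOUND = 343.0  # m/s, approx. at 20 degrees C

for delay_ms in (1, 4, 7, 27):          # the arrival times noted above
    extra_path_m = SPEED_OF_SOUND * delay_ms / 1000.0
    print(f"{delay_ms:>3} ms -> {extra_path_m:5.2f} m extra path")
```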
 
Last edited:

fineMen

Major Contributor
Joined
Oct 31, 2021
Messages
1,504
Likes
680
Here is a good study with a Genelec connection aiming to test the audibility of group delay. ...
https://research.aalto.fi/en/publications/audibility-of-loudspeaker-group-delay-characteristics
You've done the homework, as I suggested: use white (pink) noise and alter phase.

So, what are these guys saying? I strongly doubt that anybody here took the effort to ... question: two signals are compared, (a) a pink burst convolved with a loudspeaker and (b) the same pink burst convolved with the same loudspeaker and time-reversed. What does time reversal mean for (1) group delay and (2) temporal masking? And how is this work related to the HRTF research the time reversal comes from? That crucial article is, as usual, hidden from the public behind a steep paywall--that's how science is supposed to work.

( btw: the time reversal is the one and only motivation to explore phase time and again. But it is never explained why that is. Can anybody explain it to me, the twit of the year? )

Did the author mention that 'speaker convolved' implies an additional high-pass? My guess would be: no. The same high-pass for everyone? Not quite ...

The Sennheiser 650 is not linear, and there was no compensation; so why this headphone? A personal headphone in the hands of the test personnel, is that o/k? What was the sequence of the test signals?

How come the reported and tested group delay of a 500Hz crossover is several times what I get here with my humble 3-way @400Hz--8ms, really?

Just different, with no qualification of the difference? Again, the signal is without doubt altered, but what about the information? Only audiophools want the 'signal' to be pure. It's the information that counts. So, if the difference cannot be qualified, where is the information then, I mean the meaning?

Last edit: I've got a 3rd-order bandpass subwoofer (the ugly one) for experimentation. After equalising it to flat, in-room group delay fell below 8ms in the sub-bass. How could I possibly become such a bad audiophile?
 
Last edited:

gnarly

Major Contributor
Joined
Jun 15, 2021
Messages
1,038
Likes
1,476
Struggling to understand what is a good step response and what isn't. ...

Hi Trdat,
In your first post in this thread, you said "I have attached the response of my Audiolense DSP tri-amp system with its crossovers done through the DSP. Compression driver and 15 inch mid and dual subs in a very small but heavily acoustically treated room. And no timing reference or loopback in measuring."
So I take it you are either tuning your own DIY 3-way speaker build, or tuning one already made. Either way, cool in my book !

My advice, which I've repeated enough on other forums to be obnoxious, is to totally ignore step response for tuning purposes.
Step, spectrogram, and even impulse responses are confirmations of a good tuning, imo.
But they are difficult to interpret, and it's harder still to figure out from them what actually needs to be done to improve them.

I strongly think magnitude (aka freq response) and phase are the two measurements that deserve virtually all of our attention when tuning a speaker.
(by tuning, i mean the speaker itself)

My advice to anyone trying to learn how to put multi-ways together is to start by making measurements of the driver sections individually. It's the equivalent of crawl before walk.

The best and cheapest way to do that, that I know of, is to use REW with a soundcard and a loopback timing reference.
Until constant delay like time-of-flight can be consistently removed from measurements, it's nearly impossible to get driver sections to fall together easily.
(Cost is no more than about $200 for a Behringer UMC202HD and ECM8000 mic, which will work fine. Note: a UMIK will not work, as it depends on an acoustic timing reference from 5kHz up.)

Once you get the hang of single driver measurements, it starts to get easy.
The differences in the driver sections' REW-measured delays vs the loopback, when the mic stays fixed for all sections, end up being dang close to the exact delays needed in DSP.
I'll stop now...but if you're keen to go down this path lemme know, and I'll continue on...
and just my 2c
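To make the delay-finding step concrete, here is one minimal way to do it outside REW, assuming each section's impulse response was exported from measurements taken with the same fixed mic and loopback reference (the arrays below are stand-ins for real exports):

```python
import numpy as np

fs = 48000
# Stand-in IRs: with a loopback reference, each export shares the same
# absolute time zero, so peak positions are directly comparable.
ir_woofer = np.zeros(4096)
ir_woofer[210] = 1.0
ir_tweeter = np.zeros(4096)
ir_tweeter[150] = 1.0

# Cross-correlation peak gives the relative delay between the sections
xc = np.correlate(ir_woofer, ir_tweeter, mode="full")
lag = np.argmax(np.abs(xc)) - (len(ir_tweeter) - 1)   # samples woofer lags tweeter

print("tweeter leads woofer by %.3f ms -> delay the tweeter by that much in DSP"
      % (lag / fs * 1e3))
```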
 

Trdat

Addicted to Fun and Learning
Forum Donor
Joined
Sep 6, 2019
Messages
968
Likes
397
Location
Yerevan "Sydney Born"
My advice to anyone trying to learn how to put multi-ways together is to start by making measurements of the driver sections individually. It's the equivalent of crawl before walk.
My biggest mistake: I jumped before I could crawl, but it was unintentional. I persevered in learning Audiolense to set up my tri-amped system, and boy, I love the 15-inch mids with dual subs. I also worked towards understanding small-room acoustics and am currently learning acoustic modeling software, so I skipped the important parts such as learning phase, and worked towards understanding REW for room interpretation, not system design.
In your first post in this thread, you said "I have attached the response of my Audiolense DSP tri-amp system with its crossovers done through the DSP. Compression driver and 15 inch mid and dual subs in a very small but heavily acoustically treated room. And no timing reference or loopback in measuring."
So I take it you are either tuning your own DIY 3-way speaker build, or tuning one already made. Either way, cool in my book !
Yes, the Audiolense speaker is my DIY design, and it's already tuned; I have checked the crossover point with simulated polar plots from the DIY Audio forum, and I just want to see how my final step response has turned out. I am also currently working on a centre speaker with a Hypex module, so I do want to learn a few things, and I am struggling with:
1. Interpreting step response
2. VituixCAD
3. Taking off-axis measurements and putting them in VituixCAD
4. Whether to tune with the REW alignment tool, or live tuning with pink noise or something with REW or OpenSoundMeter
5. Learning where to put biquads
My advice to anyone trying to learn how to put multi-ways together is to start by making measurements of the driver sections individually. It's the equivalent of crawl before walk.

The best and cheapest way to do that, that I know of, is to use REW with a soundcard and a loopback timing reference.
Until constant delay like time-of-flight can be consistently removed from measurements, it's nearly impossible to get driver sections to fall together easily.
(Cost is no more than about $200 for a Behringer UMC202HD and ECM8000 mic, which will work fine. Note: a UMIK will not work, as it depends on an acoustic timing reference from 5kHz up.)
I am working on this now; I will soon measure with a loopback, play around with the REW alignment tool, and read a little about phase in general so I understand what I am actually doing in the alignment tool.

Then, regarding biquads, it seems I have to learn VituixCAD and take off-axis measurements; there is no easy way around this.
 

JanesJr1

Addicted to Fun and Learning
Joined
Jun 9, 2021
Messages
505
Likes
450
Location
MA
To be perfectly honest and I mean this in a 100% constructive educational way: It's a fool's errand. Don't try to relate these words to measurements. ...
Yes, I've thought a lot about soundstage and have done probably hundreds of level-matched, EQ-matched A/B comparisons between headphones on various phenomena, including that. I think there is a dog's breakfast of things that contribute to what various listeners will call soundstage, but it will be hard to pin down because of the variety of things going on and the psycho-acoustic invention happening in our own minds when we listen.

For me, there are two "real things" with soundstage.

The first is one that I don't seek for myself: cross-feed, phase or directional manipulations can create the impression of a larger physical stage... even if it is not part of the original recording. Although I don't pursue this aspect, I think it's quite real.

The second is simply my EQ preference for a more U-shaped EQ that doesn't over-emphasize the mids. I always get an impression at least of better tonal balance and being "in the audience" on well-recorded acoustic sources, for several likely reasons. The combination of the right recording and right EQ and to some degree, the right headphone, can also often (if not always) trigger a perceptual impression of a physical sound stage in front of me and outside of my head. I do seek that and find the cause-and-effect question of why it works for me an interesting one to experiment with. I discovered some things that betrayed my expectations. I also just do not like my music to be on a straight line between my ears in the middle of my brain all the time, so the effort is worth it. (Not everyone is sensitive to that, but I am.)

There has been a learning curve and a listening pay-off for me on this one, but I've tried to keep the scope of my exploration simple and practical.
 

fineMen

Major Contributor
Joined
Oct 31, 2021
Messages
1,504
Likes
680
... but there are either cross-feed or phase or directional manipulations that create the impression of a larger physical stage...
Maybe what you want is some training in sound engineering. The 'feel' of a recording is, in part, the end result of the sound engineer's efforts. Making a recording involves a lot of decision making, and after that come the additional manipulations you mentioned. The engineer will always control the preliminary and final result using speakers that are ... what? Less perfect than an 'audiophile' set? For what purpose are 'better' speakers made if they don't match the monitors in the studio?!
 

JanesJr1

Addicted to Fun and Learning
Joined
Jun 9, 2021
Messages
505
Likes
450
Location
MA
Maybe what you want is some training in sound engineering. ...
I'm not sure I get your point, but what I was trying to say is that I have no interest in manipulating the recording on playback to create an artificial soundstage. I know that some sound engineers manipulate things, and streamers (usually video) may also add spatial effects. But I don't control that. If the recording doesn't have it to begin with, I think it can be added, but I don't want to do that.

I do have an interest in simple EQ techniques that will plausibly reproduce the original recording with the best chance of getting my particular brain to find it realistic and infer a real soundstage. Some high-fidelity recordings work better than others, and there are often spatial cues from the original recording venue that can enhance the illusion of soundstage. The recordings that work often have miking from a single point and natural spatial cues from the recording venue (delay, tonal shifts, reverb). Some instrumental variety also helps. I find that EQ on playback can also affect it (as some others have also found). That's the only thing I want to control, along with having a good headphone.

My experience with headphones is limited, but of the seven headphones I've used extensively, my DCA Noire, even though it's a closed-back, works the best at creating a good soundstage illusion for me with well-recorded acoustic material. (It doesn't have spatial enhancements like an open back, or angled earpads or transducers, or any signal enhancements by me.) What's left is the combination of smart EQ and headphone, and that can sometimes help more than I originally would have suspected. It's the tail end of the recording/playback chain, but it makes a difference for me.
 

fineMen

Major Contributor
Joined
Oct 31, 2021
Messages
1,504
Likes
680
I'm not sure I get your point, but ...
Agreed, many recordings emphasize the upper mids to treble for the sake of clarity. I only wondered why some folks focus on tiny bits and pieces, here the 'impulse', without any consideration of the production process. So-called 'scientific papers' stream in the wind like Buddhist prayer flags, while the music goes not fully enjoyed for fear of missing some important, yet undisclosed, detail in the recording.
 

fineMen

Major Contributor
Joined
Oct 31, 2021
Messages
1,504
Likes
680
So, what are these guys saying?

[image: ETC]


The energy time curve is down to -60dB within 5ms; then the room reflections set in. Interpretation?

Likewise the impulse response w/o room reflections:

[image: impulse response without the room]


With the room included:

[image: impulse response with the room included]


Group delay, with very early room reflections included (the wiggles, namely), drivers not countersunk (sloppy, I know):

[image: group delay]


The box is a sealed full-range design with crossover frequencies at 400Hz and 2.3kHz. Please tell me what science could do for me. Why does the quoted scientific paper assume 8ms of GD for a simple standard crossover at 500Hz?!
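For anyone wanting to reproduce the ETC view above from their own exported impulse response: it is essentially the envelope of the IR (the magnitude of the analytic signal) on a dB scale. A minimal sketch with a synthetic IR standing in for an export:

```python
import numpy as np
from scipy.signal import hilbert

fs = 48000
ir = np.zeros(4096)
ir[48] = 1.0                    # stand-in direct sound at 1 ms
ir[288] = 0.05                  # stand-in reflection 5 ms later

# ETC = envelope of the IR, in dB relative to the main arrival
etc_db = 20 * np.log10(np.abs(hilbert(ir)) + 1e-12)
etc_db -= etc_db.max()

t_ms = np.arange(len(ir)) / fs * 1e3
# Plot etc_db against t_ms to read off where the curve falls below
# -60 dB and where the room reflections set in, as in the plot above.
print("reflection arrives %.1f ms after the direct sound" % ((288 - 48) / fs * 1e3))
```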
 

Multicore

Major Contributor
Joined
Dec 6, 2021
Messages
1,789
Likes
1,964
Yes, I've thought a lot about soundstage and have done probably hundreds of level-matched, EQ-matched A/B comparisons between headphones on various phenomena, including that. ...
Polemical simplification warning...

I effing hate stereo! It's the single biggest mistake in the development of this particular corner of bourgeois consumer decadence. It became a standard that we can't undo. And because it is the distribution standard for commercial music, musicians, recording engineers, producers and, it seems, even some consumers want WIDE STEREO IMAGES. This is an aesthetic catastrophe. It's a stupid standard-induced commercial pressure in opposition to musical aesthetics. Just one example: how do you get the widest stereo image of a rhythm guitar? You make the performer record it twice over, independently, and pan each take hard L and R. That's standard practice. And it's idiotic if what you care about is musical practice rather than commercial product.
 

fluid

Addicted to Fun and Learning
Joined
Apr 19, 2021
Messages
694
Likes
1,198
You've done the homework, as I suggested: use white (pink) noise and alter phase.
Thanks I try to :)
So, what are these guys saying? I strongly doubt that anybody here took the effort to ... question: two signals are compared, (a) a pink burst convolved with a loudspeaker and (b) the same pink burst convolved with the same loudspeaker and time-reversed. What does time reversal mean for (1) group delay and (2) temporal masking? And how is this work related to the HRTF research the time reversal comes from? That crucial article is, as usual, hidden from the public behind a steep paywall--that's how science is supposed to work.
There is no paywall for the article I linked; down at the bottom there is a link to the full paper on the university's site. Do you mean another paper that is pay-only?
( btw: the time reversal is the one and only motivation to explore phase time and again. But it is never explained why that is. Can anybody explain it to me, the twit of the year? )
It makes sense when trying to test whether waveform accuracy matters for audibility: if you can listen to the same thing forwards or backwards and have it sound the same, then any flaw in it is inaudible. Some things were audible, which is evidence that there are elements of a speaker's time response that are audible. Interestingly, the differences were easier to hear when the signal was filtered to low frequencies only; the higher frequencies masked perception of the difference.
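A two-line check of why the forwards/backwards comparison is such a strict test: time reversal leaves the magnitude spectrum untouched and only negates the phase, so anything heard in the comparison must come from time-domain (phase/group delay) behaviour. The noise burst below is an illustrative stand-in for the study's pink bursts:

```python
import numpy as np

rng = np.random.default_rng(0)
burst = rng.standard_normal(4096) * np.hanning(4096)   # stand-in windowed burst

# Forwards vs. time-reversed: identical magnitude spectra (up to
# floating-point error), so only phase differs between the two stimuli.
fwd = np.abs(np.fft.rfft(burst))
rev = np.abs(np.fft.rfft(burst[::-1]))
print("max magnitude-spectrum difference:", np.max(np.abs(fwd - rev)))
```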
How come the reported and tested group delay of a 500Hz crossover is several times what I get here with my humble 3-way @400Hz--8ms, really?
The box is a sealed full-range design with crossover frequencies at 400Hz and 2.3kHz. Please tell me what science could do for me. Why does the quoted scientific paper assume 8ms of GD for a simple standard crossover at 500Hz?!
You've lost me on where the figure of 8ms comes from, and at what frequency.
 

Multicore

Major Contributor
Joined
Dec 6, 2021
Messages
1,789
Likes
1,964
I effing hate stereo!
Replying to myself, I know (I'm drunk), but here's another example...

In a Silent Way. I love this album so much that Gav and I made a 1h 25m podcast about just side B. The album is monumental. Miles took the most exciting jazz players he could get his hands on and made them play rock music. Genius!

But the stereo mix, like a lot of old stereo jazz albums, is ridiculous. The soloist and bass are in only this ear and the drums are entirely over there. It makes absolutely no sense... except if, as I imagine it, the stereo standard they had to master to demanded something, and, not having any better ideas, those in the mixing room were forced to choose something, and that's the best they could manage. In my view this is a clear example of technology imposing itself on the musical process in a way that should never happen. This is the opposite of transparency. In these cases technology modified the music in arbitrary and uncontrolled ways.

On the plus side, these ridiculous stereo mixes may have helped Bill Laswell make his lovely remix album Panthalassa.
 