And once more, that is exactly what I addressed in my posts. There is no misunderstanding on my part whatsoever.
In this note I discuss some issues in filter design for equalization of sound systems. The emphasis is on rationale rather than experiments, and I will focus on a few common misunderstandings. I will briefly describe the basic concepts used in sound equalization, such as FIR and IIR filters and minimum versus linear phase, present some basic mathematical facts, and give background on the philosophy behind the Dirac Live approach. To limit the length of the text, I assume some basic familiarity with the topics covered. I will, however, refrain from the popular trend in some engineering periodicals of hiding bad ideas behind complex-looking equations. As already mentioned, the emphasis is on the logic of the different approaches to equalization rather than on experiments. Logic can be checked by the reader, whereas experiments carried out by others always leave room for doubt about the experimental conditions. Furthermore, as probability theory teaches us, one lucky experiment shows nothing about the underlying rationale, while the underlying rationale is far more indicative of future experimental outcomes.
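The FIR/IIR distinction the note leans on can be seen in a few lines of code. This is a minimal sketch in plain Python (the function names are my own, and the moving-average and one-pole filters are only toy examples): an FIR filter's impulse response is exactly as long as its tap count, while an IIR filter's feedback makes its impulse response decay forever.

```python
def fir_moving_average(x, taps=4):
    """FIR: each output is a finite weighted sum of past inputs only."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k in range(taps):
            if n - k >= 0:
                acc += x[n - k] / taps
        y.append(acc)
    return y


def iir_one_pole(x, a=0.5):
    """IIR: the feedback term means the impulse response never truly ends."""
    y = []
    prev = 0.0
    for sample in x:
        prev = (1 - a) * sample + a * prev
        y.append(prev)
    return y


impulse = [1.0] + [0.0] * 15
print(fir_moving_average(impulse))  # exactly zero after the 4th sample
print(iir_one_pole(impulse))        # decays geometrically, never exactly zero
```

Feed each filter a unit impulse and the difference is plain: the FIR output is identically zero once the taps run out, whereas the IIR output only approaches zero.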
You may want to consider mounting your microphone on a shock mount. Even a cheap universal one like this (https://www.amazon.nl/dp/B074V984VY/ref=cm_sw_r_other_apa_i_qI.NEbBZE0E3W) can decouple the mic from vibrations in the floor or tripod. I bought this to solve my microphone attachment issues. A few euros.
View attachment 59696
View attachment 59697
And yes, the USB cable was previously damaged when the stack of books/boxes/whatever I used before collapsed. I have others; it's just there for illustration purposes.
That would require measuring a time span instead of a moment in time. The complexity of that will definitely give a certain Klippel owner a headache. By the way, maybe you could test a Leslie speaker (which utilises the Doppler effect) with your Klippel system at some point.
I still think they should have added the possibility to tap a point and let you manually type in the center frequency, bandwidth (Q) and level adjustment. Quick tip when using the Audyssey app: connect a Bluetooth mouse to your phone and use the mouse to set your curves; it's much easier to be precise when your finger isn't in the way.
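For anyone curious what "type in fc, Q and level" amounts to under the hood: a parametric EQ point is usually a peaking-EQ biquad, and the standard coefficient formulas come from Robert Bristow-Johnson's Audio-EQ-Cookbook. A sketch in plain Python (function names are my own; the cookbook formulas themselves are well established):

```python
import cmath
import math


def peaking_eq_coeffs(fs, fc, q, gain_db):
    """RBJ cookbook peaking EQ: normalized biquad coeffs (b0, b1, b2, a1, a2)."""
    a = 10 ** (gain_db / 40)            # sqrt of the linear gain
    w0 = 2 * math.pi * fc / fs
    alpha = math.sin(w0) / (2 * q)
    b0 = 1 + alpha * a
    b1 = -2 * math.cos(w0)
    b2 = 1 - alpha * a
    a0 = 1 + alpha / a
    a1 = -2 * math.cos(w0)
    a2 = 1 - alpha / a
    return b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0


def magnitude_db(coeffs, fs, f):
    """Evaluate the biquad's magnitude response (dB) at frequency f."""
    b0, b1, b2, a1, a2 = coeffs
    z = cmath.exp(-2j * math.pi * f / fs)   # z here stands for z^-1
    h = (b0 + b1 * z + b2 * z * z) / (1 + a1 * z + a2 * z * z)
    return 20 * math.log10(abs(h))


coeffs = peaking_eq_coeffs(fs=48000, fc=1000, q=4.0, gain_db=-6.0)
print(round(magnitude_db(coeffs, 48000, 1000), 2))  # -6.0 at the center frequency
```

Evaluating the response at fc recovers exactly the requested level adjustment, which is a handy sanity check if you ever implement such a filter yourself.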
EQ never boosts dips by more than 6-8 dB. If your speaker needs a 30 dB boost, you solve that by buying a better speaker, not with EQ.
Yes it does. To be clear, you mean your particular software is set to limit this.
Room modes have no interest in your 6-8 dB limits.
Yep, Audyssey and ARC don't do time-domain corrections; Acourate and Dirac do. But I wouldn't worry too much about it, as frequency-domain correction is the thing that really matters, while time-domain correction is just the cherry on top of the cake.
Do you listen to how music images?
Audyssey XT32 is a good start for budget AVRs, but Dirac Live is, in my opinion, on another level entirely. NAD tried with their budget offering; too bad they failed on the hardware design, as seen on this site. If Denon were to move to Dirac Live, it would disrupt the whole market.
Some people delight in the imaging; it is part of what they listen for. For others, it is just part of the wash. I'm not sure any human can switch that capability off. Can you?
Are you trying to say that time domain (phase) corrections improve imaging?
There is clear evidence that up to about 1 kHz the ear/brain can process time differences between the ears, and thus phase information can hold some localisation cues. But it is limited. And apart from binaural dummy-head, pure ORTF and Blumlein-pair recordings, there is scant retained phase information in available recordings. Pan pots are the default imaging device on a mixing desk.
By the time your speaker crossover network has had its effect, phase information is pretty scrambled. But you can unwind it. I suspect Dirac does.
Overall, you might expect there to be a change for the better in imaging. But I would want to see some proper tests.
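The roughly 1 kHz figure above follows from simple geometry. Using an assumed ~0.22 m effective acoustic path between the ears (a rough, commonly quoted figure, not a precise head model) and the speed of sound, the largest possible interaural time difference is well under a millisecond, and once a half-period of the signal is shorter than that, interaural phase becomes ambiguous:

```python
# Back-of-the-envelope check on the ~1 kHz limit for interaural phase cues.
ear_spacing_m = 0.22     # assumed effective ear-to-ear acoustic path
speed_of_sound = 343.0   # m/s at room temperature

max_itd = ear_spacing_m / speed_of_sound   # largest interaural time difference
ambiguity_freq = 1 / (2 * max_itd)         # half-period equals the max ITD

print(f"max ITD ~ {max_itd * 1e3:.2f} ms")
print(f"interaural phase ambiguous above ~ {ambiguity_freq:.0f} Hz")
```

This lands just under 800 Hz for the assumed spacing, which is consistent with the "up to about 1 kHz" claim once you allow for the crudeness of the model.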
The amount of correction needed in the time domain depends on how good the speakers are to begin with. The reason I asked is that a lot of people treat music listening as facing a wall of sound, spending most of their time thinking about whether the bass is good, whether the treble is neutral, and so on. It is so much more than that with proper speakers, set up correctly, in a treated room. With good recordings I get music from floor to ceiling and from the front to the back of the room. It basically sounds like surround sound, but with stereo.
I have been surprised when people have listened and asked me to turn off the center channel when playing a song with mainly vocals... yeah, it's already off; that's the phantom image. These have been people who own proper speakers at home. So sometimes I wonder what people are doing.