
A Broad Discussion of Speakers with Major Audio Luminaries

That said, there seem to be limitations when the drivers are very far from each other (as in sound-reinforcement line sources) and in units with a lot of cancellation effects (such as bending-wave transducers, large planars, cardioids and dipoles).
First, such a speaker is not measurable in a reasonably sized anechoic chamber, as the mics cannot be placed far enough away. The standard method calls for hanging the speaker from a crane and measuring it outdoors, with the attendant problems (noise, temperature gradients, etc.).

As for Klippel NFS, such speakers are actually part of its advertising material:

[Image: excerpt from Klippel NFS marketing material]


While I don't have a need for it, complex speaker arrays can be measured in segments and their radiation patterns combined. Again, from the Klippel NFS brochure:
[Image: Klippel NFS brochure excerpt on measuring speaker arrays in segments]


So again, wrong argument based on not even doing the simplest research into the capabilities of the system.
 
Magico uses a crane along with Klippel.

[Image: Magico using a crane together with the Klippel NFS]



I'm really curious about Genelec's set-up though; I can't see the Klippel logo on the 8381A charts.
When it is impossible to switch off individual drivers (passive xover), how do they avoid contributions of adjacent drivers? Same question for large planar drivers.
 
When it is impossible to switch off individual drivers (passive xover), how do they avoid contributions of adjacent drivers? Same question for large planar drivers.
I guess the goal is overall performance.
They could just disconnect the drivers they don't want, if they wanted to see a single driver's performance.
 
No. My goal is to know how to do it when individual speaker drivers cannot simply be turned off without damage or dismounting. Same question for large planar membranes, which do not have an acoustical center. To me, there is no other way than an extra-large anechoic chamber. The Klippel NFS is not a universal tool. Yes, it is excellent for measuring small speakers, as Amir does.
 
No. My goal is to know how to do it when individual speaker drivers cannot simply be turned off without damage or dismounting. Same question for large planar membranes, which do not have an acoustical center. To me, there is no other way than an extra-large anechoic chamber. The Klippel NFS is not a universal tool. Yes, it is excellent for measuring small speakers, as Amir does.
Oh, I see.
I guess the Magico I posted does not fall into this category, as it's active down low and also has separate binding posts for each of the remaining passive ways.
 
When it is impossible to switch off individual drivers (passive xover), how do they avoid contributions of adjacent drivers? Same question for large planar drivers.
If you are the manufacturer of the speaker, you can wire them up so that this can be done.

The rest of your comment tells me that the system's operation is still not understood. Klippel doesn't care what the speaker is made of. It treats it as a black box radiating some pattern vs. frequency. If a speaker creates a lot of interference patterns, that is bad design, but the Klippel NFS can still quantify it, albeit with many more measurements.

Of course, there is a physical size limit of about six feet tall, hence the decomposition system I showed earlier.
 
No. My goal is to know how to do it when individual speaker drivers cannot simply be turned off without damage or dismounting. Same question for large planar membranes, which do not have an acoustical center. To me, there is no other way than an extra-large anechoic chamber. The Klippel NFS is not a universal tool. Yes, it is excellent for measuring small speakers, as Amir does.
What do you call "small?" I have measured very substantial speakers, up to nearly 6 feet tall and 100 pounds. Weight is actually not a limit if you have a hoist and can hang the speaker, as Magico has done. Speakers larger than this cannot be reasonably measured in typical anechoic chambers either. Instead, they are measured in huge spaces where low-frequency resolution is still limited (compared to the NFS).
 
Guys, there is already an ASR topic on this: Understanding how the Klippel NFS works. Except that you need a pretty high degree of math literacy to understand how the damned thing works. I don't like to accept anything on faith, but as far as this goes the math is so impenetrable for me that I gave up. I agree it is important to understand the limitations of our test equipment, but so far the only limitation of the Klippel I have found is that it costs so much and is inaccessible for nearly all of us and many smaller speaker firms.
 
What do you call "small?" I have measured very substantial speakers, up to nearly 6 feet tall and 100 pounds. Weight is actually not a limit if you have a hoist and can hang the speaker, as Magico has done. Speakers larger than this cannot be reasonably measured in typical anechoic chambers either. Instead, they are measured in huge spaces where low-frequency resolution is still limited (compared to the NFS).

I am interested in explanation how to properly measure something like this:


[Image: Sound Lab electrostatic speaker promo photo]


Or this:

[Image]
 
Only for simple measurements, not with computational holography performed by Klippel NFS. You really don't understand the technology.
I think I do understand it actually.

Which is the case in anechoic chambers, where you cannot get far enough from the speaker to be in the true far field. CEA/CTA-2034 acknowledges this issue and still stipulates measurement at just 2 meters:

"Measurement Distance Ideally measurements should be made in the far field of the DUT and, for reasons of standardization the sensitivity should be referenced to a distance of 1 m. The far field for large diaphragm loudspeakers can be several meters away and listeners may sit in the near field of these loudspeakers. For typical loudspeakers the far field begins about 2 m from the DUT, and the typical listening distance is closer to 3 m from the DUT. By taking many measurements and displaying the results as spatial averages useful data can be gathered within the near field. Therefore measurements shall be made at 2 m from the DUT and the data shall be reported as the equivalent sound pressure level (SPL) at 1 m, which is a convenient 6 dB higher than the SPL at 2 m."
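The 2 m to 1 m referencing in the quote is just the inverse-square law for a point source; as a minimal sketch (the standard's "convenient 6 dB" is the rounded value of 20·log10(2) ≈ 6.02 dB):

```python
import math

def spl_at(ref_spl_db, ref_dist_m, dist_m):
    """Free-field inverse-square (point-source) SPL extrapolation."""
    return ref_spl_db - 20 * math.log10(dist_m / ref_dist_m)

measured_at_2m = 80.0  # example measured level, dB SPL
print(f"{spl_at(measured_at_2m, 2.0, 1.0):.2f} dB")  # ~6 dB higher at 1 m
```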


Not at all. You have no concept of signal processing being applied to solve two problems that NFS masterfully handles:
I actually do understand it.


1. Using near-field measurements to compute the far-field response. The huge benefit here is an increased signal-to-noise ratio, obviating the need for an ultra-quiet measurement space (and this is independent of whether the room is anechoic or not).

2. Using field separation to remove room reflections, generating a truly anechoic response. An anechoic chamber relies on absorbing those reflections, but the wedges are nowhere near large enough to absorb the massive wavelengths at true bass frequencies.

Both of these require computational power that did not exist in decades past. But now that we have it, we are able to generate the complete sound field of a speaker down to 1-degree resolution and at any distance. Such a result is practically impossible with an anechoic chamber and a microphone array.
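As a rough illustration of point 1 (and emphatically not Klippel's proprietary algorithm), the near-to-far-field idea can be sketched as fitting a spherical-wave expansion to near-field pressure samples and then evaluating that expansion at any distance. Everything here (the offset-monopole source model, radii, expansion order) is a toy assumption:

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn, sph_harm

def h1(n, x):
    # Spherical Hankel function of the first kind (outgoing waves)
    return spherical_jn(n, x) + 1j * spherical_yn(n, x)

def basis(k, r, az, pol, order):
    # Rows: sample points; columns: (n, m) outgoing spherical-wave modes
    cols = []
    for n in range(order + 1):
        hn = h1(n, k * r)
        for m in range(-n, n + 1):
            cols.append(hn * sph_harm(m, n, az, pol))
    return np.stack(cols, axis=-1)

rng = np.random.default_rng(0)
k = 2 * np.pi * 1000 / 343.0       # wavenumber at 1 kHz
src = np.array([0.05, 0.0, 0.0])   # monopole slightly offset from scan center

def monopole(points):
    d = np.linalg.norm(points - src, axis=-1)
    return np.exp(1j * k * d) / d

# "Scan": sample pressure in random directions on a 0.5 m sphere
npts, r_near = 200, 0.5
pol = np.arccos(rng.uniform(-1, 1, npts))   # colatitude
az = rng.uniform(0, 2 * np.pi, npts)        # azimuth
pts_near = r_near * np.stack([np.sin(pol) * np.cos(az),
                              np.sin(pol) * np.sin(az),
                              np.cos(pol)], axis=-1)

# Fit expansion coefficients, then extrapolate to 2 m and compare with truth
order = 8
A = basis(k, r_near, az, pol, order)
coef, *_ = np.linalg.lstsq(A, monopole(pts_near), rcond=None)

r_far = 2.0
pol_f, az_f = np.array([np.pi / 2]), np.array([0.3])
pt_far = r_far * np.stack([np.sin(pol_f) * np.cos(az_f),
                           np.sin(pol_f) * np.sin(az_f),
                           np.cos(pol_f)], axis=-1)
p_pred = basis(k, r_far, az_f, pol_f, order) @ coef
p_true = monopole(pt_far)
err = abs(p_pred[0] - p_true[0]) / abs(p_true[0])
print(f"relative extrapolation error: {err:.2e}")
```

The real instrument does far more (field separation into outgoing and incoming waves, optimized scan grids), but the core "measure near, compute far" step is this kind of modal fit.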
The power existed then, just not in lots of garages across the globe.

Don't go putting it down as "garage" measurements and such nonsense you post. This is a $100,000 instrument meant for serious measurement work.
You cannot honestly claim that I said the Klippel NFS is garbage, unless you overlooked the obvious sarcasm when I mentioned that “if the ears cannot separate the direct sound from the reflections, then the Klippel must not be able to do it either.”
If all that were true, then it would be sort of difficult for Klippel to even have the cojones to advertise that they are able to gate out the sounds.

Are they obviously telling lies?

:cool:

We cannot have it both ways, where the Klippel can do it, but the ears and brains somehow cannot do it…

It is as nonsensical as saying that dolphins and whales cannot figure out how to process sounds, and that only a submarine sonar can do it.
It is largely the same “math” happening in each case, whether it is dolphins and whales, or sonar, or bats, or humans and Klippel machines.

However I get that you do not think that phase is important.
Others do think that it is somewhat important.
And we delved into that area a few pages back.

As for whether it is as important as frequency response, we have yourself, Floyd, etc. all saying that it’s “Frequency Response uber alles”.
Where timing and phase may become really important is with more impulsive sounds, like JJ mentioned with drums and harpsichords.
 
I think I do understand it actually.


I actually do understand it.



The power existed then, just not in lots of garages across the globe.


You cannot honestly claim that I said the Klippel NFS is garbage, unless you overlooked the obvious sarcasm when I mentioned that “if the ears cannot separate the direct sound from the reflections, then the Klippel must not be able to do it either.”


We cannot have it both ways, where the Klippel can do it, but the ears and brains somehow cannot do it…

It is as nonsensical as saying that dolphins and whales cannot figure out how to process sounds, and that only a submarine sonar can do it.
It is largely the same “math” happening in each case, whether it is dolphins and whales, or sonar, or bats, or humans and Klippel machines.

However I get that you do not think that phase is important.
Others do think that it is somewhat important.
And we delved into that area a few pages back.

As for whether it is as important as frequency response, we have yourself, Floyd, etc. all saying that it’s “Frequency Response uber alles”.
Where timing and phase may become really important is with more impulsive sounds, like JJ mentioned with drums and harpsichords.

They are at least as important as the FR; just listen to a single-driver wideband speaker: it lacks something, but is appreciated for other qualities.
 
We cannot have it both ways, where the Klippel can do it, but the ears and brains somehow cannot do it…
Actually, Mr. Toole clearly states in his document that listeners are able to listen through the room (or similar wording). So we are capable of somehow "subtracting" the room from the speaker sound.
 
The transducer properties will render this impossible. Turning a signal into sound waves is not a mathematical process. You can only approximate the electrical signal and choose which alterations are acceptable.

I should have said match acoustical to electrical as closely as possible, not exactly. I guess I was over-emphasizing the goal of what I think speaker design should be.
That said, I find it amazing how closely the acoustic output can be made to match the electrical signal at a particular measurement spot... again with today's multi-way DSP capabilities.

Phase and time alignment at one single point does not necessarily help; it has to be even over a greater window, both for direct sound and early reflections.
Yes, just as excellent on-axis frequency response needs to extend to smooth off-axis performance (good spins, if you will), phase and time alignment need to extend smoothly in the same way.

This is the hardest piece of technical excellence to achieve that I've encountered with my DIYs, owing to the simple geometry of multiple driver sections changing their relative distances to the mic/listener under rotation.
Of all the traditional arguments against bothering with time and phase alignment, I think the difficulty of achieving it over a spatial area is about the only one with real merit.

The best solution I've found so far is determining a reference axis to tune to that provides the least magnitude and phase variation over the spatial area deemed to be the primary objective. I use the 2034 listening window as the spatial range in which to determine a 'least variation' tuning reference axis.
A near-perfect magnitude and phase tuning to that chosen reference axis has provided pretty good time and phase alignment over the full listening window.

This is a typical example of what I've come to expect over a +/- 30 deg window with my DIY synergy/meh mains.
[Image: measured magnitude and phase over a +/- 30 deg listening window]
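The 'least variation' axis search described above can be sketched with a toy two-point-source model. All the numbers here (driver spacing, listening distance, frequencies near an assumed crossover, window span) are made up for illustration; the idea is just: for each candidate reference axis, time-align the drivers for that axis, then score the magnitude spread across the window and pick the axis with the smallest spread:

```python
import numpy as np

C = 343.0          # speed of sound, m/s
d = 0.25           # assumed vertical driver spacing, m
R = 3.0            # assumed listening distance, m
freqs = np.array([800.0, 1000.0, 1250.0])   # near an assumed 1 kHz crossover
window = np.radians(np.arange(-30, 31, 5))  # vertical listening window

def path(driver_y, theta):
    # distance from a driver at height driver_y to a mic at angle theta
    return np.hypot(R * np.cos(theta), R * np.sin(theta) - driver_y)

def summed_db(theta, align_theta):
    # delay-align the two drivers for align_theta, then sum their outputs at theta
    delays = [(path(y, align_theta) - R) / C for y in (+d / 2, -d / 2)]
    p = sum(np.exp(-2j * np.pi * freqs * (path(y, theta) / C - dly))
            for y, dly in zip((+d / 2, -d / 2), delays))
    return 20 * np.log10(np.abs(p))

# Score each candidate axis by the spread of response over the whole window
candidates = np.radians(np.arange(-10, 11, 1))
scores = [np.std([summed_db(t, cand) for t in window]) for cand in candidates]
best = candidates[int(np.argmin(scores))]
print(f"least-variation reference axis: {np.degrees(best):.0f} deg")
```

For this symmetric toy layout the search lands on the geometric axis between the drivers, as expected; with asymmetric real driver layouts the winning axis shifts, which is the point of doing the search.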



A few speakers can come pretty close to that, if they employ a DSP crossover, FIR filters with a linear-phase mode, and sufficiently large drivers/low crossover points. They have been around for something like 20 years. I encourage everyone to listen to such a unit, if possible switching between minimum-phase and linear-phase modes. Differences are not huge, and are most obvious in the lower bass with artificial sounds like EDM. Interestingly, in my experience these bass differences survive even horrendous room-induced alterations in a listening test.

Yep. The DIYs I build utilize FIR in linear-phase mode. One of the syn/meh's most appealing features is how they minimize the center-to-center distances between driver sections, to help reduce changing relative distances under rotation. All driver summations are designed to occur within 1/4 wavelength of each other, something virtually impossible with conventional designs.
Add lin-phase FIR crossovers to the mix, and it's pretty easy to place crossover points anywhere needed for best polars, without sonic penalty, even in ear-sensitive frequency ranges.

I second your encouragement to switch between minimum-phase mode and linear-phase mode, should anyone encounter a speaker set up to do both. (And I do not mean a comparison where linear-phase flattening is applied to an entire speaker already set up; I mean one tuned from the ground up, driver section by driver section. Make the best tunings one can, both lin-phase and min-phase, and compare those. The spatial-variance issue makes a linear-phase overlay on top of the entire speaker bogus, ime... it works perfectly in 1D electrical space, but not so in 3D acoustic space.) ...Sorry, I digress.

I routinely compare low-order minimum-phase tunings to linear-phase tunings. I agree, differences in the lower bass are not huge, other than on occasional tracks that just pop out. Tactile feel seems to favor lin-phase, but again, not always.
 
How can reflections from milliseconds earlier in time arrive before sound taking a straight-line path from the speakers? Hyperjump via warp drive? Lol
This is how it happens in an actual room, where most of us listen, as opposed to the great outdoors (or anechoic chambers...).

Assume that a room is 12.5 feet wide and the prime listening position (PLP) is 6.25 ft from the side walls. Let's place the PLP 11 ft from the front wall. Let's position a pair of stereo speakers 4.5 ft from the front wall and 2.5 ft from each side wall. Given that the speed of sound in air is 1128 ft per second at 70 deg F, the time for the direct sound from each speaker to reach the PLP is 6.65 msec. The reflected sound from the wall nearest each of the stereo speakers, which departed the speaker at the exact same moment, arrives 3.01 msec later.

Of course, by this point the sound that left the speakers on a direct path 3.01 msec after the first impulse will mix with the reflected impulse that departed 3.01 msec earlier, albeit at a reduced sound pressure level due to the greater distance travelled through the air (which has an absorption coefficient that converts sonic energy to heat), the absorption characteristics of the wall, and the horizontal dispersion pattern of the speaker system.

So the reflected sound doesn't hyperjump via warp drive, it just blends in with the direct sound that comes just a bit later on in the music.....
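The arithmetic in that example can be checked with a quick image-source calculation: mirror the speaker across the nearest side wall and measure the straight-line distance to the listener, which equals the reflected path length. The results match the post's figures to within rounding:

```python
import math

FT_PER_S = 1128.0  # speed of sound at 70 deg F, ft/s

# Room geometry from the example, in feet: (distance from side wall, from front wall)
speaker = (2.5, 4.5)
listener = (6.25, 11.0)

# Direct path
direct = math.dist(speaker, listener)

# First side-wall bounce via the image source: mirror the speaker across
# the nearest side wall (x -> -x); reflected path length = image-to-listener distance
image = (-speaker[0], speaker[1])
reflected = math.dist(image, listener)

direct_ms = 1000 * direct / FT_PER_S
delay_ms = 1000 * (reflected - direct) / FT_PER_S
print(f"direct: {direct_ms:.2f} ms, extra path delay: {delay_ms:.2f} ms")
```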
 
Of course, you're right, but my direct sound must be in phase. An instrument, a violin or a cello, playing in my room is in phase.
Somebody might like to explain where one should place a microphone relative to a complex musical instrument - like a violin, cello, bass, piano, etc. - so that the "sound" is like what one hears in a decent auditorium/concert hall, if that is the reference. Such sources radiate extremely complex sound patterns from various parts of the instrument, all acoustically interfering with each other, altering amplitude and phase response by the time they arrive at the mic. The concept of a microphone pickup of any such instrument being "in phase" is hard to understand. It is what it is, and neither amplitude nor phase is "standardized" for any musical instrument, before we even consider reflections from nearby surfaces at the point of capture - like a floor. There is a room at the beginning of the recording process too.

Care to elaborate?
 
As an older recording engineer, from a time when we had little equipment for testing, and certainly no time for it - I have posted before of my experience as a student, with the microphone choice, placement, and the removed door - when dealing with the French Horn.

An Apple is an Apple when it is just ripe, not coated with pesticides and of good heritage, it is still an Apple if it was coated, repeatedly - fell to the ground weeks ago, and rotten.
I met a Japanese sushi chef that ran away from his home at 14 years old, did not have formal training and learned everything the hard way, he told me he'd age fish to see how it was consumable, tasted, and when it became unusable - by consuming it at every stage. Up to getting sick from eating it.

This Forum is fantastic because we can disagree on things, learn or re-learn, and most of all, for the most part - be civil and courteous in the process. Had to state this, thanks...
 
Floyd, etc. all saying that it’s “Frequency Response uber alles”.
When timing and phase may become really important is with more impulse sounds like JJ mentioned with the drums and Harpsichords.
If you read what I have actually said, many times, it comes down to a few simple observations. Loudspeaker transducers, individually over their operating frequency ranges, behave as minimum-phase devices. That means that an anechoic flat frequency response is a reliable indicator that there are no resonances, no associated phase shift, and no ringing. This is a good start, it seems to me, but evidence of that behaviour is only available in anechoic or equivalent measurements, not room curves. Any sound, impulses included, will be fairly treated by such devices - over their operating frequency ranges. Things can go astray in the crossover regions, which is where serious measurements, including phase, are needed to ensure a smooth summation in those regions. So, the conclusion is that frequency response is not "uber alles" but if it is wrong, nothing else may matter as much as one might think.

If moderate resonances should exist in such minimum-phase transducers, they can be addressed by matched equalization based on anechoic or equivalent data, not room curves. This is why Amir's data is so useful. A loudspeaker that is "almost" really good, can be improved by equalization, especially those with well-behaved directivity.

Strong room resonances also behave like minimum-phase systems, so they too respond to matched equalization, but only at one point in a room. The room is not changed, but the sounds delivered through it are.

And can someone clarify for me what "timing" means in relation to group delay and phase? If both of those are below thresholds of audibility, does this make timing acceptable?
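The minimum-phase point above can be illustrated numerically: for a minimum-phase system, the magnitude response alone determines the phase (the phase is the Hilbert transform of the log magnitude, computed here via the real cepstrum). A toy sketch with an arbitrary resonant biquad, chosen only because its poles are safely inside the unit circle:

```python
import numpy as np
from scipy.signal import freqz

# A resonant all-pole biquad with both poles inside the unit circle (minimum phase)
r, w0 = 0.9, 0.3 * np.pi
a = np.array([1.0, -2 * r * np.cos(w0), r * r])
b = np.array([1.0])

N = 4096
w, H = freqz(b, a, worN=N, whole=True)

# Reconstruct the phase from the magnitude alone via the real cepstrum:
# fold the cepstrum of log|H| to obtain the minimum-phase spectrum.
c = np.fft.ifft(np.log(np.abs(H))).real
fold = np.zeros(N)
fold[0] = c[0]
fold[1:N // 2] = 2 * c[1:N // 2]
fold[N // 2] = c[N // 2]
H_min = np.exp(np.fft.fft(fold))

# Since the filter is minimum phase, the reconstructed phase matches the true one
err = np.max(np.abs(np.angle(H_min) - np.angle(H)))
print(f"max phase deviation: {err:.2e} rad")
```

This is the measurable content of "an anechoic flat frequency response is a reliable indicator" for a minimum-phase driver: if the magnitude is flat, the reconstructed (and hence actual) excess phase and ringing are absent too.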
 
As an older recording engineer, from a time when we had little equipment for testing, and certainly no time for it - I have posted before of my experience as a student, with the microphone choice, placement, and the removed door - when dealing with the French Horn.

An Apple is an Apple when it is just ripe, not coated with pesticides and of good heritage, it is still an Apple if it was coated, repeatedly - fell to the ground weeks ago, and rotten.
I met a Japanese sushi chef that ran away from his home at 14 years old, did not have formal training and learned everything the hard way, he told me he'd age fish to see how it was consumable, tasted, and when it became unusable - by consuming it at every stage. Up to getting sick from eating it.

This Forum is fantastic because we can disagree on things, learn or re-learn, and most of all, for the most part - be civil and courteous in the process. Had to state this, thanks...
Thank you for your input. Your knowledge can only be helpful. The angle from which the instrument is captured will depend on the sound engineer's choices. My reasoning is simpler: a microphone picks up signals with complex phases, as you described, but the direct sound of the instrument, with its harmonics, will reach the microphone with the instrument's own phase and timing. The subsequent reflections, which provide spatiality and ambience, will also be complex and more or less consistent with the instrument. In my opinion, they are all snapshots taken by the microphone over time. And, again in my opinion, if a speaker is linear, if it has wide and coherent dispersion with attenuation as the off-axis angle increases (like all microphones), and if it has time and phase alignment, it will reproduce the instrument as the reference recorded in that environment.
 
@Floyd Toole I'll take this opportunity to point out that the focused images anchored to the ground, silhouetted in the three-dimensional sound space I've heard in my listening experiences, have always manifested themselves with speakers that were the ultimate expression of coherence. The quality obviously varied depending on the room where these speakers were playing. The larger and more acoustically treated the room, the more breadth and verisimilitude the music had.
Thank you
 
Thank you for your input. Your knowledge can only be helpful. The angle from which the instrument is captured will depend on the sound engineer's choices. My reasoning is simpler: a microphone picks up signals with complex phases, as you described, but the direct sound of the instrument, with its harmonics, will reach the microphone with the instrument's own phase and timing. The subsequent reflections, which provide spatiality and ambience, will also be complex and more or less consistent with the instrument. In my opinion, they are all snapshots taken by the microphone over time. And, again in my opinion, if a speaker is linear, if it has wide and coherent dispersion with attenuation as the off-axis angle increases (like all microphones), and if it has time and phase alignment, it will reproduce the instrument as the reference recorded in that environment.
Thanks, appreciate it. I can only reiterate this: some instruments, like the French Horn in my example, sound rather terrible if you point a fantastic condenser mic at the throat of the horn; plus - at no extra cost - you get to hear the valves, and the spittle from the player's lips (unless it's a "dry" player ;))...
Time and phase alignment is scientifically interesting, and surely worth examining - but in recording instruments one tries to get pleasing results, science left aside...
In a lot of cases, the "reference" is really what the people listening to your work want, and expect, to hear. My experience...

Edit: I should add, musicians of acoustic instruments pretty much all hear and experience their instrument differently than their audience - by distance alone, and are often disagreeing with the recorded sound when they listen back to it, regardless of the choice of mains of the studio. Always a thin edge to walk on, carefully that is.
 