
Some comments from Floyd Toole about room curve targets, room EQ and more

thewas

Master Contributor
Forum Donor
Joined
Jan 15, 2020
Messages
6,672
Likes
15,921
Room curve targets

Every so often it is good to review what we know about room curves, targets, etc.

Almost 50 years of double-blind listening tests have shown persuasively that listeners like loudspeakers with flat, smooth, anechoic on-axis and listening-window frequency responses. Those with smoothly changing or relatively constant directivity do best. When such loudspeakers are measured in typically reflective listening rooms the resulting steady-state room curves exhibit a smooth downward tilt. It is caused by the frequency dependent directivity of standard loudspeakers - they are omnidirectional at low bass frequencies, becoming progressively more directional as frequency rises. More energy is radiated at low than at high frequencies. Cone/dome loudspeakers tend to show a gently rising directivity index (DI) with frequency, and well designed horn loudspeakers (like the M2) exhibit quite constant DI over their operating frequency range. There is no evidence that either is advantageous - both are highly rated by listeners.

Figure 12.4 in the third edition of my book shows the evolution of a steady-state "room curve" using very highly rated loudspeakers as a guide. The population includes several cone/dome products and the cone/horn M2. The result is a tightly grouped collection of room curves, from which an average curve is easily determined. It is a gently downward tilted line with a slight depression around 2 kHz - the consequence of the nearly universal directivity discontinuity at the woofer/midrange-to-tweeter crossover. I took the liberty of removing that small dip and creating an "idealized" room curve which I attach. The small dip should not be equalized because it alters the perceptually dominant direct sound.
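For readers who want to experiment, here is a minimal Python sketch of such a gently tilted target line. The roughly -1 dB/octave slope and the 1 kHz reference are illustrative assumptions in the ballpark of published curves, not numbers taken from Figure 12.4.

```python
import math

def tilted_target(freqs_hz, ref_hz=1000.0, slope_db_per_octave=-1.0):
    """Idealized, smoothly downward-tilted room-curve target in dB.

    The slope and reference frequency are illustrative assumptions,
    not values from Toole's Figure 12.4. 0 dB is pinned at ref_hz.
    """
    return [slope_db_per_octave * math.log2(f / ref_hz) for f in freqs_hz]

octave_bands = [125, 250, 500, 1000, 2000, 4000, 8000]
target = tilted_target(octave_bands)
# +3 dB at 125 Hz, 0 dB at 1 kHz, -3 dB at 8 kHz: a smooth, gentle tilt
```

Note that a real target would also be truncated at low frequencies, where the room dominates.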

It is essential to note that this is the room curve that would result from subjectively highly-rated loudspeakers. It is predictable from comprehensive anechoic data (the "early reflections" curve in a spinorama). If you measure such a curve in your room, you can take credit for selecting excellent loudspeakers. If not, it is likely that your loudspeakers have frequency response or directivity irregularities. Equalization can address frequency response issues, but cannot fix directivity issues. Consider getting better loudspeakers. Equalizing flawed loudspeakers to match this room curve does not guarantee anything in terms of sound quality.

When we talk about a "flat" frequency response, we should be talking about anechoic on-axis or listening window data, not steady-state room curves. A flat room curve sounds too bright.

Conclusion: the evidence we need to assess potential sound quality is in comprehensive anechoic data, not in a steady-state room curve. It's in the book.

The curve is truncated at low frequencies because the in-situ performance is dominated by the room, including loudspeaker and listener locations. With multiple subwoofers it is possible to achieve smoothish responses at very low frequencies for multiple listening locations - see Chapters 8 and 9 in my book. Otherwise there are likely to be strong peaks and dips. Peaks can be attenuated by EQ, but narrow dips should be left alone - fortunately they are difficult to hear: an absence of sound is less obvious than an excess. Once the curve is smoothed there is the decision of what the bass target should be. Experience has shown that one size does not fit all. Music recordings can vary enormously in bass level, especially older recordings - it is the "circle of confusion" discussed in the book. Modern movies are less variable, but music concerts exhibit wide variations. The upshot is that we need a bass tone control, and the final setting may vary with what is being listened to, and perhaps also with personal preference. In general too much bass is a "forgivable sin" but too little is not pleasant.
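The "cut the peaks, leave the dips" advice can be illustrated with a standard RBJ-cookbook peaking filter, which is what most parametric EQs implement. The 54 Hz mode frequency, Q and gain below are hypothetical examples, not values from the post.

```python
import cmath
import math

def peaking_biquad(fs, f0, gain_db, q):
    """RBJ audio-EQ-cookbook peaking filter coefficients (b, a)."""
    a_lin = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    b = [1.0 + alpha * a_lin, -2.0 * math.cos(w0), 1.0 - alpha * a_lin]
    a = [1.0 + alpha / a_lin, -2.0 * math.cos(w0), 1.0 - alpha / a_lin]
    return [x / a[0] for x in b], [y / a[0] for y in a]

def magnitude_db(b, a, fs, f):
    """Magnitude response of the biquad at frequency f, in dB."""
    z = cmath.exp(-2j * math.pi * f / fs)
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20.0 * math.log10(abs(h))

# Hypothetical: a room mode produces a +6 dB peak at 54 Hz, so we cut -6 dB
b, a = peaking_biquad(fs=48000, f0=54.0, gain_db=-6.0, q=8.0)
cut_at_mode = magnitude_db(b, a, 48000, 54.0)   # -6 dB, right on the peak
cut_at_500 = magnitude_db(b, a, 48000, 500.0)   # essentially 0 dB far away
```

Filling a narrow dip the same way would demand a large boost and amplifier headroom for little audible benefit, which is one more reason to leave dips alone.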

[Attached figure: average steady-state room curve using very highly rated loudspeakers as a guide]


An addendum: If you think about it, many/most? suppliers of "room EQ" algorithms do not manufacture loudspeakers. If they did, they might treat them more kindly. This is not a blanket statement, but one with significant truth. The stated or implied sales pitch is: give me any loudspeaker in any room and my process will make it "perfect". A moment of thought tells you that this cannot be true.

Check out: Toole, F. E. (2015). “The Measurement and Calibration of Sound Reproducing Systems”, J. Audio Eng. Soc., vol. 63, pp. 512-541. This is an open-access paper available to non-members at www.aes.org: http://www.aes.org/e-lib/browse.cfm?elib=17839

Research has shown that approximately 30% of one's judgment of sound quality is a reaction to bass performance - both extension and smoothness. Given this, if one has selected well-designed loudspeakers, the dominant problems are likely to be associated with the room itself and the physical arrangement of loudspeakers and listeners within it - i.e. the bass.

Conclusion: full-bandwidth equalization may not be desirable, especially if any significant portion of the target curve is flat. On the other hand, some amount of bass equalization is almost unavoidable, and it will be most effective in multiple-subwoofer systems (Chapter 8). It is useful if the EQ algorithm can be disabled at frequencies above about 400-500 Hz. There should be no difference between equalization for music and for movies. Good sound is good sound, and listeners tell us that the most preferred sound is "neutral". Because of the circle of confusion, some tone-control tweaking may be necessary to get it at times.
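As a trivial sketch of "disable the EQ above the transition region": given a list of auto-generated parametric filters (all values hypothetical), simply discard those centered above the chosen cutoff.

```python
# Hypothetical auto-EQ output: (center_hz, gain_db, Q) parametric filters
proposed_filters = [
    (43, -5.0, 6.0),    # modal peak cut
    (61, -4.0, 5.0),    # modal peak cut
    (110, 3.0, 2.0),    # broad low-frequency fill
    (2400, -2.5, 4.0),  # likely chasing interference ripple - suspect
    (6800, 1.5, 3.0),   # likely chasing interference ripple - suspect
]

TRANSITION_HZ = 450  # "about 400-500 Hz" per the post; the exact choice is ours

bass_only = [f for f in proposed_filters if f[0] < TRANSITION_HZ]
# Only the three bass filters survive; the 2.4 kHz and 6.8 kHz
# "corrections" above the transition region are dropped.
```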

When I read the manuals for some room EQ systems, they usually offer suggested target curves. If they happen to work, fine, but if they don't, they often provide user friendly controls to adjust the shape of the target curve. This is nothing more than an inconvenient, inflexible tone control. It is a subjective judgment based on what is playing at the moment. It is not a calibration.

Sources: https://www.avsforum.com/forum/89-s...aster-reference-monitor-143.html#post57291428 https://www.avsforum.com/forum/89-s...aster-reference-monitor-143.html#post57293354
(I had asked Dr. Toole if it's OK for him to repost them, and he not only agreed but said he is thankful if the knowledge is spread around.)
 
thewas
Some thoughts on room EQ and the "Harman" curve

This discussion illustrates some fundamental issues with respect to room equalization. I spend many pages in my book discussing this in detail, but here are some simplified thoughts.

As the "creator" of the "new Harman target curve" I can clear up some misunderstandings. Those who have the 3rd edition of my book can see where the curve came from - Figure 12.4. It is nothing more than the steady-state room curve that results from measuring any of several forward-firing loudspeakers that have been awarded very high ratings in double-blind listening tests. These steady-state room curves are substantially predictable from the "early reflections" curve in the spinoramas, as is illustrated.

Now, if you measure such a curve or something very close to it, and your speakers are conventional forward firing designs, it means that you probably have chosen well. Small tilt-like deviations may be seen and broadband tone-control-like adjustments can be made to achieve a satisfactory overall spectral balance. No small detail adjustments should be made because it is highly likely that they are acoustical interference (non-minimum-phase) phenomena that two ears and a brain interpret as innocent spaciousness - room sound. "Correcting" these is likely to degrade the audible performance of truly good loudspeakers - unfortunately this behavior is not uncommon in auto-EQ algorithms created by companies that do not make loudspeakers. Their marketing philosophy is that their magic can make any loudspeaker in any room into a perfect system. Sorry, but a small omni mic and an analyzer are not the equivalent of two ears and a brain. It is not uncommon to be forced to override auto EQ with manual adjustments to restore the inherent sound quality of excellent loudspeakers. In some cases the "off" icon is the preferred solution.

The simple fact is that a steady-state room curve is not accurately descriptive of sound quality - comprehensive anechoic data are remarkably capable, but such data are rare.

The Harman curve is not a "target" in the sense that any flawed loudspeaker can be equalized to match it and superb sound will be the reward. The most common flaws in loudspeakers are resonances (which frequently are not visible in room curves) and irregular directivity (which cannot be corrected by equalization). The only solution to both problems is better loudspeakers, the evidence of which is in comprehensive anechoic data.

Remember, the Harman curve relates to conventional forward-firing loudspeaker designs. Legitimate reasons for differences are different loudspeaker directivities - omni, dipoles, etc. - or rooms that are elaborately acoustically treated, or both.

If a "target" curve has been achieved, and the sound quality is not satisfactory, the suggestion is often to go into the menu, find the manual adjustment routine, and play around with the shape of the curve until you or your customer like the sound. This is not a calibration. This is a subjective exercise in manipulating an elaborate tone control. Once set it is fixed, and it will reflect timbral features of the music being listened to at the time. In other words, the circle of confusion is now included in the system setup. By all means do it, but do not think that the exercise has been a "calibration". Old-fashioned bass & treble tone controls and modern "tilt" controls are the answer, and they can be changed at will to compensate for personal taste and for excesses or deficiencies in recordings. Sadly, many "high end" products do not have tone controls - dumb. It is assumed that recordings are universally "perfect" - wrong!

All that said, equalization of steady-state room curves at low frequencies is almost mandatory in small rooms. Multiple subwoofers can reduce seat-to-seat variations so that the EQ works for more than one listener - Chapter 8.

Those wanting to dig deeper can read my book or look at this open-access paper:
Toole, F. E. (2015). “The Measurement and Calibration of Sound Reproducing Systems”, J. Audio Eng. Soc., vol. 63, pp. 512-541. This is an open-access paper available to non-members at www.aes.org: http://www.aes.org/e-lib/browse.cfm?elib=17839

Source: https://www.avsforum.com/forum/89-s...vel-home-theater-thread-112.html#post57820394
 
thewas
Some more very helpful comments from him

I think if you go back and review my - extensive - contributions to this thread you will realize that "neutral" is not "only in anechoic chamber". What has been found over several decades of conscientious investigation and publication is:
(a) in double blind tests in normally reflective rooms (different ones over the years) listeners give the highest ratings to loudspeakers that measure essentially flat and smooth on axis, and at least smooth off axis in an anechoic chamber or functional equivalent. What they are recognizing and responding favorably to is the absence of resonances - i.e. neutrality.
(b) Room curves do not correlate as well with listener preferences, except at bass frequencies, below the roughly 300-400 Hz transition frequency. Adjusting loudspeakers having different flaws to match full-bandwidth room curves of highly rated loudspeakers cannot yield the same high-quality sound. This is especially true if narrow-band equalization is used above the transition frequency. This fact is not to be found in the advertising literature of "room EQ" products. Guess why?
(c) Although different rooms add their own "signature" sounds, when evaluated in different rooms listeners unerringly zero in on the most neutral - resonance-free - loudspeakers. It is merely that loudspeakers in different rooms are identifiable, just as in live sound we hear the same voices and musical instruments, but in different rooms. Humans have a significant ability to separate the two - except at bass frequencies in small rooms, where the room is the dominant factor and intervention is necessary to restore broadband neutrality - Chapter 8 in the 3rd edition.
Audio professionals, in my experience, are generally not aware of these facts. However, they substantially determine what we get to listen to.
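The transition frequency mentioned in (b) is related to (though, in small rooms, not identical to) the classic Schroeder frequency. A quick sketch, with hypothetical room numbers:

```python
import math

def schroeder_frequency(rt60_s, volume_m3):
    """Classic Schroeder crossover estimate: f = 2000 * sqrt(T60 / V).

    Derived for large rooms; in small listening rooms the practical
    transition region (~300-400 Hz per the text) sits well above this
    estimate, so treat it as a rough lower bound only.
    """
    return 2000.0 * math.sqrt(rt60_s / volume_m3)

# Hypothetical 50 m^3 living room with a 0.4 s reverberation time
f_transition = schroeder_frequency(0.4, 50.0)  # about 179 Hz
```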

From https://www.avsforum...57.html#post58521736



As for "voicing", what I wrote early in Chapter 3 says it all:
LOUDSPEAKER “VOICING”. Music composition and arrangement involves voicing to combine various instruments, notes and chords to achieve specific timbres. Musical instruments are voiced to produce timbres that distinguish the makers. Pianos and organs are voiced in the process of tuning, to achieve a tonal quality that appeals to the tuner or that is more appropriate to the musical repertoire. This is all very well, but what has it to do with loudspeakers that are expected to accurately reproduce those tones and timbres?
It shouldn’t be necessary if the circle of confusion did not exist, and all monitor and reproducing loudspeakers were “neutral” in their timbres. However that is not the case, and so the final stage in loudspeaker development often involves a “voicing” session in which the tonal balance is manipulated to achieve what is hoped to be a satisfactory compromise for a selection of recordings expected to be played by the target audience. There are the “everybody loves (too much) bass” voices, the time-tested boom and tizz “happy-face” voices, the “slightly depressed upper-midrange voices” (compensating for overly bright close-miked recordings, and strident string tone in some classical recordings), the daringly honest “tell it as it is” neutral voices, and so on. It is a guessing game, and some people are better at it than others. It is these spectral/timbral tendencies that, consciously or unconsciously, become the signature sounds of certain brands. Until the circle of confusion is eliminated, the guessing game will continue, to the everlasting gratitude of product reviewers, and to the frustration of critical listeners. It is important for consumers to realize that it is not a crime to use tone controls. Instead, it is an intelligent and practical way to compensate for inevitable variations in recordings, i.e. to “revoice” the reproduction if and when necessary. At the present time no loudspeaker can sound perfectly balanced for all recordings.

From https://www.avsforum...57.html#post58522906



To Resonate or not to Resonate, That is the Question?
Apologies to Shakespeare . . .

This discussion has drifted into an area of literal interpretations of classical definitions with some semantics thrown in. If there is a shallow hump in a frequency response, in literal terms it is a very low-Q resonance, implying a mechanical, electrical or acoustical system with a "favored" frequency range. In a physical system as complex as a loudspeaker it may sometimes be difficult to decide what is happening. Crossovers are equalizers, by any other name, that interact with transducers having inherently non-flat tendencies - the result is a combination of both electrical and mechanical elements. Equalizers can be resonators just as surely as acoustical cavities, enclosure panels and cone breakup. So a frequency response feature may be partly mechanical and partly electrical, but the end result can be that of a resonance having Q. Achieving a desirable flat on-axis sound using passive or active networks can result in non-flat off-axis behavior because transducers have frequency-dependent directivity. In a room the result is that even with flat direct sound, the early reflected and later reflected sounds may exhibit emphasis over a range of frequencies that could forgivably be interpreted as a low-Q resonance.

As discussed many times in this thread, transducers are inherently minimum-phase devices, so electrical EQ can modify the performance of mechanical resonances - a huge advantage for active loudspeakers or those for which accurate anechoic data are available.

In the crossover between a 6- to 8-inch woofer and a 1-inch tweeter, a directivity mismatch at crossover is unavoidable. Above crossover, the tweeter has much wider dispersion than the woofer, so there is an energy rise over a wide frequency range. Is this a resonance? Technically not, in the dictionary-definition sense. However, there is a broad hump in radiated energy, so perceptually it may appear to be so. Figure 4.13 shows such an example, where even crude room curves were adequate to recognize the above-crossover energy excess and attenuate it. Because wide-bandwidth (low-Q) phenomena are detected at very small deviations there was a clear improvement in perceived sound quality even though medium- and higher-Q "real" resonances were essentially unchanged. Addressing all of the "resonances" was, not surprisingly, the best.

So, don't get hung up on semantics. Deviations from a linear frequency response are all describable as "resonances" if one chooses to. Broadband trends are very low-Q, narrower trends, medium Q, and so on. Even a bass tone control is an opportunity to manipulate a "resonance" - in this case the hump that develops above the low cutoff frequency which, depending on the system design will have a Q.

Narrow dips are usually the result of destructive acoustical interference and are usually audibly innocuous because they change with direction/position. Broader dips can be interpreted as anti-resonances if one chooses to, whether there is an associated frequency selective absorption process or not. Mostly not.

From https://www.avsforum...61.html#post58530612



As I said, because loudspeaker transducers are minimum-phase devices one can use electrical parametric EQ to attenuate the mechanical resonances in transducers - using anechoic data of course. So, if you add a hump to an otherwise neutral/resonance free speaker you have added a resonance. This is why it is crucial to pay attention to what "room equalizers" are doing. If they "see" a ripple in a measured curve caused by acoustical interference of direct and reflected sound, and try to flatten it, they may be adding a resonance and degrading a good loudspeaker.

From https://www.avsforum...61.html#post58530898

Yes spatial averaging smooths curves; they look better - that is one reason why it is favored. However, because these curves are typically steady-state curves little of value is learned about the speaker, and nothing is specific to the prime listening location. Spectral smoothing is another "feature" that smooths curves. Tradeoffs are not always advantages.

Spatial averages reduce the ability to be analytical about room modes/standing waves.

So, it comes down to "how much do you know about the loudspeaker - in anechoic data?" If none, the usual case, such room curve data cannot be trusted. If one knows a lot, e.g. a spinorama, one can predict the room curve with reasonable precision. The room curve, by itself, is not reliably associated with sound quality.
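To see why spatial averaging hides seat-specific bass problems, consider a toy energy average of two hypothetical seats, one sitting on a modal peak and one in a null:

```python
import math

def spatial_average_db(curves_db):
    """Energy (power) average of several measured responses, in dB.

    Each curve is a list of dB values on a shared frequency grid.
    Energy averaging is one common convention; some tools average
    the dB values directly instead.
    """
    n = len(curves_db)
    averaged = []
    for bins in zip(*curves_db):
        mean_power = sum(10.0 ** (v / 10.0) for v in bins) / n
        averaged.append(10.0 * math.log10(mean_power))
    return averaged

# Hypothetical 3-bin curves: seat A on a +6 dB mode, seat B in a -10 dB null
seat_a = [0.0, 6.0, 0.0]
seat_b = [0.0, -10.0, 0.0]
avg = spatial_average_db([seat_a, seat_b])
# The middle bin averages to about +3.1 dB: the null at seat B has
# disappeared from the curve, though that listener still hears it.
```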

From https://www.avsforum...61.html#post58534080



I have one of the most powerful and expensive multichannel processors on the market, widely praised for its processing power and complicated signal processing. It is enormously flexible, clearly designed by smart people in the math and DSP categories, but equally clearly these people did not understand the acoustics and psychoacoustics of loudspeakers and rooms - my speciality. The result is that, in its self-calibration mode it does things that should not be done.

I won't go into the details of my history with this unit, but it began with a setup procedure, using their proprietary microphone, spatial averaging with weighted mic locations, and allusions to combined IIR and FIR processing promising a very special result.

It was indeed special, because the superb sound of my Revel Salon2s was clearly degraded. Measurements I made with REW disagreed with the unit's displayed result, but agreed with my ears. With help from a product specialist, manual EQ overrides were able to restore the essence of good sound.

Some subsequent fiddles have taken it to the point that I can enjoy programs, but only by overriding or disabling some of the internal processes.

I know that there are other digital equalizers with problems. All originate with clever math/DSP engineers doing things that may make academic sense, but that pay insufficient attention to the peculiarities of human perception. At professional audio gatherings I have had extended discussions/arguments with some of their engineers. It has always come down to opinion, not fact, and the opinions are inclined to enhance the customers' perceived value in the product. It is part of a mighty struggle to be different or distinctive in a product that delivers something that nowadays many people can do for themselves with off-the-shelf DSP, free measurement software and a $100 mic.

None of these processes are supported by published double-blind subjective evaluations. Tell me if I am wrong.

The universal availability of "room EQ" is now a kind of "disease" in audio. People place trust in these devices that is misplaced. As I have stated several times, the operating manual of the "calibration" device basically states that if the customer does not like the sound from the default target curve, they should change the target curve. At this point it becomes a subjectively guided tone-control exercise, not a calibration. The "circle of confusion" for whatever program was used during the tweaking is now permanently installed in the system.

All that said, equalization is part of the necessary treatment of room modes in bass. There is no escape from that, but even there, something that should be simple is sometimes compromised. Chapter 8 in the 3rd edition.

Toole, F. E. (2015). “The Measurement and Calibration of Sound Reproducing Systems”, J. Audio Eng. Soc., vol. 63, pp. 512-541. This is an open-access paper available to non-members at www.aes.org: http://www.aes.org/e-lib/browse.cfm?elib=17839

From https://www.avsforum...62.html#post58538632



E-r-r-r-r, have we not been saying that the small peaks and dips in room curves are likely to be caused by non-minimum-phase phenomena, most likely reflections, and cannot be equalized? To two ears and a brain they are innocuous spaciousness, not coloration. It is attempts to "correct" such fluctuations that lead to degradation of well-designed loudspeakers. So, above the transition frequency, small details in steady-state room curves should be ignored, because unless you have comprehensive anechoic data on the loudspeakers you don't know what caused them.

Setting up a system according to personal preferences in spectral balance includes the circle of confusion, and therefore generalization to all programs is not possible. Depending on the shortcomings in your loudspeakers and room, results can vary. Better to have easily accessible tone controls that can be instantly adapted to your personal preference - for any program.

I don't have them, and miss them, so I, like you, arrived at a compromise setting that suits some programs better than others. Funny that so many elaborate, expensive, high-end audio products have no easily accessible tone controls. I can only

From https://www.avsforum...62.html#post58544284



For perspective it is worth remembering that about 30% of the factors influencing our judgement of sound quality relate to bass performance - and this is dominated by the room, only correctable in-situ. So, ANY loudspeaker can sound better after room EQ, so long as it competently addresses the bass frequencies - this is not a guarantee, but really is not difficult for at least the prime listener.

Above the transition frequency the M2 is sufficiently good that it should not require anything beyond broadband spectral tilts to fit with preferred program material.

From https://www.avsforum...60.html#post58544590
 
thewas
The listening-position response is, in the end, a variable mix of direct sound, early reflections and the remaining reflections (sound power). Using less directive speakers (e.g. omnidirectional designs), having a less reverberant room, or sitting closer to the loudspeakers all result in a less declining listening-position response. That is also why a single target room curve doesn't make sense: it depends on the directivity of the speaker, the reverberation characteristics of the room and the listening distance.
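The mix described here is what the spinorama's "estimated in-room response" tries to capture. Below is a sketch using the common CTA-2034-style 12/44/44 energy weighting of listening window, early reflections and sound power; those fixed weights are that convention's assumption, not a number from this post, and the whole point above is that the real mix shifts with directivity, room and distance.

```python
import math

def estimated_in_room_db(lw_db, er_db, sp_db, weights=(0.12, 0.44, 0.44)):
    """CTA-2034-style estimated in-room response (energy-weighted mix).

    lw/er/sp are listening-window, early-reflections and sound-power
    curves in dB on a shared frequency grid. The 12/44/44 split is the
    common spinorama convention, assumed here for illustration.
    """
    out = []
    for lw, er, sp in zip(lw_db, er_db, sp_db):
        energy = (weights[0] * 10.0 ** (lw / 10.0)
                  + weights[1] * 10.0 ** (er / 10.0)
                  + weights[2] * 10.0 ** (sp / 10.0))
        out.append(10.0 * math.log10(energy))
    return out

# Hypothetical single bin: flat direct sound, falling off-axis energy
pir = estimated_in_room_db([0.0], [-1.0], [-4.0])
# The result lies between the inputs, pulled mostly toward ER and SP
```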
 

Koeitje

Major Contributor
Joined
Oct 10, 2019
Messages
2,282
Likes
3,870
People here seem to hate ATC, but their midrange domes might be the right way to go?
 

Wayne A. Pflughaupt

Active Member
Joined
Mar 14, 2016
Messages
285
Likes
256
Location
Corpus Christi, TX
Excellent reading, thanks for posting!

I’ve always been averse to DSP room correction, mainly because I’m a cheapskate who always uses older equipment, and my current stuff has early technology. People seem duly impressed with the “latest and greatest” processes like Dirac, though. Being curious I’d like to try it, even if I am skeptical.

This comment is particularly brutal – Ouch!

[DSP] is enormously flexible, clearly designed by smart people in the math and DSP categories, but equally clearly these people [do] not understand the acoustics and psychoacoustics of loudspeakers and rooms...

I recently had an unusual (for me) experience with EQ. Some months back I bought some Canton speakers that were a big improvement over my previous ones, but I felt they were a tad strident. REW confirmed what I was hearing, a slight “mound” in the 4kHz range. I flattened it with my equalizer and – didn’t like the way it sounded! I changed the gain reduction from the “graph-recommended” -4 dB or so to only -2 dB, and still didn’t like it.

Mr. Toole also noted something else I’ve recently experienced:

Peaks [in subwoofer response] can be attenuated by EQ, but narrow dips should be left alone - fortunately they are difficult to hear: an absence of sound is less obvious than an excess.

My current situation puts my sub in a less-than-ideal location that gave me a null, the first time I’ve ever had one. Interestingly (and fortunately), with music program it isn’t audible.

Regards,
Wayne A. Pflughaupt
 

aarons915

Addicted to Fun and Learning
Forum Donor
Joined
Oct 20, 2019
Messages
685
Likes
1,140
Location
Chicago, IL
My listening comparisons over the years agree with Dr. Toole and the use of EQ. You really need a Spinorama style measurement before messing with EQ above the transition frequency in my opinion, just to make sure you're not doing more harm than good. I talked a bit about the EQ I use in the LS50 thread I recently posted but my EQ based on the Spinorama style measurement ended up sounding better to my ears than the previous EQ based on my in-room response, even though the EQ was similar.
 
thewas
My experience with my quite wide loudspeaker collection (from the 70s until today) is similar: equalizing loudspeakers with non-continuous directivity to some kind of predefined target curve often makes them sound even worse - for example by filling in a "BBC dip" that was there on purpose to compensate for the decreasing directivity of a tweeter without a waveguide.
So for loudspeakers with smooth directivity, above 300-500 Hz I only use anechoic data, or room measurements from the moving-mic method (MMM), which also reduces the influence of local disturbances like reflections. http://www.ohl.to/audio/downloads/MMM-moving-mic-measurement.pdf
 

Pio2001

Senior Member
Joined
May 15, 2018
Messages
317
Likes
507
Location
Neuville-sur-Saône, France
I have tried both kinds of equalization on my system: speakers with a flat anechoic response plus bass correction, and full-band equalization from the listening position.

The speakers are Neumann KH-120 in a reverberant room. They have a completely flat frequency response curve without correction.

[Attached image: 111_Neumann.png]


Listening distance: 2.1 meters; decay time at 500 Hz: 0.45 seconds. Treble tilt set to -1.

It took a very long time, with trial and error, to properly set up the manual bass correction. Starting from two huge peaks at 54 and 69 Hz, with a big depression around 100 Hz, very strong corrections, with unstable results, were needed. Here are 6 single-point sweep measurements made within 30 cm of the listening position:

[Attached image: 24-HarmonicOverlay.png]
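Peaks like the 54 and 69 Hz ones above are typically low-order room modes. A quick sanity-check sketch of axial-mode frequencies for one hypothetical 5 m room dimension (not Pio2001's actual room):

```python
def axial_modes(length_m, speed_of_sound=343.0, count=3):
    """First few axial standing-wave frequencies for one room dimension:
    f_n = n * c / (2 * L)."""
    return [n * speed_of_sound / (2.0 * length_m)
            for n in range(1, count + 1)]

modes = axial_modes(5.0)  # [34.3, 68.6, 102.9] Hz for a 5.0 m dimension
```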

And the final correction (left/right):

[Attached image: 92_CorrectionC.png]


In the end, the result sounds perfect, although it measures oddly: the level decreases about 2 dB from 200 Hz down to 40 Hz instead of increasing. I suppose this is what I preferred, because the result is not smooth. Here is the result with both speakers, measured twice, two months apart.

[Attached image: 99_20190127_NovJanEcranArriere.png]


Then I tried a complete correction, made by Jean-Luc Ohl, by sending him the measurements and getting two impulse responses in return (along with a lot of interesting graphs).

[Attached image: 44_Pio2001-measurement-100-stereo-correction-FIR_C2.png]


The result was worse: since the impulse responses were only about 6000 samples long, the bass peaks were no longer corrected properly, and the overall balance was too bright because the chosen target curve was not steep enough.
That means the direct sound was no longer flat: there was a treble boost.

After comparing the measurements, Jean-Luc sent me another correction that followed more closely the natural curve I got at the listening position when the direct sound of the speakers is flat.
It sounded better, but it was impossible to compare it with the bass-correction-only setup, as the bass itself was not properly corrected, and that was too distracting.

So I decided to build a mix of the two corrections: I added my own correction for the low frequencies to Jean-Luc's automatic correction for the rest of the spectrum:

[Attached image: 100_CorrectionOhlPio.png]


Now, the difference between this mixed correction and the pure correction C above is very small. It is difficult to tell which one sounds best, but I have a slight preference for correction C, with no change to the direct sound of the speakers.

Comparing very carefully the frequency responses of both corrections measured at the listening position, I realized that the main differences were a lack of accuracy around 200 Hz and a slight change in the general balance: the green curve is overall lower in the 600-2000 Hz range and higher in the 2000-8000 Hz range:

[Attached image: 101_ResultatOhlPioVar.png]


In conclusion, the obvious variations between 1000 and 10000 Hz that I measure at the listening position were not a problem. The problems with automatic corrections never came from these unwanted corrections, but always from a general target curve that was not perfect.
The only perfect target curve, the one that sounds natural, is the one obtained with speakers equalized in anechoic conditions.
Looking at the last correction above, the two positive corrections at 2000 and 5500 Hz in the "correction Ohl" part are not audible problems in themselves. The problem is that they raise the average level of the whole 1000-20000 Hz range. They should be compensated by a lower overall level.
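That "compensated by a lower overall level" idea can be sketched numerically: shift the correction curve so that its average over the affected band is 0 dB, so a few positive EQ bands no longer raise the whole range. The function name, band limits, and toy +3 dB band are my assumptions for illustration.

```python
# Sketch, under assumptions: recenter a correction curve so boosts and
# cuts balance out within a band, instead of raising its average level.
import numpy as np

def recenter_correction(freqs, corr_db, f_lo=1000.0, f_hi=20000.0):
    """Subtract the band's mean gain (in dB) so the band average becomes 0 dB."""
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return corr_db - corr_db[band].mean()

freqs = np.logspace(np.log10(20), np.log10(20000), 512)   # log spacing weights octaves equally
corr = np.zeros_like(freqs)
corr[(freqs > 1800) & (freqs < 2200)] = 3.0               # toy +3 dB band at ~2 kHz
balanced = recenter_correction(freqs, corr)               # same shape, lower overall level
```

The same recentering applies to the 200-600 Hz question discussed below: the correction keeps its shape but no longer changes the band's average level.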

In fact, that is the very problem I have been facing from the beginning while equalizing the 35-600 Hz range: should I decrease the peaks, or rather raise the shelves between the peaks?
Note that I am not saying "filling the dips" but "raising the shelves" (sorry for my bad English): filling a narrow dip is usually very bad, while adding a general low shelf and decreasing the peaks accordingly is harmless.
In other words, where is the target supposed to be?

So far my answer is: from 200 to 600 Hz, the correction added after the speakers are made flat on-axis in anechoic conditions should be balanced between positive and negative, so that the average level of the direct on-axis sound in the 200-600 Hz range remains the same as above 600 Hz.
Below 200 Hz, I don't completely understand what's going on. It seems my correction is still very imperfect, and I prefer keeping this frequency range in the background, as it still sounds quite bad compared to the clean bass of headphones such as the Sennheiser HD600 or Beyer DT-880 Pro.
 
Last edited:

JoachimStrobel

Addicted to Fun and Learning
Forum Donor
Joined
Jul 27, 2019
Messages
517
Likes
303
Location
Germany
A great summary. I still carry this graph with me, in which a group of people were asked to "adjust" a room curve to their liking. The graph is from Toole's book. I always see the thick black curve being discussed, but not people's taste, and in particular the huge bass boost. The high-frequency tilt is an average of a boost from untrained listeners and a stronger tilt from trained ones. Can somebody shed light on the origin and interpretation of this graph? It confuses me.
3C249EE2-1B30-46A2-8B27-A89EB885AFF2.jpeg
 
OP
thewas

Master Contributor
Forum Donor
Joined
Jan 15, 2020
Messages
6,672
Likes
15,921
Thank you Joachim, here you can read more about that graph http://www.aes.org/e-lib/browse.cfm?elib=17839

Also thank you very much, Pio2001, for your thorough post, which confirms my past experiences: as said, "correction" based on listening-position measurements of very neutral loudspeakers like your KH120 above the modal region is usually counterproductive, so it is better done on anechoic measurements.

About the lower region my experience is similar: when using a neutral speaker like your Neumann, the target curve should lie somewhere in the middle of the peaks and dips, as those are the influence of the room and local reflections. But that also depends on the recording (some old recordings are quite bass-shy, as they were mixed on huge monitors without room EQ back then) and on playback level (when listening at less than 85 dB, a little bass boost may sound more neutral due to the loudness effect).
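One rough way to put a target "in the middle of the peaks and dips" is to fit a straight line (in dB versus log frequency) through the measured low-frequency response, so the room's modal peaks and dips average out around it. The function name and the synthetic modal response below are my assumptions, purely for illustration.

```python
# Sketch, under assumptions: least-squares line through a measured
# response in dB-vs-log2(frequency) coordinates, as a tilted target
# that sits "in the middle" of the room's peaks and dips.
import numpy as np

def fit_tilted_target(freqs, resp_db):
    """Fit target(f) = slope * log2(f) + intercept by least squares."""
    x = np.log2(freqs)
    slope, intercept = np.polyfit(x, resp_db, 1)
    return slope * x + intercept

freqs = np.logspace(np.log10(30), np.log10(300), 200)   # modal region, toy range
resp = 2.0 * np.sin(6 * np.log2(freqs))                 # synthetic +/-2 dB peaks and dips
target = fit_tilted_target(freqs, resp)                 # smooth line through the middle
```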

Funnily enough, I have found that one specific Harman target curve fits the moving-microphone listening-area responses of my loudspeakers in my room very closely, so when correcting to it I also linearize the direct sound, which is very convenient, as I no longer need a two-step correction as before.

Here, for example, is my correction with it on my desktop speakers, the passive KEF LS50, which have very smooth horizontal and vertical directivity but not perfect frequency responses:

1.jpg


Here are my measured windowed (so quasi-anechoic) responses of them, without and with the above EQ:

2.jpg

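The "windowed" (quasi-anechoic) idea can be sketched as follows: truncate the impulse response before the first strong room reflection arrives, apply a fade-out, and take the FFT, so the magnitude response reflects mostly direct sound. The 5 ms window, the synthetic impulse, and the toy reflection at 10 ms are my assumptions, not the actual measurement settings used above.

```python
# Sketch, under assumptions: quasi-anechoic magnitude response from the
# first few milliseconds of an impulse response, excluding reflections.
import numpy as np

def quasi_anechoic_db(ir, fs, window_ms=5.0):
    """Magnitude response (dB) of the first window_ms of an impulse response."""
    n = int(fs * window_ms / 1000.0)
    windowed = ir[:n] * np.hanning(2 * n)[n:]   # fade-out half of a Hann window
    spectrum = np.fft.rfft(windowed, 8192)
    return 20 * np.log10(np.abs(spectrum) + 1e-12)

fs = 48000
ir = np.zeros(4800)
ir[0] = 1.0                         # ideal impulse: a perfectly flat speaker
ir[int(0.010 * fs)] = 0.5           # toy floor reflection arriving at 10 ms
resp_db = quasi_anechoic_db(ir, fs) # flat: the reflection falls outside the window
```

The trade-off is resolution: a 5 ms window cannot resolve anything much below a few hundred hertz, which is why the windowed view is only meaningful in the upper region.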

As can be seen, the quasi-anechoic response is nicely linearised in the upper region to within approx. ±1 dB, and the little frequency response problems of the LS50, like its famous presence "hill", are corrected.

But as said, this only works because of the smooth directivity and because that target curve fits my room response so well even before correction. As Dr. Bruggemann of Acourate always says when people ask him about target curves, the best is to look at the difference to your current measured curves.
 

SDX-LV

Active Member
Joined
Jan 11, 2020
Messages
132
Likes
139
Location
Sweden
Hi, great collection of quotes - I found a few that I had personally missed before.

Actually, among everything I read, including the book, I never understood what people with an irregular listening space should do. All the recommendations say not to touch the speakers above the transition frequency and to avoid automatic calibration, but I have one speaker in a sort-of corner of an irregular room, one in the middle of a wall, a center in between under a TV, plus two surrounds mounted at different distances in similarly strange corner conditions. Actually, my situation is not much more extreme than Dr. Toole's own home theater today, but what could I use to set my system up if I don't want to spend lots of money on luxury equipment, don't want ugly professional rack-mount boxes, and want the system to work with a simple Blu-ray player as a source?

Is there a simple solution to the speaker setup problem for a consumer with an AVR? As bad as Audyssey XT or similar systems are, they do get something right. Alternatively, I could use the quite basic manual EQ inside the AVR, which may or may not be any better than Audyssey when used with an external measurement system.

The funny thing is that this dilemma stops me from buying really excellent speakers, as I know I will kill them by placement, and any system that could fix the placement issues costs craaazy money (JBL Synthesis custom install, Trinnov, Genelec, even JBL Intonato 24 calibration). :)
 

Pio2001

Senior Member
Joined
May 15, 2018
Messages
317
Likes
507
Location
Neuville-sur-Saône, France
I have my manual correction loaded in a MiniDSP 2x4 device. It is very small and very cheap. It needs a computer to be programmed: the correction can be done in REW, exported from REW, imported into the MiniDSP control software, and then loaded into the MiniDSP's internal memory.

Then the computer can be disconnected, and the MiniDSP can run alone with its own external power supply, either between the pre-out and active speakers, between the source and the amplifier, or between the amplifier's DSP out and DSP in connectors. In the latter cases, the gain of the MiniDSP must be adjusted to avoid digital clipping.

Cheap and small hardware, but it does nothing by itself: you have to understand the whole process of creating a room correction and do it yourself.
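The clipping headroom mentioned above can be reasoned about simply: if the loaded EQ applies net positive gain at any frequency, a full-scale input can clip, so the input trim should be reduced by at least the largest boost. The function below is a hypothetical, conservative per-band sketch (overlapping boosts can sum to more), and the filter gains are made-up examples, not a real MiniDSP configuration.

```python
# Sketch, under assumptions: how much input attenuation a set of PEQ
# band gains needs so that the largest boost cannot push a full-scale
# signal past 0 dBFS. Conservative: ignores overlap between bands.

def required_attenuation_db(filter_gains_db):
    """Return the (negative) trim in dB needed to offset the biggest boost."""
    max_boost = max(filter_gains_db + [0.0])   # 0.0: no trim needed if all bands cut
    return -max_boost                          # e.g. a +4 dB boost needs a -4 dB trim

trim = required_attenuation_db([-6.0, 3.5, -2.0, 4.0])   # toy PEQ bands -> -4.0 dB
```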
 

SDX-LV

Active Member
Joined
Jan 11, 2020
Messages
132
Likes
139
Location
Sweden
Nope, a MiniDSP cannot be inserted into a Blu-ray to AVR to 5.1 speaker setup. The multi-channel, 4K and HDMI requirements eliminate most of the things out there :(
 

krabapple

Major Contributor
Forum Donor
Joined
Apr 15, 2016
Messages
3,154
Likes
3,698
"if one has selected well-designed loudspeakers"

Key phrase that Dr. Toole inserts religiously whenever discussing this topic.

But how to know if yours are 'well designed' without being supplied a proper set of measurements?

Another circle of confusion....
 

Kal Rubinson

Master Contributor
Industry Insider
Forum Donor
Joined
Mar 23, 2016
Messages
5,251
Likes
9,715
Location
NYC
Is there a simple solution to the speaker setup problem for a consumer with an AVR? As bad as Audyssey XT or similar systems are, they do get something right.
You can use the new Audyssey app and restrict the correction to below 300 Hz. Deal with the rest through reasonable room treatment and the use of relatively accurate loudspeakers.
 

Kal Rubinson

Master Contributor
Industry Insider
Forum Donor
Joined
Mar 23, 2016
Messages
5,251
Likes
9,715
Location
NYC
Nope, a MiniDSP cannot be inserted into a Blu-ray to AVR to 5.1 speaker setup. The multi-channel, 4K and HDMI requirements eliminate most of the things out there :(
One can use something from miniDSP's nanoAVR series, which has HDMI input and output.
 

RayDunzl

Grand Contributor
Central Scrutinizer
Joined
Mar 9, 2016
Messages
13,173
Likes
16,930
Location
Riverview FL