
Equalizing loudspeakers based on anechoic measurements (community project)

flipflop

Addicted to Fun and Learning
Joined
Feb 22, 2018
Messages
927
Likes
1,240
I'm confused as to why you'd say that improving NBD_ON will not make NBD_PIR worse, but that it might make SM_PIR worse. To be clear, I agree that it might make SM_PIR worse (again, past a certain point, no change you make can improve all variables simultaneously), I'm just confused as to why you'd treat NBD_PIR differently from SM_PIR in that regard. As far as I can tell, depending on the situation, optimizing NBD_ON could make NBD_PIR worse, or SM_PIR better, or neither, or both.
I think some of the confusion and misconceptions stem from the fact that you haven't had a look at the actual data I'm referring to. You're still talking about things from a very theoretical perspective, while I'm trying to convey how things look in practice on top of that. Maybe I haven't been able to communicate these things well enough.
I've invited you to my conversation with pierre. If you read through it, you might get a better idea of where I'm coming from.

I also think you might be under the assumption that my EQs are meant to optimize the Olive score. They're not. I only use the score components to make sure nothing gets degraded.
 

Robbo99999

Master Contributor
Forum Donor
Joined
Jan 23, 2020
Messages
6,961
Likes
6,806
Location
UK
Can I ask a quick question about one of the direct-sound curves: the listening window. What is the exact definition of that? It's something like +/- 10 degrees from on-axis, isn't it, in the horizontal & vertical? Also, how is one curve arrived at for the listening window if it's supposed to span a number of different listening angles? Does it just average measured curves throughout that listening window, and can you stipulate the resolution of how many curves you average (just thinking aloud), like maybe one curve for each degree off-axis for example? Or does it just measure at the extremes of the listening window rather than in between? I'm just trying to get a proper understanding of it so I can determine for myself how important that factor is, and how I might use it.
 

edechamps

Addicted to Fun and Learning
Forum Donor
Joined
Nov 21, 2018
Messages
910
Likes
3,620
Location
London, United Kingdom
I think some of the confusion and misconceptions stem from the fact that you haven't had a look at the actual data I'm referring to. You're still talking about things from a very theoretical perspective, while I'm trying to convey how things look in practice on top of that. Maybe I haven't been able to communicate these things well enough.
I've invited you to my conversation with pierre. If you read through it, you might get a better idea of where I'm coming from.

Sorry for the misunderstanding. I'm not sure I can find the time to do a deep dive into the data, so unless you're willing to present a summary of your findings, I guess I'll just leave it at that.

Can I ask a quick question about one of the direct-sound curves: the listening window. What is the exact definition of that? It's something like +/- 10 degrees from on-axis, isn't it, in the horizontal & vertical? Also, how is one curve arrived at for the listening window if it's supposed to span a number of different listening angles? Does it just average measured curves throughout that listening window, and can you stipulate the resolution of how many curves you average (just thinking aloud), like maybe one curve for each degree off-axis for example? Or does it just measure at the extremes of the listening window rather than in between? I'm just trying to get a proper understanding of it so I can determine for myself how important that factor is.

CTA-2034A section 5.2 (which is what everyone is using) defines the Listening Window (LW) curve as the power average of the responses at 0°, ±10° vertical, and ±10°, ±20°, ±30° horizontal. It's an average of 9 curves, at 10° steps.
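(If you want to reproduce it, here's a minimal MATLAB sketch of that power average - the placeholder data stands in for the nine measured curves, so treat it as an illustration, not the standard's own procedure:)

Code:
% Power-averaging the nine Listening Window curves.
% Rows of spl: 0, +/-10 deg vertical, +/-10/20/30 deg horizontal (dB SPL).
spl = 85 + randn(9, 200);                % placeholder measurement data
lw  = 10*log10(mean(10.^(spl/10), 1));   % power average across curves, in dB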
 

flipflop

Addicted to Fun and Learning
Joined
Feb 22, 2018
Messages
927
Likes
1,240
Sorry for the misunderstanding. I'm not sure I can find the time to do a deep dive into the data, so unless you're willing to present a summary of your findings, I guess I'll just leave it at that.
No problem at all. I think what I've said so far would constitute a summary of the data. If you want a more elaborate explanation, you will have to read through the conversation.
 

Robbo99999

Master Contributor
Forum Donor
Joined
Jan 23, 2020
Messages
6,961
Likes
6,806
Location
UK
CTA-2034A section 5.2 (which is what everyone is using) defines the Listening Window (LW) curve as the power average of the responses at ±0-10° vertical and ±0-30° horizontal angles. It's an average of 9 curves, at 10° steps.
Cool, I can visualise that. So EQ'ing the listening window is really EQ'ing the speaker over a very large physical area in the room - definitely not just one listening position. Is there an argument, with studio monitor speakers that are designed to be toed in to the listening position, that EQ'ing such a large average could decrease the optimised sound at a perfectly toed-in listening position? As an extension of that argument: if you were EQ'ing your speakers to a listening position (or positions) that falls easily within, say, the first half of that listening window, would you get better results by just using UMIK-measured averages at the various positions instead? I'm talking about just EQ'ing the speaker rather than the room, so let's say 500Hz+.

EDIT: wouldn't it be an average of 21 curves though? See here - this is only the top row, so to speak, so you'd have to multiply the following number of points by 3:

+10V/-30H +10V/-20H +10V/-10H +10V/0H +10V/+10H +10V/+20H +10V/+30H

EDIT #2: even if you assumed the speaker was symmetrical horizontally, i.e. +10H equal to -10H, that's still 12 curves, and it also wouldn't weight the inclusion of on-axis correctly:

+10V/+30H +10V/+20H +10V/+10H +10V/0H
 
Last edited:

aarons915

Addicted to Fun and Learning
Forum Donor
Joined
Oct 20, 2019
Messages
686
Likes
1,140
Location
Chicago, IL
Cool, I can visualise that. So EQ'ing the listening window is really EQ'ing the speaker over a very large physical area in the room - definitely not just one listening position. Is there an argument, with studio monitor speakers that are designed to be toed in to the listening position, that EQ'ing such a large average could decrease the optimised sound at a perfectly toed-in listening position? As an extension of that argument: if you were EQ'ing your speakers to a listening position (or positions) that falls easily within, say, the first half of that listening window, would you get better results by just using UMIK-measured averages at the various positions instead? I'm talking about just EQ'ing the speaker rather than the room, so let's say 500Hz+.

EDIT: wouldn't it be an average of 21 curves though? See here - this is only the top row, so to speak, so you'd have to multiply the following number of points by 3:

+10V/-30H +10V/-20H +10V/-10H +10V/0H +10V/+10H +10V/+20H +10V/+30H

It is supposed to be a better representation of the direct sound, since not everyone uses the same listening angle; you can always create your own listening window based on your own use case. I usually listen right around 20 degrees off-axis since I don't toe my speakers in at all, but the listening window is usually very close to that anyway, so I still EQ the listening window. We talked earlier about the sidewall reflection - that's another case where you can approximate your actual sidewall reflection and just use that curve if you want to be more precise. The CTA-2034 curves are averages based on surveys of listeners' rooms, so they might not match yours very well - quite likely for someone with an odd-shaped room or an odd setup.

It is 9 curves: 0°, ±10°, ±20°, ±30° in the horizontal and ±10° in the vertical, all averaged.
 

Robbo99999

Master Contributor
Forum Donor
Joined
Jan 23, 2020
Messages
6,961
Likes
6,806
Location
UK
It is supposed to be a better representation of the direct sound, since not everyone uses the same listening angle; you can always create your own listening window based on your own use case. I usually listen right around 20 degrees off-axis since I don't toe my speakers in at all, but the listening window is usually very close to that anyway, so I still EQ the listening window. We talked earlier about the sidewall reflection - that's another case where you can approximate your actual sidewall reflection and just use that curve if you want to be more precise. The CTA-2034 curves are averages based on surveys of listeners' rooms, so they might not match yours very well - quite likely for someone with an odd-shaped room or an odd setup.

It is 9 curves: 0°, ±10°, ±20°, ±30° in the horizontal and ±10° in the vertical, all averaged.
Ahh, so the points in space describe a cross rather than a "dotted, filled-in" rectangle. In other words, -10 & +10 deg in the vertical are only measured at 0 degrees horizontal, and the horizontal is only measured at 0 degrees in the vertical? That doesn't really make sense to me, because a filled-in rectangle (21 curves) would describe the overall listening window more completely.

EDIT: although the "cross" would put more emphasis on-axis and decrease the importance of the vertical... hmm - so the listening window is assuming you listen mainly at 0 degrees in the vertical. If this is the case then it fits the use case of toed-in, perfectly aligned studio monitors more than more diffuse listening positions in the room. If this is actually the case then I like the listening window more than I thought, given I toe mine in exactly, and also from a vertical-angle perspective.
 
Last edited:

aarons915

Addicted to Fun and Learning
Forum Donor
Joined
Oct 20, 2019
Messages
686
Likes
1,140
Location
Chicago, IL
Ahh, so the points in space describe a cross rather than a "dotted, filled-in" rectangle. In other words, -10 & +10 deg in the vertical are only measured at 0 degrees horizontal, and the horizontal is only measured at 0 degrees in the vertical? That doesn't really make sense to me, because a filled-in rectangle (21 curves) would describe the overall listening window more completely.

EDIT: although the "cross" would put more emphasis on-axis and decrease the importance of the vertical... hmm - so the listening window is assuming you listen mainly at 0 degrees in the vertical. If this is the case then it fits the use case of toed-in, perfectly aligned studio monitors more than more diffuse listening positions in the room. If this is actually the case then I like the listening window more than I thought, given I toe mine in exactly, and also from a vertical-angle perspective.

Yes - since most people have the tweeter close to ear level, the vertical doesn't have to be more than +/-10, but the average person listens around 10-20 degrees off-axis according to the survey they based it on. I ignore 0 degrees entirely because of that and only look at the LW, ER and SP curves when applying EQ; of course, if those are all good then the PIR will also be good, as it's basically derived from them.
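(As a sketch of that derivation - the 12%/44%/44% weights are how I recall CTA-2034A combining the curves, so treat them as an assumption:)

Code:
% Placeholder LW/ER/SP curves on a common frequency grid, in dB
lw = 85 + randn(1, 200);
er = 83 + randn(1, 200);
sp = 80 + randn(1, 200);
% Assumed weights: PIR = 12% LW + 44% ER + 44% SP, combined as powers
pir = 10*log10(0.12*10.^(lw/10) + 0.44*10.^(er/10) + 0.44*10.^(sp/10));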
 

edechamps

Addicted to Fun and Learning
Forum Donor
Joined
Nov 21, 2018
Messages
910
Likes
3,620
Location
London, United Kingdom
No problem at all. I think what I've said so far would constitute a summary of the data.

Okay, so I assume you're referring back to:

  • A (reasonable) PIR based approach improves NBD_ON, NBD_PIR, and SM_PIR.
  • A direct sound approach improves NBD_ON, NBD_PIR, and risks degrading SM_PIR.

So, if I understand correctly, these are observations, not speculations. My bad.

While these are interesting findings, I'm not sure how they relate to my point. By "(reasonable) PIR based approach" I assume you mean some approach where you try to make both direct sound and PIR look smooth at the same time. And that stands in opposition to the "direct sound approach", by which I assume you mean an approach where you only try to make the direct sound smooth (the logical extreme of which would be to just make NBD_ON equal to zero), at the possible expense of PIR.

My point basically boils down to: when it comes to EQ, we should ignore PIR and only look at direct sound. What I'm reading in your post is "if I optimize only for direct sound, NBD_ON improves, but PIR metrics might degrade". Okay, sure, I don't find that surprising. You also say that if you "optimize for PIR, then all metrics improve". I don't disagree with that, either (though of course NBD_ON will not improve as much as in the "direct sound" strategy, as that would be a contradiction in terms). So I'm not sure what your point is. I agree that PIR metrics might degrade with a direct sound strategy, and my only answer to that, which I've been saying for quite a few posts now, is: it doesn't matter. PIR doesn't matter. PIR matters in the Olive model because it correlates with sound quality (when comparing different speakers, that is), but that doesn't mean PIR directly causes sound quality.

The difference between ON and PIR (i.e. the PIR "directivity index", if you will) varies between the speakers the model was trained on. But when EQ'ing a single speaker, that difference never changes, so the situation is different. The variables that Principal Component Analysis selected to best explain the variance in ratings between speakers with different DI might be quite different from the variables PCA would select to explain the variance in ratings between multiple EQs of the same speaker, a situation in which the DI is always the same. My theory is: if you compare actual preference ratings of various speaker frequency responses where the difference between ON and PIR is always the same (i.e. the EQ situation), and you redo Olive's PCA analysis on that, what will come out is a model with 100% NBD_ON. And nothing else.
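(For concreteness, here's a rough MATLAB sketch of how I understand those two PIR components are computed per Olive's Part II paper - the half-octave band edges and the placeholder curve are my assumptions, not the paper's code:)

Code:
f   = logspace(log10(20), log10(20000), 400);     % frequency grid, Hz
pir = 90 - 1.0*log2(f/100) + 0.3*randn(size(f));  % placeholder PIR, dB

% NBD: mean absolute deviation from each band's average, computed in
% half-octave bands covering roughly 100 Hz to 12 kHz, then averaged.
edges = 100 * 2.^(0:0.5:7);
nbd   = zeros(1, numel(edges)-1);
for k = 1:numel(edges)-1
    in     = f >= edges(k) & f < edges(k+1);
    nbd(k) = mean(abs(pir(in) - mean(pir(in))));
end
NBD = mean(nbd);

% SM: r^2 of a linear regression of level vs log frequency, 100 Hz-16 kHz.
in = f >= 100 & f <= 16000;
r  = corrcoef(log10(f(in)), pir(in));
SM = r(1,2)^2;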

Ahh, so the points in space describe a cross rather than a "dotted, filled-in" rectangle. In other words, -10 & +10 deg in the vertical are only measured at 0 degrees horizontal, and the horizontal is only measured at 0 degrees in the vertical?

Yes. CTA-2034A never uses oblique angles - it only uses angles in the horizontal and vertical planes, relative to the reference axis. So it's a cross, not a rectangle.
 
Last edited:

flipflop

Addicted to Fun and Learning
Joined
Feb 22, 2018
Messages
927
Likes
1,240
By "(reasonable) PIR based approach" I assume you mean some approach were you try to make both direct sound and PIR look smooth at the same time.
No, it just means 'not using an extreme slope to game the model'. I used -1.0 dB/octave for the vast majority of my EQs. Depending on the ERDI of the speaker in question, anything in the range of about -0.5 to -1.8 dB/octave could also be reasonable, I guess.
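(For reference, a slope target like that is just a straight line versus log frequency; a quick MATLAB sketch, where the 1 kHz anchor is an arbitrary choice of mine:)

Code:
slope  = -1.0;                                    % dB per octave
f      = logspace(log10(100), log10(16000), 300);
target = slope * log2(f/1000);                    % 0 dB at the 1 kHz anchor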
What I'm reading in your post is "if I optimize only for direct sound, NBD_ON improves, but PIR metrics might degrade".
Yes.
You also say that if you "optimize for PIR, then all metrics improve".
Yes.
So I'm not sure what your point is.
Those are exactly my points. The PIR based approach is the conclusion I've drawn from those observations.
I agree that PIR metrics might degrade with a direct sound strategy, and my only answer to that, which I've been saying for quite a few posts now, is: it doesn't matter. PIR doesn't matter. PIR matters in the Olive model because it correlates with sound quality (when comparing different speakers, that is), but that doesn't mean PIR directly causes sound quality.

The difference between ON and PIR (i.e. the PIR "directivity index", if you will) varies between the speakers the model was trained on. But when EQ'ing a single speaker, that difference never changes, so the situation is different. The variables that Principal Component Analysis selected to best explain the variance in ratings between speakers with different DI might be quite different from the variables PCA would select to explain the variance in ratings between multiple EQs of the same speaker, a situation in which the DI is always the same. My theory is: if you compare actual preference ratings of various speaker frequency responses where the difference between ON and PIR is always the same (i.e. the EQ situation), and you redo Olive's PCA analysis on that, what will come out is a model with 100% NBD_ON. And nothing else.
Right, I hear you. So not only do you think SM_PIR is irrelevant when EQing speakers, you also think NBD_PIR is. This was one of the things I was hinting at. Glad we got that cleared up.
You have a better understanding of the model than I do, so I'll change my approach back to equalizing the LW, although this time with a -0.2 dB/octave slope (as discussed by @bobbooo). If a resulting EQ degrades any of the score components (except for LFX), it will be replaced by a PIR based EQ. That's the compromise I'm willing to make for now, until I've seen some actual evidence that supports your hypothesis.
 

edechamps

Addicted to Fun and Learning
Forum Donor
Joined
Nov 21, 2018
Messages
910
Likes
3,620
Location
London, United Kingdom
Maybe it would be worth organizing some large-scale online blind test that pits various EQs (following different strategies) against each other. That might be a great way to iterate on EQ strategies and answer important questions like "with DI kept constant, what is the best correlate of sound quality: ON, LW, PIR, or some combination of those". The very nice thing is, EQ blind testing can be done without any special setup (no need for a speaker shuffler or anything crazy like that). And with @amirm's insane measurement cadence, we might quickly arrive at a point where it would be possible to generate EQs for most candidates because we'd have anechoic measurements of their speakers.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,558
Likes
238,946
Location
Seattle Area
Maybe it would be worth organizing some large-scale online blind test that pits various EQs (following different strategies) against each other. That might be a great way to iterate on EQ strategies and answer important questions like "with DI kept constant, what is the best correlate of sound quality: ON, LW, PIR, or some combination of those". The very nice thing is, EQ blind testing can be done without any special setup (no need for a speaker shuffler or anything crazy like that). And with @amirm's insane measurement cadence, we might quickly arrive at a point where it would be possible to generate EQs for most candidates because we'd have anechoic measurements of their speakers.
I haven't kept track of this thread :), but we should be doing that right away. We need formal volunteers to create the EQ/convolution files for owners to test and verify. The improvement in speaker performance with EQ is incredible in my view. It allows passive speakers to get most of the advantages of active DSP speakers.

So let's get organized and create this database! We should call them "ASR EQ Correction" or AEC for short. :)
 

Hactar

New Member
Forum Donor
Joined
Dec 3, 2018
Messages
4
Likes
0
Is there any info available on the strategy that Harman use for their 'Anechoic EQ' in the JBL SDP-75 etc.?
 

aarons915

Addicted to Fun and Learning
Forum Donor
Joined
Oct 20, 2019
Messages
686
Likes
1,140
Location
Chicago, IL
Maybe it would be worth organizing some large-scale online blind test that pits various EQs (following different strategies) against each other. That might be a great way to iterate on EQ strategies and answer important questions like "with DI kept constant, what is the best correlate of sound quality: ON, LW, PIR, or some combination of those". The very nice thing is, EQ blind testing can be done without any special setup (no need for a speaker shuffler or anything crazy like that). And with @amirm's insane measurement cadence, we might quickly arrive at a point where it would be possible to generate EQs for most candidates because we'd have anechoic measurements of their speakers.

That's what I thought we were doing; at least a few of us in this thread have done so. Tim and I have both had the best results with an approach that tries to make the best compromise between the direct sound and early reflections; he singles out the sidewalls, though. Nothing too surprising yet - this is consistent with what the research says we should like: neutral direct sound with a similar and smooth off-axis response.
 

Robbo99999

Master Contributor
Forum Donor
Joined
Jan 23, 2020
Messages
6,961
Likes
6,806
Location
UK
Maybe it would be worth organizing some large-scale online blind test that pits various EQs (following different strategies) against each other. That might be a great way to iterate on EQ strategies and answer important questions like "with DI kept constant, what is the best correlate of sound quality: ON, LW, PIR, or some combination of those". The very nice thing is, EQ blind testing can be done without any special setup (no need for a speaker shuffler or anything crazy like that). And with @amirm's insane measurement cadence, we might quickly arrive at a point where it would be possible to generate EQs for most candidates because we'd have anechoic measurements of their speakers.
That sounds good, but how would you do it blind? Would you bake the speaker-specific EQs into sample tracks as part of an online assessment tool? So you'd have baked-in EQs of all the different strategies for all the different speakers we wanted to cover, and then you'd ask owners to listen to them in that 'randomised' online assessment tool on stock speakers (i.e. they're not applying EQ on their end, because it's baked into the song already for the specific speaker under test)? That way it could be automatically randomised out on the web, rather than the owner having to do anything except click a few buttons on a web page, and it would give you back your 'assessment score' at the end.
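(Something like this MATLAB sketch could bake one EQ band into a track - the file names and filter values are hypothetical placeholders, and the biquad is the standard RBJ cookbook peaking EQ, not an agreed test design:)

Code:
[x, fs] = audioread('sample.wav');      % hypothetical source track
f0 = 1000; gain = 2.0; Q = 4.0;         % hypothetical PEQ band to bake in
A     = 10^(gain/40);
w0    = 2*pi*f0/fs;
alpha = sin(w0)/(2*Q);
b = [1+alpha*A, -2*cos(w0), 1-alpha*A]; % RBJ cookbook peaking filter
a = [1+alpha/A, -2*cos(w0), 1-alpha/A];
y = filter(b/a(1), a/a(1), x);          % cascade more bands the same way
audiowrite('sample_eq_strategyA.wav', y, fs);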
 

Maiky76

Senior Member
Joined
May 28, 2020
Messages
444
Likes
3,743
Location
French, living in China
@edechamps

Here is the abstract of the article “A Multiple Regression Model for Predicting Loudspeaker Preference Using Objective Measurements: Part II - Development of the Model”:

ABSTRACT
A new model is presented that accurately predicts listener preference ratings of loudspeakers based on anechoic measurements.
The model was tested using 70 different loudspeakers evaluated in 19 different listening tests. Its performance was compared to 2 models based on in-room measurements with 1/3-octave and 1/20-octave resolution, and 2 models based on sound power measurements, including the Consumers Union (CU) model, tested in Part One. The correlations between predicted and measured preference ratings were: 1.0 (our model), 0.91 (in-room, 1/20-octave), 0.87 (sound power model), 0.75 (in-room, 1/3-octave), and −0.22 (CU model). Models based on sound power are less accurate because they ignore the qualities of the perceptually important direct and early-reflected sounds. The premise of the CU model is that the sound power response of the loudspeaker should be flat, which we show is negatively correlated with preference rating. It is also based on 1/3-octave measurements that are shown to produce less accurate predictions of sound quality.

So the model is trying to PREDICT the preference rating, and does it fairly well. At first glance I can't see why one couldn't use it on any speaker.
However, I take your point that the single parameter with the highest correlation is the AAD_LW/ON.
If I am not mistaken, Toole always refers to “forward-firing speakers with a flattish and smooth response and no directivity aberration” (sorry if not 100% accurate).
So that hypothesis, which might be implicit, might be necessary for the score to be valid.
We tend to see that optimizing for the score tends to deteriorate the ON/LW in favor of the PIR, which in return would diminish the validity of the model, so there might be something to understand.

The questions are then:
Is there any merit to EQ'ing a speaker based on the predicted score alone? If yes, then it is trivial.
Or, within the confines of the “forward-firing speakers with a flattish and smooth response and no directivity aberration” hypothesis, is there any merit to EQ'ing a speaker based on the predicted score?

Which implies defining “flattish and smooth response” and “no directivity aberration” in order to perform optimization trade-offs.

This would be the goal of double-blind testing EQs built on different premises, to see if there is a trend - for example: do score-optimized EQs tend to yield better preference? -> big task, but why not?
In return, this amounts to “if you want high confidence that the speaker you are buying will be good, it should measure in a certain way”.

Another possibility would be that the model does correlate but is oversimplified; the PIR as a line with a slope might not actually be the target.
That would link with the in-room curve studies (more bass, less HF) and the headphone studies (also more bass, less HF).

Again, with a set of design goals - say, for example, a flat ON/LW and a certain PIR - the engineers can come up with a design that is a good approximation.

See the graph from the Revel F228Be
20200730 Revel F228Be Spinorama.png


On your point #4:
Passive crossovers ARE a type of EQ circuitry.
If I extrapolate what you are saying, multiway speakers should be separated from full-range ones, because one would not design a full-range driver the same way as drivers for a multiway application. The full-range driver should be designed with the same goals as a multiway speaker, but most probably with more compromises and a worse approximation of the target.

I don't believe that there are different “divisions” in the loudspeaker-designing game.
Anything goes to achieve the target set, whatever that might be; it is just about compromises and constraints.
 

Chromatischism

Major Contributor
Forum Donor
Joined
Jun 5, 2020
Messages
4,796
Likes
3,742
I don't think we want to EQ based on the in-room response. We don't hear that way, it doesn't work for all room sizes, and it especially doesn't work for an outdoor speaker. I'd EQ based on the listening window.
 

Maiky76

Senior Member
Joined
May 28, 2020
Messages
444
Likes
3,743
Location
French, living in China
Yeah. I've come up with some EQ files of my own - would be interesting to see if they match your matlab version closely again.

Hi,

Let's start with the F206 only. No EQ: 6.27.
It is not the easiest to EQ, but we'll see how it pans out.

The scores are probably not that accurate because they are not derived from the raw data.
Take them as relative scores instead.

20200730 F206 Spinorama.png


Again, can you discriminate between them and the raw speaker?
If so, is there one better than the others?
I did not see (maybe I missed it) any clear answer for the M105 - could you give us your final conclusion at some point?


Code:
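% Format (apparent, not stated by the author): one [center frequency Hz,
% gain dB, Q] triplet per parametric EQ filter; the header comment gives
% the variant name and its predicted score.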
% H1: 7.03
x0=[ 221.0, -0.66,  6.43,...
     375.0,  0.95,  2.53,...
     598.0,  0.48,  6.50,...
    1100.0, -0.33,  0.50,...
    1324.0,  1.65,  3.82,...
    2145.0,  0.73,  5.30,...
    3435.0,  1.13,  3.00,...
    5340.0,  0.75,  8.40,...
   10010.0, -0.50,  11.1,...
   11560.0, -0.75,  3.80,...
    4333.0, -0.00,  3.60,...
    1];

% SL: 6.94
x0=[ 223.0, -0.46,  9.00,...
     375.0,  1.40,  2.53,...
     570.0,  0.51,  6.50,...
     987.0, -0.43,  0.61,...
    1321.0,  2.00,  4.00,...
    1820.0, -1.24,  3.70,...
    3435.0,  1.65,  3.00,...
    5350.0,  0.75,  12.5,...
    9924.0, -0.67,  7.60,...
   11560.0, -1.10,  2.60,...
    4333.0, -0.00,  3.60,...
    1];

% SC 7.14
x0=[ 223.0, -0.46,  11.1,...
     375.0,  1.40,  2.53,...
     570.0,  0.66,  4.70,...
     987.0, -0.33,  0.69,...
    1331.0,  2.00,  3.22,...
    1721.0, -0.88,  5.45,...
    3435.0,  0.90,  6.00,...
    5351.0,  0.69,  14.3,...
    9924.0, -0.61,  8.60,...
   11560.0, -0.95,  2.00,...
    4333.0, -0.00,  3.60,...
    1];
 
Last edited:

JIM_82

Member
Joined
Jul 29, 2020
Messages
8
Likes
10
I agree with edechamps

Equalization is beneficial for the direct sound, because the power response relative to the on-axis response is an invariant property of the loudspeaker's directivity.
The directivity index cannot be changed by equalization.
The relationship between direct sound and diffuse sound at the listener's position depends on many parameters of YOUR room.
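(A quick illustration of that invariance with placeholder curves: the same EQ is added to every curve, so it cancels out of the DI.)

Code:
on = 85 + randn(1, 200);              % placeholder on-axis curve, dB
sp = 78 + randn(1, 200);              % placeholder sound power, dB
eq = 3*sin(linspace(0, pi, 200));     % arbitrary EQ curve, dB
di_before = on - sp;
di_after  = (on + eq) - (sp + eq);    % identical to di_before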

The acoustics of your room cannot be equalized.

The LW (±30° in the horizontal) is too wide for many hi-fi setups, while the on-axis curve, being a single measurement, is subject to large variations within a small angle.
Many loudspeakers have a local accident in the axial response that is not there at +5°.
So be careful: the spinorama does not necessarily give you enough analysis finesse to choose between ON and LW.

Thus, the spinorama is a good indicator of speaker quality, but not ideal for applying the "perfect" equalization.

It is better to make your own measurement of your real LW (covering your listening area) and apply a flat equalization of the direct sound using measurements made under anechoic conditions.
This is possible with a proximity measurement and by applying a time window; REW can be used.
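(A sketch of the gating idea with placeholder data - REW does this internally. Windowing the impulse response before the first reflection gives a quasi-anechoic response, at the cost of resolution below roughly 1/T:)

Code:
fs = 48000;
ir = randn(1, 4096) .* exp(-(0:4095)/150);   % placeholder measured IR
T  = 0.005;                                  % 5 ms gate -> valid above ~200 Hz
n  = round(T*fs);
w  = zeros(size(ir));
w(1:n) = 0.5*(1 + cos(pi*(0:n-1)/n));        % decaying half-Hann gate
H      = fft(ir .* w);
mag_db = 20*log10(abs(H));                   % quasi-anechoic magnitude, dB
fax    = (0:numel(ir)-1) * fs/numel(ir);     % frequency axis, Hz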

It is also possible to measure at your listening position with MMM (the moving microphone method) to estimate the direct/diffuse ratio, and also to equalize the bass, which depends on the modal activity.

See the opinions of other acousticians such as Philip Newell (in his book Recording Studio Design).
 
OP
TimVG

Major Contributor
Forum Donor
Joined
Sep 16, 2019
Messages
1,191
Likes
2,623
Hi,

Let's start with the F206 only. No EQ: 6.27.
It is not the easiest to EQ, but we'll see how it pans out.

The scores are probably not that accurate because they are not derived from the raw data.
Take them as relative scores instead.

View attachment 75811

Again, can you discriminate between them and the raw speaker?
If so, is there one better than the others?
I did not see (maybe I missed it) any clear answer for the M105 - could you give us your final conclusion at some point?


Code:
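% Format (apparent, not stated by the author): one [center frequency Hz,
% gain dB, Q] triplet per parametric EQ filter; the header comment gives
% the variant name and its predicted score.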
% H1: 7.03
x0=[ 221.0, -0.66,  6.43,...
     375.0,  0.95,  2.53,...
     598.0,  0.48,  6.50,...
    1100.0, -0.33,  0.50,...
    1324.0,  1.65,  3.82,...
    2145.0,  0.73,  5.30,...
    3435.0,  1.13,  3.00,...
    5340.0,  0.75,  8.40,...
   10010.0, -0.50,  11.1,...
   11560.0, -0.75,  3.80,...
    4333.0, -0.00,  3.60,...
    1];

% SL: 6.94
x0=[ 223.0, -0.46,  9.00,...
     375.0,  1.40,  2.53,...
     570.0,  0.51,  6.50,...
     987.0, -0.43,  0.61,...
    1321.0,  2.00,  4.00,...
    1820.0, -1.24,  3.70,...
    3435.0,  1.65,  3.00,...
    5350.0,  0.75,  12.5,...
    9924.0, -0.67,  7.60,...
   11560.0, -1.10,  2.60,...
    4333.0, -0.00,  3.60,...
    1];

% SC 7.14
x0=[ 223.0, -0.46,  11.1,...
     375.0,  1.40,  2.53,...
     570.0,  0.66,  4.70,...
     987.0, -0.33,  0.69,...
    1331.0,  2.00,  3.22,...
    1721.0, -0.88,  5.45,...
    3435.0,  0.90,  6.00,...
    5351.0,  0.69,  14.3,...
    9924.0, -0.61,  8.60,...
   11560.0, -0.95,  2.00,...
    4333.0, -0.00,  3.60,...
    1];


Will test these over the course of the day!
For the M105 results please see post #66 of this thread - I left in your filter set at the end, minus the dip at about 4.9 kHz which shows up in the PIR, as it seemed kind of weird 'visually' :)
 