
Master Preference Ratings for Loudspeakers

OP
M

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
2,306
Likes
4,871
Location
Land O’ Lakes, Florida
Thread Starter #502
It would be nice if someone could double-check what the mathematically correct weights are. This doc that @pierre found is supposed to describe it, but I'm not that good at math :oops:
This is infuriating: I found a Princeton article:
http://www.princeton.edu/3D3A/Publications/Tylka_3D3A_DICalculation.pdf
where they go through the calculation of the weights, but they only show 5-degree increments, and those are totally different from the 5-degree-increment weights in the 2034 document!
 

edechamps

Addicted to Fun and Learning
Forum Donor
Joined
Nov 21, 2018
Messages
676
Likes
2,715
Location
London, United Kingdom
In practice, I have observed that the data matches exactly for most of the columns, but there is one notable exception: the Rear reflection data is inconsistent between "Early Reflections.txt" and "Horizontal Reflections.txt", and the difference is far from negligible.
I figured out what went wrong on Klippel's side. In "Horizontal Reflections", they compute the average of every horizontal angle in the rear semicircle between -90° and 90°. But in "Early Reflections", I determined that it is an average of just 3 angles: -90°, 180°, and 90°.

I had a feeling that this had been discussed before, and indeed, I found a previous thread precisely about this. The conclusion was: CTA-2034A is poorly worded and ambiguous, the correct approach is to use all rear angles, not just 3. Klippel uses the correct approach in the "Horizontal Reflections" export file, but not in "Early Reflections". @napilopez @Dave Zan
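To make the two interpretations concrete, here is a minimal sketch with made-up SPL numbers (the actual export data isn't reproduced here); `power_average` is a hypothetical helper implementing the usual dB power average:

```python
import numpy as np

def power_average(spl_db):
    """Power-average a set of SPL values (dB): average in the power
    domain, then convert back to dB."""
    spl_db = np.asarray(spl_db, dtype=float)
    return 10 * np.log10(np.mean(10 ** (spl_db / 10), axis=0))

# Hypothetical SPL values (dB) at one frequency for the rear semicircle,
# -90° through 180° to 90° in 10° steps (19 angles).
rng = np.random.default_rng(0)
rear = 80 + rng.normal(0, 3, size=19)

# Klippel's "Early Reflections" rear: just -90°, 180°, and 90°
# (here: the first, middle, and last of the 19 angles).
klippel_rear = power_average([rear[0], rear[9], rear[18]])

# CTA-2034A interpretation: all rear angles.
correct_rear = power_average(rear)

print(f"3-angle average:  {klippel_rear:.2f} dB")
print(f"full semicircle:  {correct_rear:.2f} dB")
```

With real speaker data the gap varies per frequency, but the mechanism is the same: three angles versus nineteen.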
 

edechamps

Addicted to Fun and Learning
Forum Donor
Joined
Nov 21, 2018
Messages
676
Likes
2,715
Location
London, United Kingdom
A couple more findings:

I have verified that "Total Vertical Reflections" is the power average of the Floor and Ceiling Reflection curves.

For "Total Horizontal Reflections", however, no clue. It's not the power average of (Front, Side, Rear) nor (Front, Side, Rear Wall Bounce). I don't know how it's computed.

These don't matter much though, because these "total reflection" curves are not standardized in CTA-2034A. Klippel just produces them as a bonus I guess.
 

napilopez

Major Contributor
Joined
Oct 17, 2018
Messages
1,249
Likes
3,822
Location
NYC
A couple more findings:

I have verified that "Total Vertical Reflections" is the power average of the Floor and Ceiling Reflection curves.

For "Total Horizontal Reflections", however, no clue. It's not the power average of (Front, Side, Rear) nor (Front, Side, Rear Wall Bounce). I don't know how it's computed.

These don't matter much though, because these "total reflection" curves are not standardized in CTA-2034A. Klippel just produces them as a bonus I guess.
@edechamps I don't know what the Klippel does, but if one is to calculate a total Horizontal curve, it should presumably be an rms average of the front, side, and rear curves (that's what I do anyway). Section 5.2 of CTA-2034-A makes it seem like you could compute a total Horizontal reflections curve this way, since it is formatted the same way as the early reflections curve is: as an average of averages. Again, the document could use some extra clarification.

Did you try a simple average of all 36 horizontal curves?
 

edechamps

Addicted to Fun and Learning
Forum Donor
Joined
Nov 21, 2018
Messages
676
Likes
2,715
Location
London, United Kingdom
Did you try a simple average of all 36 horizontal curves?
Oh, great catch! Yeah, that's it, "Total Horizontal Reflections" is the average of the underlying curves of Front, Side and Rear Wall Bounce (the full semi-circle Rear). I should have thought of that.

In fact "Total Vertical Reflections" is probably meant as an average of the underlying curves as well, but in the case of Vertical it doesn't make a difference because there's the same number of underlying curves in every "section" anyway, so doing the average of averages is exactly the same as doing the average of the underlying curves. That's not true for Horizontal Reflections, hence my confusion.
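A small sketch of why the order of averaging matters when the sections contain different numbers of curves (the curve counts and SPL values below are illustrative, not the exact CTA-2034A ones):

```python
import numpy as np

# Hypothetical per-curve SPL values (dB) at one frequency. The section
# sizes (5, 2, 19) are illustrative, not the exact CTA-2034A curve counts.
front = np.array([85.0, 84.0, 83.0, 84.5, 85.5])
side  = np.array([80.0, 81.0])
rear  = np.array([75.0] * 19)

def power_avg(x):
    """dB power average: average in the power domain, back to dB."""
    return 10 * np.log10(np.mean(10 ** (np.asarray(x, dtype=float) / 10)))

# Average of averages: each section counts equally regardless of size.
avg_of_avgs = power_avg([power_avg(front), power_avg(side), power_avg(rear)])

# Flat average of all underlying curves: the 19 rear curves dominate.
flat_avg = power_avg(np.concatenate([front, side, rear]))

print(avg_of_avgs, flat_avg)
```

When every section has the same number of curves (as in the vertical case), the two computations give exactly the same result.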
 

napilopez

Major Contributor
Joined
Oct 17, 2018
Messages
1,249
Likes
3,822
Location
NYC
Oh, great catch! Yeah, that's it, "Total Horizontal Reflections" is the average of the underlying curves of Front, Side and Rear Wall Bounce (the full semi-circle Rear). I should have thought of that.

In fact "Total Vertical Reflections" is probably meant as an average of the underlying curves as well, but in the case of Vertical it doesn't make a difference because there's the same number of underlying curves in every "section" anyway, so doing the average of averages is exactly the same as doing the average of the underlying curves. That's not true for Horizontal Reflections, hence my confusion.
Makes sense! No effective weighting on the vertical curves. Btw, this is how VituixCAD calculates its curves too, after I brought it to the creator's attention. It's all using the average-of-averages method where relevant, in case anyone wants to double-check their curves.
 

pierre

Senior Member
Forum Donor
Joined
Jul 1, 2017
Messages
353
Likes
521
Location
Switzerland
This is infuriating, I found a Princeton article:
http://www.princeton.edu/3D3A/Publications/Tylka_3D3A_DICalculation.pdf
Where they go about the calculations of weights, but they only show 5-degree increments, but they are totally different to the 5-degree increment weights in the 2034 document!
This one is easy to understand, so I tried to compute the weights with the included formula:
[attached plot: computed weights compared with the CEA2034 values]

Black are the values from this paper with delta = 5 degrees, red the same paper with delta = 10 degrees, green from CEA2034.
All sets sum up to ~0.5.

I don't understand yet where the green values come from.

Intuitively, the contribution of on-axis should be greater than that of 45 degrees. It took me some time to realize that the weight is the surface area of the part of the sphere between two angles, which increases with the angle up to 90 degrees.
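That geometric point can be checked directly: on a unit sphere, the zone between two polar angles has area 2π(cos θ₁ − cos θ₂), which grows as the band moves toward 90 degrees. A minimal sketch (`zone_area` is just an illustrative helper):

```python
import math

def zone_area(theta1_deg, theta2_deg):
    """Area of the spherical zone (ring) on a unit sphere between two
    polar angles measured from the on-axis direction."""
    t1, t2 = math.radians(theta1_deg), math.radians(theta2_deg)
    return 2 * math.pi * (math.cos(t1) - math.cos(t2))

# Ring areas for successive 10-degree bands: they grow toward 90 degrees,
# which is why off-axis angles get larger weights than on-axis.
rings = [zone_area(t, t + 10) for t in range(0, 90, 10)]
print([round(r, 4) for r in rings])
```

The ten bands together cover the hemisphere, so their areas sum to 2π.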
 

pierre

Senior Member
Forum Donor
Joined
Jul 1, 2017
Messages
353
Likes
521
Location
Switzerland
This one is easy to understand, so I tried to compute the weights with the included formula: View attachment 76432
Black are the values from this paper with delta = 5 degrees, red the same paper with delta = 10 degrees, green from CEA2034.
All sets sum up to ~0.5.

I don't understand yet where the green values come from.

Intuitively, the contribution of on-axis should be greater than that of 45 degrees. It took me some time to realize that the weight is the surface area of the part of the sphere between two angles, which increases with the angle up to 90 degrees.
OK, I have found how the weights are computed. The article above slices the surface of the sphere by moving a plane, generating concentric rings.
Another way follows the picture below:
[attached figure: slicing the sphere into quadrangles]

And a third way is by intersecting 2 lunes, which is what they chose.

[attached figure: intersection of two lunes on the sphere]

If alpha and beta are the two angles that define the quadrangle, then the area
is given by:
Python:
import math

def areaQ(alpha_d, beta_d):
  # Area of the spherical quadrangle on the unit sphere,
  # via the spherical excess 4C - 2*pi.
  alpha = math.radians(alpha_d)
  beta  = math.radians(beta_d)
  gamma = math.acos(math.cos(alpha)*math.cos(beta))
  A = math.atan(math.sin(beta)/math.tan(alpha))
  B = math.atan(math.sin(alpha)/math.tan(beta))
  C = math.acos(-math.cos(A)*math.cos(B)+math.sin(A)*math.sin(B)*math.cos(gamma))
  S = 4*C - 2*math.pi
  #print('gamma {} A {} B {} C {} S {}'.format(
  #    math.degrees(gamma), math.degrees(A), math.degrees(B), math.degrees(C), S))
  return S
For the weights, they start at 0,0 with alpha=5 and beta=5; the next one is alpha=15, beta=15 (minus the first one), et cetera:
[attached figure: successive quadrangles centred on the axis]


Python:
import numpy as np

a  = [i*10 + 5 for i in range(0, 9)] + [90]   # band edges: 5, 15, ..., 85, 90 degrees
wa = [areaQ(i, i) for i in a]                 # cumulative quadrangle areas
w  = np.array([wa[0]] + [wa[i] - wa[i-1] for i in range(1, len(wa))])  # per-band areas
w[9] *= 2
ws = np.linalg.norm(w)
w /= ws / 0.047133397655733274
print(w)

which gives the weights from the standard from 0 to 90 degrees; after that they repeat periodically.
Python:
[0.00060449 0.00473019 0.00895503 0.01238735 0.01498961 0.01686815 0.01816596 0.01900674 0.01947779 0.01962937]
Almost done.

I still need to figure out why 0.047xx for normalizing, and why x2 for 90 degrees. I learnt plenty about spherical excess and other fun relations on the sphere that have been known for a long time. If I were not on my iPad I would add some drawings that help to understand alpha, beta, and the various trigonometric equations.
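One quick sanity check on the formula (the function is reproduced here so the snippet is self-contained): with alpha = beta = 90 degrees, the quadrangle spans the full ±90° range both horizontally and vertically, i.e. the front hemisphere, whose area on the unit sphere is 2π.

```python
import math

def areaQ(alpha_d, beta_d):
    # Same spherical-excess formula as above.
    alpha = math.radians(alpha_d)
    beta = math.radians(beta_d)
    gamma = math.acos(math.cos(alpha) * math.cos(beta))
    A = math.atan(math.sin(beta) / math.tan(alpha))
    B = math.atan(math.sin(alpha) / math.tan(beta))
    C = math.acos(-math.cos(A) * math.cos(B)
                  + math.sin(A) * math.sin(B) * math.cos(gamma))
    return 4 * C - 2 * math.pi

# The quadrangle spanning +/-90 degrees both ways is the front
# hemisphere, area 2*pi; smaller quadrangles give smaller areas.
print(areaQ(90, 90), 2 * math.pi)
```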
 
Last edited:
OP
M

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
2,306
Likes
4,871
Location
Land O’ Lakes, Florida
Thread Starter #513
And a third way is by intersecting 2 lunes, which is what they chooses.
Hmm, so does that mean the 10-degree quadrangle covers 5-15 degrees, and the 0-degree one covers 355 to 5 degrees? If so, that makes sense: if the 1st lune starts at 0 and ends at 10, then the quadrangle which spans that same region has its midpoint not at 10 degrees.
 

edechamps

Addicted to Fun and Learning
Forum Donor
Joined
Nov 21, 2018
Messages
676
Likes
2,715
Location
London, United Kingdom
I have - finally! - implemented the "ER fix" in Loudspeaker Explorer, so now the data shown for Early Reflections and Estimated In-Room response follows the correct interpretation of CTA-2034A and diverges from the curves in the published datasets. I've even made it configurable so it's possible to run Loudspeaker Explorer in the wrong "Klippel" mode too ("Curve generation" section).

So now, I expect Loudspeaker Explorer to generate Olive preference ratings that are exactly identical to @MZKM's, since we're using the same curves.

For many speakers, that is indeed the case - the scores are identical down to 2 decimal places. However, there are also a number of speakers where the score diverges a bit. For example Adam Audio S2V, Ascend Acoustics CMT-340 SE Center, Ascend Acoustics Sierra-2 (to name only a few). I noticed that if I run Loudspeaker Explorer in "Klippel" curve mode, the scores match. @MZKM: does that mean that a subset of your scores is still missing the ER fix? Or is this a real discrepancy I should investigate?
 
OP
M

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
2,306
Likes
4,871
Location
Land O’ Lakes, Florida
Thread Starter #515
I have - finally! - implemented the "ER fix" in Loudspeaker Explorer, so now the data shown for Early Reflections and Estimated In-Room response follows the correct interpretation of CTA-2034A and diverges from the curves in the published datasets. I've even made it configurable so it's possible to run Loudspeaker Explorer in the wrong "Klippel" mode too ("Curve generation" section).

So now, I expect Loudspeaker Explorer to generate Olive preference ratings that are exactly identical to @MZKM's, since we're using the same curves.

For many speakers, that is indeed the case - the scores are identical down to 2 decimal places. However, there are also a number of speakers where the score diverges a bit. For example Adam Audio S2V, Ascend Acoustics CMT-340 SE Center, Ascend Acoustics Sierra-2 (to name only a few). I noticed that if I run Loudspeaker Explorer in "Klippel" curve mode, the scores match. @MZKM: does that mean that a subset of your scores is still missing the ER fix? Or is this a real discrepancy I should investigate?
I have not yet applied the fix to a decent number of older models.
 

Sgt. Ear Ache

Addicted to Fun and Learning
Joined
Jun 18, 2019
Messages
687
Likes
1,093
Location
Winnipeg Canada
LOL. If you fellows could boil all this down into one or two brief, succinct sentences for those of us who majored in English Lit that would be much appreciated thanks.

:D
 
OP
M

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
2,306
Likes
4,871
Location
Land O’ Lakes, Florida
Thread Starter #517
LOL. If you fellows could boil all this down into one or two brief, succinct sentences for those of us who majored in English Lit that would be much appreciated thanks.

:D
The graphs Amir shows are not fully correct for the Early Reflections and Sound Power (edit: PIR). The machine measured it correctly, but it calculated the graphs wrong. I (and others) are taking the measurements and calculating the correct graphs. Amir told Klippel this, and they just told him how to manually correct it, rather than fix it on their end (maybe their programmer is on leave).
 
Last edited:

Sgt. Ear Ache

Addicted to Fun and Learning
Joined
Jun 18, 2019
Messages
687
Likes
1,093
Location
Winnipeg Canada
The graphs Amir show are not fully correct for the Early Reflections and Sound Power. The machine measured it correctly, but it calculated the graphs wrong. I (and others) are taking the measurements and calculating the correct graphs. Amir told Klippel this, and they just told him how to manually correct it, rather then fix it on their end (maybe their programmer is on leave).
Cheers!

(although I was mostly just kidding. But thanks!)
 

edechamps

Addicted to Fun and Learning
Forum Donor
Joined
Nov 21, 2018
Messages
676
Likes
2,715
Location
London, United Kingdom
The graphs Amir show are not fully correct for the Early Reflections and Sound Power.
Did I miss something about Sound Power being wrong? The data in Amir's datasets do follow the weights in CTA-2034A (weighted power average), so I'm not sure why you'd think there's a problem there.
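For reference, the weighted power average described in CTA-2034A means averaging in the power domain with the tabulated weights. The sketch below uses made-up SPLs and only a few weight values for illustration (the real computation uses all 70 curves and the full weight table):

```python
import numpy as np

def weighted_power_average(spl_db, weights):
    """Weighted dB power average: weight the linear powers, average,
    then convert back to dB."""
    spl_db = np.asarray(spl_db, dtype=float)
    weights = np.asarray(weights, dtype=float)
    powers = 10 ** (spl_db / 10)
    return 10 * np.log10(np.sum(weights * powers) / np.sum(weights))

# Illustrative only: four measurement angles with made-up SPLs, using
# the first few standard-style weights computed earlier in the thread.
spl = [86.0, 84.0, 80.0, 75.0]
w   = [0.000604, 0.004730, 0.008955, 0.012387]
print(round(weighted_power_average(spl, w), 2))
```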
 
OP
M

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
2,306
Likes
4,871
Location
Land O’ Lakes, Florida
Thread Starter #520
Did I miss something about Sound Power being wrong? The data in Amir's datasets do follow the weights in CTA-2034A (weighted power average), so I'm not sure why you'd think there's a problem there.
My bad, PIR.

Though there is still the question of where they get their weights, as discussed in previous posts.
 