
Random musings on The Preference Ratings of speaker reviews

bobbooo

Major Contributor
Joined
Aug 30, 2019
Messages
1,479
Likes
2,079
Yes, the 20 Hz assumption looks to make more sense.

I really don't see how. Remember, a 20 Hz LFX frequency means -6 dB at 20 Hz, not flat down to 20 Hz. Why choose this figure? Just because it's a nice round multiple of 10? This is much more arbitrary in my eyes than an LFX point of 14.5 Hz, which, as I said in this post, allows for a flat response down to 20 Hz for most ported and sealed subs (only 0.5 Hz out for the latter, with a ~12 dB per octave bass roll-off), covering all the frequencies found in music for an ideal response. That was part of the original intention of the w/ sub score (and why I've previously suggested it should be named 'score w/ ideal sub' to avoid confusion). And yes, the actual listener preference ratings were given out of 10, so I don't see how it makes any sense for the maximum theoretical score to be less than 10 for the predicted ratings.
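To show where 14.5 Hz comes from, here's a quick sketch, assuming the commonly cited coefficients from Olive's published model (worth double-checking against the paper), with LFX expressed as log10 of the -6 dB frequency and SM_PIR taken as 1 at its ideal value:

```python
# Predicted preference rating, assuming the commonly cited Olive
# model coefficients (an assumption; verify against the paper):
def predicted_score(nbd_on, nbd_pir, lfx_log10, sm_pir):
    return (12.69 - 2.49 * nbd_on - 2.73 * nbd_pir
            - 4.31 * lfx_log10 + 2.32 * sm_pir)

# With otherwise ideal components (NBD_ON = NBD_PIR = 0, SM_PIR = 1),
# solve 12.69 + 2.32 - 4.31 * LFX = 10 for LFX:
lfx_ideal = (12.69 + 2.32 - 10) / 4.31      # ~1.162 in log10(Hz)
print(10 ** lfx_ideal)                      # ~14.5 Hz
print(predicted_score(0, 0, lfx_ideal, 1))  # 10.0
```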

Having said all this, I do think the option in @edechamps' brilliant Loudspeaker Explorer to choose the LFX frequency for the score calculation is very useful, as you can then just set it to your sub's actual LFX frequency for a personalised score (leaving the 'official' w/ sub score as the 'maximum potential score' you could get with an ideal sub, not necessarily the one you currently have). I just wish the Loudspeaker Explorer's LFX slider went lower than 20 Hz ;) As @MZKM and others have linked to, there are definitely subs out there with LFX frequencies lower than 20 Hz, at not unreasonable prices either.
 

Universal Cereal Bus

Active Member
Joined
Sep 24, 2020
Messages
171
Likes
358
"A general model is provided for predicting a loudspeaker preference rating, where the model's predicted loudspeaker preference rating is calculated based upon the sum of a plurality of weighted independent variables that statistically quantify amplitude deviations in a loudspeaker frequency response. The independent variables selected may be independent variables determined as maximizing the ability of a loudspeaker preference variable to predict a loudspeaker preference rating. A multiple regression analysis is performed to determine respective weights for the selected independent variables. The weighted independent variables are arranged into a linear relationship on which the loudspeaker preference variable depends."

I didn't know it was patented...

Thanks for finding this--very interesting. You linked the application publication, not the granted patent publication, which is here. Notice the narrower granted claim scope:

1. A method for predicting a loudspeaker preference rating, the method including,
measuring the frequency response of a loudspeaker by sending a series of audio signals to the loudspeaker;
recording in a storage medium the measured frequency response of the loudspeaker for each audio signal; and
predicting the loudspeaker's preference rating, using a multiple linear statistical regression model, based upon a measured deviation in the stored measured frequency response of the loudspeaker.


vs.

1. A method for predicting a loudspeaker preference rating that correlates the loudspeaker's preference rating, using a statistical regression model, to a measured deviation in a frequency response of a loudspeaker.

This is a classic example of a "vanity" patent. It looks good on the inventor's office wall but has no chance of monetization.

Question for the long-time members of this forum: what's the reason for Amir taking the measurements and MZKM calculating the preference rating? Is it simply to delegate workload?

Though I doubt Harman is looking to enforce this patent, claim 1 arguably captures the combined activities of Amir and MZKM. Luckily for them, there's no single infringer, but if Amir calculated the preference rating himself, Harman could (however unlikely) go after him for infringing claim 1! There wouldn't even need to be discovery; Harman's attorneys would find all the evidence they need on ASR lol. (Setting aside the joint infringement issue, which I don't think applies here.)

Knowing that Amir is at least somewhat knowledgeable about patents, and was also possibly aware of said Olive patent, I wonder if dividing up the "measuring... recording... and predicting" steps between two people was a conscious decision to avoid patent infringement?
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,671
Likes
241,051
Location
Seattle Area
Knowing that Amir is at least somewhat knowledgeable about patents, and was also possibly aware of said Olive patent, I wonder if dividing up the "measuring... recording... and predicting" steps between two people was a conscious decision to avoid patent infringement?
No. Initially I did not have the tool to compute it myself. And by the time the tool was developed, I was not all that sold on the measure, so I didn't adopt it myself.

On the patent, I specifically asked Sean Olive about it before I set out to measure speakers. He told me other companies are using it and, as such, we might as well (without a license). As you said, I can't imagine it being in Samsung's interest to litigate. Even if it were, what damages could they demonstrate? If anything, the scoring has helped Harman/Samsung, not hurt them.
 

Universal Cereal Bus

Active Member
Joined
Sep 24, 2020
Messages
171
Likes
358
On the patent, I specifically asked Sean Olive about it before I set out to measure speakers.

Ah, good job. Imagine needing to perform due diligence in relation to a retirement hobby.

Even if it were, what damages can they demonstrate?

Big corporations assert patents mainly for the injunction, less so for the money. Damage awards are not a good way to make revenue, unless that's your whole raison d'être (as it is for NPEs).
 

HooStat

Addicted to Fun and Learning
Joined
May 11, 2020
Messages
856
Likes
934
Location
Calabasas, CA
A method for predicting a loudspeaker preference rating that correlates the loudspeaker's preference rating, using a statistical regression model, to a measured deviation in a frequency response of a loudspeaker.

I don't see this as even close to a breach of a patent. The patent is for the method, which involves 3 steps:
  1. Measure preference score for a speaker by having trained listeners evaluate it (presumably blinded)
  2. Measure speaker frequency response
  3. Construct regression model to predict preference
The only thing ASR is doing is #2. There is no blinded listening, and there is no regression model being constructed. (Note that using the weights from a regression model is a totally different thing.)

I am not a lawyer, but I don't see any issues at all.
 

Universal Cereal Bus

Active Member
Joined
Sep 24, 2020
Messages
171
Likes
358
I don't see this as even close to a breach of a patent. The patent is for the method, which involves 3 steps:
  1. Measure preference score for a speaker by having trained listeners evaluate it
  2. Measure speaker frequency response
  3. Run regression model to predict preference
The only thing ASR is doing is #2. There is no blinded listening, and there is no regression model being constructed. (Note that using the weights from a regression model is a totally different thing.)

I am not a lawyer, but I don't see any issues at all.

I'm sorry to derail the OP, so I'll try to nip this in the bud. The question of infringement is always asked in relation to the content of the enumerated claims at the end of the document. The description text and figures in the patent are used for understanding the meaning of the terms in the claims. I'll quote claim 1 here again:

1. A method for predicting a loudspeaker preference rating, the method including,
measuring the frequency response of a loudspeaker by sending a series of audio signals to the loudspeaker;
recording in a storage medium the measured frequency response of the loudspeaker for each audio signal; and
predicting the loudspeaker's preference rating, using a multiple linear statistical regression model, based upon a measured deviation in the stored measured frequency response of the loudspeaker.


There is no mention of "having trained listeners evaluate it" or "[running a] regression model".
 

HooStat

Addicted to Fun and Learning
Joined
May 11, 2020
Messages
856
Likes
934
Location
Calabasas, CA
There is no mention of "having trained listeners evaluate it" or "[running a] regression model".

I admit that the wording on the patent is very poor from a statistician's perspective. But these things are actually mentioned.
  • It is a method for predicting preference rating. One can't "predict the preference rating" if it doesn't exist. Hence, the results from the trained listeners are a required part of the method. ASR doesn't have this.
  • "Using a linear regression model" is exactly the same as "constructing a linear regression model". The regression model is connecting measured preferences from listeners to measured deviations. ASR is not doing this either.
It may be confusing to non-statisticians, but using the results of a regression model to calculate a score (as is done at ASR) is a fundamentally and completely different thing from constructing a regression model based on the inputs (frequency response) and the outcome (measured preference) which is what this patent is about.
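To make the fit-vs-apply distinction concrete, here's a rough sketch with made-up numbers (the variable names follow the model's components, but none of this is Olive's actual data, and the "published" weights below are the commonly cited values, assumed here):

```python
import numpy as np

# CONSTRUCTING a regression model (what the patent describes): requires
# blind-test preference ratings (y) for a set of speakers plus their
# frequency-response statistics (X), and fits the weights from both.
X = np.array([[0.30, 0.40, 1.60, 0.70],   # per speaker: NBD_ON, NBD_PIR, LFX, SM_PIR
              [0.50, 0.60, 1.80, 0.50],
              [0.20, 0.30, 1.50, 0.80]])
y = np.array([6.5, 4.2, 7.1])             # measured listener preference ratings
X1 = np.column_stack([np.ones(len(X)), X])
weights, *_ = np.linalg.lstsq(X1, y, rcond=None)   # fitting = "constructing"
print(weights)   # toy data; 3 speakers underdetermine the fit, but it shows the step

# USING an already-constructed model (what ASR does): no listeners needed,
# just plug a new speaker's measurements into fixed, pre-existing weights.
published = np.array([12.69, -2.49, -2.73, -4.31, 2.32])
new_speaker = np.array([1.0, 0.25, 0.35, 1.55, 0.75])  # leading 1 = intercept
print(published @ new_speaker)            # a predicted score, no fitting involved
```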
 

xarkkon

Active Member
Joined
May 26, 2019
Messages
228
Likes
338
Can you direct me to the page where you saw the LSR305's w/sub score is better than all but the 8341A's? I checked the following two pages and did not see that.
View attachment 84387
View attachment 84388
Oh man, look at that GR Research XLS Encore score... must be all those tube connectors blowing everyone else out of the water! :D
 

Universal Cereal Bus

Active Member
Joined
Sep 24, 2020
Messages
171
Likes
358
I admit that the wording on the patent is very poor from a statistician's perspective. But these things are actually mentioned.
  • It is a method for predicting preference rating. One can't "predict the preference rating" if it doesn't exist. Hence, the results from the trained listeners are a required part of the method. ASR doesn't have this.
  • "Using a linear regression model" is exactly the same as "constructing a linear regression model". The regression model is connecting measured preferences from listeners to measured deviations. ASR is not doing this either.
It may be confusing to non-statisticians, but using the results of a regression model to calculate a score (as is done at ASR) is a fundamentally and completely different thing from constructing the model based on the inputs (frequency response) and the outcome (measured preference) which is what this patent is about.

I feel I need to be a bit more clear and rephrase: There is no mention in claim 1 of "having trained listeners evaluate it" or "[running a] regression model". I invite you to review my previous post(s) about the importance of reading claim 1 because I think you are confusing the overall thrust of "what this patent is about" with what is the subject matter of an assertable claim.

I must also disagree that "Using a linear regression model" is exactly the same as "constructing a linear regression model". "Using" is clearly a different verb than "constructing". You yourself write "using the results of a regression model to calculate a score (as is done at ASR)"; is that not the same as predicting the loudspeaker's preference rating, using a multiple linear statistical regression model?
 

HooStat

Addicted to Fun and Learning
Joined
May 11, 2020
Messages
856
Likes
934
Location
Calabasas, CA
I feel I need to be a bit more clear and rephrase: There is no mention in claim 1 of "having trained listeners evaluate it" or "[running a] regression model". I invite you to review my previous post(s) about the importance of reading claim 1 because I think you are confusing the overall thrust of "what this patent is about" with what is the subject matter of an assertable claim.

I must also disagree that "Using a linear regression model" is exactly the same as "constructing a linear regression model". "Using" is clearly a different verb than "constructing". You yourself write "using the results of a regression model to calculate a score (as is done at ASR)"; is that not the same as predicting the loudspeaker's preference rating, using a multiple linear statistical regression model?

I agree that the precision of the language is lacking, both in the patent and in my text. Apologies for that.

I am talking about this part of the claim: " and predicting the loudspeaker's preference rating, using a multiple linear statistical regression model, based upon a measured deviation in the stored measured frequency response of the loudspeaker."

There is only one way that this can work. So, let's unpack it.

"and predicting the loudspeaker's preference rating using a multiple linear statistical regression model"
This means that the regression model has a preference rating as its outcome variable (i.e., the thing that is being predicted). It would have been clearer if they had said "predicting the loudspeaker's measured preference rating", but there is no other quantity that could be used as the outcome variable. The rest of the patent makes this clearer when they go into the details of the model and show how the measured preference ratings are used. So there is no ambiguity about what is going on.

"based upon a measured deviation in the stored measured frequency response of the loudspeaker"
This means that the information used to predict the [measured] preference rating is the frequency response data.

The method doesn't function without measured preference scores, measured frequency response, and a regression model to connect them.

Until this method is implemented, there is no way to predict preferences. Once this method is implemented, the regression model provides weights which can be used to estimate the preference score of a speaker for which a preference score does not exist. The confusing part is that, colloquially, this is called "using a regression model to predict the results". As you can see, the same words are used for two separate and distinct things. Hence the confusion.

So, how does one know which applies? From reading the entire patent.

Just to be clear, I am not trying to argue the finer legal points -- I am not qualified to do that. And I could certainly be wrong about how this should be interpreted. I am trying to show that there is another reasonable interpretation.
 

Universal Cereal Bus

Active Member
Joined
Sep 24, 2020
Messages
171
Likes
358
Thanks for the explanation @HooStat. Can the multiple linear statistical regression model be something you run offline? I.e., the model is the result of what you get from Fig. 1: a linear equation representative of the predicted loudspeaker preference rating. If the model is represented by the linear equation, would its input be the stored measured frequency response of the loudspeaker and its output be the loudspeaker's preference rating?

I agree that many embodiments in the patent are directed to running the regression analysis online. There are even some independent claims that very clearly recite this:
  • Claim 10 "...applying a statistical regression technique to the selected set of independent variables to predict the loudspeaker preference rating by using a statistical regression technique to weight the variables and arrange the weighted independent variables into a linear relationship on which the loudspeaker preference variable depends."
  • Claim 12 "...performing a multiple regression analysis to determine respective weights b1-bn for the selected independent variables X1-Xn ... and predicting the loudspeaker preference rating by solving the linear relationship."
  • Claim 19 "...performing a multiple regression analysis ... and predicting a loudspeaker preference rating by finding a value for a loudspeaker preference variable (Pref. Rating) indicative of the loudspeaker preference rating according to..."
The differentiation between independent claims is another important clue for interpreting claim terms. When things are claimed differently, we should initially assume the inventor intended the choice of different words to cover different things. It's telling that claim 1 omits the very explicit steps about actually running the regression that appear in claims 10, 12, and 19. Moreover, claim 1 uses the term "multiple linear statistical regression model", whereas claims 10, 12, and 19 use the terms "statistical regression technique" or "multiple regression analysis".

Another useful comparison is independent claim 7, which is very similar to claim 1 and recites a different predicting step: "predicting the loudspeaker's preference rating, using a statistical regression model, based upon a measured deviation in the stored frequency response of the loudspeaker, where the statistical regression model uses weighted independent variables arranged in a linear relationship to calculate the loudspeaker preference rating and where the independent variables are derived from applying different statistical measures to frequency response curves that are derived from objective measurements."

Note that claim 7 has an interesting mix of active voice and passive voice. This is a common trick drafters use to exclude steps (the passive-voice ones) that are difficult to prove someone actually performed. The claim doesn't require the infringer to actively make the model by selecting the correct independent variables; it only requires the infringer to use the correct model.
 

edechamps

Addicted to Fun and Learning
Forum Donor
Joined
Nov 21, 2018
Messages
910
Likes
3,621
Location
London, United Kingdom
I really don't see how. Remember, a 20 Hz LFX frequency means -6 dB at 20 Hz, not flat down to 20 Hz. Why choose this figure? Just because it's a nice round multiple of 10? This is much more arbitrary in my eyes than an LFX point of 14.5 Hz, which, as I said in this post, allows for a flat response down to 20 Hz for most ported and sealed subs

To give you an idea: the best LFX ever achieved out of all 90 speakers measured by Amir thus far is 1.55 (LFX is expressed as log10 of the -6 dB frequency in Hz), which corresponds to -6 dB at ~35 Hz.

I think it's reasonable to assume that the situation was similar for the sample of speakers @Sean Olive used in his study. (Okay, maybe a bit lower as he might have used more large floorstanders.)

If that's true, then that means that my LFX override of 20 Hz is already quite the extrapolation and is well outside the sample used in the study. So if anything, I should have picked an even higher "with sub" default LFX override in order to stay closer to the model's applicable domain. Using 14.5 Hz as you're suggesting is just adding insult to injury.
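To put rough numbers on that extrapolation, a sketch in the model's own (log10 Hz) units, assuming LFX is the log10 of the -6 dB frequency as described above:

```python
import math

best_measured = 1.55    # ~35 Hz, the best LFX among the ~90 speakers so far
for f_hz in (35, 20, 14.5):
    lfx = math.log10(f_hz)
    print(f"{f_hz:>5} Hz -> LFX {lfx:.2f} ({best_measured - lfx:+.2f} below the sample's best)")
# 20 Hz is already ~0.25 log-units beyond anything in the data;
# 14.5 Hz goes ~0.14 log-units further still.
```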

And yes, the actual listener preference ratings were given out of 10, so I don't see how it makes any sense for the maximum theoretical score to be less than 10 for the predicted ratings.

The model doesn't know, and doesn't care, that the max possible rating is 10. No speaker in the study sample got a 10. It's almost certain that if the model spat out a 10, the speaker would not actually be rated 10 by the average listener, because the model was never trained on such extremes and would not be expected to work there. Given that, I don't see the point in obsessing over this "magic number".

The fact that using ideal values for NBD_ON, NBD_PIR, and SM_PIR together with an LFX of 14.5 Hz results in an estimated score of 10 is almost certainly a coincidence. Actually, I wouldn't even call it a "coincidence", since 14.5 Hz is not a particularly interesting number (and, as I just mentioned, 10 is not that interesting a number either). If the score=10 LFX had been 13 Hz or 16 Hz, we would likely still be having the same discussion right now.

This is much more arbitrary in my eyes than an LFX point of 14.5 Hz

I don't care about the chosen value for the LFX override being "arbitrary" or not. I care about it being as meaningful and useful as possible, i.e. trying not to stray too far from the model's domain, especially for comparisons with "without sub" scores. And from that perspective, 20 Hz seems strictly better than 14.5 Hz to me.
 

bobbooo

Major Contributor
Joined
Aug 30, 2019
Messages
1,479
Likes
2,079
If that's true, then that means that my LFX override of 20 Hz is already quite the extrapolation and is well outside the sample used in the study. So if anything, I should have picked an even higher "with sub" default LFX override in order to stay closer to the model's applicable domain.

And therein lies the problem with choosing 20 Hz. Why not 22 Hz? Or 20.852 Hz? Or 25 Hz, to reduce 'extrapolation' further? I see no reason given whatsoever why a specific value of 20 Hz should be chosen. In contrast, there are two good reasons why a specific value of 14.5 Hz is being used:

1. It's the number the formula spits out as an ideal LFX frequency to give a perfect 10 score (which, deny it or not, makes sense as an ideal value as the actual listener preference ratings were given out of 10).

2. It describes a subwoofer that is flat down to 20 Hz, an ideal response for replication of all audible bass frequencies.

Note the key word in both reasons: ideal. An LFX frequency of 14.5 Hz represents an ideal boundary case, which then results in the maximum potential score a set of speakers could get with an ideal sub flat down to 20 Hz.

As for extrapolation, the w/ sub score itself will always be an extrapolation of the formula; a few Hz difference in LFX frequency won't change that. Strictly speaking, even using the formula to predict the without-sub preference rating of any speaker not in the original dataset is an extrapolation. Extrapolation is a necessary evil in exchange for a predictive score outside the dataset, and the w/ sub score is useful for users to know the maximum potential of a set of speakers when used with an ideal sub (how the override enters the calculation is sketched below).
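For anyone wondering how the override actually works: only the LFX term changes, so the w/ sub score is just the original score with the LFX contribution swapped out. A sketch, assuming the commonly cited -4.31 LFX coefficient (the speaker numbers here are made up):

```python
import math

LFX_COEF = -4.31                 # assumed published weight on log10(f at -6 dB)

score_measured = 5.0             # predicted rating with the speaker's own bass
lfx_measured = math.log10(80)    # speaker alone: -6 dB at 80 Hz
lfx_override = math.log10(14.5)  # ideal sub: flat down to 20 Hz

# Replace the LFX contribution; every other term is untouched:
score_w_sub = score_measured + LFX_COEF * (lfx_override - lfx_measured)
print(round(score_w_sub, 2))     # ~8.2
```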

I don't care about the chosen value for the LFX override being "arbitrary" or not.

I think this is the big difference between us - I can't stand arbitrary choices :D Especially in science (even just on an audio science forum). We'll just have to agree to disagree on this. There's no need to force anyone to use a specific w/ sub LFX value anyway; they can all exist in peaceful harmony across different sites! Yours for a more 'average' subwoofer, and the one on here for an 'ideal' sub. I just wish @MZKM would make this clear to readers by calling it 'score w/ ideal sub', or 'score w/ sub (-6 dB at 14.5 Hz)', or 'score w/ sub (flat down to 20 Hz)' if the latter is deemed more intuitively understandable to readers. This would then avoid a lot of the confusion over what the w/ sub score represents, some of which is typified by this thread.
 

weasels

Senior Member
Joined
Jun 15, 2020
Messages
335
Likes
547
Location
Richmond, Virginia
Also please remember that the r-squared was only 0.74; that's good, but it means that 74% of the variation in preferences could be explained by the preference score

View attachment 84404

This is interesting, but r-squared only tells part of the story. Were there residual plots included as well? Those would be very interesting to evaluate with regard to model fit.
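For readers unfamiliar with the two quantities, a sketch with entirely hypothetical numbers (the real study data isn't public):

```python
import numpy as np

observed = np.array([4.1, 5.0, 5.8, 6.3, 7.2])    # blind-test ratings (made up)
predicted = np.array([4.5, 4.8, 6.0, 6.1, 7.0])   # model predictions (made up)

ss_res = np.sum((observed - predicted) ** 2)
ss_tot = np.sum((observed - observed.mean()) ** 2)
print(1 - ss_res / ss_tot)   # r-squared: a single number for overall fit

# Residuals carry the rest of the story: plotted against the predicted
# values, they'd show any systematic pattern (e.g. the model always
# overpredicting one speaker type) that a lone r-squared of 0.74 hides.
print(observed - predicted)
```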
 

oivavoi

Major Contributor
Forum Donor
Joined
Jan 12, 2017
Messages
1,721
Likes
1,940
Location
Oslo, Norway
There is a target slope for the curves, but that was obtained by averaging the better scorers. Olive even states in the paper that the target slope changes with the speaker's dispersion (a 2-way will likely have a steeper PIR slope than a 4-way). The SM score component does favor narrow directivity, though, and if a speaker has ultra-wide dispersion, it scores very low. However, when talking about ultra-wide dispersion speakers, I think I recall reading that Toole/Olive stated they don't sound as good as a normal wide-dispersion speaker, so it's not bad to penalize them; but this means the score also rewards narrow-directivity speakers, like some of the KEF models.

This is one of the things that gives me some pause as to how I read the preference ratings. The highest-rated speaker in the Canadian NRC tests, which were the precursor to the Harman tests, was a Mirage speaker with very wide dispersion. Dr. Toole was so impressed that he bought them for himself, before moving house.

As I have stated elsewhere on this forum, every single published blind test I have been able to find of omnis or very wide dispersion speakers seems to indicate that they do as well as - and frequently better than - normal dispersion speakers.
https://www.audiosciencereview.com/...ectional-loudspeakers.6552/page-2#post-147417
Edit/addition: All the psychoacoustic evidence that Dr. Toole discusses in his book points in the same direction. People generally seem to like room reflections, the more the merrier it seems (as long as they are spectrally similar to the direct sound).

Now it's possible that Harman did some tests later where ultra-wide dispersion speakers did worse than normal dispersion speakers. We don't know, and if they did, we don't know what kind of wide dispersion speakers were included. If only one such speaker was included, and it was a Bose 901, it would probably skew the results, for example.

Being a (social) scientist by training myself, I'm instinctively skeptical of relying on a model that is a black box, meaning I don't know how it came about or whether the results can be replicated. For me, this model is primarily useful for comparing apples to apples. If I see two speakers of a similar kind - say, a small ported two-way with a waveguide for the tweeter and relatively narrow dispersion - and one of them performs much better than the other, I'll assume that the one with the better score is in fact the superior speaker. For comparing apples to oranges - meh, not so much.
 

RayDunzl

Grand Contributor
Central Scrutinizer
Joined
Mar 9, 2016
Messages
13,250
Likes
17,194
Location
Riverview FL
Thanks for finding this--very interesting. You linked the application publication, not the granted patent publication, which is here.

Yes, I noticed that after posting, but figured it was enough to make the point, whatever that was at the time.

Meanwhile, back at the office:

[attached image]
 

watchnerd

Grand Contributor
Joined
Dec 8, 2016
Messages
12,449
Likes
10,415
Location
Seattle Area, USA
Sorry, I don't take the slightest interest in the "preference formula", or anything produced from it. Making any buying decisions based on such scores would be foolhardy and most likely result in disappointment.

And beyond that, we have that list linked in every speaker review on ASR. People are going to look at it and make use of (or misuse) the Preference ratings.

I think the "problem" is even worse than it is for the SINAD chart. With the SINAD chart, the worst that happens is people overpay for inaudible differences in SINAD.

With the Preference score, people can mechanically buy speakers that score high, but are not suited to their use cases (e.g. bass / volume too small for room / listening distance).

I just don't see the point in using a metric that is as likely to be misleading as informative when it comes to purchase decisions.
 
Blumlein 88 (OP)

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,769
Likes
37,633
This is one of the things that gives me some pause as to how I read the preference ratings. The highest-rated speaker in the Canadian NRC tests, which were the precursor to the Harman tests, was a Mirage speaker with very wide dispersion. Dr. Toole was so impressed that he bought them for himself, before moving house.

As I have stated elsewhere on this forum, every single published blind test I have been able to find of omnis or very wide dispersion speakers seems to indicate that they do as well as - and frequently better than - normal dispersion speakers.
https://www.audiosciencereview.com/...ectional-loudspeakers.6552/page-2#post-147417
Edit/addition: All the psychoacoustic evidence that Dr. Toole discusses in his book points in the same direction. People generally seem to like room reflections, the more the merrier it seems (as long as they are spectrally similar to the direct sound).

Now it's possible that Harman did some tests later where ultra-wide dispersion speakers did worse than normal dispersion speakers. We don't know, and if they did, we don't know what kind of wide dispersion speakers were included. If only one such speaker was included, and it was a Bose 901, it would probably skew the results, for example.

Being a (social) scientist by training myself, I'm instinctively skeptical of relying on a model that is a black box, meaning I don't know how it came about or whether the results can be replicated. For me, this model is primarily useful for comparing apples to apples. If I see two speakers of a similar kind - say, a small ported two-way with a waveguide for the tweeter and relatively narrow dispersion - and one of them performs much better than the other, I'll assume that the one with the better score is in fact the superior speaker. For comparing apples to oranges - meh, not so much.
The Mirage speakers were bipolar, but I don't know that they were really wide dispersion. A friend had a couple of models; the largest was the M1si, but my friend had the M3si. They were pretty wide speakers, and above 500 Hz or so I doubt they were wide dispersion. Here is the horizontal measurement by Stereophile.

[attached: Stereophile horizontal dispersion measurement of the Mirage M3si]


Nothing like the omni MBL 101E shown here.
[attached: horizontal dispersion measurement of the MBL 101E]


Here is the Mirage M1si, which I think is what Toole had. It has even less dispersion, down to a lower frequency.
[attached: horizontal dispersion measurement of the Mirage M1si]


Compare to the Revel F228Be, which has wider dispersion and is much smoother off-axis.
[attached: horizontal dispersion measurement of the Revel F228Be]
 

oivavoi

Major Contributor
Forum Donor
Joined
Jan 12, 2017
Messages
1,721
Likes
1,940
Location
Oslo, Norway
The Mirage speakers were bipolar, but I don't know that they were really wide dispersion. A friend had a couple of models; the largest was the M1si, but my friend had the M3si. They were pretty wide speakers, and above 500 Hz or so I doubt they were wide dispersion. Here is the horizontal measurement by Stereophile.

View attachment 84622

Nothing like the omni MBL 101E shown here.
View attachment 84624

Here is the Mirage M1si, which I think is what Toole had. It has even less dispersion, down to a lower frequency.
View attachment 84625

Thanks, I didn't know! I believe Dr. Toole said that they were "essentially omnidirectional" or something like that, so that's what I assumed. Very interesting.
 