HooStat
Addicted to Fun and Learning
I do show that radar chart
I think that is probably the most important measure -- more relevant than the score. I didn't realize that is what the radar plot reflected.
I read through Olive's patent application. I can't believe anybody got a patent for what is garden-variety statistical analysis, but good for him and Harman. I think the real value is in the experiments themselves and the approach to summarizing complex functions over frequency. The method is just basic statistics. But it was really informative. Assuming that is the documentation for the analyses on this site, it explains a lot.
The one thing I noted was that there was only 1 measured preference score above 8 and only 1 calculated preference score above 7. So calculated preference scores above about 7 are going to have a lot of variance (more than scores in the middle of the range). Plus, we run into ceiling effects (i.e., no measured score can be above 10), and I assume things get non-linear near the top of the range. By non-linear, I mean we hit the point of diminishing returns for small improvements in any one of the model's inputs. All of this is to say that any "high" calculated score is very uncertain. The things that matter at/above 8 might be very different from what the model measures (or maybe not -- without data in that range, we can't know). Most likely, there are some additional variables that would help discriminate the very high scores. I wouldn't expect to need a completely different model.
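To illustrate the variance point, here is a toy sketch (not Olive's actual model -- the data below are made-up numbers) showing the standard textbook result that a regression's prediction standard error grows as you move away from the bulk of the training data. With most training scores clustered in the middle of the range, a prediction at 8 is inherently less certain than one at 5:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: most scores fall in the middle of the range,
# mimicking a training set with very few samples at the high end.
x = rng.uniform(3.0, 7.0, size=70)       # model inputs -> calculated score
y = x + rng.normal(0.0, 0.5, size=70)    # noisy "measured" preference

n = len(x)
xbar = x.mean()
Sxx = ((x - xbar) ** 2).sum()

# Ordinary least squares fit: y = a + b * x
b = ((x - xbar) * (y - y.mean())).sum() / Sxx
a = y.mean() - b * xbar
resid = y - (a + b * x)
s = np.sqrt((resid ** 2).sum() / (n - 2))   # residual standard error

def pred_se(x0):
    """Standard error of a new prediction at x0 (simple linear regression)."""
    return s * np.sqrt(1.0 + 1.0 / n + (x0 - xbar) ** 2 / Sxx)

# The interval widens from the centre of the data toward the sparsely
# sampled high end -- any "high" calculated score is more uncertain.
print(f"SE at x = 5 (centre of data): {pred_se(5.0):.3f}")
print(f"SE at x = 8 (edge of data):   {pred_se(8.0):.3f}")
```

The ceiling effect compounds this: since measured scores are capped at 10, the linear fit cannot hold near the top, so the widening interval shown here is, if anything, optimistic for high calculated scores.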
Just to be clear, I am not criticizing any of Olive's work or your hard work in putting this all together. I am just trying to lay out some of the limitations of using this kind of model to make inferences about subjective preferences, so people don't get too carried away comparing preference scores with and without EQ.