
A Broad Discussion of Speakers with Major Audio Luminaries

If you are a scientist looking for scientific levels of certainty, then you may as well disregard any uncontrolled listening.

A scientist following a truly scientific approach would not only disregard any uncontrolled listening test, but any conclusion on the impact of general properties of the speakers tested under more or less identical, scientifically controlled conditions. For example, a result of a mono test would not allow applying the findings to stereo conditions, even if some other preference ratings coincided. Even a conclusion such as 'linear frequency response is preferred' could be a coincidental outcome of other properties of the tested candidates.

If you want results that are transferable and generally applicable, you need to test the properties in question in isolated tests, one by one, and prove the results are valid under differing conditions.

Drawing conclusions such as 'listeners in mono tests had higher discrimination ratings for the specific audible flaws tested, so preference results of all mono tests are automatically applicable under stereo and multichannel conditions as well' is, in my understanding, a far more unreliable stretch than any uncontrolled or non-blind testing.
 
OK, so then it must be possible to correctly discern meaningful differences when listening to a pair of speakers, sighted, without real-time switching to another pair.

Perhaps, but how could we be sure that sight bias does not impact our choices? If I look at a speaker with more woofers than another, it seems likely I might think that one has more bass. I suggest a better definition of how to compare speakers meaningfully in one's home is needed.

Because if this is not the case, then I have no reason to ever buy "better" speakers than the ones I currently have. I won't hear the difference (and neither will you, nor anyone else) under normal listening conditions (no rapid switching to compare against other speakers).

I think most expect lab research to be more disciplined than personal evaluations. Either we extrapolate as best we can from the lab or we make our own judgement call. ASR falls somewhere in between, and so Amir has chosen his set of conditions. Given that stereo is so dependent on speaker position, I do not think there is one right set of conditions for him to test consistently. With more time and/or better equipment, maybe there could be.

The addition of Klippel testing is a major step and is helpful, but results are not available for every speaker. If you or anyone else has put some rigor into a home audio system, I think we should respect those uniquely individual situations. Due to that individuality, it is unlikely a home can ever match the discipline of a research facility. Notably, if measurement tools are lacking, home evaluation is much more subjective. When I read or listen to Dr. Toole, I discern the experience and qualifications behind his research. I can also see how his systems have evolved and why. I have never once heard him claim that any of his systems was better than the rest of ours. I respect that, and I respect it if you are happy with your system too. :)
 
There will be actual and perceived differences 100% of the time (with whatever the reference is: my speakers back home if we are somewhere else, or, if we are at home, the speakers we heard at [take your pick]).

100% of the time your brain will perceive the differences to be meaningful - for a number of reasons.

Science, through blind testing, has established that the perceived differences are rarely meaningful.

Measurements will confirm the differences 100% of the time.

Rubbish (at least if you want us to take that literally). If you hear a thing but think 'I could well be imagining that' then you are not perceiving differences to be meaningful 100% of the time. To the contrary, you are thinking 'that probably isn't meaningful'. You've provided another example of idiomatic/rhetorical use of quantitative terminology.
 
The majority of them had a degree in recording engineering or equivalent, typically with several decades of experience as balance engineers, bringing their own recordings for comparison tests.
So no training in detecting speaker tonality errors. I suggest ignoring their advice, unless you want the classic case of the blind leading the blind.

This is what happens when folks who claim to have good ears are tested:

[Image: trained vs. untrained listener performance]


All the other groups in the above study had claims of such great hearing. Yet their outcomes were so unreliable.

Last time I was at Harman, Dr. Olive ran the listener training test by me and their top acoustic installer experts. All failed to get above level 2 or 3. I stayed with Sean up to level 6 and 7. Sean sailed past me. :)

Before you argue otherwise, tests like the one above are graded. That is, you take a test and your results are compared to what they should be. Any self-claims are of zero value. We wouldn't accept you giving yourself a grade in college. Let's not do the same when it comes to audio.
 
A scientist following a truly scientific approach would not only disregard any uncontrolled listening test, but any conclusion on the impact of general properties of the speakers tested under more or less identical, scientifically controlled conditions.
They would also totally ignore what their professional colleagues say in uncontrolled studies. Yet you just got done telling us their opinion matters and is authoritative. So disregard we will. Please don't keep appealing to this false authority. It is wasted bits to capture them in this forum....
 
A scientist following a truly scientific approach would not only disregard any uncontrolled listening test, but any conclusion on the impact of general properties of the speakers tested under more or less identical, scientifically controlled conditions. For example, a result of a mono test ...
So you speak of the people involved, real people, humans. How would a scientist 'control' them scientifically? It sounds odd, but that's what you silently imply, in my understanding. And that's the crux with engineers: the professional training, for the best of reasons, doesn't focus on people with fluid minds, or 'brains' and 'ears' as we often put it. Then people get statisti-fied and all, no good. There are other professions that take care of that. Their language is weird, but what can we do?!

... would not allow applying the findings to stereo conditions, even if some other preference ratings coincided. Even a conclusion such as 'linear frequency response is preferred' could be a coincidental outcome of other properties of the tested candidates.
Right on, thank you.
 
All failed to get above level 2 or 3. I stayed with Sean up to level 6 and 7. Sean sailed past me. :)
Training and adaptation are both influences that have a strong impact in audio.
Would level 2 or 3 qualify for the "Trained" group in the graph?
 
I don't think anyone should underestimate the listening skills of many great studio engineers. The abnormalities they can hear - and, within seconds, know exactly how to address, whether in the details of a mix if they are mixing engineers, or in the overall balance of the sound if they are mastering engineers - are nothing short of remarkable. I would say that detecting speaker tonality errors is a child's game in comparison to what is done in a music studio, but don't get me wrong, I have all the respect in the world for the "trained listener" group of Harman and what they do. :)
 
Are you familiar with the concept of i.i.d. in statistics? Your statement is very likely true for any statistical model when you use it on samples outside the population/distribution used to build the model.
Essentially, what was performed here is a machine learning task. In that specific scientific community it is very common to use a training set which is different from the test set, and there it is well known that overfitting is a serious problem which should always be addressed.

Since creating statistical models is to some degree similar to machine learning tasks, overfitting also has to be addressed. Because the statistical modeling community isn't as focused on overfitting, you can much more easily get a pass without a proper investigation of overfitting issues...
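To make the train/test point concrete, here is a minimal, hypothetical sketch (synthetic data, made up purely for illustration and unrelated to any Harman dataset): an over-flexible model can score almost perfectly on the data it was fit to and much worse on held-out data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Toy data: 20 samples with one noisy predictor and a noisy target.
# Purely synthetic -- a stand-in for any small preference dataset.
X = rng.uniform(-1, 1, size=(20, 1))
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=20)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

# A deliberately over-flexible model: a degree-8 polynomial fit.
model = make_pipeline(PolynomialFeatures(degree=8), LinearRegression())
model.fit(X_train, y_train)

print("R^2 on the training half:", model.score(X_train, y_train))  # close to 1
print("R^2 on the held-out half:", model.score(X_test, y_test))    # much lower
```

The gap between the two scores is exactly the overfitting being warned about: the fit chases noise in the data it has seen and does not generalize.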
 
Essentially, what was performed here is a machine learning task. In that specific scientific community it is very common to use a training set which is different from the test set, and there it is well known that overfitting is a serious problem which should always be addressed.
If the data come cheap and you have plenty, sure, separate them into a training set, a testing set and a validation set. In this case, however, data are extremely expensive to obtain, and you want to maximize their usefulness. If you have a limited amount of data, and you exclude some and use only a subset to build your model, you are not going to get a very good model.
Since creating statistical models is to some degree similar to machine learning tasks, overfitting also has to be addressed. Because the statistical modeling community isn't as focused on overfitting, you can much more easily get a pass without a proper investigation of overfitting issues...
The model is PCA-based, and only the first two PCs were used. Regularization is inherent.
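For a sense of what "regularization is inherent" means in practice, here is a toy sketch of principal components regression (my own synthetic example, not the actual Olive model or its data): truncating to two PCs limits how much freedom the regression has to chase noise, while keeping every component removes that constraint.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

# Synthetic stand-in: 13 samples, 7 correlated "measurement" features,
# and a target that is pure noise, so any fit to it is spurious.
n, p = 13, 7
latent = rng.normal(size=(n, 2))
X = latent @ rng.normal(size=(2, p)) + 0.1 * rng.normal(size=(n, p))
y = rng.normal(size=n)

# Principal components regression: keep only the first two PCs and fit
# an ordinary linear regression on those two scores.
pcr_2 = make_pipeline(PCA(n_components=2), LinearRegression()).fit(X, y)

# Keeping all seven components removes the implicit constraint.
pcr_full = make_pipeline(PCA(n_components=p), LinearRegression()).fit(X, y)

# The full model has far more freedom to "explain" the pure-noise target
# in-sample; truncation to two components is what acts as the regularizer.
print("In-sample R^2, 2 PCs:   ", round(pcr_2.score(X, y), 3))
print("In-sample R^2, all PCs: ", round(pcr_full.score(X, y), 3))
```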
 
If the data come cheap and you have plenty, sure, separate them into a training set, a testing set and a validation set. In this case, however, data are extremely expensive to obtain, and you want to maximize their usefulness. If you have a limited amount of data, and you exclude some and use only a subset to build your model, you are not going to get a very good model.

The model is PCA-based, and only the first two PCs were used. Regularization is inherent.
I agree that compromises must be made when there are few samples and a model must be created. Nevertheless, I guess most people don't understand the limitations of such a model and how inaccurate it can be.

The problem with simply selecting a small number of PCA components isn't so much overfitting as misfitting, since the chosen linear model might be completely wrong to begin with. It isn't clear whether you have an overfitted, poor-quality model that only seems to work okay or well due to the low number of samples, or whether a multidimensional linear relationship is the right choice to begin with.

It's telling that the newer Harman/JBL/Revel speakers were not optimized according to the predicted preference score. I guess the model is better than nothing, especially for quickly judging standard designs as good or bad, but it's not good enough to use as an equalizing target.
 
The problem with simply selecting a small number of PCA components isn't so much overfitting as misfitting, since the chosen linear model might be completely wrong to begin with. It isn't clear whether you have an overfitted, poor-quality model that only seems to work okay or well due to the low number of samples, or whether a multidimensional linear relationship is the right choice to begin with.
The original objection of BenB was that Toole quoted the correlation coefficient of 0.996. And you said it may just be overfitting. Now it is a bad model, not overfitted, yet with a correlation coefficient of 0.996. Can you clarify?
It's telling that the newer Harman/JBL/Revel speakers were not optimized according to the predicted preference score. I guess the model is better than nothing, especially for quickly judging standard designs as good or bad, but it's not good enough to use as an equalizing target.
Where in the preference score model does it say anything about its usefulness as an equalizing target? If it was the intent (as an objective function to optimize for with EQ), I have little doubt it would have been tested by Dr. Olive and colleagues, just like the Harman headphone target.
 
The original objection of BenB was that Toole quoted the correlation coefficient of 0.996. And you said it may just be overfitting. Now it is a bad model, not overfitted, yet with a correlation coefficient of 0.996. Can you clarify?

Where in the preference score model does it say anything about its usefulness as an equalizing target? If it was the intent (as an objective function to optimize for with EQ), I have little doubt it would have been tested by Dr. Olive and colleagues, just like the Harman headphone target.
If your total sample size is small, you have to be aware that simple linear relations, like those modeled with a PCA, will almost always provide good results, since you find linear dependencies even by chance; whether they really represent the data accurately can't be said without further research.

A lot of the input values seem to be linearly correlated, as @BenB points out. But this doesn't prove that the model is accurate, which he also rightfully hinted at.

As far as I know, the only time you published information about scores produced by your model on the tuning data and later on novel data, the correlations on the novel data were significantly lower.
If this information is true, you obviously have a significant amount of overfitting, since the results get worse with the change of dataset.

If you have a look at the famous Datasaurus dozen, you get an idea of how wrong a seemingly fitting linear representation can be, and there you only have two dimensions.
[Image: the Datasaurus dozen]

If the dimensionality of your data isn't orders of magnitude lower than the total sample size, you always have to be skeptical squared.

I don't want to unfairly bash the research behind the predicted preference score. It is hard to do something useful with a small dataset, and the approach is overall okay, but it nonetheless has some significant limitations.
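To illustrate the "linear dependencies even by chance" point, here is a small, self-contained simulation (purely made-up numbers, nothing to do with the actual preference data): with only a handful of samples and several candidate predictors, pure noise regularly produces impressively high correlation coefficients.

```python
import numpy as np

rng = np.random.default_rng(42)

n_samples = 8       # a deliberately small dataset
n_candidates = 50   # number of unrelated candidate predictors tried

y = rng.normal(size=n_samples)
best_r = 0.0
for _ in range(n_candidates):
    x = rng.normal(size=n_samples)      # pure noise, unrelated to y
    r = np.corrcoef(x, y)[0, 1]
    best_r = max(best_r, abs(r))

# None of these predictors has any real relationship to y; a high |r|
# here is an artifact of the small sample and the search over candidates.
print(f"Best |correlation| among random predictors: {best_r:.3f}")
```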
 
If your total sample size is small, you have to be aware that simple linear relations, like those modeled with a PCA, will almost always provide good results, since you find linear dependencies even by chance; whether they really represent the data accurately can't be said without further research.

A lot of the input values seem to be linearly correlated, as @BenB points out. But this doesn't prove that the model is accurate, which he also rightfully hinted at.

If this information is true, you obviously have a significant amount of overfitting, since the results get worse with the change of dataset.

If you have a look at the famous Datasaurus dozen, you get an idea of how wrong a seemingly fitting linear representation can be, and there you only have two dimensions.
[Image: the Datasaurus dozen]

If the dimensionality of your data isn't orders of magnitude lower than the total sample size, you always have to be skeptical squared.

I don't want to unfairly bash the research behind the predicted preference score. It is hard to do something useful with a small dataset, and the approach is overall okay, but it nonetheless has some significant limitations.
Do you have any evidence that the data resemble any of those unicorn cases that you showed?
 
Do you have any evidence that the data resemble any of those unicorn cases that you showed?
The dataset is for educational purposes, which seems a perfect fit for the explanation here, since some misconceptions seem to be floating around!?

A lot of similar datasets have some kind of curvature and distinct clusters. In this particular case I would definitely bet on some kind of curved clusters. That kind of data can't be accurately represented with a linear model.

If the sample size is low, it might be interesting to plot the data samples in a few dimensions, since humans are much better at finding and extrapolating distributions and clusters.
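As a quick sketch of that "just plot it" advice (synthetic data invented for illustration): a clearly curved relation can still produce a respectable Pearson correlation, and only the scatter plot gives the curvature away.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)

# A curved relation with a little noise added.
x = np.linspace(0.0, 1.0, 15)
y = 4.0 * (x - 0.2) ** 2 + 0.05 * rng.normal(size=x.size)

r = np.corrcoef(x, y)[0, 1]
print(f"Pearson r = {r:.2f}")   # looks like a fairly strong linear relation...

# ...but the plot reveals curvature that a straight line cannot capture.
slope, intercept = np.polyfit(x, y, 1)
plt.scatter(x, y, label="data")
plt.plot(x, slope * x + intercept, color="red", label="linear fit")
plt.legend()
plt.show()
```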
 
The dataset is for educational purposes, which seems a perfect fit for the explanation here, since some misconceptions seem to be floating around!?

A lot of similar datasets have some kind of curvature and distinct clusters. In this particular case I would definitely bet on some kind of curved clusters. That kind of data can't be accurately represented with a linear model.

If the sample size is low, it might be interesting to plot the data samples in a few dimensions, since humans are much better at finding and extrapolating distributions and clusters.
Without doing any math, you are already overfitting.
 
As far as I know, the only time you published information about scores produced by your model on the tuning data and later on novel data, the correlations on the novel data were significantly lower.
I'll check with Sean Olive, but I don't know of any such tests. The correlations resulting from his analysis reassured us that the measurements being done were useful indicators of perceived loudspeaker performance.

We have never used the model as a universal predictor of sound quality. Others have done so in spite of our objections, and results are often misleading. We relied only on informed interpretations of the raw spinorama data and the results of double-blind subjective evaluations - the reference standard. Models are "never" perfect. But that all is now history--the research group at Harman no longer exists. The 4th edition of my book will document that and the preceding NRCC research eras.
 
First of all, thanks very much for your comment. And “long-winded” will certainly follow me to my gravestone :)

I’m just going to reply to this, and if Rick finds it too off topic he can end up deleting it.

Yes, this is a particularly long one, and I certainly understand anybody taking a FRAT response. But I feel that laying out my case requires a number of examples, so I’m just putting it out there.. as a summation of my position on this..



I don’t think I’m working with the level of naïveté you are inferring.

I have propounded as much as anybody here the nature and relevance of bias effects. Bias effects in audio are well demonstrated by the type of research cited here all the time. I've experienced my own bias effects vanishing under blind testing. (In fact, my own work in sound involves exploiting bias effects.)

So the conclusions I’m drawing are not in the context of ignoring bias effects, including my own, but they are trying to keep a coherent picture of what type of reasonable inferences we can still make in informal listening while considering the possibility of bias effects. Because most of the time we are operating in conditions where we cannot control for bias effects, we therefore have to arrive at practical, even if not fully certain conclusions.


So in my view, the following is reasonable:

1. If you are a scientist looking for scientific levels of certainty, then you may as well disregard any uncontrolled listening. You can even just throw it in the “bias” bin because it simply doesn’t offer the reliability for the confidence levels you are seeking in order to understand what’s going on.

2. An ASR member can simply disregard any uncontrolled listening reports from reviewers or other audiophiles, and when choosing gear can even disregard his own impressions as untrustworthy. “I'm looking for the most reliable information on which to make my decisions - uncontrolled listening won't cut it, and I'm looking towards measurements or at the very least controlled listening tests” is a perfectly reasonable approach for somebody who just looks to measurements.

So “Listening in uncontrolled conditions, especially without supporting measurements, is unsuitable for gaining the level of confidence I’m seeking” is entirely reasonable.

What goes too far is the idea that listening in uncontrolled settings is always uninformative and nobody can be justified in making any inferences in such situations.

THAT is what I push back against. And this level of scepticism - "it's probably just your imagination, so I don't have to take what you're saying seriously" - is thrown around here often enough when people don't feel like accepting a claim, even for the sake of argument.

We can and do come to reasonable conclusions with lower than scientific confidence levels all the time. Otherwise, we couldn’t get through the day. I’m bringing mini instances of my own experience and asking what explanations make the most sense of those impressions.
And unfortunately, I find that sometimes the default move to “it's likely just sighted bias effects / imagination” starts to look fairly hand-wavy when it gets down to brass tacks.

So when it comes to putting any stock in my or some other audiophile's informal listening impressions, I work from basic heuristics:

Extraordinary claims require extraordinary evidence. Is the impression of the gear implausible? Does it suggest something technically implausible? Or does it go against any known measurements of that gear?

And:

What is the best or most reasonable hypothesis or explanation for a given subjective impression?

So sticking just to evaluating loudspeakers:

If the hypothesis about somebody's subjective impression is that it is a “bias effect,” then that suggests “the loudspeaker doesn't really sound like you think it does; it sounds different than what you perceive.”

Well, if that's going to be one hypothesis, its explanatory power should be put against another possible hypothesis: “The sonic impressions are to some relevant degree accurate, because that really is what the speakers sound like!”

So let’s see these different hypotheses in action:

I auditioned the Revel Performa speakers a few times. I went in knowing how Revel speakers generally measure. So that could have biased my perception. Except the first audition didn't go well - I was surprised by the substandard performance, in which the speakers didn't sound smooth, but somewhat rough in the highs and uneven in the bass. But I quickly realized this was due to a poor setup, as the speakers were placed too close to the back wall and one was near a large reflective wall of glass. The poor setup seemed to overpower any bias I might've had that the speakers would sound better.

However, in a different store the speakers were set up much better.

And I evaluated them in my normal way: listening to some of the same tracks I've used on countless different loudspeakers. I evaluate the sound from farther seating distances, and from middle and nearer-field distances, to see how the sound holds up - not every loudspeaker sounds coherent up close. I evaluate the sound in the vertical domain from kneeling down, sitting, and standing to see if there's any “venetian blind” effect or obvious changes, roll-off in the highs or whatever. I walk around the speaker, listening from different angles to check off-axis performance, both in terms of changes in tonality and changes in imaging - does the sound glom onto one speaker as we move off axis, or maintain a sense of spaciousness and imaging between the speakers? Etc.

What I perceived from the Revels was a beautifully balanced sound. Well controlled and even sounding from top to bottom. No obvious colorations. Even if there were room nodes they were not intrusive. Smooth off axis performance. Very neutral while being smooth and easy to listen to.

And all of this is predicted by and consonant with the way those speakers measure.

So what's the best explanation for my impression? Was it just that I was biased to hear them that way and they didn't really sound that way? And if they didn't sound the way I perceived them, how did they actually sound? What objective evidence disputes my impressions? It seems the objective evidence, in the form of measurements, actually supports my impressions.

It seems to me at least as reasonable, Occam's razor style, that the reason I had those impressions is because that's how the speakers actually sounded. Very much as their measurements help predict.

The same could be said for the number of times I listened to the B&W 804 D4 (and D3) loudspeakers.

What I heard was a very open, very detailed, spacious sound, with generally well-controlled and not over-rich bass, quite “free of the box” sounding from top to bottom, but also a lack of warmth in the midrange and some peakiness in the upper mids and treble region. It clearly wasn't neutral, but instead that modern sculpted B&W sound. This was especially obvious when I heard them the same day that I heard the same tracks earlier on the more neutral Kii Audio Three speakers.

So what’s the best explanation for my impressions? Is it better explained as a bias effect because I do know how B&W tends to measure? Could be.

On the other hand… the measurements DO generally describe how the speaker will sound, and also can comport with my listening impressions.

So is the best explanation for why I heard the B&Ws to be less neutral and more peaky sounding in the upper frequencies than the Kiis a bias effect, or… that I was actually hearing what they sound like, which is predicted by the measurements? Put that together with the fact that John Atkinson also reported the same impressions about their lack of neutrality.

Again, it seems to me entirely reasonable to provisionally conclude I was perceiving the essential characteristics of that loudspeaker.

But then there’s the many other examples of where I listened to speakers before I was aware of any measurements.

I auditioned the Paradigm Personas when they were brand new in a local high-end store.

I found plenty to admire in terms of their amazing clarity, and they seemed generally very well balanced, similar to the Revels.

With the exception that I kept noticing a sharp peak somewhere in the highs that over time was wearing me out. It didn't seem to be showing up in vocal sibilance so much as being somewhere maybe higher up. In the end I found I wanted to keep turning the volume down and ultimately found my ears fatiguing, so I gave up on those speakers.

And later Kal reviewed those speakers in Stereophile. His description matched almost word for word the qualities I heard in those speakers, INCLUDING his noting a peak in the highs that he guessed would be around 10 kHz. And sure enough, in the measurements there it was. A sharp peak right at 10 kHz!

So what’s the best hypothesis for both my and Kal’s sonic impressions both made before seeing the measurements?

We just happen to have the same bias that produced precisely the same sonic impressions, including a peak in the highs… and all the sonic impressions just happen to line up with the measurements?

Seems to me something like Occam's razor allows the practical inference that we were simply actually hearing what that speaker really sounds like.

That happened again with the PMC Fact 8 speakers. My friend had those in for review and I listened to them. I was impressed by their open spacious sound, which sounded very clean and detailed.

But I was largely turned off overall because they sounded, to use that old term, “too hi-fi” - in the sense of exaggerating the artifice in recordings in the highs and not really sounding as natural as I like. In particular they sounded very “cool” and reductive, lacking warmth somewhere, maybe in the lower mids or upper bass, I didn't know, but they lacked body and warmth on male vocals and anything that usually has more body than I'm used to hearing from other speakers, like my own.

After that I saw Kal's review of the same speakers in Stereophile and… again… his descriptions matched what I heard, all the way down to him mentioning the same lack of warmth “in the upper and mid bass.”

And there it was in the Stereophile measurements! A bit of a roller coaster in the on-axis frequency response, and JA's comment on the in-room response he measured: “….the PMC fact.8s' in-room response is shelved down in the lower midrange and bass and has significantly less presence-region energy.”

Again… what's the best explanation for why Kal and I perceived the same characteristics and flaws in the speakers, which turned out to be consonant with the measurements?

Seems reasonable that our perception was relatively accurate to what the speaker actually sounds like.

I can keep going with all sorts of examples. The YG floor standing speakers that my friend reviewed, where he asked me over to listen without telling me what he thought.

These were sizeable floor-standing speakers, and on my test tracks I was expecting to hear extension down to probably 35 Hz or so - a presentation similar to what I've heard from similarly sized speakers like my own. And yet I was shocked at the lack of bass, and also the consequent sense of emphasis in the upper mids and higher frequencies. These matched my friend's own impressions exactly, which is why he was having so much trouble with the speaker.

And when they were measured, sure enough, the bass was very overdamped - they actually start sloping down around 200 Hz and fall off fairly steeply below 50 Hz.

And this was ameliorated by placing them near the corners.

Again… just a bias effect, or just some form of coincidence that my friend and I perceived the same characteristics, which were surprising to both of us given the size of the speaker, but which happened to be consonant with the measurements?

Seems entirely reasonable we simply heard what the speaker really sounded like in that room.

What about my own Joseph Perspective speakers? I didn't even know those speakers existed, let alone read reviews, before I heard them for the first time at the dealer's. And my impressions have remained the same from that first time all the way to my owning them for years now. And they are consistent with what JA heard and measured in Stereophile (with the exception that the highs did not bother me in the first version of the model as much as they did JA, even though I could hear that they were tipped up).

Once I had my Joseph speakers dialled in at home and had experimented with some acoustic diffusors, I was blown away by the complete disappearing act of the speakers, the massive, enveloping soundstage (on appropriate recordings), the rich yet tight quality of the bass, the beautiful clarity of the mids, and the incredible smoothness and airiness of the highs. And especially with the diffusors, an amazing solidity and palpability to the images appearing in the vast soundstage.

Once I had that set up, I invited my reviewer pal over and just sat him down to listen and give me his own impressions before I told him my own. We do this kind of thing when we have new speakers, double checking our impressions with what the other guy hears.

He was completely shocked and basically said, “How the fuck did you do this?” When I asked him to describe the sound, he described what I hear to a “T” - his first comments were about the crazy soundstaging and imaging; he noted that on tracks with the lowest bass the bass was a little bit rich, but that he loved it anyway because it was still really tight and rhythmic. And he commented on the general clarity and especially on the incredibly relaxed quality of the highs: “I can just keep turning up the sound and it feels like I could just listen to this all night without any fatigue… the highs are buttery smooth.”

Exactly what I perceived in the same recordings. Just coincidence? My friend listens to plenty of speakers that sound great, is there some reason that these speakers cause a particular bias effect where we both perceive the sound characteristics the same way?

And in his Stereophile review of the Perspective 2s, JA noted the wide, full-range sweep of sound as well as the clarity and smoothness in the highs even during complex passages, and also noted a slightly rich and yet still tight and punchy bass quality.

None of this seems disputed in the nature of the Stereophile measurements.

So are we all suffering the same bias effect?

It seems at least as reasonable to conclude, provisionally, that we are largely perceiving the actual characteristics of this loudspeaker.

So those are just a few of many different examples along the same lines, where my and other listeners' impressions seem to converge fairly well on the general character of a loudspeaker, and when measurements are available the impressions are very often consonant with the measurements, at least for certain characteristics.

I'm not talking about perfection all the time here. But it seems to happen often enough. So from my own experience, I conclude that even in the context of its known liabilities, informal listening CAN provide some useful information where the impressions are technically plausible. It can be at least reasonable to draw some conclusions, with lower confidence levels and caveats, even if such conditions won't suffice for scientific levels of confirmation and insight. Sighted listening to loudspeakers is significantly less reliable than blind listening, but not necessarily wholly unreliable or wholly uninformative.

Cheers.

(I should print this on a scroll and be buried in this one…)
You have outdone yourself with this post. This must be the longest ever... :eek:
Dante needs to resuscitate and create a special circle for you in the Inferno. :D
 
To understand how the double-blind ratings were affected by the reflections in Harman's listening room, the size of the room and the characteristics of the walls, floor and ceiling are important variables.
At the listening position, which were the dominant reflections? How were they attenuated and delayed? Did they have a frequency curve similar to the direct sound?
My guess from the images I have seen is that the lateral reflections were dominant, delayed by about 20 ms and attenuated by about 8 dB, with a frequency curve similar to the direct sound?
Do you think it is possible to extrapolate, from an individual speaker's spinorama data and from each given room's relevant data, via a custom algorithm, how this specific speaker will sound in that specific room?
With a few more new double-blind studies of how different variants of reflections are perceived with respect to reflection angle, attenuation and delay, I predict that this should be possible.
Maybe I am overoptimistic.
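As a rough, hypothetical sketch of the simplest model behind that question (the 20 ms delay and 8 dB attenuation are just the guesses from this post, and treating the reflection as a pure delay-and-gain copy of the direct sound is a deliberate simplification):

```python
import numpy as np

FS = 48000          # sample rate in Hz
DELAY_MS = 20.0     # guessed lateral-reflection delay from the post above
ATTEN_DB = 8.0      # guessed attenuation relative to the direct sound


def add_single_reflection(direct: np.ndarray) -> np.ndarray:
    """Mix the direct sound with one delayed, attenuated copy of itself.

    Assumes the reflection shares the frequency response of the direct
    sound, i.e. it is modeled as pure delay and gain.
    """
    delay_samples = int(round(FS * DELAY_MS / 1000.0))
    gain = 10.0 ** (-ATTEN_DB / 20.0)
    out = direct.copy()
    out[delay_samples:] += gain * direct[:-delay_samples]
    return out


# Example: one second of noise standing in for program material.
signal = np.random.default_rng(0).normal(size=FS)
at_listening_position = add_single_reflection(signal)
```

A real prediction from spinorama data would of course also need the speaker's directivity at the relevant reflection angles and the room's surface absorption, not just a single delay and gain.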
 
Rubbish (at least if you want us to take that literally). If you hear a thing but think 'I could well be imagining that' then you are not perceiving differences to be meaningful 100% of the time. To the contrary, you are thinking 'that probably isn't meaningful'. You've provided another example of idiomatic/rhetorical use of quantitative terminology.

Yup.

That is the type of language overreach I was speaking to in my long-winded post and many other such posts… (and I’m not surprised to see that you-know-who “liked” that post).

It almost borders on a sort of skeptical nihilism.

When Travis writes about sighted listening:

Science, through blind testing, has established that the perceived differences are rarely meaningful.

He doesn't seem to realize this would entail that measurements are rarely meaningful to perceived differences.

(in the real world conditions in which people listen to loudspeakers)


Measurements will confirm the differences 100% of the time.

Sure, but given your previous comment, what’s the relevance of the measurements if they don’t help us predict what we will perceive in the actual (sighted) conditions in which we listen to loudspeakers?
 