I swear I am like a speaker magnet, the more I get rid of the more that show up, wish it was women....
Perhaps this will help:
With this hypothetical scenario, is your point that sighted tests are superior to blinded tests in determining the overall enjoyment from a speaker? If so, I would agree that the answer is yes, in some situations. For instance, if a loudspeaker is so ugly that your spouse makes you cover it up with a bedsheet every time you listen to it, then yeah, that's going to affect your enjoyment. However, the question we're trying to answer with blinded tests is not overall enjoyment. The question is how preferred the loudspeaker is based JUST on its sound quality and nothing else (to the extent possible). Two different questions, two different methods to answer them.
Sometimes I think that may be the audiophile curse: you KNOW that your equipment isn't as good as the reviewers'/friends'/dealers'/etc., and your cognitive biases lead you to find problems that aren't there, like power cords, bad electrical supply, cheap interconnects and speaker cables, or theoretical problems like the amp using too much feedback or 'stair-step' digital conversion.
I realize that in most cases these 'problems' don't exist, or are easily fixed with heavier-gauge speaker cables or better-shielded interconnects, but it does tend to lead one down the rabbit hole of upgrading: you expect an improvement, so you hear it, until you read a review that says this cable/amp/power conditioner/..... is better and gives a blacker background to the music. I've rambled a bit OT, but the natural skeptic/scientist in me always doubted the truth of the golden-eared audiophile, though not enough to ignore them.
That's absurd. No one has taken that position. I don't even do listening tests on the bulk of electronics.
The issue is for speakers.
I just found ASR a while ago and have seen a war of sorts online. I have even been told on this forum to "go away" and been accused of being a subjectivist. HAHAHA
I wish people could just see that everything is a data point. blind listening, sighted listening, mono, stereo, all just data points. More data points may help you or they may not.
Whether you do a test blind or sighted, there is no guarantee of correctness. Every test has a margin of error. Turn a sub on and off in your room. Do you need a double-blind test to trust what it does in your room? No. A blind test would generate the same result as a sighted test.

It has been proposed that sighted evaluation (which IME is not an acceptable protocol for research, for well-known reasons) can yield useful data so long as *trained listeners* are doing the evaluating.
No, he did the study because the people who sold audio products, marketed them, and designed them had no use for controlled testing of any kind, or for the job he was going to do at the company. It was personal for him to demonstrate that these people were not qualified to make critical decisions about the fidelity of their speakers relative to the competition.

What the industries do and what they should do is a reason why Dr. Olive wrote the article and did experiments about this.
That's totally consistent with what I explained. Everyday testing and evaluation is performed sighted. Then, at the end of the process, you perform double-blind tests.

"In other words, if you want to obtain an accurate and reliable measure of how the audio product truly sounds, the listening test must be done blind. It's time the audio industry grow up and acknowledge this fact, if it wants to retain the trust and respect of consumers."
Klippel is not science; it is a set of measurements. Those measurements are very difficult to interpret as "buy/don't buy" against countless other speakers with similar-looking measurements. A 1 dB peak at 600 Hz is not the same as a 1 dB peak at 1.5 kHz, yet the score may be identical. We need to bridge that gap so people can purchase speakers without listening to them, which is the norm today.
If we had a scoring system that we could all stand behind so completely that, if it said speaker A is better than speaker B, that would be the "truth," then sure, I would not need to do listening tests. But we are not there. The scoring system is like a compass that shows you north. It is not a turn-by-turn navigation system for driving in the city.
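As a toy illustration of the gap described here (hypothetical numbers and a made-up aggregate metric, not the actual preference-score formula), this sketch shows how two audibly different 1 dB peaks can collapse to the identical score:

```python
import math

def rms_deviation_db(response_db):
    """RMS deviation from a flat (0 dB) target across measurement bins."""
    return math.sqrt(sum(d * d for d in response_db) / len(response_db))

# Two hypothetical on-axis responses over 10 log-spaced bins,
# flat except for a single 1 dB peak at different frequencies.
flat = [0.0] * 10
peak_600hz = flat[:]
peak_600hz[3] = 1.0   # 1 dB bump in the bin containing 600 Hz
peak_1500hz = flat[:]
peak_1500hz[5] = 1.0  # 1 dB bump in the bin containing 1.5 kHz

# Any metric that only aggregates deviation magnitude scores them
# identically, even though the two peaks are not equally audible.
assert rms_deviation_db(peak_600hz) == rms_deviation_db(peak_1500hz)
```

The metric throws away *where* the deviation sits on the frequency axis, which is exactly the information a listener's ear does not throw away.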
Also, when I first started to do measurements, people kept asking me what I recommend. I refused to say. We had a bunch of debate threads about them. Eventually I got tired of answering those questions in private and in public and added the recommendations. That has proven to be hugely popular and rarely controversial. Today I cannot give such recommendations without listening to a speaker. So, as much work and aggravation as it has turned out to be, I listen and provide that as a factor in my recommendation.
And no, not all "human beings" are the same. Which one of you has been exposed to nearly 80 speakers in the last seven months, where you could compare and correlate measurements with what you hear? The answer is none. In other words, I am not situated like any of you. There are many things that apply to you that don't apply to me, and vice versa. We rely on the informed opinion of experts in real life all the time. Not sure why it is such a big deal to do the same in audio.
I would be fascinated to read a review by you of an otherwise excellently measuring, smooth-FR speaker with narrow dispersion.
or narrow desperation - it might add to the drama in the music
Ranking of speakers G, D and T did not change in sighted versus blind. Only speaker S changed.
Yes. A typical test consists of 8 trials where the order of speakers and program are randomized. In each trial, the listener can switch among the different speakers as many times as they like, changing their scores as they see fit. Once they are satisfied with their scores, comments, and any other scales we include (spectral balance, distortion, etc.), they hit a button (DONE) and move on to the next trial. One of the eight trials is a repeat, so we get a measure of how consistently each listener rates the speakers for each program.

In the Harman blind speaker tests, do they allow you to go back and change the ratings you gave to speakers after you hear subsequent speakers?
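The trial plan described here can be sketched in a few lines. This is a hypothetical reconstruction from the description; the function and variable names are mine, not Harman's:

```python
import random
from collections import Counter

def make_session(speakers, programs, n_trials=8, seed=None):
    """Build a listening session: each trial gets a randomized speaker
    order and program, and one earlier trial is duplicated so listener
    consistency can be measured from the repeat."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials - 1):
        order = list(speakers)
        rng.shuffle(order)  # randomize presentation order of speakers
        trials.append((tuple(order), rng.choice(programs)))
    # Re-insert one earlier trial at a random position as the repeat.
    repeat = rng.choice(trials)
    trials.insert(rng.randrange(len(trials) + 1), repeat)
    return trials

session = make_session(["G", "D", "S", "T"], ["jazz", "rock", "vocal"], seed=1)
assert len(session) == 8
assert max(Counter(session).values()) >= 2  # at least one trial repeats
```

Keeping the repeat hidden among the other trials is the point: the listener cannot special-case it, so the score difference between the two presentations is an honest estimate of their rating noise.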
Yes, it also means that untrained listeners have the same "authority" about preferences.
Let's not forget that those preference findings are generalisations and are very useful for companies that want to make a profit. I guess that should not prevent people from trying different ways of enjoying the audio experience.
What audio companies do NOT want to make a profit?? Only the ones that won't be in business for very long
Doing rigorous blind testing of products costs a lot of time and money in R&D. Many audio companies would argue these tests cost too much money and reduce profit.
Also, if we only cared about profit, would we publish our research so that other companies can use it freely to improve their products? Would we help create loudspeaker measurement methods and standards that are used on this site to measure our loudspeakers and competitors' so that consumers have better data to make informed purchase decisions?
Certainly we believe science & research are necessary to stay competitive over the long haul, but part of what we give back to the industry is not just motivated by profit, but to help raise the standards and bar of the industry. In the long run, that is good for the industry but also good for the consumer.
Thanks for the response. For the very first blind shootout I organized for myself and friends, we recorded both a preference and a score, but we've just done preference rankings since then, due to how much we all struggled with scoring the first time. I want to give scoring another try for the next one we do. I feel like it's something that will get easier with practice and hearing more speakers. I really like the idea of repeating the test to judge listener consistency, I'm definitely gonna try that next time.
Your gracious contribution of your time and thoughts is greatly appreciated.
I'm not sure it will in the short term. The focus right now seems to be immersive audio and how to improve its performance and deliver it to more people over headphones, loudspeakers, and in the car. Besides movies, music, and gaming, there are applications in VR and AR.
Might I ask, given the present emphasis on headphone evaluations, do you think significant time and effort will be spent by the industry in the near future on further speaker research?