While my research is ongoing, I want to share a few early findings and ask a few questions.
2. This video on Audio Blind Testing and Listener Training provides at least some anecdotal evidence that test results from a "critical" listener who knows what to listen for can differ from the test results of naive or untrained listeners:
Of course, anecdotal evidence falls short of sound inference. Does anyone have a reference to papers providing evidence of a statistically significant difference between naive and critical listeners in a controlled study?
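For anyone unsure what such a finding would look like, here is a minimal sketch of the kind of analysis a controlled study might report: a one-sided two-proportion z-test on ABX-style correct-answer counts from a trained group versus a naive group. The counts are entirely made up for illustration, and a real study would likely use an exact test and account for per-listener variation.

```python
# Hypothetical sketch: do trained listeners score better than naive ones?
# Two-proportion z-test (normal approximation); counts are illustrative.
from math import erf, sqrt

def two_proportion_z(success_a, n_a, success_b, n_b):
    """One-sided z-test of H1: p_a > p_b, using a pooled variance estimate."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # One-sided p-value from the standard normal survival function.
    p_value = 0.5 * (1 - erf(z / sqrt(2)))
    return z, p_value

# Made-up data: trained listeners 52/64 correct, naive listeners 38/64.
z, p = two_proportion_z(52, 64, 38, 64)
print(f"z = {z:.2f}, one-sided p = {p:.4f}")
```

With these invented counts the difference would be significant at the usual 0.05 level; the point is only to show what "statistically significant difference between groups" means operationally.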
2. Any inferences drawn from the public listening tests cited in this thread - Kamedo2 Multiformat and Archimago Musings - are subject to the limitations of their experimental designs. We can learn a few things about how the Kamedo2 Multiformat test was conducted here https://listening-test.coresv.net/ , but there's no discussion of methodology.
Can anyone provide a reference to further information on the Kamedo2 Multiformat design and methodology?
Archimago does provide some discussion of methodology, with caveats acknowledged, here: http://archimago.blogspot.com/2013/02/high-bitrate-mp3-internet-blind-test_2.html
3. ITU-R BS.1116, section 3 "Selection of listening panels", restricts listening panels to expert listeners only, and those listeners are subject to pre- and post-screening. See also Attachments 1, 2 and 3 to Annex 1 for further qualification of listener expertise.
ITU-R BS.1116 is useful both as a standard for conducting listening tests and as a yardstick for evaluating claims made about listening tests.
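One concrete way such a yardstick gets applied is post-screening of individual results: checking whether a listener's run actually beat chance guessing. Below is a minimal sketch of an exact binomial p-value for a single ABX run; the function name and the example counts are mine, not from BS.1116 or any of the tests cited above.

```python
# Hypothetical sketch: exact binomial p-value for one listener's ABX run.
# Null hypothesis: the listener is guessing, with success probability p0 = 0.5.
from math import comb

def abx_p_value(correct, trials, p0=0.5):
    """P(X >= correct) for X ~ Binomial(trials, p0), i.e. chance of doing
    at least this well by pure guessing."""
    return sum(comb(trials, k) * p0**k * (1 - p0)**(trials - k)
               for k in range(correct, trials + 1))

# Illustrative run: 12 of 16 correct gives p ~= 0.038, below the
# conventional 0.05 threshold, so guessing alone is an unlikely explanation.
print(f"{abx_p_value(12, 16):.4f}")
```

A screening rule built on this would also need to fix the number of trials in advance; re-running until a low p-value appears invalidates the test.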
I look forward to reporting more findings in the coming weeks.