In the spirit of charity, let me attempt to clarify where you are running into resistance here. I'm proposing a test model that is common, routine, and normal, and has been used every day for close to 100 years in university and corporate research settings: perception, learning, memory. This is in fact old-hat textbook science. Take it or leave it. Deny it or learn it. Ignore it or embrace it. Science is notoriously inefficient and costly. CERN's member states spent billions of euros and decades building the Large Hadron Collider without being sure it would work. Billions spent on a risky project.
Human Factors is mainstream science:
https://journals.sagepub.com/home/hfs
https://www.hfes.org/about-hfes/what-is-human-factorsergonomics
https://www.verywellmind.com/what-is-human-factors-psychology-2794905
My view is that the endless bickering in audio between the objective and subjective crowds likely follows from the lack of resources to appropriately test the relevant phenomena (e.g., documented and predictable human illusions/internal cognitive constructs). Those experiencing illusions sense what they sense, and this likely can never be tested with the objective measurement devices now in use. The closest thing that comes to mind is functional magnetic resonance imaging (fMRI), which scans the brain as people engage in a task, but the magnets would interfere with the audio equipment and render it useless. Useful answers might indeed require a shift to another, more structured setting. But audio is just a hobby.
Genuine audio science is something very different from reporting standardized measurement summaries. I encourage people to avoid freezing on the technology and tools of a given era when newer computational resources might finally answer some of the weird, ephemeral, and dare I say "euphonic" things in audio. As always, I'm not much of a believer in euphonic factors; I just seek scientific explanations that build on 100 years of data from 1,000 universities and corporations.
I worked with a professor once who was testing words that change meaning when the stress position changes. The project ran into trouble because the team couldn't tell the difference between RECORDED words that were supposed to mean different things. They often mixed up their RECORDINGS and therefore could not apply the correct test conditions. Since everyone used and heard these words in normal conversation, no one outside the team believed the findings. ASR and others risk a similar outcome if/when they deny similar, frequently reported subjective experiences. The underlying question for this, and for all things in human experience, is: "What part of the sensory experience is external to the person, and what part does that person construct in their own head? How was it transformed?" This is the essence of research psychology. Your smartphones were developed and refined with research psychology (e.g., Siri, Alexa, and much more).
Sample words whose meaning changes as the stress position changes:
https://jakubmarian.com/english-words-that-change-meaning-depending-on-the-stress-position/
You've pretty chronically failed to differentiate between testing for the audible elements within a signal and testing factors that influence perception, including those not present in the signal. I don't think anyone here is arguing that there is no qualitative experience on the part of people who report hearing these "euphonic" phenomena - the argument is about whether that perception is actually related to the signal characteristics, and that's what @SIY is proposing to test. If you cannot discern the difference between two products on the basis of their signal impacts, then whatever difference you discern must come from other causes - this isn't, as near as I can tell, a point of disagreement between you, me, or SIY. Heck, I don't even think we'd disagree that if the sighted impact of a component on the perception of an audio system's sound were positive, that would be a design merit - if a lightbulb on top of the amp makes people, on average, enjoy the music more, that's value to the customer in my book, regardless of the transfer functions. Edit: Note that this claim isn't being accepted out of hand - I'd actually like to see it specifically tested.
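To make the testing point concrete, here's a minimal sketch of how an ABX result is commonly scored: a one-sided binomial test of the listener's correct calls against chance (p = 0.5). The trial count and the 0.05 cutoff below are illustrative assumptions, not anyone's published protocol.

```python
# Minimal ABX scoring sketch: how likely is this score from pure guessing?
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided p-value: P(>= `correct` right out of `trials` when guessing)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Illustrative numbers: 12 correct out of 16 trials.
p = abx_p_value(12, 16)
print(f"p = {p:.4f}")  # ~0.038, below a conventional 0.05 threshold
```

A null result here doesn't say the listener experienced nothing; it says the experience didn't track the signal, which is exactly the distinction above.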
FWIW, I would agree that measurement blocks from an APX aren't particularly science, particularly using expressions of relative nonlinearity that were derived in an era when twin-T filters were novel, and I'm a pretty vociferous antagonist to people claiming that a low-quality-by-the-standards-of-amplifiers-or-DACs measurement necessarily implies poor-quality sound (admittedly, mostly because I just keep shouting about how the weakest link in the chain is the transducers, almost regardless). If folks are taking a hierarchy of SINAD performance as a hierarchy of subjective perception then... well, that, too, can really only be attributed to psychological factors. I would expect an ABX between something from the low quintile of Amir's SINAD hierarchy and something from its top to produce a similarly null result, because the signal differences aren't dominating the differences in perception there.
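For readers who haven't met the number: SINAD is just the ratio of the test tone's power to everything else in the capture (noise plus distortion), expressed in dB. Below is a rough sketch against a synthetic bin-centered tone; the test frequency, FFT length, and tone bandwidth are arbitrary illustration choices, not an Audio Precision procedure.

```python
# Rough sketch of a SINAD estimate from a captured sine wave.
import numpy as np

def sinad_db(x: np.ndarray, fs: float, f0: float, bw: float = 3.0) -> float:
    """SINAD = 10*log10(tone power / (noise + distortion power))."""
    spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    tone = np.abs(freqs - f0) <= bw   # bins carrying the test tone
    rest = ~tone
    rest[0] = False                   # ignore the DC bin
    return 10 * np.log10(spectrum[tone].sum() / spectrum[rest].sum())

# Synthetic check: a 1.5 kHz tone (bin-centered at this fs and FFT length)
# plus a little white noise; expect a result of roughly 75-80 dB.
fs, n = 48_000, 1 << 16
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 1500.0 * t) + 1e-4 * np.random.randn(n)
print(f"SINAD ~ {sinad_db(x, fs, 1500.0):.1f} dB")
```

Note that the single number says nothing about which residual components a listener could actually hear - which is the whole problem with treating a SINAD ladder as a perception ladder.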