It's an interesting article. Setting aside the primary purpose of the investigation, some of its concepts carry over here. It highlights the limitations of current algorithms that model human hearing, and I think those algorithms will keep getting better as software and AI tech advance. It also brings to light the fact that the physical structure of the human ear itself is difficult to model.
Here is some food for thought: suppose we built a form of AI that replicates human hearing perfectly, with replicated human intelligence, and it is strong AI to the point where the machine actually experiences what it is like to hear a sound or an entire song. Would we then be able to use software to determine whether the AI "enjoys" one song over another, or one pair of headphones over another? Could we measure that enjoyment? Maybe. We would certainly have access to substantially more physical facts about what makes one mind's processing of a sound more enjoyable to it than to another mind.

I don't think we would have a complete description, though. We would still be left with non-physical phenomena that are irreducible to physical facts: what the subjective experience is actually like when you listen to one pair of headphones versus another. You might be able to replicate the heuristics a mind uses to draw its subjective conclusions, but that still isn't enough to gain access to another person's first-hand subjective experience.

To bring things back to present reality: at this point we still have only elementary algorithms for the brain's auditory processing, and most of that tech hasn't even been applied to mainstream audio analysis machines, which are limited to a fairly small set of measurements, largely relating to distortion and other quantities that people probably don't actually take into account when they judge one piece of equipment against another.
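As an aside on that last point, here is a minimal sketch (assuming Python with numpy, a 1 kHz test tone, and a made-up helper name thd_percent) of the kind of conventional distortion measurement those analyzers report, namely total harmonic distortion estimated from an FFT of a captured signal:

```python
import numpy as np

def thd_percent(signal, sample_rate, fundamental_hz, n_harmonics=5):
    """Estimate THD as the ratio of harmonic energy to fundamental energy."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)

    def peak_amplitude(target_hz):
        # Take the strongest bin in a small window around the target frequency.
        window = (freqs > target_hz * 0.98) & (freqs < target_hz * 1.02)
        return spectrum[window].max() if window.any() else 0.0

    fundamental = peak_amplitude(fundamental_hz)
    harmonics = [peak_amplitude(fundamental_hz * k) for k in range(2, n_harmonics + 2)]
    return 100.0 * np.sqrt(sum(h ** 2 for h in harmonics)) / fundamental

# Example: a 1 kHz tone with a little second-harmonic distortion mixed in.
fs = 48_000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1000 * t) + 0.01 * np.sin(2 * np.pi * 2000 * t)
print(f"THD ~ {thd_percent(tone, fs, 1000):.2f}%")
```

A single number like this says nothing about how the hearing system, let alone the listener's experience, responds to the device being measured, which is the gap I'm pointing at.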