As for my pay grade: okay, I'm a software engineer, not an audio engineer. On the other hand, I've had articles on aspects of human consciousness published in academic journals and presented at academic conferences, and I've been a serious amateur musician for 55 years. I'm not claiming "ears" are more sensitive; I'm claiming brains are more capable than instrumentation in some aspects of discernment, including aspects we can't give verbal reports of. There's an abundant neuroscience literature on this.
One typical, often-repeated type of experiment involves "priming" the mind by flashing a word or image for a briefer time than the subject can consciously see or report on. That word or image will affect their subsequent interpretation of, and reaction to, things they can consciously report on. Another line of experiments, heavily explored over the last decade, concerns "change blindness": show a picture of a scene, then a brief intermediate distraction, then another picture of the same scene but with a major change. Most people, most of the time, cannot see or report the change -- or even tell that one has been made -- despite the changed features being large and clearly visible in the pictures. (Google "change blindness" for examples.) Differences between images that are obvious side-by-side are not necessarily obvious when the images are viewed sequentially. By implication, sequential audio comparisons are also likely to miss major differences that would be apparent in a simultaneous, side-by-side comparison.
I'd suggest from my own experience of mixing amps and speakers in a common environment that we can tell audio differences when they're presented simultaneously, side by side, especially over a prolonged listen. But I've not conducted formal lab experiments on this. Still, we are more visual than auditory creatures; there's little reason to think we should be better at sequential comparison in the auditory realm than we are in the visual.
As for the implication of the reality of unconscious "priming": it has been shown to affect both what we subsequently consciously notice and how we interpret it. Since music is a sequential process, how we're primed in one moment, even unconsciously, can affect what we consciously perceive in subsequent moments.
Look, I know it would be easier from the engineer's standpoint if we could just count on our technology to do everything our brains can. I know some really smart guys who have the fantasy of uploading their minds into supercomputers. But the reality is that our supercomputers are much better than our brains at some things, while there's other stuff our brains do that supercomputers can't touch. As a software engineer I respect the hell out of electronic technology. But I also see where it comes up short against our own native capabilities.