Many excellent phono cartridges can "only" muster 30 to 40 dB of channel separation, yet that's plenty to make these differences audible. Think back to when stereo was first introduced and promoted to a skeptical public already happy with their mono systems, pushed along by heavily panned records like "Persuasive Percussion". Very boring after the first listen, but they proved to the most doubtful that even a lowly phono cartridge or tape head could deliver a convincing stereo effect, and that doubling up the system was worth the money when they could afford it.

In any case, there's a good reason we use a logarithmic decibel scale: dB = 10·log10(P2/P1). In linear power terms:

30 dB is a ratio of 1,000
40 dB is a ratio of 10,000
50 dB is a ratio of 100,000
60 dB is a ratio of 1,000,000

Too many zeros to keep track of, so decibels are better when the differences are this vast. We're already in the realm of "enough" with a "mere" 30 to 40 dB of separation. Tape is better still, and digital better yet, always depending on the master. In panning across the sound-stage, what difference does it make whether the left side is 1,000 times or 1,000,000 times stronger or weaker than the right, and vice versa?
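As a quick sanity check on those figures, here's a minimal Python sketch of the conversion (the function names db_to_power_ratio and power_ratio_to_db are just illustrative, not from any library):

    import math

    def db_to_power_ratio(db):
        # Linear power ratio P2/P1 from decibels: dB = 10 * log10(P2/P1)
        return 10 ** (db / 10)

    def power_ratio_to_db(ratio):
        # Inverse: decibels from a linear power ratio
        return 10 * math.log10(ratio)

    for db in (30, 40, 50, 60):
        print(f"{db} dB = {db_to_power_ratio(db):,.0f}x in power")
    # Prints: 30 dB = 1,000x ... 60 dB = 1,000,000x, matching the list above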
For a particular recording we do know that the sound-stage left to right at its front edge (a line across the front of the speakers) is set by channel separation, and that depth within the overall sound-stage is set by the amplitude of the signal at that L/R position (see the sketch below). If so, the most linear amplifiers should place depth most accurately because of their high amplitude linearity, but does this consistently correlate with what's heard?
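As a rough illustration of that model, and not a claim about how any given recording was actually mixed, here's a sketch using a standard constant-power pan law: the L/R position comes from the interchannel level difference, and apparent depth is modeled purely as overall attenuation. The function name place_source and the pan/depth parameterization are assumptions for illustration.

    import math

    def place_source(pan, depth_db):
        # pan in [-1, 1] (hard left to hard right); depth_db is attenuation in dB.
        angle = (pan + 1) * math.pi / 4      # map [-1, 1] -> [0, pi/2]
        gain = 10 ** (-depth_db / 20)        # amplitude ratio, hence /20 (not /10 as for power)
        left = gain * math.cos(angle)        # constant-power (sin/cos) pan law
        right = gain * math.sin(angle)
        return left, right

    # A source hard left at the front edge vs. one centered and 12 dB "deeper"
    print(place_source(-1.0, 0.0))   # (1.0, 0.0)
    print(place_source(0.0, 12.0))   # roughly (0.178, 0.178)

Note the division by 20 rather than 10: the formula earlier is for power ratios, while speaker drive signals are amplitudes, where dB = 20·log10(V2/V1).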
What about image height above or below the plane of the speakers? Perhaps that's just the psychoacoustics of expectation?