It's possible (though unlikely) for a headphone model to have terrible unit variation, but good channel matching.
That would require extensive binning by the manufacturer, which is surely more expensive than simply tightening the manufacturing tolerances.
This is actually not so uncommon.
The more unit variation is expected at the speaker (driver) level, the more likely the manufacturer is to use IQC (incoming quality control) data to match speakers, at least within their respective allocated batch.
As in: measure all speakers planned for use that day and pair them up. (Which means that, theoretically, a better-matching speaker could exist in tomorrow's batch, but it would be ignored and matched with something else.)
What constitutes "matched" is up to the manufacturer of course, but typically matching will be done to less than 1 dB.
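To make the batch-matching idea concrete, here is a minimal sketch of how pairing within a day's batch might work. Everything here is hypothetical: real IQC matching compares full frequency responses, not the single made-up sensitivity number per driver used below, and the greedy sort-and-pair strategy is just one plausible approach.

```python
import random

def match_drivers(sensitivities_db, tolerance_db=1.0):
    """Sort measured driver sensitivities, then greedily pair
    neighbours whose difference is within the matching tolerance.
    Unpaired drivers are returned as leftovers (they might be
    matched against a later batch, or rejected)."""
    ordered = sorted(sensitivities_db)
    pairs, leftovers = [], []
    i = 0
    while i < len(ordered) - 1:
        a, b = ordered[i], ordered[i + 1]
        if abs(a - b) <= tolerance_db:
            pairs.append((a, b))
            i += 2
        else:
            leftovers.append(a)
            i += 1
    if i == len(ordered) - 1:  # odd one out at the end
        leftovers.append(ordered[-1])
    return pairs, leftovers

# Simulated batch: 10 drivers around 100 dB nominal sensitivity,
# with a made-up 0.8 dB standard deviation.
random.seed(0)
batch = [round(random.gauss(100.0, 0.8), 2) for _ in range(10)]
pairs, leftovers = match_drivers(batch)
```

Sorting first means each driver is compared against its closest candidate, which is why tight pairs fall out naturally even from a batch with wide overall spread.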
Some manufacturers don't use matching at all and just rely on the fact that manufacturing tolerances for the speakers are so tight that any two speakers are always close enough together.
Of course, matching is typically done only at the speaker level, but there are other components in a headphone besides the speaker, and these also affect the sound and hence the channel matching (and unit variation).
Meaning that even if two speakers were matched to 0.01 dB, the end user could experience more than 0.01 dB of channel mismatch (e.g. due to the earpads not being 100% identical, the damping foam in the earcup, etc.).
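One way to reason about why the other components dominate: if each component's left/right deviation is treated as an independent random error (in dB), the standard deviations combine in quadrature, so the largest contributor swamps a tightly matched driver. The component names and sigma values below are made up purely for illustration.

```python
import math

# Hypothetical per-component channel-deviation sigmas, in dB.
component_sigma_db = {
    "driver matching": 0.01,   # tightly matched pair
    "earpad variation": 0.30,  # made-up value
    "damping foam": 0.20,      # made-up value
}

# Root-sum-square of independent errors: the 0.01 dB driver
# matching contributes almost nothing next to the other terms.
total_sigma = math.sqrt(sum(s ** 2 for s in component_sigma_db.values()))
```

With these example numbers the expected mismatch is roughly 0.36 dB, dominated entirely by the non-driver components, which matches the point above: driver matching alone doesn't guarantee channel matching at the ear.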