That’s not my point or my position. I agree that in many or most cases we would not prefer a flat transfer as the final released product that we listen to.
But the point of a master source is to be the original mixdown recording, a “clean slate” from which a final master can be produced for the commercial release that we listen to.
Because we have plentiful empirical evidence that multiple masterings/(re)issues of music on CD sound different from each other, and that the first issue is not always flawless or widely preferred, it’s obvious that we want the original mixdown master as the archival base source from which future releases can be made. This isn’t “purist” in the way you’re claiming. It’s just good practice and common sense, and I’d have thought that was obvious from my prior comment.
It’s bad if the only existing copy of an album is a CD that has a particular mastering engineer’s 1980s transfer equipment (which might very well include a sub-optimal/misaligned playback deck and/or non-transparent studio analogue gear before the ADC) and EQ choices baked into it. That reduces the fidelity of future releases that have to use that CD as their base source, and none of those baked-in characteristics can be undone or reversed with any precision.
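To illustrate with a toy example (a minimal sketch in Python; the one-pole treble roll-off is a made-up stand-in for a baked-in EQ curve, not a model of any actual deck or engineer’s choices): a linear EQ can only be reversed exactly if its transfer function is known exactly, even a slightly wrong guess leaves a permanent residual, and nonlinear artefacts like tape saturation have no inverse filter at all.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
x = rng.standard_normal(44100)             # 1 s of noise standing in for the master

# Hypothetical baked-in EQ: a gentle one-pole treble roll-off,
# H(z) = (1 - a) / (1 - a * z^-1).
a_true = 0.20
baked = signal.lfilter([1 - a_true], [1, -a_true], x)

# Exact inverse, only possible because we know a_true precisely:
undone = signal.lfilter([1, -a_true], [1 - a_true], baked)
print(np.max(np.abs(undone - x)))          # effectively zero: exact reversal

# A later engineer guessing the baked-in curve slightly wrong:
a_guess = 0.22
guessed = signal.lfilter([1, -a_guess], [1 - a_guess], baked)
err_db = 20 * np.log10(np.std(guessed - x) / np.std(x))
print(f"residual error relative to signal: {err_db:.1f} dB")  # never goes away
```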
The hardware analogy to the position you’re taking here would be to say that from a “purist” point of view it would be ideal to use fully linear, load-independent amplification, but if high-quality solid-state amps were all destroyed in a fire then it’s no big deal to use a tube amp and just make it more linear with EQ and DSP in the playback chain. That position wouldn’t fly here, and for good reason.
I have noticed here at ASR a tendency among some members to ignore or dismiss the importance of mastering in the perceived sound quality of the listening experience, and I find that bizarre. Yes, mastering comes before the playback chain, and there’s no objective measurement standard for right vs. wrong in mastering. But that doesn’t mean mastering makes no sonic difference: when multiple masterings of the same album exist, their differences are often plainly audible.
So having the original, unadulterated stereo source available as a basis for mastering is important and goes way beyond a “purist” concern for anyone interested in fidelity as the standard, as most of us here at ASR are.
I'm trying to tread carefully here, but there are a lot of ifs floating around.
We have no guarantee that the original mixdown recording is a "clean slate". The mixing engineer could just as easily have been using a sub-optimal/misaligned tape deck and/or non-transparent studio analogue gear, may have been mixing on an analogue desk with less-than-perfect faders, and could also have been recording to another sub-optimal, misaligned tape deck.
So in most cases, we'd just be insulting engineers who were doing their best with the equipment they had. Guessing that the mastering engineer in the 1980s couldn't set up their deck, or went out of their way to use poor amplification, is just that: guessing. But I fancy the mixing process is, by its nature, much more likely to produce errors than playing back a two-track master into an ADC. There's nothing necessarily pure about a studio master; it is usually what we have, though. And to refer back to your hardware example, I'm not convinced that we have a software equivalent of fully linear, load-independent amplification anyway.
There are two bigger problems. The first, which is definitely there in your comments, is the taste of the engineer. But for a lot of those transfers, the engineer's taste would have been more in line with the contemporary taste of when the recording was made than a modern engineer's is today, and that may matter more for the presentation. The second, covered in posts here already, is the quality of the master. But again, if the stereo master was used, it's likely to have been cleaner then than it is now, and the tape player in better condition too, compared to the aging beasts in use today (of course today's engineers are doing great work on them, but it's still an issue). The ADC wouldn't be up to modern standards, granted. But where are the measurements showing that it's guaranteed to be audibly inferior? And by how much?
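To put a rough number on that question (theoretical ideal-quantizer figures only, not measurements of any particular early-1980s converter): an ideal N-bit converter's dynamic range is about 6.02·N + 1.76 dB, while a well-maintained analogue tape master typically sits somewhere around 60-70 dB SNR.

```python
# Ideal quantization dynamic range: DR ~ 6.02 * bits + 1.76 dB.
for bits in (14, 16, 24):
    print(f"{bits}-bit: {6.02 * bits + 1.76:.0f} dB")
# 14-bit: 86 dB
# 16-bit: 98 dB
# 24-bit: 146 dB
```

So even an early converter's quantization floor sits below the tape hiss; whether its anti-aliasing filters, clock, and linearity were audibly flawed is exactly the measurement question.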
I do agree that mastering is important. However, I'd argue that the largest component of the difference between early masters and today's is going to be subjective. And we have to get off the remastering bandwagon at some point, especially for analogue originals. It's less of a big deal than you're making out.
The exception to this is the original multitracks, which, if well preserved, allow us to have Atmos mixes and the like.
The point is to be able to listen to the music, ultimately. And to turn your hardware analogy on its head: if it suddenly became impossible to use fully linear amplification to listen to music, I'd be first in the queue for that tube amp. For all our audiophilic leanings, at the end of the day I'm here to listen to recorded music, and if I had to listen on a 1960s cassette deck or a 1930s wireless set, I would. If I can't have the standard of equipment I have in my living room right now, I'm not giving up. For me, that position would absolutely fly here. What's the alternative, give up?