Do high-resolution files sound better than, say, a quality Red Book one?
The answer is YES and NO.
Really? How can that be? After all, the amount of PCM data contained in a 24-bit/192 kHz file is substantially more than in a Red Book standard 16-bit/44.1 kHz file, right?
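For concreteness, the raw data-rate difference is easy to compute. A quick sketch (my own illustration, assuming uncompressed stereo PCM):

```python
def pcm_bitrate(bit_depth, sample_rate_hz, channels=2):
    """Raw uncompressed PCM data rate in bits per second."""
    return bit_depth * sample_rate_hz * channels

redbook = pcm_bitrate(16, 44_100)   # Red Book CD: 1,411,200 bits/s
hires   = pcm_bitrate(24, 192_000)  # hi-res file: 9,216,000 bits/s

print(hires / redbook)  # the hi-res stream carries ~6.5x the raw data
```

So the hi-res file really does contain about six and a half times the bits. Whether those extra bits are audible is a separate question.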
There are two questions at hand here:
1. Does one sound better than the other, as you ask.
2. Does it matter?
My opinion is focused on #2. We can argue till the cows come home about #1. But as a practical matter, that question is moot. When the Meyer and Moran test was conducted, new physical formats (SACD and DVD-A) were being introduced, both with restrictive copy-protection measures. Gearing up to produce all-new physical formats is a huge undertaking, ultimately costing hundreds of millions of dollars, which consumers would have had to pay in one form or another (most likely through more expensive discs). So whether there was technical merit was a huge deal. The reason for existence had to be established first.
Note that record labels then were fighting piracy as their #1 enemy. So for them, the new copy-protection mechanisms were very important. As were higher retail prices for discs.
Today, the situation is far, far different. High-resolution audio is distributed online. There is no new physical format to be invented. And the cost of storage is a fraction of the cost of the music itself, whether on your NAS or other favorite form of music storage. Online bandwidth is also plentiful and, in the case of the US, unlimited, so there are no barriers here.
Most importantly, record labels have given up on copy-protecting high-resolution content, so we get it with the same freedom as the CD: we can copy it, make it smaller, compress it, etc.
So sitting here as an audiophile, do I want the files that were mastered in stereo at 96/24, or do I want to demand that they first truncate them to 16/44.1? My answer, as I mentioned in the article, is simple: give me the master. I can truncate it if I want, thank you very much.
And the CD is still there should someone want the bargain, although for how long remains a big question.
Technically, and to the question you asked, 16 bits is insufficient to represent the ear's dynamic range (which is closer to 116 dB than 130). So we want 24 bits for full transparency. That we can prove, although how much it matters in practice is harder to demonstrate. The sampling-rate choice makes this harder still, as the vast majority of listeners fail the test of 320 kbps AAC versus CD! But again, if it comes as a full meal, let's take it.
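The rule of thumb behind this: each bit of linear PCM buys about 6 dB of dynamic range (20·log10(2) ≈ 6.02 dB). A quick sketch of the arithmetic:

```python
import math

def dynamic_range_db(bits):
    """Theoretical dynamic range of linear PCM: 20 * log10(2**bits)."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # ~96.3 dB: short of the ear's ~116 dB
print(round(dynamic_range_db(24), 1))  # ~144.5 dB: comfortably above it
```

Which is why 16 bits falls short of that ~116 dB figure while 24 bits clears it with plenty of margin.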
Ultimately, if we get the original master, we are assured of not having left anything on the table. Any conversion to a lower bit depth or sample rate is just that: lossy. I like to have the choice of not having that lossy transform applied.