Simply do a fair blind test and you'll hear no difference at all. Compare the two sources on the same equipment at a matched listening level. The test is only valid if you don't know which source is playing; otherwise your brain will trick you (this has been known for a long time!).
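If you want to level-match before the test, something like this works (a rough sketch of my own; the file names and the soundfile dependency are assumptions, not from this thread):

```python
import numpy as np
import soundfile as sf

# Rough sketch: compute the gain offset (in dB) needed to match the RMS
# level of source B to source A before a blind comparison.
# "source_a.wav" / "source_b.wav" are placeholder file names.
a, _ = sf.read("source_a.wav")
b, _ = sf.read("source_b.wav")

def rms_db(x):
    return 20 * np.log10(np.sqrt(np.mean(np.square(x))))

offset = rms_db(a) - rms_db(b)
print(f"Apply {offset:+.2f} dB to source B so both play at the same level")
```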
Adding more bits just lowers the noise floor, that's it. It's basic knowledge, undergraduate-level electrical engineering. If you have a science background with the advanced math, you can read this:
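To put numbers on it, here's the usual rule of thumb (my own quick sketch, assuming an ideal quantizer and a full-scale sine wave):

```python
# Theoretical SNR of an ideal N-bit quantizer with a full-scale sine input:
# SNR ~= 6.02*N + 1.76 dB.
def ideal_snr_db(bits: int) -> float:
    return 6.02 * bits + 1.76

for bits in (16, 24):
    print(f"{bits}-bit: ~{ideal_snr_db(bits):.0f} dB of dynamic range")
# 16-bit: ~98 dB, 24-bit: ~146 dB; the extra bits only push the noise
# floor further below anything a playback chain (or a room) can resolve.
```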
You can go listen to this too; it's nicely presented by an engineer and can be understood by non-engineers:
D/A and A/D | Digital Show and Tell (Monty Montgomery @ xiph.org)
24/32 bits has its place in the recording studio for obvious reasons (every processing stage adds noise, just like analog mixing), so you want the end signal to still have CD-level performance. Not for playback, where it brings nothing: the final mix is saved at 16/44.1 and that's already better than human hearing!
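Here's a quick simulation of that studio argument (a sketch of my own, assuming TPDF dither and a 20-stage processing chain, not anything measured from a real session): requantizing to 16 bits at every stage piles up noise, while a 24-bit working format keeps the floor far below the final 16-bit dither.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 48000
x = 0.5 * np.sin(2 * np.pi * 1000 * np.arange(fs) / fs)  # 1 kHz test tone

def quantize(sig, bits):
    step = 2.0 ** -(bits - 1)
    tpdf = (rng.random(sig.size) - rng.random(sig.size)) * step  # TPDF dither
    return np.round((sig + tpdf) / step) * step

def chain(bits, stages=20):
    y = x.copy()
    for _ in range(stages):
        y = quantize(y * 0.98, bits) / 0.98  # small gain tweak, then requantize
    return y

for bits in (16, 24):
    noise = chain(bits) - x
    floor = 20 * np.log10(np.sqrt(np.mean(noise ** 2)))
    print(f"accumulated noise after a {bits}-bit chain: {floor:.0f} dBFS")
# Roughly -83 dBFS for the 16-bit chain vs. around -131 dBFS for 24-bit,
# which is why you track and mix high and only drop to 16/44.1 at the end.
```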
Oh, I forgot: for a proper test, convert a 24-bit file to 16 bits yourself, so both versions come from the exact same master. There are so many versions of the same recording with different mixes, remasters, etc., that comparing two different releases proves nothing.
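Something like this does the conversion (a sketch assuming the soundfile library and a placeholder file name; adding TPDF dither before truncation is my choice, a common practice when reducing bit depth):

```python
import numpy as np
import soundfile as sf

# Read the 24-bit source as floats, add ~1 LSB of TPDF dither at the 16-bit
# step size, then write a 16-bit PCM copy for the blind comparison.
data, rate = sf.read("master_24bit.wav")  # placeholder file name
step = 2.0 ** -15                         # one 16-bit LSB
rng = np.random.default_rng(0)
dither = (rng.random(data.shape) - rng.random(data.shape)) * step
sf.write("master_16bit.wav", np.clip(data + dither, -1.0, 1.0), rate,
         subtype="PCM_16")
```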
Lol you assume I haven't done tests...
For pop music normalized to -1 dB, with the VU meter sitting at -12 dB and the peak meter never dropping below -25 dB, I agree 100%: 16-bit and 24-bit would be indistinguishable. With classical music, though, there's a difference, and it's not even that difficult to hear.
I'll add that you do need very good equipment, set up right, and the listener needs a trained ear.
Take two people off the street and they can't even tell the difference between a violin and a viola, or a clarinet and an oboe.
I don't really want to debate this - we'd argue forever:
"you shouldn't be able to"
"I can"
"I don't believe you, you shouldn't be able to"
"I can"
"I don't believe you"
I'd invite you over and demonstrate, but the likelihood of us being in the same general area is next to nil. Plus, depending on how invested one is in "you shouldn't be able to", you could easily deny the [small] difference.
Think about this for a second: in a classical piece, after a crescendo a violin comes in quietly, at -55 dB. That leaves roughly 35 dB of dynamic range between the violin and the noise floor. This is worse than a cassette.
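Working that arithmetic out (my numbers; the -90 dB practical floor is an assumption, only the -55 dB and 35 dB figures are from the argument above):

```python
# Back-of-envelope: how much margin a -55 dB passage has over the 16-bit floor.
theoretical_floor = -(6.02 * 16 + 1.76)  # about -98 dBFS, ideal quantizer
practical_floor = -90.0                  # assumed real-world converter/dither floor
passage = -55.0                          # the quiet violin entrance

print(passage - theoretical_floor)  # ~43 dB above the ideal floor
print(passage - practical_floor)    # 35 dB above the assumed practical floor
```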
Can you tell the difference between a violin recorded at -3 dB on 16-bit digital, and that same source signal recorded on a cassette at -3 dB?
The answer, if you're being honest, is yes. So for a realistic reproduction of an orchestra, 16-bit audio is not enough. Is it enough for a $300 mini-system? Yes. Is it good enough for a $1,000 Sears catalog stereo? Probably. Is it good enough for a $15k system in a treated room? Not quite.
For practical purposes, 16-bit is enough. For true high-fidelity reproduction, it's not quite enough.