Oh yes, I had forgotten the distortion and was just thinking SNR. Yes, SPL... that's not SINAD. Compression drivers will probably have around 2% distortion at 120 dB, meaning a SINAD of roughly 34 dB. Cones will do a bit better, but not by much.
View attachment 128502
They even spec up to 3% for the compression driver. No idea at what SPL this is specified; I bet it's not 120 dB.
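For reference, SINAD follows directly from the distortion fraction: SINAD (dB) = 20·log10(1/THD). A minimal sketch (the helper name `sinad_db` is just for illustration, not from any post above):

```python
import math

def sinad_db(thd_fraction: float) -> float:
    """SINAD in dB implied by a given THD+N fraction (e.g. 0.02 for 2%)."""
    return 20 * math.log10(1 / thd_fraction)

print(round(sinad_db(0.02), 1))  # 2% distortion -> 34.0 dB
print(round(sinad_db(0.03), 1))  # 3% distortion -> 30.5 dB
```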
Thank you!

The maximum frequency storable depends on the sampling frequency, not the bit depth. The bit depth determines the dynamic range.
If you can hear up to 15.3 kHz, a 44.1 kHz sampling rate will store all frequencies you can hear.
IME, whilst the "silent" bit between tracks may be more silent with 24-bit than with 16-bit, I have never seen or heard of a music recording with greater than 96 dB of dynamic range, so 16-bit can store all the music from the loudest to the quietest parts.
That means 16-bit is fine for music (unless you want to turn the volume up a lot in the "silence" between tracks where 24-bit will be better) so 44.1/16 can store all audible (by me, and probably you) parts of a music recording.
20-bits of dynamic range may theoretically be used for film sound tracks, but I doubt it ever actually is because if it was almost no domestic system would be able to play it.
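The 96 dB figure comes from the usual rule of thumb for an ideal N-bit quantizer: roughly 6.02·N + 1.76 dB of SNR for a full-scale sine. A quick sketch (the function name is my own, for illustration):

```python
import math

def pcm_dynamic_range_db(bits: int) -> float:
    """Theoretical SNR of an ideal N-bit quantizer for a full-scale sine:
    20*log10(2**bits) + 1.76 dB, i.e. the familiar 6.02*N + 1.76 rule."""
    return 20 * math.log10(2 ** bits) + 1.76

for bits in (16, 20, 24):
    print(bits, round(pcm_dynamic_range_db(bits), 1))
# 16-bit -> ~98.1 dB, 20-bit -> ~122.2 dB, 24-bit -> ~146.3 dB
```

So 16-bit already covers the ~96 dB worst case mentioned above, with a little margin.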
In this case for 24-bit you'd be paying for expanded dynamic range and a lower noise floor, not frequency response/extension.

So, we can 'see' high-frequency problems with most HiRes files. But can we 'see' (or 'spot') a difference between 16-bit and 24- or 32-bit files in the audible frequency range? Amirm explained to us that he was able to spot the difference in hearing tests, but can it be 'seen' in the graphs?
I can't hear over 15.3 kHz; does it make sense to pay for 24-bit HiRes files? That is my question!
Thank you for the answer!

In this case for 24-bit you'd be paying for expanded dynamic range and a lower noise floor, not frequency response/extension.
I partially disagree with the first statement. It really all depends on the type of music/content (e.g., the opposite of Pop music), how well it was recorded and mastered (e.g., if they record & master with 16/44.1 in mind the whole way through, it's likely the hires variations will be hampered), and then the listening environment and playback equipment. I argue a great DAC and amp (we know these exist thanks to @amirm), good noise-isolating headphones or sensitive IEMs and/or a quiet room in the evening, and the right content are sufficient to hear an advantage over 16-bit content. Heck, the noise floor advantage alone with 24-bit content played through sensitive IEMs might be enough, even if it's just to minimize the hiss. But to your point, the majority won't need more than 16/44.1. But then again, if you're going to start adding in noise and distortion (through your choice of DAC + amp + speakers/headphones + environment), wouldn't you want to start with the best copy available so you get the *most* out of your (whole) system (even if it's just at or near 16-bit reproduction quality)?

That means 16-bit is fine for music (unless you want to turn the volume up a lot in the "silence" between tracks where 24-bit will be better)...
20-bits of dynamic range may theoretically be used for film sound tracks, but I doubt it ever actually is because if it was almost no domestic system would be able to play it.
If I have understood correctly, a 24-bit recording has better dynamic range and a lower noise floor than a 16-bit file. And this is probably more 'relevant' than high-sample-rate recordings because, as you wrote, you probably can't hear past 20 kHz.

Thank you for these videos @amirm but they leave me confused.
Based on what I am seeing, it appears that HiRez audio is useless. Why do we need all that data above 20K? Data which often appears to be at -100 dB anyway... I mean, it seems that any lossless non-hi-rez file (provided it is created well) is all that's needed to be reproducible by normal playback equipment. All the rest seems academic: measurable, but imperceptible.
What am I not understanding?
a 24-bit recording has better dynamic range and a lower noise floor than a 16-bit file.
You understand perfectly well.

Thank you for these videos @amirm but they leave me confused.
Based on what I am seeing, it appears that HiRez audio is useless. Why do we need all that data above 20K? Data which often appears to be at -100 dB anyway... I mean, it seems that any lossless non-hi-rez file (provided it is created well) is all that's needed to be reproducible by normal playback equipment. All the rest seems academic: measurable, but imperceptible.
What am I not understanding?
Or in good wine...
it appears that HiRez audio is useless.
What am I not understanding?
You understand perfectly well.
Then why are so many subscribed to HiRez services?
I never did, but it was because I did a bunch of tests to see if I could tell the difference between 320 stream and any higher stream and I couldn't... Heck, if I am being honest, I probably couldn't reliably tell the diff between 96kbps stream and hirez.
My pleasure. On your conclusion: these are just a few tracks out of the millions available. What I am reviewing here is production quality and standards for the most part, with the formats in a supporting role. In general, it is easy to show that original productions exceed the bandwidth of CD in many high-res recordings.

Thank you for these videos @amirm but they leave me confused.
Based on what I am seeing - it appears that HiRez audio is useless.
Then why are so many subscribed to HiRez services?
I never did, but it was because I did a bunch of tests to see if I could tell the difference between a 320 stream and any higher stream and I couldn't... Heck, if I am being honest, I probably couldn't reliably tell the diff between a 96 kbps stream and hirez.

Yeah, pay extra for it if you want, but I really don't like it when people say there is a massive difference and talk as if 256 AAC is akin to listening through a walkie-talkie.
I'm so sorry guys - I must be really dense. I just don't understand what's happening when these formats get encoded. It seems to me that there ought not to be any data outside 20 Hz–20 kHz. These are digital files - why isn't all of that simply <null>?
Within the 20 Hz–20 kHz range, I can easily understand that having more samples is better; after all, an analog wave has an infinite number of steps. But even there, there must come a point where finer steps become imperceptible. I suppose it's not unlike the situation with pixel density and our ability to resolve detail: at some point the viewing distance makes 8K over 4K resolution pointless, as individual pixels are simply imperceptible.
Because we encode the signal levels over time, not frequency. There isn’t such a thing as frequency x = null. Well.. there is when doing a Fourier transform, but that means processing the data.
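To illustrate the point: a PCM stream is nothing but amplitude samples over time, and a frequency only "appears" once you run a transform over them. A toy sketch with a hand-rolled DFT (all values and names are illustrative):

```python
import cmath
import math

fs, f, n = 8000, 1000, 8  # sample rate, tone frequency, number of samples

# This list is all a PCM file ever stores: amplitude versus time.
samples = [math.sin(2 * math.pi * f * t / fs) for t in range(n)]

# Frequency content exists only implicitly, until we compute a DFT:
spectrum = [abs(sum(s * cmath.exp(-2j * math.pi * k * t / n)
                    for t, s in enumerate(samples)))
            for k in range(n)]

# The strongest bin below Nyquist maps back to the original tone.
peak_bin = max(range(n // 2 + 1), key=lambda k: spectrum[k])
print(peak_bin * fs / n)  # -> 1000.0
```

There is no per-frequency slot in the file to set to `<null>`; silence above 20 kHz simply means the sample sequence contains no component there.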
Yes, Shannon proved that a long time ago: you don't need more than double the highest frequency you want to encode (plus a bit of headroom for the filters).
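Sketching the sampling-theorem point: a tone below fs/2 is stored faithfully, while one above fs/2 folds back down (aliases). The helper below is my own illustration, not anyone's actual code:

```python
def alias_frequency(f_tone: float, fs: float) -> float:
    """Apparent frequency, in the first Nyquist zone, of a tone sampled at fs."""
    f = f_tone % fs
    return f if f <= fs / 2 else fs - f

# 15.3 kHz sampled at 44.1 kHz: below fs/2, captured as-is.
print(alias_frequency(15300.0, 44100.0))  # -> 15300.0
# The same tone sampled at 22.05 kHz: above fs/2, folds down to 6.75 kHz.
print(alias_frequency(15300.0, 22050.0))  # -> 6750.0
```

Hence 44.1 kHz sampling comfortably covers a 15.3 kHz hearing limit, with room left over for the reconstruction filter.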
@amirm maybe it’s time to make a proper video on digital audio sampling, the sampling theorem and the proof of it.
Hopefully, on my return there will be fewer of these posts!