Sorry, I didn't read carefully.
It's not immediately intuitive, particularly as a lot of marketing still promotes the "stair step" misconception.
> Sorry, I didn't read carefully. It's not immediately intuitive, particularly as a lot of marketing still promotes the "stair step" misconception.

And then they advertise NOS (non-oversampling) DACs, so that these stair steps interfere even more.
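A minimal sketch of why the staircase is a rendering artifact of the zero-order hold rather than something in the sampled signal (Python with numpy/scipy assumed; the 1kHz tone, 48kHz rate, and 16x upsampling factor are arbitrary illustrative choices):

```python
# Sketch: the "staircase" comes from the zero-order hold, not the samples.
import numpy as np
from scipy.signal import resample_poly

fs, f = 48_000, 1_000
n = np.arange(480)                          # 10 ms of a 1 kHz tone
samples = np.sin(2 * np.pi * f * n / fs)

# 16x bandlimited upsampling, roughly what an oversampling DAC filter does
fine = resample_poly(samples, 16, 1)

# away from the edges, the reconstruction hugs the ideal continuous sine
t_fine = np.arange(len(fine)) / (16 * fs)
ideal = np.sin(2 * np.pi * f * t_fine)
err = np.max(np.abs(fine[200:-200] - ideal[200:-200]))
print(f"max deviation from ideal sine: {err:.1e}")  # tiny; no stairs
```

Any reasonable reconstruction filter lands close to the ideal continuous sine; the stairs exist only in the unfiltered hold.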
> The first reply already gave the link to the article with the derivation of the formula: https://troll-audio.com/articles/time-resolution-of-digital-audio/

... and it has also been explained many times by @j_j here. Explaining something on a public forum is hopeless; it is never-ending. And the never-ending nonsense argument about 22.7µs. It will never end; another guy will come along with the same one. Best left alone.
> However this is the curve-ball he threw out. He claims that 96kHz is far better for stereo in the "time domain": otherwise there can be incorrect/unexpected phase shifts between the channels, whereas going higher makes the phase shifts much smaller.

(sigh)
> OK - maybe I need a primer in stereo then - because aren't all these examples just a simple sine wave, which would just be a single mono channel?

Any real signal (finite energy, finite length) can be constructed from a sum of sine waves, or an integral in the continuous domain.
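And on the stereo part of the question, a small sketch (numpy assumed; the 1µs delay, 1kHz tone and 48kHz rate are arbitrary) showing that an interchannel delay far below the ~20.8µs sample period still changes the sampled values, so it is not rounded away by sampling:

```python
# Sketch: a 1 us interchannel delay survives 48 kHz sampling intact.
import numpy as np

fs = 48_000
f = 1_000
delay = 1e-6                        # ~1/21 of the 20.8 us sample period
t = np.arange(256) / fs

left = np.sin(2 * np.pi * f * t)              # reference channel
right = np.sin(2 * np.pi * f * (t - delay))   # same tone, delayed 1 us

# The channels' samples differ by up to ~2*pi*f*delay, clearly nonzero,
# so the sub-sample delay is encoded in (and recoverable from) the data.
print(np.max(np.abs(left - right)))           # ~0.0063
```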
> This is true. For a sine wave in the non-dithered case, it depends on the frequency and effective bit depth (see the first link in post #2). With proper dither, I believe it should be identical to a non-quantized, continuous-time (i.e. analog) signal with the same SNR.

Yes, it's bandwidth, not sampling rate, that matters. Dithering matters, but down at the nanosecond level.
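For reference, a common back-of-envelope version of the non-dithered sine case (my paraphrase of the style of derivation in the linked article, not a quote from it): a full-scale sine $A\sin(2\pi f t)$ has maximum slope $2\pi f A$, so a time shift $\Delta t$ changes no point of the waveform by more than $2\pi f A\,\Delta t$, and the shift becomes representable once that exceeds one quantization step $q = 2A/2^{b}$:

$$\Delta t_{\min} \approx \frac{2A/2^{b}}{2\pi f A} = \frac{1}{\pi f\,2^{b}} \approx \frac{1}{\pi \cdot 20\,\mathrm{kHz} \cdot 2^{16}} \approx 0.24\,\mathrm{ns}.$$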
> This is a great 101 primer for digital audio:

Perfect, thanks all.
I will try to have a reasoned conversation and work out why he feels there is a time-domain issue, now that I have a better understanding of where he might be going wrong with an assumption.
"They're perfectly accurate in isolation, hence also perfectly accurate to each other." I think that line is the one I need!
> Yes, it's bandwidth, not sampling rate, that matters. Dithering matters, but down at the nanosecond level.

Thanks; your comment made me realize that the formula given in the first link in post #2 is in fact incorrect: it uses the frequency of a given sine wave rather than the actual signal bandwidth after quantization.
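One way to make that correction concrete (my sketch, leaning on Bernstein's inequality for bandlimited signals): the slope argument above only ever uses the maximum slope, and a signal bandlimited to $B$ with peak level $A$ can never be steeper than $2\pi B A$, so the bandwidth simply takes the place of the tone frequency:

$$\max_t |x'(t)| \le 2\pi B \max_t |x(t)| \quad\Longrightarrow\quad \Delta t_{\min} \approx \frac{1}{\pi B\,2^{b}},$$

independent of the sampling rate, provided $f_s > 2B$.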
> Sometimes a little knowledge and theory can get you into trouble.

I would say it's almost always like this.
> He claims that 96kHz is far better for stereo

Regardless of the reasoning used by your friend, it is a fact that many musical instruments, and many transient signals (like hand clapping), have spectra with frequencies far exceeding 22kHz. So 44.1kHz sampling inevitably results in spectrum truncation and signal distortion (I do not mean harmonic distortion). Do not ask me if it is audible or not; I am not sure. And I would be careful about saying it is always inaudible. To me, 96kHz sampling makes sense regarding audio signal fidelity.
> To me, 96kHz sampling makes sense regarding audio signal fidelity.

Without knowing if it's audible or not, it seems strange to say that 96kHz sampling makes sense. Given that we know 20kHz is actually an extremely generous limit for human high-frequency hearing, and most people are well shy of that, it seems more likely that 44.1kHz is already more than necessary. For various reasons, moving to 48kHz as a standard can be sensible. But 96kHz or higher just seems wasteful (vanishingly little content, but a lot of noise and other potential spuriae, can be found in that extra octave or so) and potentially harmful, given that most systems are not designed to reproduce extremely high frequencies.
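To put a rough number on the truncation point (a sketch, numpy assumed; an ideal impulse is deliberately the worst case, and real hand claps carry far less of their energy up there):

```python
# Sketch: how much of an idealized click does a 21 kHz brickwall remove?
import numpy as np

fs, n = 192_000, 4096
x = np.zeros(n)
x[n // 2] = 1.0                          # idealized wideband transient

X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(n, 1 / fs)
Xlp = np.where(freqs <= 21_000, X, 0)    # brickwall low-pass at 21 kHz
y = np.fft.irfft(Xlp, n)                 # truncated (ringing) waveform

removed = 1 - np.sum(np.abs(Xlp) ** 2) / np.sum(np.abs(X) ** 2)
print(f"energy above 21 kHz: {removed:.0%}")   # ~78% for an ideal impulse
# The time-domain waveform changes visibly, but everything removed sits
# above 21 kHz; whether that change is audible is the real question.
```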
> So 44.1kHz sampling inevitably results in spectrum truncation and signal distortion (I do not mean harmonic distortion).
There is some thin, but unproven justification for going to 64kHz sampling, based on impulse responses (FIR or IIR, doesn't matter which, for different reasons) of the anti-aliasing filters interacting with the nonlinearity of the ear. THIN, mind you, and as of yet without evidence, but that can be potentially made hypothetical by the extremes of human hearing as we know today.
Whereas (and feel free to correct me if I am wrong - I am a layperson preaching here) the first-order effect is that the biomechanical properties of the ear will filter out the frequencies above the limit of around 20kHz. So even if those frequencies (eg from hand clapping) are left in the signal by a 96kHz sample rate, and even if they get through the output transducer, it will make no difference to audibility, since they don't reach the brain in any case.
> So even if those frequencies (eg from hand clapping) are left in the signal by a 96kHz sample rate, and even if they get through the output transducer, it will make no difference to audibility, since they don't reach the brain in any case.

This is quite a courageous statement, as it only takes into account one kind of signal path (to the brain) and a certain physical, sensor-like kind of processing. Even in such a simplified case, intermodulation effects cannot be completely excluded.
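A toy illustration of the intermodulation point (a sketch, numpy assumed; the 0.1·x² term stands in for any weak second-order nonlinearity in the playback chain or the ear): two tones that are individually inaudible produce an audible difference tone once a nonlinearity gets hold of them:

```python
# Sketch: two ultrasonic tones + 2nd-order nonlinearity -> audible tone.
import numpy as np

fs = 192_000
t = np.arange(fs) / fs                     # 1 second
f1, f2 = 24_000, 27_000                    # both above audibility
x = 0.5 * np.sin(2 * np.pi * f1 * t) + 0.5 * np.sin(2 * np.pi * f2 * t)

y = x + 0.1 * x ** 2                       # toy weak nonlinearity

Y = np.abs(np.fft.rfft(y)) / len(t)        # normalized spectrum
freqs = np.fft.rfftfreq(len(t), 1 / fs)
k = np.argmin(np.abs(freqs - (f2 - f1)))   # 3 kHz difference tone
print(f"level at {freqs[k]:.0f} Hz: {Y[k]:.4f}")  # ~0.0125, not zero
```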

You also have "sound from ultrasound", where ultrasonics demodulate in the air into audible sound. This surely might cause audible effects. But if so, then they were already captured in the original recording (assuming a microphone was used).

But surely you can construct an artificial signal that would yield an audible sound with only content above 20kHz, assuming you have a playback chain that can put such a signal into the air.
> You also have "sound from ultrasound", where ultrasonics demodulate in the air into audible sound. This surely might cause audible effects.

But this is the argument that says reproducing inaudible frequencies can actually reduce audible quality.
> First, young people CAN (at least some of them) hear a bit beyond 20kHz. So that, while not particularly of interest to audiophiles, is something that is determined. I know it's been demonstrated in several labs at various times.

Which is why I wrote: "around 20kHz".