> Your questions suggest the opposite of familiarity. Please don't make me drag out the link to Monty's video.

I've seen it. I understand that you can recreate the analog signal of a sine wave with 2 samples perfectly.
> Is there an optimal bandwidth to target in the final master that we should be seeking to purchase from media producers, such that we are getting a simple copy of the final master and avoiding the conversion step to CD/Redbook spec?

The easy answer is that you want the same data as was used in the DAW. But many DAWs use 32-bit floating point internally, so this isn't actually viable. Most DAWs will run at 96 kHz. However, the optimal bandwidth is clearly the one which conveys all useful information to the listener and avoids cluttering the product with redundant or harmful crud. So target 20 kHz of bandwidth.
> I've seen it. I understand that you can recreate the analog signal of a sine wave with 2 samples perfectly.
> Thanks. That's helpful.

There is a great deal more to it than that. Indeed, if that is the takeaway, it isn't helping.
If you sample a band-limited signal at a rate equal to twice its bandwidth, you can exactly capture the entire content of the signal. It isn't about sine waves; it is about the entire content.
Add Shannon on information and you have this:
If you have a bandwidth-limited channel with a defined signal-to-noise ratio, you can capture the entire content of the channel if you sample at twice the bandwidth with (S/N in dB) / 6.02 bits per sample, since each bit of resolution is worth about 6.02 dB.
Which is why a CD gets you 22.05 kHz of bandwidth and about 96 dB of S/N. Every last vestige of information in the defined channel is captured and will be reproduced again. There are a lot of subtleties in what is going on, and it seems counter-intuitive, but it is solid.
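Those two numbers fall straight out of the channel parameters; a quick sketch (the helper name is illustrative, not from the thread):

```python
import math

def channel_params(sample_rate_hz, bits):
    """Nyquist bandwidth and ideal dynamic range for a PCM channel."""
    bandwidth_hz = sample_rate_hz / 2              # Nyquist: half the sample rate
    dynamic_range_db = bits * 20 * math.log10(2)   # ~6.02 dB per bit
    return bandwidth_hz, dynamic_range_db

bw, dr = channel_params(44_100, 16)
print(bw, round(dr, 1))  # 22050.0 96.3
```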
> 44.1, 96 kHz does describe the sampling rate. The theoretical maximum bandwidth is 1/2 of the sample rate.

In an ideal world, the Shannon-Nyquist theorem would operate perfectly, and sound waves, being band-limited signals, would be regenerated with no loss of information when sampled at a frequency equal to twice the highest frequency we want to capture. However, because practical anti-aliasing filters need a finite transition band, the sample rate should be set marginally higher. Therefore, 44.1 kHz was chosen as the Red Book standard to provide this margin as well as to be compatible with NTSC videotape recording rates, which were a popular digital recording medium at the time. I am unsure why 48 kHz later became the standard for DAT, though.
> So it describes both the sample rate and the bandwidth recorded?

They are two sides of the same coin. Sample rate is measured in Hz (1/second), which intuitively makes sense because the sample rate is the number of equally spaced signal readings made each second. "Frequency" in this context means how frequently a reading is taken in any given second. Perfectly logical, right?
> How is the signal to noise defined in practice?

In the simplest form, just measure the energy in your channel when there is no signal.
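A minimal sketch of that measurement, assuming you already have sample arrays captured with and without signal present (the helper name and the illustrative numbers are made up):

```python
import math

def snr_db(signal_samples, noise_samples):
    """S/N in dB from mean power of a signal capture vs. a silent capture."""
    p_sig = sum(x * x for x in signal_samples) / len(signal_samples)
    p_noise = sum(x * x for x in noise_samples) / len(noise_samples)
    return 10 * math.log10(p_sig / p_noise)

# full-scale sine vs. a tiny constant noise floor (illustrative numbers)
sine = [math.sin(2 * math.pi * n / 100) for n in range(100)]
noise = [1e-5] * 100
print(round(snr_db(sine, noise), 1))  # 97.0
```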
> I see in Amir's DAC measurements that the low-pass filter slope is inconsistently implemented and the ultimate attenuation level is also inconsistent. How does this influence the choice of sample rate?

The nominal location of the filter's cut-off is defined by the sample rate: it should reach full attenuation before you reach the Nyquist limit. Amir calls out filters that do not meet this requirement. The problem is that if they don't cut off, signal above the cut-off frequency can leak into the pass band, manifesting as an alias, which is energy appearing in the pass band at a different frequency from where it really was. In principle this can sound bad. There is a tension between a hard cut-off and the in-band effects of the filter. Overall, however, there is no excuse for choosing a slow cut-off.
> Lastly, Amir's videos show music-correlated information in the ultrasonic range. Does that information, along with the variable implementation of DAC filters, have any bearing on the final analog signal output?

Yes. Energy up in the ultrasonics is what forms the alias products. If not properly filtered out, they manifest in the audible band at frequencies equal to things like the sample rate minus their original frequency. In general, even with the softer filters sometimes seen, you will be hard pressed to actually hear anything, but the mathematics is clear on the reality.
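That folding arithmetic can be sketched as (hypothetical helper, standard fold-around-Nyquist rule):

```python
def alias_frequency(f_hz, fs_hz):
    """Frequency where a tone lands after sampling at fs_hz without filtering."""
    f = f_hz % fs_hz                            # spectrum repeats every fs
    return fs_hz - f if f > fs_hz / 2 else f    # fold around Nyquist (fs/2)

# a 25 kHz ultrasonic tone sampled at 44.1 kHz shows up at 19.1 kHz
print(alias_frequency(25_000, 44_100))  # 19100
# an in-band tone is untouched
print(alias_frequency(19_000, 44_100))  # 19000
```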
> I am unsure why 48kHz later became the standard for DAT, though.

48 kHz was also due to video, via a different path.
> Why even sample above 25-30 kHz? It seems like it just adds noise and data bloat to the file and doesn't give any benefit.

Oversampling (twice the desired bandwidth) is beneficial though; "prior art" by the inventors of the CD standard: https://en.wikipedia.org/wiki/Oversampling
> Ok, you do say that doubling of sampling rate is enough to capture "sufficient" data at a given analog signal frequency.

You are wrong. Two sample points would reconstruct the sine wave as well as 20 sample points would. (Please, I know you actually need ever so slightly more than two sample points, but that's close enough for our purposes here.)
I know that 1 Hz has nothing to do with music, but let's use it as an example to simplify things mathematically and visually. So we have one full pure sine wave at 1 Hz, and we are sampling it 2 times per second. This means we're getting at most 2 peak points of the signal, which is very far from representing the original waveform. Thus if we used, say, a 20 Hz sampling rate, it would give us a clearer shape of the sine wave. And the same should apply to the higher frequencies... Am I wrong here?
Or maybe all this does not matter because we finally reach the speaker stage for reproducing the sound waves, which itself has mechanical limitations (precise cone movement over the voice coil), and thus all the shortcuts taken in digital processing are not noticeable?
Sorry for my English
Wow, started in 1977, I was 10 years old then. Shows how much "ahead of the pack" this early digital tech was. In 1977, analog tape reigned supreme...
> You are wrong. Two sample points would reconstruct the sine wave as well as 20 sample points would.

Thanks, a really good educating video. I realized a lot. Now I need to dig into the "band-limited signal" term; I didn't fully understand it...
I know it seems very non-intuitive, but it is so. Any given set of sample points reconstructs only one band-limited waveform that passes through all those points.
Your thinking, which on its face seems obvious (more samples equals more accurate sampling), is what confounds so many, and it is why higher sample rates give not higher resolution but only higher bandwidth.
Go watch THE video from Monty. Watch it over and over, keep going until you understand each point along the way. Things will make sense then.
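As a sketch of why a couple of samples per cycle suffice, here is Whittaker-Shannon (sinc) interpolation applied to a 1 Hz sine sampled at 2.2 Hz, i.e. barely above twice the signal frequency; the function names are illustrative:

```python
import math

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(samples, fs, t):
    """Whittaker-Shannon interpolation: value of the band-limited signal at time t."""
    return sum(s * sinc(fs * t - n) for n, s in enumerate(samples))

fs = 2.2                                            # barely over 2x the 1 Hz tone
samples = [math.sin(2 * math.pi * n / fs) for n in range(200)]

t = 45.7                                            # a time between sample instants
original = math.sin(2 * math.pi * t)
rebuilt = reconstruct(samples, fs, t)
print(round(original, 3), round(rebuilt, 3))        # the two values agree closely
```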
> Oversampling (twice the desired bandwidth) is beneficial though

Oversampling gets us into an entirely new area of discussion. The Wikipedia link is a nice quick overview, but it gets deeper than this, as we swiftly get into noise shaping. Oversampling by a factor of four can trivially get us one additional bit of resolution (each doubling of the rate spreads the quantization noise over twice the bandwidth, buying about 3 dB, or half a bit, in-band). But that is only of benefit if there is actual signal to be found and if appropriate dither is present; otherwise it gets you nothing. It is also about the most inefficient way of getting another bit of resolution around, which is where noise shaping comes in. Internal to ADCs and DACs, appropriate use of oversampling is commonly employed to get the needed performance. Outside that realm there is no justification.
Regarding quantization: reconstruction "joins" the sampled amplitude points, smoothing them to rebuild the analog wave, but quantization produces noise (the difference between an input value and its quantized value, the quantization error). Now here comes a logical question: if we use more than twice the sampling rate, theoretically we could have more sampled points, smaller quantization errors, and thus less noise?
I understand cymbal crashes have significant ultrasonic overtones, but am I right in saying that the recording system, as well as our ears, basically captures the in-band intermodulation effects of these ultrasonic tones, so we don't really need to have the originals recorded? Hope that makes sense.