
Upsampling

I am sure that we have a moderator here who uses hands for measurement... :p
 
The only person needing 24-bit dynamic range is a lazy recording engineer who wants to be able to record every noise, from a mosquito at the end of the room to a Saturn V blast-off, without adjusting the level control - i.e. nobody at all; it is bonkers.
24 bits makes recording more idiot-proof.
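For reference, the ideal figures behind the 16-vs-24-bit argument can be sketched in a few lines of Python. This is the textbook approximation for an ideal converter (full-scale sine vs. quantization noise), not a measurement of any real hardware:

```python
# Ideal dynamic range of N-bit PCM: roughly 6.02*N + 1.76 dB.
# Real converters fall short of these numbers.

def pcm_dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of an ideal N-bit quantizer, in dB."""
    return 6.02 * bits + 1.76

for bits in (16, 24):
    print(f"{bits}-bit: ~{pcm_dynamic_range_db(bits):.1f} dB")
# 16-bit: ~98.1 dB
# 24-bit: ~146.2 dB
```

So the extra 8 bits buy roughly 48 dB of theoretical margin, which is where the "no need to ride the level control" argument comes from.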
Such lazy practices put a burden on virtual-instrument users.
http://www.soundsonline-forums.com/showthread.php?t=42110
Some of the replies have some good points.

Modern sample-based virtual instruments can be very large (up to terabytes) and require instant and random access to hundreds of samples simultaneously.
 
The article is written by the guy who makes a NOS R2R D/A and is rather skewed and contains a number of errors. That said, if you read between his biased lines, it has some value. Thanks for posting. :)

Yah, I figured I was in trouble when I didn't have time to read it thoroughly, but also figured someone would call it (and me) out on any inaccuracies... At least now I won't bother to read it thoroughly and will hopefully remember to not post it again.

Thanks - Don
 
The only person needing 24-bit dynamic range is a lazy recording engineer who wants to be able to record every noise, from a mosquito at the end of the room to a Saturn V blast-off, without adjusting the level control - i.e. nobody at all; it is bonkers.

Or has to deal with musicians (and/or singers) who randomly vary their volume 10-20 dB, either intentionally, or by moving around the mic (more common). Many musicians move while they play and have no idea how a microphone placed fairly closely will exaggerate that movement in the sound level. It is nice to have extra headroom without burying the soft stuff in the noise floor.
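A quick sketch of why that headroom matters less at 24 bits. The math uses the ideal-converter figure (6.02*N + 1.76 dB), and the 20 dB headroom is just an assumed number for illustration:

```python
# If a performer's level swings force you to record, say, 20 dB below
# full scale, the usable SNR shrinks by that headroom amount.
# Ideal-converter math; real converters deliver somewhat less.

def usable_snr_db(bits: int, headroom_db: float) -> float:
    """Ideal SNR left over after reserving headroom_db of headroom."""
    return 6.02 * bits + 1.76 - headroom_db

print(usable_snr_db(16, 20.0))  # ~78 dB left at 16 bits
print(usable_snr_db(24, 20.0))  # ~126 dB left at 24 bits
```

With 16 bits, 20 dB of headroom starts to eat into the margin over the noise floor; with 24 bits, it is a non-issue.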
 
Shannon's Theorem (I call it the Nyquist Theorem, or Nyquist-Shannon Theorem, but it is all the same thing),.....

The most recent term is, imo, the WKS theorem (Whittaker, Kotelnikov, Shannon), although Lüke /1/ in his interesting article mentions that Raabe also independently developed and published the sampling theorem in 1938. For simplicity I tend to stick to the shorter "Shannon theorem".

...... relates back to information theory and defines the information bandwidth that can be successfully recovered from sampled data. You could apply simple interpolation, which does generate "new" samples in the sense that you are creating data that was not there in the original, but those samples would not contain any additional information so perhaps the problem is simply the way I define "new" samples. Even if you do nothing but zero-pad the data, the added zeros are "new" data not present in the original bit (sample) stream.

IIRC this is the third or fourth thread on "upsampling or oversampling", and I think in each thread we wrote essentially the same things as in this one, which means we did not resolve our difference in description; I was wondering why it even emerged.
I think we are now able to address it and explain the different definitions/terms in general/other fields and with respect to their usage in (consumer) audio. :)

Taking your example of "zero padding" - which means inserting N-1 additional zero samples between the original samples for an N-times increase in sampling rate - the mere inserting is often called "upsampling" in other fields, done by an "upsampler" stage, so no interpolation is done at this stage.
In audio - and in our typical discussions about upsampling and oversampling - the term "upsampling" denotes the combination of the "upsampler" stage and the following "interpolator" stage (a digital low-pass filter), and the result of the operation is a data stream ready to be input to a DAC unit (the DAC unit then including the analog anti-imaging filtering needed).
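The two stages described above can be sketched in a few lines of numpy. This is a minimal illustration (the function name and the simple windowed-sinc filter are my own choices, not any vendor's actual implementation):

```python
import numpy as np

def upsample(x, n, taps=101):
    """Zero-stuff by factor n ("upsampler" stage), then apply a
    windowed-sinc low-pass ("interpolator" stage) with cutoff at
    the original Nyquist frequency."""
    # Upsampler stage: insert n-1 zeros between original samples.
    stuffed = np.zeros(len(x) * n)
    stuffed[::n] = x
    # Interpolator stage: FIR low-pass, DC gain n to restore level.
    t = np.arange(taps) - (taps - 1) / 2
    h = np.sinc(t / n) * np.hamming(taps)
    h *= n / h.sum()
    return np.convolve(stuffed, h, mode="same")

# A constant (DC) signal should still read ~1.0 after 2x upsampling,
# away from the edge-effect regions of the short FIR filter.
y = upsample(np.ones(64), 2)
```

A real oversampling DAC would use a much longer, better-designed (often multi-stage polyphase) filter, but the upsampler-then-interpolator structure is the same.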

When upsampling data, which is what I thought was under discussion, the "new" samples must fit the theorem with respect to the new (faster) sampling rate, so the samples are no longer bound by the lower rate. That is, the new values may not fit the constraints of the lower sample rate, but can have information bandwidth greater than the original.

There is the difference with respect to the process overall. In audio the terms "upsampling/oversampling" typically mean (I remember only two manufacturers doing it differently, see below) that the new data stream looks as if the original analog signal (still with the same bandwidth restriction as before) were sampled at a higher sample rate.
No additional content above the Nyquist frequency of the original signal should result in the audio band from this process.

In other words, the additional ("new") samples created during upsampling can contain information not in the original samples due to the new (higher) Nyquist rate. To use numbers, if I upsample a CD-rate bit stream from 44.1 kS/s to 88.2 kS/s, I can now create data with frequencies up to 44.1 kHz where before it was limited to 22.05 kHz. Some papers use "extrapolation" to describe data that contains frequency information beyond the original signal. There are predictive algorithms that do try to add higher-frequency data based on the trend of the original samples, at least in the RF world; audio-rate converters are not my day job.
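This is easy to demonstrate: zero-stuffing a 44.1 kS/s tone to 88.2 kS/s without the interpolator leaves a spectral image above the original 22.05 kHz Nyquist limit. A sketch with illustrative numbers (a 5 kHz test tone chosen for convenience):

```python
import numpy as np

fs, f0, factor, N = 44_100, 5_000, 2, 4096
t = np.arange(N) / fs
x = np.sin(2 * np.pi * f0 * t)  # 5 kHz tone sampled at 44.1 kS/s

# Upsampler stage only -- zero-stuff to 88.2 kS/s, no low-pass filter.
stuffed = np.zeros(N * factor)
stuffed[::factor] = x

# Spectrum of the stuffed stream shows the tone AND its image at fs - f0.
spec = np.abs(np.fft.rfft(stuffed * np.hanning(N * factor)))
freqs = np.fft.rfftfreq(N * factor, d=1 / (fs * factor))

def peak_freq(lo, hi):
    """Frequency of the strongest bin between lo and hi Hz."""
    band = (freqs >= lo) & (freqs <= hi)
    return freqs[band][np.argmax(spec[band])]

print(peak_freq(1_000, 22_050))   # ~5000 Hz: the original tone
print(peak_freq(22_050, 44_100))  # ~39100 Hz: the image at fs - f0
```

The interpolator's job is precisely to remove that 39.1 kHz image; skip it (or design it loosely, as Pioneer did) and content above the original Nyquist frequency survives into the output.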

As said above, in the audio field the aim was mainly to preserve the original content (approximately "perfectly") while enabling the use of less compromised/gentler analog reconstruction filters (or, as in the Philips case, to get better results through noise shaping despite using only 14-bit converters in the beginning).
As said above, I remember only two manufacturers (in the old(er) days) that deliberately tried to add content (or accepted it as a side effect) above the Nyquist frequency. One was Pioneer with their so-called "Legato Link" technology:
[Image: legatolink.jpg - Pioneer's Legato Link interpolation (copyright by Pioneer)]
They obviously wanted to add content above Nyquist, allegedly for better perceived audio quality overall,

and the other was Wadia with their so-called "French curve" process, an interpolation routine based on Bézier curves, also allegedly done for better perceived audio quality overall.

At this point I suspect we are far beyond what most of the readership cares about and down to differences in whatever courses and experience we have in defining the terms...

I hope that this helps our members, as the distinctions are quite important for understanding.


/1/ Lüke, Heinz Dieter, "The Origins of the Sampling Theorem", IEEE Communications Magazine, April 1999, p. 106.
 
So "Legato" means "second law violation"?
 
Upsampling and oversampling mean different things to me but this feels like one of those religious debates for the sake of debating. I've babbled enough...
 
@SIY,

So "Legato" means "second law violation"?

Why should "Legato" mean "second law violation"?

@DonH56,

Upsampling and oversampling mean different things to me but this feels like one of those religious debates for the sake of debating. I've babbled enough...

I've just pointed out what the meaning and usage of these terms in the audio field is; why that should be a "religious-like" debate is beyond my understanding.

That "upsampling" might sound better (or different) is more like a "religious debate", as the folks at dCS once said: "it (i.e. upsampling) shouldn't sound different (compared to oversampling), but it does".
 
Why should "Legato" mean "second law violation"?

You'd have to ask Pioneer that, I'm not the one claiming to have defeated first principles.
 
You'd have to ask Pioneer that, I'm not the one claiming to have defeated first principles.
I was just interested in what triggered your question "So, Legato means second law violation?", so I still think it is more reasonable to ask you instead of Pioneer.

And additionally I'd be interested in:
By what statement(s) did Pioneer "claim" to have defeated first principles?
 
See your figure and commentary in post #51.
 
That doesn't answer my questions; it was already a given that my post triggered your posts, so it's sort of circular and does not help...
 
The Legato link stable platter Pioneers were not well received by the audio community and tested relatively poorly. I have some reviews someplace where I think the subjective reviewers saw the bench tests before they listened and consequently, the reviews were not great.

I was selling Pioneer at the time (91/92) and didn't think much of them, but that was primarily to do with the stupid 'stable platter' which proved to be a badly thought-out design, cheaply constructed and over-priced. To be honest, I didn't listen hard to them as I regarded the entire 'design' so flawed I essentially sold anything else but.
 
the stupid 'stable platter' which proved to be a badly thought-out design
Quite. A high-inertia platter helps speed stability on a record player, but trying the same with a CD player, where constant rotational speed is not part of the specification (the disc speed varies to keep linear velocity constant, and the data is buffered anyway), displays poor understanding.
They aren't the only ones though; one very expensive and highly regarded transport has a high-inertia platter as a marketing feature.
 