If one had a filter after the microphone, or in your earlier case some RF filter, then one could oversample the signal in the first stage of the A/D. Oversampling means the converter's Nyquist frequency is higher than the Nyquist frequency the signal actually requires.
Yes, if the signal bandwidth is less than the original Nyquist bandwidth, nothing new is added.
Since the sample rate is already higher than Nyquist says it needs to be.
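A minimal sketch of the point being made, with hypothetical numbers of my own choosing (a 50 Hz tone, a 1 kHz base rate, a 4x oversampled capture): if the signal is already band-limited below the base Nyquist frequency, the oversampled stream contains nothing the base-rate stream lacks.

```python
import numpy as np

# Hypothetical illustration values: a 50 Hz tone, a 1 kHz base sample rate
# (already above Nyquist for the tone), and a 4x oversampling factor.
fs_base = 1000.0
M = 4
f = 50.0
N = 256

x_base = np.sin(2 * np.pi * f * np.arange(N) / fs_base)          # base-rate capture
x_over = np.sin(2 * np.pi * f * np.arange(N * M) / (fs_base * M))  # oversampled capture

# Because the tone is band-limited below fs_base/2, simply keeping every
# M-th oversampled point reproduces the base-rate capture. (A real ADC
# chain would digitally lowpass first, then decimate.)
x_dec = x_over[::M]
```

The decimated oversampled stream and the directly sampled base-rate stream agree to within floating-point error, which is the sense in which "nothing new is added."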
Well, the title of the thread is “what is the point”. I am not sure what this means. Upsampling converts a signal acquired at one rate into the same signal at a higher rate. There are different ways to implement it.
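One common implementation is zero-stuffing followed by an interpolation lowpass. This is a sketch under my own assumptions (the factor, tap count, windowed-sinc design, and the `upsample` name are illustrative choices, not anything from the thread):

```python
import numpy as np

def upsample(x, L, taps=101):
    """Raise the sample rate of x by an integer factor L.

    Zero-stuffing creates spectral images of the original signal; the
    windowed-sinc FIR lowpass (cutoff at the original Nyquist frequency)
    removes them, and a gain of L restores the original amplitude.
    """
    y = np.zeros(len(x) * L)
    y[::L] = x                                # insert L-1 zeros between samples
    n = np.arange(taps) - (taps - 1) / 2
    h = np.sinc(n / L) * np.hamming(taps)     # FIR lowpass, cutoff at old fs/2
    h *= L / h.sum()                          # set DC gain to exactly L
    return np.convolve(y, h, mode="same")     # 'same' keeps the output centered
```

For example, a 50 Hz sine sampled at 1 kHz and upsampled by 4 closely matches the same sine sampled directly at 4 kHz, away from the filter's edge effects.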
I am sure I don't care; this whole thread seems to have turned into a big debate about nothing.
People seem passionate about it, as if some golden insight is going to pop out.
The only point of it is to service the DAC or DSP approach… and usually in a way that is easiest or makes for the cheapest set of hardware chips.
Would the end user care? And should they?
I cannot imagine why, but people talk about silicon chips in much the same way some people discuss the water that got turned into a Shiraz.
And the stupidest example is a 192 ksample/sec file in which the music is band-limited to 22 kHz.
That seems more interesting than upsampling. Time resolution is important in some applications like radar, which is where my career started.
As I said earlier, plenty of experts to argue every point, and I am just not that interested. Sorry I stepped into this one.