They refer to the same mathematical process. They are, however, used in different contexts.
Oversampling refers to what goes on before the anti-aliasing filter in any modern DAC (the one in your receiver, the one in your iPod, and so on). The purpose of this filter is to pass audio frequencies (20 kHz and under) unmodified but dramatically attenuate frequencies past the Nyquist frequency. If this is not done, you end up with distortion in the audio band (aliasing).
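To see what aliasing actually does, here is a minimal numpy sketch (the 30 kHz tone and 44.1 kHz rate are illustrative choices, not from the post): a frequency above Nyquist folds back into the audio band.

```python
import numpy as np

fs = 44100           # CD sample rate; Nyquist is 22050 Hz
f_tone = 30000       # an ultrasonic tone, above Nyquist
n = np.arange(4410)  # 0.1 s of samples -> 10 Hz FFT resolution

# Sampling a 30 kHz tone at 44.1 kHz produces samples indistinguishable
# from a tone at fs - f_tone = 14.1 kHz: it aliases into the audio band.
x = np.sin(2 * np.pi * f_tone * n / fs)

spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(n), d=1 / fs)
peak = freqs[np.argmax(spectrum)]
print(peak)  # 14100.0 -- the alias, squarely inside the audio band
```

Nothing in the sampled data distinguishes the alias from a genuine 14.1 kHz tone, which is why the out-of-band energy has to be filtered out rather than fixed after the fact.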
The problem is that a filter which cuts off very sharply has various unwanted side effects: filter design necessarily involves compromises. So if you want 20 kHz at full volume but need to stop frequencies above 22 kHz from leaking through, you will have to compromise in the design of your filter, potentially audibly so.
The solution is very simple: oversampling. This increases the sample rate so the filter has a wider transition band (more room) to work with, since the Nyquist frequency is higher. The filter then doesn't have to be so steep, and you don't end up with one that degrades the audio.
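You can put a rough number on this with scipy's Kaiser-window FIR length estimator. This is a sketch, not how any particular DAC is built: the 80 dB attenuation target and the 8x oversampling factor are my illustrative assumptions.

```python
from scipy.signal import kaiserord

atten_db = 80.0  # illustrative stopband attenuation target

# Without oversampling (fs = 44.1 kHz): the filter must go from "pass"
# at 20 kHz to "stop" at the 22.05 kHz Nyquist -- a razor-thin transition.
nyq = 22050
width_narrow = (22050 - 20000) / nyq          # transition width, Nyquist = 1
taps_narrow, _ = kaiserord(atten_db, width_narrow)

# With 8x oversampling (fs = 352.8 kHz): the transition band can stretch
# from 20 kHz all the way out to the new Nyquist at 176.4 kHz.
nyq_os = 176400
width_wide = (176400 - 20000) / nyq_os
taps_wide, _ = kaiserord(atten_db, width_wide)

print(taps_narrow, taps_wide)  # the gentle filter needs far fewer taps
```

The estimated filter length shrinks by roughly an order of magnitude, which is exactly the "more room means a less extreme filter" argument above in numeric form.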
When you read about a DAC featuring an upsampling process, this says nothing about whether it also oversamples (if it didn't, it would be what some call 'NOS'/non-oversampling and what most here would call 'broken').
The purpose of such sample rate conversion is usually jitter reduction: all the timing information of the incoming digital audio interface is, in effect, 'thrown away', and the audio is reclocked to a local, low-jitter time reference. This process can be done highly effectively and completely transparently. It is not the only way to reduce jitter, but it is a perfectly good one.
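The core idea of re-evaluating the signal on a grid derived from the local clock can be sketched with plain linear interpolation (real sample-rate converters use polyphase filters, but the principle is the same; the 44.1 kHz to 48 kHz conversion and the 1 kHz test tone are my illustrative choices):

```python
import numpy as np

fs_in, fs_out = 44100, 48000             # incoming rate vs. local clock rate
t_in = np.arange(441) / fs_in            # 10 ms of input sample timestamps
x_in = np.sin(2 * np.pi * 1000 * t_in)   # a 1 kHz test tone

# The incoming samples' timing is discarded: the signal is simply
# re-evaluated on a timestamp grid generated from the local clock.
t_out = np.arange(int(441 * fs_out / fs_in)) / fs_out
x_out = np.interp(t_out, t_in, x_in)     # naive resample onto the new grid

print(len(x_in), len(x_out))  # 441 480
```

Any jitter in the arrival times of the input samples no longer matters, because the output samples are clocked out purely by the local reference.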
Source and longer explanations taken from a great reddit post I had saved here.