What exactly does upsampling do to the signal, for good or bad? I currently use SoX in JRiver. Does it have any audible benefits at all?
Just adding a different viewpoint...
First I'll state what you already know: Once sampled, you can never get more information from those samples. Let's say it's 48 kHz sampling, with a bandlimiting lowpass filter below half the sample rate, starting to roll off at 20 kHz.
If you need a different sample rate (maybe because your player doesn't support 48k), you could play it back to analog and sample it again. But ultimately it's math, and that can be done in the digital domain. You could resample it to 96 kHz, but that changes only the rate; the frequency response still drops off at 20k because that's all the original samples had.
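You can check this yourself. Here's a minimal sketch, assuming NumPy and SciPy are installed (the 15 kHz test tone is just my arbitrary choice): upsample a 48k signal to 96k and look at the new top octave.

```python
import numpy as np
from scipy.signal import resample_poly

fs = 48_000
t = np.arange(fs) / fs                # 1 second of samples
x = np.sin(2 * np.pi * 15_000 * t)    # a tone well inside the original band

y = resample_poly(x, up=2, down=1)    # 48 kHz -> 96 kHz, polyphase resampler

# Look at the spectrum of the upsampled signal above the old Nyquist
# frequency (24 kHz): the new octave is numerically empty, because the
# original samples had nothing to put there.
Y = np.abs(np.fft.rfft(y)) / len(y)
f = np.fft.rfftfreq(len(y), d=1 / 96_000)
peak = Y[f > 24_000].max()
print("peak above 24 kHz (dB):", 20 * np.log10(peak + 1e-12))
```

The printed peak should sit way down in the numerical noise, which is the point: the rate went up, the content didn't.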
OK, "my player doesn't support playback at 48k" is unlikely. Why else might we want a higher sample rate?
1) For playback, a higher sample rate lets us use a simpler analog filter in the DAC: a more relaxed slope, avoiding phase changes near the top end of hearing. You still need digital filters to raise the sample rate. Again, they won't improve the original sampling, but in normal music playback some latency is not a problem, so a good resampler at least won't harm the original samples while taking them to a higher rate. And the higher rate will allow the DAC to (potentially) be gentler and therefore (potentially; I'm not saying you will necessarily hear this) change the original recording less. There's a sketch of the filter-relaxation point after this list.
2) Different use case: signal processing, such as making audio effects for the recording industry. Non-linear processes (compressors, limiters, fuzzbox and amp simulation, anything remotely like a signal clipper) create harmonic distortion. Clip a sine wave, and you have a waveform with additional harmonics. They can alias down into the audio band and become easily noticeable (when the guitar player bends a string upward, you'll hear aliased tones move downward, doing significant damage to the music). Upsampling for more headroom gives more space above the audio band between spectral images (this requires more explanation), so that the harmonics have a chance to fall off before folding down into the audio band. For this use case, it's usually: upsample, do the task (fuzzbox simulation, etc.), downsample to the original rate; there's a sketch of that pattern after this list too. Typically, you might need 8x the standard rate for the oversampling, or higher.
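To put a rough number on #1, here's a sketch, again assuming SciPy (the 1 dB ripple and 96 dB stopband targets are my illustrative picks, not specs from any real DAC), of the minimum Butterworth order an analog reconstruction filter would need at each rate:

```python
from scipy.signal import buttord

# Minimum analog Butterworth order for a lowpass that stays flat to
# 20 kHz and is (illustratively) 96 dB down by the Nyquist frequency
# of each sample rate.
for fs in (48_000, 96_000):
    order, _ = buttord(wp=20_000, ws=fs / 2, gpass=1, gstop=96, analog=True)
    print(f"fs = {fs}: minimum analog filter order = {order}")
```

Widening the transition band from 20-24 kHz out to 20-48 kHz should cut the required order several times over; that's the "more relaxed slope" in concrete terms.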
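And for #2, a minimal sketch of the upsample, process, downsample pattern, assuming NumPy and SciPy (the 5 kHz tone, the clip level, and the 8x factor are illustrative choices of mine):

```python
import numpy as np
from scipy.signal import resample_poly

fs = 48_000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 5_000 * t)     # 5 kHz tone; hard clipping adds odd
                                      # harmonics at 15, 25, 35 kHz, ...

def fuzz(sig, level=0.3):
    return np.clip(sig, -level, level)  # crude stand-in for a fuzzbox

# Naive: clip at the base rate. Harmonics above 24 kHz fold back down
# (25 kHz lands at 23 kHz, 35 kHz at 13 kHz) as inharmonic junk.
naive = fuzz(x)

# Oversampled: raise the rate 8x, clip, then come back down. The
# downsampling filter removes most of the distortion products above
# 24 kHz before they can alias into the audio band.
clean = resample_poly(fuzz(resample_poly(x, 8, 1)), 1, 8)
```

Comparing the spectra of naive and clean should show the folded-down tones near 23 kHz and 13 kHz largely gone in the oversampled version.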
Obviously, we're talking about #1 here. I mention #2 in part because people with a little understanding often make assumptions such as, "why don't we do everything at 96k? It will help audio plugins too, and then they won't need to oversample." No: #2 typically needs a lot more oversampling to make sense than #1 does. And using a higher sample rate than you need has the downside of cutting your computational headroom by the same factor (you can do half as much at 96k as at 48k).