
Difference between MP3 and FLAC and crazy EQ changes

I'd still be interested to see if you could hear a difference in a level matched blind test even with restorer operational.
I would because of the bass - the Restorer adds a lot of bass. If the song has almost no bass, it would be tougher. Does anyone know how the restorer affects 320kbps and FLAC files?

I find it fascinating that an upscaling which is the opposite of compression is used for Audio but upscaling has been used to great effect for TV and gaming and, in fact, is mandatory for almost all viewing including 4k sources.
 
I would because of the bass
But that is not due to the encoding/compression - it is due to whatever the "restorer" is doing. I suspect the restorer will do the same to both Ogg Vorbis and FLAC, unless it is applied at the time of decoding - otherwise, after decoding, it has no idea what encoding the signal had and so cannot apply a difference.

Either way - if you want to check, compare Ogg Vorbis blind against FLAC, both with the restorer off. That will tell you the audibility of the compression - the online test you've been linked to is a quick and easy way to do that.

I find it fascinating that an upscaling which is the opposite of compression is used for Audio but upscaling has been used to great effect for TV and gaming and, in fact, is mandatory for almost all viewing including 4k sources.
Upscaling doesn't make any audible difference at all. Not sure what you mean by calling it mandatory.
 
I find it fascinating that an upscaling which is the opposite of compression is used for Audio but upscaling has been used to great effect for TV and gaming and, in fact, is mandatory for almost all viewing including 4k sources.

Upscaling doesn't make any audible difference at all. Not sure what you mean by calling it mandatory.
They're referring to video, where upscaling does make a visible difference. I'm not sure what they mean by it being "mandatory" for 4K: unless your monitor has a higher resolution than 4K, it would be downscaling the video, or doing nothing if the resolution matches. (Unlike audio, video quality does keep getting better, and lossy compression is absolutely noticeable, although display stream compression is allegedly transparent.)
 
They're referring to video, where upscaling does make a visible difference. I'm not sure what they mean by it being "mandatory" for 4K: unless your monitor has a higher resolution than 4K, it would be downscaling the video, or doing nothing if the resolution matches. (Unlike audio, video quality does keep getting better, and lossy compression is absolutely noticeable, although display stream compression is allegedly transparent.)
Ah - OK, yes I was thinking the statement was about upsampling of audio used in video applications.


And correct (for the benefit of @techsamurai) - video is pixel based (you are effectively seeing the individual samples with nothing between), and at typical video resolutions the individual pixels are large enough to see, though I feel this stops at <4k with typical viewing distances.


This is different from audio where the original signal is perfectly reconstructed - including in between the samples. Upscaling does not improve the (in between sample) resolution, it only increases the maximum range of the encoded frequency band. 44.1kHz already captures >20kHz, and hence everything the human ear can detect.
 
This is different from audio where the original signal is perfectly reconstructed - including in between the samples.
Um, no? You lose the information between the samples that occurs at a higher frequency. E.g. suppose you have a 1Hz and a 10Hz pure tone playing simultaneously:
[attached image: the combined 1Hz + 10Hz waveform]

if you sample this at a rate of 2Hz, and then play it back with a perfect reconstruction filter, you'll get:
[attached image: the waveform reconstructed from the 2Hz samples]

I.e. the high frequency information between the samples (at t = 0, 0.5, 1, etc.) is lost.

Upscaling does not improve the (in between sample) resolution
It does - this is exactly what a reconstruction filter (on any competent DAC) does. The thing is that whatever filter/algorithm you use can only guess at what was in between the samples, which is why a video/picture looks worse when you downscale and then upscale it (the upscaled image will have guessed incorrectly what the extra pixels should be).

The reason why it's pointless with audio is that we've reached the limit of our hearing ability: any more than 40,000 samples a second doesn't let us hear more detail. Whereas more than, say, 3840×2160 pixels does let us see more detail; similarly with increasing the frame rate (or is 60Hz enough?)
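The disagreement above can be sketched numerically. With no anti-aliasing filter, a tone above half the sample rate folds down to a lower apparent frequency; a minimal Python sketch (the `alias` helper is hypothetical, written for illustration):

```python
def alias(f, fs):
    """Apparent frequency of a tone at f Hz sampled at fs Hz
    with no anti-aliasing filter (frequency folding)."""
    f_mod = f % fs                 # wrap into one sampling period
    return min(f_mod, fs - f_mod)  # fold into the 0..fs/2 band

print(alias(10, 2))   # 0: the 10 Hz tone sampled at 2 Hz folds to DC and vanishes
print(alias(10, 25))  # 10: at fs > 2*f the tone survives at its true frequency
```

This is exactly why the 10Hz component disappears in the 2Hz-sampled example, and why a properly band-limited input never hits this case.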
 
Assuming that Spotify have used the same wav-input when encoding both ogg and flac, maybe they made some choices in the ogg-encoding (increasing loudness, reducing noise, EQ, I can only guess) that made an audible difference?
 
Um, no? You lose the information between the samples that occurs at a higher frequency
But that is not digital audio - or at least not properly implemented digital audio.

Correctly implemented digital audio is band limited to less than Fs/2 before sampling. So there are no frequencies higher than half the sample rate present in the digitised audio.

It is this band limiting that allows the waveform to be perfectly reconstructed. Whether or not you upsample.

In your example, as long as the sample rate is >20Hz (2x your maximum frequency of 10Hz), then the waveform can be perfectly reconstructed. And yes - fully in between the samples, perfectly reconstructing that 10Hz signal on top of the 1Hz one.

This is why 44.1kHz sampling is perfectly adequate for digitising 20kHz band limited audio. The signal can be perfectly reconstructed.
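This claim is easy to check numerically. A sketch (numpy assumed) using Whittaker-Shannon (sinc) interpolation on the earlier 1Hz + 10Hz example, this time sampled above Nyquist, evaluating the waveform at a point *between* the samples:

```python
import numpy as np

fs = 25.0            # > 2 * 10 Hz, so both tones are below fs/2
n = np.arange(1000)  # 40 s of samples
x = np.sin(2*np.pi*1*n/fs) + np.sin(2*np.pi*10*n/fs)

def reconstruct(t):
    """Whittaker-Shannon interpolation: a sinc kernel at every sample."""
    return np.sum(x * np.sinc(fs*t - n))

t = 20.0137                                   # an instant between two sample points
true = np.sin(2*np.pi*1*t) + np.sin(2*np.pi*10*t)
print(abs(reconstruct(t) - true))             # tiny: limited only by truncating the sum
```

With the 10Hz tone now inside the band limit, the value between samples comes back essentially exactly; the small residual is just the finite-length truncation of the ideal (infinite) sinc sum.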
 
Assuming that Spotify have used the same wav-input when encoding both ogg and flac, maybe they made some choices in the ogg-encoding (increasing loudness, reducing noise, EQ, I can only guess) that made an audible difference?

It's not - I should update the post.

Denon's Restorer is creating all kinds of sound differences. It's absurd - it's actually not bad for my system but that simply means I have issues elsewhere particularly in bass when I turn it off. When I change to Pure Direct the bass is there, almost as good as Restorer with Low.

I need to understand my Audyssey curves and choice and why I don't have bass from 40hz upwards when Restorer is off and I have maximum bass when it's Low. I swear to god, Audyssey XT32 and Denon are out to get me. This is like Mission Impossible only I had no idea my system is starring in it.

I'm almost tempted to live with Restorer on Low and forget it's there :) It sounds glorious, just needs a bit of warmth and if the voice is not 100% the same but the music sounds better, I'll just live with it.
 
But that is not digital audio - or at least not properly implemented digital audio.

Correctly implemented digital audio is band limited to less than Fs/2 before sampling. So there are no frequencies higher than half the sample rate present in the digitised audio.

It is this band limiting that allows the waveform to be perfectly reconstructed. Whether or not you upsample.
But you're not reconstructing the original analog waveform, you're only reconstructing the filtered one. (If I record a 1MHz sound wave and play it back, I would expect to get a 1MHz wave; anything else is not a perfect reconstruction.)
 
I compared 320kbps to Lossless with headphones. I used a song with isolated highs with distortion and isolated bass.

I'm not going to lie - I could not isolate differences between 320kbps and Lossless. Dropping to very low resolutions produced a sound change that was audible, as if you're switching something, but that change was not audible or obvious switching from 320 to Lossless.

The song is Das Boot 2001 DJ Mellow Mix from 5:30 to 5:50. I tried other stuff too but once you have a lot of sound, it's even harder to tell what's happening.
 
Denon's Restorer is creating all kinds of sound differences. It's absurd - it's actually not bad for my system but that simply means I have issues elsewhere particularly in bass when I turn it off. When I change to Pure Direct the bass is there, almost as good as Restorer with Low.
Hmm very curious, I guess they didn't want to have a feature that made no audible difference (which would be the correct way to "restore" high quality lossy audio to lossless).

I wonder if anyone's actually measured what this does, is it possible to simulate it with EQ?
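If it really is mostly a bass boost, the natural thing to compare it against is a plain low-shelf biquad. A sketch using the well-known RBJ Audio EQ Cookbook coefficients - the shelf frequency and gain here are pure guesses, not Denon's actual values:

```python
import numpy as np

def low_shelf(fs, f0, gain_db, S=1.0):
    """RBJ cookbook low-shelf biquad coefficients (b, a)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / 2 * np.sqrt((A + 1/A) * (1/S - 1) + 2)
    cosw = np.cos(w0)
    b = np.array([A*((A+1) - (A-1)*cosw + 2*np.sqrt(A)*alpha),
                  2*A*((A-1) - (A+1)*cosw),
                  A*((A+1) - (A-1)*cosw - 2*np.sqrt(A)*alpha)])
    a = np.array([(A+1) + (A-1)*cosw + 2*np.sqrt(A)*alpha,
                  -2*((A-1) + (A+1)*cosw),
                  (A+1) + (A-1)*cosw - 2*np.sqrt(A)*alpha])
    return b, a

def gain_at(b, a, f, fs):
    """Magnitude response of the biquad at frequency f."""
    z = np.exp(-2j * np.pi * f / fs)
    return abs((b[0] + b[1]*z + b[2]*z**2) / (a[0] + a[1]*z + a[2]*z**2))

b, a = low_shelf(fs=44100, f0=100, gain_db=6)    # +6 dB below ~100 Hz (a guess)
print(20*np.log10(gain_at(b, a, 20, 44100)))     # close to +6 dB in the bass
print(20*np.log10(gain_at(b, a, 10000, 44100)))  # close to 0 dB up high
```

Measuring the Restorer's actual transfer function (e.g. with a sweep) and fitting a shelf like this would show how much of it is reproducible with ordinary EQ.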
 
I compared 320kbps to Lossless with headphones. I used a song with isolated highs with distortion and isolated bass.

I'm not going to lie - I could not isolate differences between 320kbps and Lossless. Dropping to very low resolutions produced a sound change that was audible, as if you're switching something, but that change was not audible or obvious switching from 320 to Lossless.
Is this with or without the restorer on?
Does it detect if the audio being played is lossy and modify that, but leave "lossless" audio alone - thus tricking people into thinking it's restoring the lossy audio, instead of what it actually seems to be doing, which is changing it? (You could test this: convert a FLAC to Vorbis, record the Vorbis played through the restorer as FLAC, and convert the Vorbis back to FLAC. If the restorer version is more different from the original FLAC than the Vorbis-FLAC version, it's definitely not doing what it's supposed to.)
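The comparison metric for that test could be as simple as RMS difference between level- and time-aligned signals. A sketch with purely synthetic stand-in data (a real test would load the decoded files instead of these invented arrays):

```python
import numpy as np

def rms_diff(a, b):
    """RMS of the difference between two level/time-aligned signals."""
    n = min(len(a), len(b))
    return np.sqrt(np.mean((a[:n] - b[:n]) ** 2))

# Synthetic stand-ins: an 'original', a 'lossy' copy (tiny codec-like error),
# and a 'restored' copy (a broad level/EQ change, as the Restorer seems to apply).
rng = np.random.default_rng(0)
original = rng.standard_normal(44100)
lossy = original + 1e-4 * rng.standard_normal(44100)
restored = 1.3 * original

# A genuine "restorer" should land *closer* to the original than the lossy copy;
# a broad EQ-style change lands farther away instead.
print(rms_diff(lossy, original), rms_diff(restored, original))
```

If the restorer-processed capture shows a larger `rms_diff` from the original than the plain lossy round-trip, it is adding colouration rather than recovering anything.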
 
Hmm very curious, I guess they didn't want to have a feature that made no audible difference (which would be the correct way to "restore" high quality lossy audio to lossless).

I wonder if anyone's actually measured what this does, is it possible to simulate it with EQ?

I was thinking the same. In what I heard, it gives bass a huge boost particularly in the 40+hz where the sub is no longer involved in my system. In Sade's Kiss of Life, bass disappeared when I turned off Restorer and came back with Low Restorer and Pure Direct. So there's definitely something wrong with my 40hz+ bass range and I'll have to test the sub.

In the song S'Wonderful, the upright bass was much clearer (almost inaudible without Restorer) and the cymbals/snare (whatever it is) also did the same so it definitely affects both bass and treble. I think the fundamentals for upright bass are 40hz-500hz.

Diana Krall's voice was also different particularly the way she emphasized some words like (ple-beian). It's crazy and it's definitely audible with speakers (and obviously headphones).

I may do a new Audyssey setup and then re-apply curves.
 
Is this with or without the restorer on?
Does it detect if the audio being played is lossy and modify that, but leave "lossless" audio alone - thus tricking people into thinking it's restoring the lossy audio, instead of what it actually seems to be doing, which is changing it? (You could test this: convert a FLAC to Vorbis, record the Vorbis played through the restorer as FLAC, and convert the Vorbis back to FLAC. If the restorer version is more different from the original FLAC than the Vorbis-FLAC version, it's definitely not doing what it's supposed to.)

It has to be input-based because it's clearly ignoring the fact that the sound is lossless. I'm still surprised that I was able to detect the Restorer's different impact with FLAC vs 320kbps, because clearly my mind went berserk after hearing it. I can't imagine if the Restorer didn't work at all with lossless - I would have checked the sub to see if it's on and assumed my bass woofers had both died :)
 
But you're not reconstructing the original analogue waveform, you're only reconstructing the filtered one. (If I record a 1MHz soundwave, and play it back, I would expect to get a 1MHz wave, anything else is not a perfect reconstruction).
Digital sampling (more accurately discrete time sampling) MUST always be band limited to half the sample rate. If you needed to record a 1MHz signal**, you would need the original sample rate to be 2MHz. This is not upsampling, it is simply sampling the signal at the rate appropriate for that signal. AND it will still have to have an anti-aliasing filter to make sure there are no frequencies higher than 1MHz in it.

For audio we only need 20kHz and lower to be recorded. Anything higher is not audible for humans. So if your audio is delivered to you in 44.1kHz sample rate (e.g. CD), that is perfectly adequate - it will contain all the signal that you can hear.

More importantly - even if you upsample from 44.1kHz to any hi res format you like - the upsampling process cannot replace the frequencies that have already been filtered out to make it fit into 44.1kHz.

The same is true even if you have hi-res files - say at 96kHz (<48kHz bandwidth). Not only do they contain a bunch of ultrasonics that you can't hear (and most of that is noise), upsampling those also cannot improve the quality beyond what was already captured at the time of sampling.



** EDIT: And let's put to one side the fact that 1MHz effectively cannot travel through air - or at least that absorption is so high it would attenuate to nothing in a few cm. Nor are there any microphones designed to detect it.
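The point that upsampling cannot replace filtered-out frequencies can be shown directly. FFT zero-padding (one common ideal upsampler) doubles the sample rate by inserting empty spectral bins, so everything above the original Nyquist frequency stays empty - a numpy sketch:

```python
import numpy as np

fs, N = 44100, 4410  # 0.1 s of 44.1 kHz audio
n = np.arange(N)
# Band-limited content only: 1 kHz and 15 kHz tones, nothing near 22.05 kHz
x = np.sin(2*np.pi*1000*n/fs) + np.sin(2*np.pi*15000*n/fs)

# Upsample 2x by zero-padding the spectrum (ideal-interpolation upsampler)
X = np.fft.fft(x)
Y = np.concatenate([X[:N//2], np.zeros(N, complex), X[N//2:]]) * 2
y = np.fft.ifft(Y).real  # an 88.2 kHz version of the same signal

# Nothing new appears above the original Nyquist frequency
f2 = np.fft.fftfreq(2*N, 1/(2*fs))
print(np.abs(np.fft.fft(y))[np.abs(f2) > fs/2].max())  # effectively zero
```

The 88.2kHz file is bigger and the samples are denser, but the band above 22.05kHz contains nothing: whatever the anti-aliasing filter removed before the original 44.1kHz capture is gone for good.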
 
More importantly - even if you upsample from 44.1kHz to any hi res format you like - the upsampling process cannot replace the frequencies that have already been filtered out to make it fit into 44.1kHz.
Ah, I see what you're saying. However, the same can be said for video; I guess the difference is that with video we have more knowledge of what the missing pixels might have been, and what recreations of those pixels will look best to our eyes. Actual upsampling algorithms for audio, though, are probably very simple and don't do any of the advanced stuff that video does. But I could imagine something that intelligently upsamples say 20kHz audio to 40kHz in a way that sounds nicer than a more primitive upsampling.
 
However the same can be said for video,

Not so - as I pointed out above. With video we don't look at an analogue video signal. We look at individual pixels. Each pixel is effectively an individual sample point. There is no analogue value in between the sample points (pixels) that is being reconstructed. When we upscale video (say 2x) we effectively create new pixels (new sample points). These sample points are given a value in an analogous way to the reconstruction of the analogue sound waveform in between the digital audio sample points - but it is only at one point - not at an infinity of points between samples as is done for audio.

There are some analogies that can be drawn between video and audio but they are far from being the same thing. One very significant difference is that with video samples are spread over a 2d space, not a 1d time.
 
that intelligently upsamples say 20kHz audio to 40kHz in a way that sounds nicer than a more primitive upsampling.

You still seem to be missing the point that the reconstruction filter in a DAC perfectly reproduces the waveform between the sample points. It doesn't guess, or approximate, it knows mathematically what it is. It is far from primitive. There is nothing less primitive than perfect reconstruction.

I see you are relatively new here, so you might not have seen one of our favourite primers on how digital audio works. You can clearly see (at 5:30) how a 20kHz sine wave is perfectly reconstructed from a 44.1kHz sample rate - with pure analog test gear displaying it.

 
My usual recommended test:


I absolutely cannot tell the difference between FLAC and Spotify "free".

If songs had more super-high frequency content near 20kHz and if my ears were a bit younger (last time I checked I topped out at 18kHz), then maybe...
18kHz is actually outstanding. Luckily for many of us, there's little content above 12kHz.
 
I would because of the bass - the Restorer adds a lot of bass. If the song has almost no bass, it would be tougher. Does anyone know how the restorer affects 320kbps and FLAC files?

I find it fascinating that an upscaling which is the opposite of compression is used for Audio but upscaling has been used to great effect for TV and gaming and, in fact, is mandatory for almost all viewing including 4k sources.
Wait

Are you comparing VIDEO upscaling to AUDIO upscaling...? Please elaborate... sure hope you are not going that route..
 