
Difference between MP3 and FLAC and crazy EQ changes

You still seem to be missing the point that the reconstruction filter in a DAC perfectly reproduces the waveform between the two sample points. It doesn't guess or approximate; it knows mathematically what it is. It is far from primitive. There is nothing less primitive than perfect reconstruction.
I know what a reconstruction filter does (and no DAC has a perfect one... just very good ones).

If you downsample an audio file, then upsample it back, you will (probably) not get the original file back. A more intelligent process, however, might be able to get closer.
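A quick numerical sketch of this point (assuming NumPy; toy signal lengths and an idealized FFT-based resampler, not any particular codec): content below the lower Nyquist limit survives the round trip essentially perfectly, while content above it is gone for good.

```python
import numpy as np

def fft_resample(x, n_out):
    """Resample a periodic signal by truncating or zero-padding its
    spectrum (conceptually what an ideal band-limited resampler does)."""
    X = np.fft.rfft(x)
    Y = np.zeros(n_out // 2 + 1, dtype=complex)
    m = min(len(X), len(Y))
    Y[:m] = X[:m]
    return np.fft.irfft(Y, n_out) * (n_out / len(x))

n = 1024
t = np.arange(n)
low = np.sin(2 * np.pi * 10 * t / n)    # well below the lower Nyquist
high = np.sin(2 * np.pi * 200 * t / n)  # above the lower Nyquist (128)

# Down to 256 samples and back up to 1024:
low_rt = fft_resample(fft_resample(low, 256), n)
high_rt = fft_resample(fft_resample(high, 256), n)
# low_rt matches the original; high_rt is (numerically) silence.
```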

This is the same as video/pictures: downscaling an image and then upscaling it may get you something closer to or further from the original image.

Anyway, I think I've lost track of what we are arguing about. It certainly wasn't anything important (I was not claiming that upsampling >40kHz was actually a useful thing to do)

(I wonder if anyone tries to get a quality improvement by upscaling 16-bit audio to 32-bit?)
 
Wait

Are you comparing VIDEO upscaling to AUDIO upscaling...? Please elaborate... sure hope you are not going that route..
They were just pointing out the irony that video upscaling is useful, but audio upsampling is not.

It makes sense though: humans have evolved very complex eyes and vision-processing brain regions, compared to our ears and auditory processing.
 
Wait

Are you comparing VIDEO upscaling to AUDIO upscaling...? Please elaborate... sure hope you are not going that route..

Of course, they are not the same but clearly Marantz and Denon are doing that. I would say they went that route and our hopes were not taken into consideration.

Lossy formats went downscaling, right? It seems to me that these are clear cases of downscaling and upscaling in audio.
 
We don't know what "restore" is doing. Of course, FLAC is lossless so there is nothing to restore. You can't REALLY undo the "damage" done by MP3, but the data loss is often inaudible anyway. The compression algorithms try to throw away details you can't hear, along with some other "tricks".

Sometimes people look at the spectrum and point to the loss of high frequencies to show how MP3 damages the sound. But if compression artifacts are heard with high-bitrate lossy audio, it's usually something called "pre-echo" rather than the loss of highs. Even if you can hear to 20 kHz, the highest frequencies tend to be drowned out by not-quite-as-high frequencies, so we usually can't tell if they are filtered out. And I think the loss of the highest highs is more a characteristic of MP3 than of the other lossy formats.

With all of that said, an exciter effect can be used to generate high frequency harmonics. You don't get the original lost high frequency information back, but you get something added and these effects DO "enhance" the sound (or alter or damage it ;) ).
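The exciter idea can be shown in a few lines (a toy sketch assuming NumPy; a `tanh` waveshaper stands in for a real exciter plugin): feed a nonlinearity a pure tone and harmonics appear that were never in the input.

```python
import numpy as np

# A toy "exciter": a nonlinear waveshaper generates harmonics that were
# never in the input signal.
n = 4096
k = 40                                # fundamental at FFT bin 40
x = np.sin(2 * np.pi * k * np.arange(n) / n)
y = np.tanh(3.0 * x)                  # soft clipping adds odd harmonics

X = np.abs(np.fft.rfft(x)) * 2 / n    # single-sided amplitude spectra
Y = np.abs(np.fft.rfft(y)) * 2 / n
# X has energy only at bin k; Y also has energy at bins 3k, 5k, ...
```

Real exciters band-limit the input and mix the generated harmonics back in at low level, but the principle is the same: the new highs are synthesized, not recovered.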

Lossy formats went downscaling, right?
It's not that simple. All of the popular lossy formats support a sample rate up to 48kHz, and some can go higher. But the individual samples aren't stored, so they don't actually have a bit depth; most have MORE dynamic range capability than 16-bit audio.
 
But you're not reconstructing the original analog waveform, you're only reconstructing the filtered one.
If there's no signal above the cutoff frequency (like most music) then you are reconstructing the original analog signal. And what does it matter if you aren't reconstructing inaudible signals? It's like making a TV that displays UV: bad engineering.
 
If there's no signal above the cutoff frequency (like most music) then you are reconstructing the original analog signal.
(well if it was recorded with a microphone, there's probably some random ultrasonic noise in the room).
And what does it matter if you aren't reconstructing inaudible signals? It's like making a TV that displays UV: bad engineering.
Oh, I agree completely! The argument was just about whether video and audio upscaling are conceptually similar (as they recreate lost information). Of course the difference is that we haven't gotten video to be as good as our eyes yet.
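The "reconstructed between the samples" claim can actually be checked numerically with Whittaker-Shannon interpolation (a NumPy sketch with toy values; a finite record, so the match is only limited by truncation error at the edges):

```python
import numpy as np

# Whittaker-Shannon interpolation: for a band-limited signal sampled
# above twice its highest frequency, the samples determine the waveform
# *between* the sample points.
f = 0.05                       # cycles per sample, well below Nyquist (0.5)
n = np.arange(2000)
samples = np.sin(2 * np.pi * f * n)

def sinc_interp(samples, t):
    """x(t) = sum_n x[n] * sinc(t - n), with unit sample period."""
    return np.sum(samples * np.sinc(t - np.arange(len(samples))))

t = 1000.5                     # exactly halfway between two samples
reconstructed = sinc_interp(samples, t)
actual = np.sin(2 * np.pi * f * t)
# reconstructed agrees with the true waveform value to within the
# truncation error of the finite sum.
```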
 
I have been using Spotify for a very long time, and I was forbidden from switching to higher-res options by my family. :)

As everyone knows, Spotify's highest quality was Ogg Vorbis at 320 kbps, and they say the differences are inaudible. Right before Christmas, I was listening to music and noticed that I was more engaged, and unexpectedly so. After investigating my system, I discovered Spotify had offered FLAC all of a sudden. I checked online and it was true, not a system glitch.

Since FLAC came out, I'm listening to a lot more music and particularly my favorites and also new music.

The craziest part is that I've had to make some EQ changes that I cannot understand.

The 1st set of changes was:
Treble -3 dB (it was +3 dB)
Sub +5 dB

These came within a month of processing the music, I guess.

The sub is crossed over at 40 Hz and the L/R speakers play full range.

The 2nd set is:
Bass +2 dB
Sub -1 dB (-4 dB overall)
Treble +1 dB

These came one month later.

I also play louder (+5-10 dB). The music is not as emotional or engaging (my old system turned you into a musician with every song), but it is so clear I can hear the hair on the performers move - kidding, but it is so clear it's not even funny.

I don't understand what is happening, because all that changed was the source quality, yet the EQ changes I made are monumental. Am I wrong to make them? Going back, it sounds dull, flat, and one-dimensional. Why didn't I feel compelled to make the same EQ changes when listening to MP3, and why am I listening louder and enjoying music more?

Is it the Denon 4800H? Does it have a different FLAC circuit vs. MP3?

Has anyone run into this with source quality changes?
Is volume normalization and playback level off?
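One quick way to rule out a level mismatch between sources (a frequent culprit when one source suddenly sounds "clearer") is to compare average levels directly. A minimal sketch, assuming NumPy, with random arrays standing in for the two decoded versions of the same track (hypothetical names, not real Spotify data):

```python
import numpy as np

def rms_db(x):
    """Average (RMS) level in dB relative to full scale (1.0)."""
    return 20 * np.log10(np.sqrt(np.mean(np.square(x))))

# Hypothetical stand-ins for the same track decoded from two sources.
rng = np.random.default_rng(0)
ogg_version = 0.1 * rng.standard_normal(48000)
flac_version = 1.122 * ogg_version   # ~1 dB hotter (10**(1/20) = 1.122)

level_diff = rms_db(flac_version) - rms_db(ogg_version)
# Even a ~1 dB mismatch like this is enough to bias a sighted comparison
# toward the louder source.
```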
 
Oh, I agree completely! The argument was just about whether video and audio upscaling are conceptually similar (as they recreate lost information). Of course the difference is that we haven't gotten video to be as good as our eyes yet.
Very true, and I don't want to dispute or nitpick the point that fundamentally we can fairly trivially capture and reproduce the full range of human hearing but cannot say the same for our eyeballs.

But on the value of upscaling and oversampling, there are some interesting parallels to video. DACs commonly oversample and so can video, and the benefits are similar in both cases. This is adjacent to whether all the 'original' information is there.
There are also some video playback methods to reduce visible artifacts of compression and colour banding - conceptually similar to this D&M "Restorer" thing.

In all cases, though, starting with the most information possible is ideal. From there, whether upscaling and oversampling can improve perception (or not) is going to massively depend. At least for video you can argue even the original source is "lossy", so clever modifications to it can still preserve artistic integrity. It's a weaker case for audio, but what you do with your system is up to you.
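For reference, here is what DAC-style oversampling amounts to (a toy NumPy sketch with made-up lengths and filter order, not any real DAC's filter): insert zeros, then low-pass at the original Nyquist frequency. The spectral images get pushed far from the audio band, so the analog reconstruction filter can be gentle.

```python
import numpy as np

def oversample_4x(x, taps=191):
    """Toy 4x oversampler: zero-stuff, then FIR low-pass at the
    original Nyquist frequency to remove the spectral images."""
    up = np.zeros(4 * len(x))
    up[::4] = x
    m = np.arange(taps) - (taps - 1) / 2
    # Windowed-sinc low-pass, cutoff = 1/8 of the new sample rate,
    # with a gain of 4 to make up for the inserted zeros.
    h = np.sinc(0.25 * m) * np.hamming(taps)
    return np.convolve(up, h, mode="same")

n = 2048
x = np.sin(2 * np.pi * 100 * np.arange(n) / n)
y = oversample_4x(x)
Y = np.abs(np.fft.rfft(y))
# The fundamental lands at bin 100 of the 4x-rate spectrum; the first
# image (bin 2048 - 100 = 1948) is strongly attenuated by the filter.
```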
 
Anyway, I think I've lost track of what we are arguing about.

You contradicted my statement:
This is different from audio where the original signal is perfectly reconstructed - including in between the samples.

By showing a situation where there was an insufficiently high sample rate to capture the signal in the first place.

But yes - let's leave it there.
 
Very few studio mics pick up much above 20 kHz.
Aah right. The ASR YouTube channel has some videos showing high-resolution recordings with ultrasonic noise, so I wonder how it got there?
 
Likely quantization noise.
As well as wideband electrical noise added to the signal by every device between the diaphragm and the ADC (including the mic itself and the ADC).
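The quantization-noise part is easy to demonstrate (a NumPy sketch with an idealized uniform quantizer and a toy test tone): the noise floor tracks the rule of thumb SNR ≈ 6.02 × bits + 1.76 dB for a full-scale sine, i.e. roughly 50 dB at 8 bits and 98 dB at 16 bits.

```python
import numpy as np

def quantize(x, bits):
    """Round to a uniform grid with 2**bits steps over [-1, 1)."""
    step = 2.0 / (2 ** bits)
    return np.round(x / step) * step

n = np.arange(1 << 16)
x = 0.999 * np.sin(2 * np.pi * 997 / 65536 * n)   # near-full-scale tone

snr = {}
for bits in (8, 16):
    err = quantize(x, bits) - x
    snr[bits] = 10 * np.log10(np.mean(x ** 2) / np.mean(err ** 2))
# snr[8] comes out near 50 dB and snr[16] near 98 dB, matching
# SNR ≈ 6.02 * bits + 1.76 dB.
```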
 
It's been a long time, and perhaps my memory is not 100%. Certainly the tech world has changed since then, but when the Moving Picture Experts Group (MPEG) debuted their standard methods of bit-rate reduction (Layer 2 and Layer 3; I don't remember Layer 1), I participated in a number of workshops to teach radio engineers how to hear problems with this new technology.

Problems that were endemic to analog audio, such as hum, buzz, RFI, distortion (as we are used to hearing it), and frequency-response anomalies were basically absent, replaced with an entirely new set of problems that weren't necessarily obvious unless you knew to listen for them. The point being that it's unlikely that going from lossy to lossless alone was responsible for the frequency-response changes reported by the OP. Even aggressive bit-rate reduction resulted in more or less flat frequency response (while sounding terrible despite this).

Now, it's entirely possible that Spotify is using differently mastered sources for their flac files or are processing them differently.

BTW, my recollection is that Layer 2 sounded better than Layer 3 until you reduced the bit rate to under 100k. MPEG Layer 2 at 128k over ISDN lines became the broadcast standard for a while, replacing the analog connections previously provided by the phone company. We may scoff at 128k, but it was a massive improvement over the analog service provided by the phone company at the time. On the consumer side, aggressive reduction in bandwidth made Layer 3 the market's choice, hence the .mp3 format.
 
It is not possible to restore a lossy file back to lossless. That's by definition. Lossy files just do not contain the data necessary to accurately reconstruct what the original, lossless file used to be like.

Same story with images and videos, at least up until recently. AI upscaling can actually add detail to a lossy file, however it does so by "guessing" based on the training of the AI algorithm. AI still doesn't have any knowledge of what the lossless counterpart to a lossy file used to be like.

I guess something similar could be done with audio. The closest thing I've seen (or heard) is AI noise reduction, which is pretty effective at removing background noise. But again, AI can only "hallucinate" new details, not restore any data that was lost.

That being said, loss of detail in audio is not the same as loss of detail in video. A lossy audio file will still produce a continuous waveform when reconstructed back to analog. There will be no "pixelation" or "gaps" in the output of a DAC like there might be in video. "Loss", with regard to audio, consists of missing frequencies, added noise and distortion, ultrasonic images, etc.
 
I guess something similar could be done with audio. The closest thing I've seen (or heard) is AI noise reduction, which is pretty effective at removing background noise.
If the noise is fairly constant and there is some fragment with only the noise present, then that has been fairly easy to do for a long time, e.g. with SoX's noiseprof and noisered effects.
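For anyone curious how that style of noise reduction works under the hood, here is a crude NumPy sketch of spectral gating, conceptually similar to what noiseprof/noisered do (this is a toy: non-overlapping rectangular frames and hard gating, with made-up signal parameters, nothing like SoX's actual implementation):

```python
import numpy as np

def spectral_gate(x, noise_sample, frame=512):
    """Learn a per-bin noise magnitude profile from a noise-only
    fragment, then zero out bins that don't rise clearly above it."""
    chunks = [noise_sample[i:i + frame]
              for i in range(0, len(noise_sample) - frame + 1, frame)]
    profile = np.mean([np.abs(np.fft.rfft(c)) for c in chunks], axis=0)

    out = np.zeros_like(x)
    for i in range(0, len(x) - frame + 1, frame):
        X = np.fft.rfft(x[i:i + frame])
        keep = np.abs(X) > 2.0 * profile   # keep bins well above the noise
        out[i:i + frame] = np.fft.irfft(np.where(keep, X, 0), frame)
    return out

# Demo: a test tone buried in white noise, plus a separate noise-only
# capture to learn the profile from.
rng = np.random.default_rng(1)
clean = np.sin(2 * np.pi * 32 / 512 * np.arange(8192))
noisy = clean + 0.05 * rng.standard_normal(8192)
noise_only = 0.05 * rng.standard_normal(8192)
denoised = spectral_gate(noisy, noise_only)
```

Because it can only attenuate or keep what is in each bin, it reduces steady background noise well but, as noted above, it cannot recover detail the noise masked.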

In the attachment there is a recording of my headphones (HD650) through my headset microphone (Plantronics something) and a version with noise reduction applied.

Oh, and I'm sorry in advance ;)
 
