What a bad reply. Even idiots know that there is scientific research behind lossy compression codecs. Are they useful? Yes.
Are they perfect to the point of being able to replace a lossless encoding? In my opinion, and not only mine (there is evidence of this; thanks @prerich for your good post), no.
Please pay attention to your writing; it is offensive.
Nice goalpost moving. As if anyone was talking about, or even defining, 'perfection'. What's required is proof that one really hears a difference that is due to lossy data compression.
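And 'proof' here isn't mystical; it's just statistics. As a rough sketch (in Python; the function name is my own, not from any testing tool), this is the arithmetic behind an ABX score: the probability that someone doing that well was merely guessing.

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided p-value: chance of scoring at least `correct` out of
    `trials` in an ABX test by pure guessing (each trial is a coin flip)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# e.g. 13 correct out of 16 trials:
print(abx_p_value(13, 16))  # ~0.011 -- very unlikely to be pure guessing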
Do 'even idiots' know that in normal listening there are codecs and settings where most people simply *cannot* identify such a difference in a fair test, that is, a level-matched blind test? You tell me.
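Since 'level matched' trips people up: louder almost always sounds 'better', so before any comparison both versions have to be brought to the same playback level. A minimal sketch of one common way to do that, assuming decoded float samples (the function names and the mono assumption are mine):

```python
import numpy as np

def rms(x: np.ndarray) -> float:
    """Root-mean-square level of a block of samples."""
    return float(np.sqrt(np.mean(np.square(x))))

def match_level(candidate: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Scale `candidate` so its RMS level equals that of `reference`,
    removing loudness as a cue before any blind comparison."""
    return candidate * (rms(reference) / rms(candidate))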
Which isn't to say no audible difference could ever be detected. For example, if you take a signal that is difficult to encode even for the best codecs at their best settings, subject it to forensic A/B listening on headphones, and focus on just a small segment where a sharp transient occurs, playing it over and over, you may start to be able to tell a difference verifiable in an ABX test. Through such means you may train yourself to identify that difference under those conditions.
But such tedious forensic listening to a tiny bit of audio is not normal listening. It doesn't mean that afterwards you could walk into a room with a surround system, plop yourself down, and within seconds or minutes casually identify audio purely by its *lossiness* or lack thereof. And it certainly doesn't mean someone with zero training in lossy artifacts could do it. Those who claim they can are indulging in clownish audiophile braggadocio.
So NO, it would not be enough to just say 'if it's lossy, it's going to sound worse'. That would be a profoundly false claim, easily disproved by a fair listening test.
prerich's posts did not prove your point. He did not isolate the variables that would be required to prove your point about *lossiness*.