
Spotify quality vs. "hi-fi" lossless options: I can't tell a difference.

Brian Hall

Addicted to Fun and Learning
Forum Donor
Joined
Nov 25, 2023
Messages
548
Likes
1,008
Location
Southeast Oklahoma
Here's a good album to test with:

[Attached image: album cover]
The same album is on Spotify, Qobuz, Tidal and Amazon.

Web links:

https://music.amazon.com/albums/B000V8E5SE?ref=dm_sh_WflChC98b13P2clLld3uXBI2U

I don't hear any differences (not blind testing) except the Tidal version seemed slightly louder.

Great album anyway if you like Paganini's music.
 

Count Arthur

Major Contributor
Joined
Jan 10, 2020
Messages
2,249
Likes
5,037
Bandwidth and storage are so cheap these days that I fail to understand why we're trying to save a few kbit/s here or there. OTOH I think 24/192 is wasteful.
I was watching a video by a mixing and mastering engineer the other day, and his opinion was that anything beyond 16-bit/44.1 kHz is unnecessary for playback, but for mixing and mastering, greater bit depths and higher sample rates give you more headroom for manipulation, which is why many studios work with 24-bit/96 kHz files.

I'm not a sound engineer, but perhaps someone who is could elaborate on that.
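
For a rough sense of the numbers, here is a back-of-envelope sketch (using the standard 6.02*N + 1.76 dB dynamic-range formula for an ideal N-bit converter; these are theoretical limits, not claims about any particular recording):

```python
# Theoretical dynamic range of ideal N-bit linear PCM
# (full-scale sine vs. quantization noise): ~6.02*N + 1.76 dB.
for bits in (16, 24, 32):
    print(f"{bits}-bit: ~{6.02 * bits + 1.76:.0f} dB")
# 16-bit: ~98 dB
# 24-bit: ~146 dB
# 32-bit: ~194 dB
```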
 

krabapple

Major Contributor
Forum Donor
Joined
Apr 15, 2016
Messages
3,197
Likes
3,767
I hear a difference, and did a blind test (someone else in another room was playing the music; we just saw the speakers and the amp), but I can enjoy higher-bitrate lossy music too. Lossy encoding has come a long way, and on a lot of commonly used systems, you and I won't hear the difference. I mainly hear the difference at higher volume, and with music that is not overly compressed. With modern pop, I sometimes hear it when the music has very deep bass, but mostly not.

But I still prefer lossless, so I collect my music in lossless formats. I don't stream much, and always on free streams (YouTube) that are lossy. But when I listen to a quality system at higher volume, the difference is obvious to me. If it isn't for you, that's fine; as far as I'm concerned, you can use lossy. I also still enjoy vinyl (which is lossy too, but in analog form), so I can understand that it doesn't matter for you, but it does for me in critical listening situations or at high volume (DJing and the like).

Your anecdote doesn't really comport with how lossy artifacts are actually heard.

Nor did you supply any details of the lossy codec. And of course, the test wasn't really well controlled.

IOW, I wouldn't be so sure that what you heard was 'lossiness'.
 

krabapple

Major Contributor
Forum Donor
Joined
Apr 15, 2016
Messages
3,197
Likes
3,767
I was watching a video by a mixing and mastering engineer the other day, and his opinion was that anything beyond 16-bit/44.1 kHz is unnecessary for playback, but for mixing and mastering, greater bit depths and higher sample rates give you more headroom for manipulation, which is why many studios work with 24-bit/96 kHz files.

I'm not a sound engineer, but perhaps someone who is could elaborate on that.
absolutely true for bit depths, more questionable for sample rates
 
Last edited:

Keith_W

Major Contributor
Joined
Jun 26, 2016
Messages
2,660
Likes
6,064
Location
Melbourne, Australia
I was watching a video by a mixing and mastering engineer the other day, and his opinion was that anything beyond 16-bit/44.1 kHz is unnecessary for playback, but for mixing and mastering, greater bit depths and higher sample rates give you more headroom for manipulation, which is why many studios work with 24-bit/96 kHz files.

I'm not a sound engineer, but perhaps someone who is could elaborate on that.

Do you have a link to the video?

absolutely true for bit depths, more questionable for sample rates

I would imagine you are saying this because this gives room to truncate bits when adjusting volume for mixing? Just a question - if you have 16/44 source and you are applying DSP to it, is there any benefit in upsampling 16/44 to 24/44 or higher?
 

dasdoing

Major Contributor
Joined
May 20, 2020
Messages
4,301
Likes
2,770
Location
Salvador-Bahia-Brasil
Even the headroom argument is questionable. For starters, the extra bits are added at the bottom, not the top, so "headroom" is the wrong term. Second, the noise in the recordings sits above the 16-bit bottom; there is no reason to add space below the noise floor.
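
A minimal sketch of that in numbers (assuming ideal linear PCM): full scale is identical at both bit depths, and the extra bits only lower the smallest representable step:

```python
import math

# Full scale (0 dBFS) is the same in 16- and 24-bit PCM; the extra
# bits only move the least-significant-bit level downward.
for bits in (16, 24):
    lsb = 2.0 ** -(bits - 1)   # smallest step, relative to full scale
    print(f"{bits}-bit LSB: {20 * math.log10(lsb):.1f} dBFS")
# 16-bit LSB: -90.3 dBFS
# 24-bit LSB: -138.5 dBFS
```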
 

dasdoing

Major Contributor
Joined
May 20, 2020
Messages
4,301
Likes
2,770
Location
Salvador-Bahia-Brasil
A higher sample rate is useful when non-linear digital processing is applied. But oversampling is available most of the time, which makes this unnecessary.
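
As a minimal sketch of that idea (illustrative parameters, with np.tanh standing in for an arbitrary non-linear effect):

```python
import numpy as np
from scipy.signal import resample_poly

fs = 44100
t = np.arange(4096) / fs
x = 0.9 * np.sin(2 * np.pi * 15000 * t)        # 15 kHz tone

# Naive: distorting at the base rate creates harmonics above Nyquist
# (22.05 kHz) that alias back into the audible band.
naive = np.tanh(3 * x)

# Oversampled: distort at 4x the rate, where the harmonics still fit,
# then let resample_poly's anti-alias filter remove them on the way down.
x4 = resample_poly(x, 4, 1)
clean = resample_poly(np.tanh(3 * x4), 1, 4)
```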
 

danadam

Addicted to Fun and Learning
Joined
Jan 20, 2017
Messages
994
Likes
1,545
Bandwidth and storage are so cheap these days that I fail to understand why we're trying to save a few kbit/s here or there. OTOH I think 24/192 is wasteful.
These explanations sound reasonable to me (reddit comments):
[...]
I suspect all that would wind up costing a lot more than would be covered by the increase in subscription revenue from current Spotify users upgrading their plans to lossless and people switching back to Spotify from Tidal and Qobuz.
 

danadam

Addicted to Fun and Learning
Joined
Jan 20, 2017
Messages
994
Likes
1,545
I would imagine you are saying this because this gives room to truncate bits when adjusting volume for mixing? Just a question - if you have 16/44 source and you are applying DSP to it, is there any benefit in upsampling 16/44 to 24/44 or higher?
I guess that depends on what kind of DSP is involved, how many times it is applied, and what the data format is between stages.

First of all, I can't say I'm very familiar with how DSP plugins work, so I may be way off base here. I'd expect that they accept anything on the input, convert it to float/double, do their processing, and output float/double. In that case it doesn't really matter whether you upsample 16->24 or not.

But let's say they output in the same format as they receive on the input, or let's say you decide to convert back to the input format after each DSP stage. In that case, upsampling 16->24 probably doesn't matter much if the number of DSP passes is 1 or 2, and may well matter if it is 100.

Here is the result of applying a random ±0.5 dB, Q=1 equalizer at a random frequency using SoX: first applied 2 times, then applied 100 times. The in-between format is 16-bit with dither, 16-bit without dither, or 24-bit without dither.

[Attached plots: results after 2 passes and after 100 passes]
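
For anyone who wants to reproduce this without SoX, here is a rough Python approximation of the same experiment (an RBJ "Audio EQ Cookbook" peaking biquad standing in for SoX's equalizer effect; every parameter here is illustrative, not the exact setup above):

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)

def peaking_biquad(fs, f0, q, gain_db):
    """RBJ 'Audio EQ Cookbook' peaking-EQ coefficients."""
    a = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a, -2 * np.cos(w0), 1 - alpha * a])
    den = np.array([1 + alpha / a, -2 * np.cos(w0), 1 - alpha / a])
    return b / den[0], den / den[0]

def quantize(x, bits, dither):
    """Round to an integer grid of the given depth, optionally with TPDF dither."""
    scale = 2 ** (bits - 1) - 1
    y = x * scale
    if dither:
        y += rng.uniform(-0.5, 0.5, y.size) + rng.uniform(-0.5, 0.5, y.size)
    return np.round(y) / scale

fs = 44100
x = 0.5 * np.sin(2 * np.pi * 1000 * np.arange(fs) / fs)   # 1 s of 1 kHz tone

for passes in (2, 100):
    # Same random EQ chain for every intermediate format, for a fair comparison.
    chain = [peaking_biquad(fs, rng.uniform(100, 10000), 1.0, rng.uniform(-0.5, 0.5))
             for _ in range(passes)]
    for bits, dither in ((16, True), (16, False), (24, False)):
        ref = x.copy()                  # float pipeline, no re-quantization
        y = quantize(x, bits, dither)
        for b, den in chain:
            ref = lfilter(b, den, ref)
            y = quantize(lfilter(b, den, y), bits, dither)
        err = y - ref
        rms_db = 20 * np.log10(np.sqrt(np.mean(err ** 2)))
        print(f"{passes:3d} passes, {bits}-bit, dither={dither}: error ~{rms_db:.0f} dBFS")
```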
 

Ambient384

Member
Joined
Mar 20, 2022
Messages
66
Likes
26
You mean MDCT codecs, since subband ones like Musepack/MP2 lack that issue. Folks there were laughing that MPC at ~170 kbps was transparent on an old killer sample that AAC/MP3/Vorbis struggled on. It seems like on heavy electronic samples Musepack behaves more like a faux-lossless codec, since it leaves material it can't compress at 450 ~ 1411 kbps.
Re-quoting this because I can't tell LAME MP3 at V2 from lossless 99% of the time, and the 1% is encoded to V0 instead. It's insane what the LAME encoder has gotten out of MP3, quality-wise.
 

welwynnick

Active Member
Forum Donor
Joined
Dec 26, 2023
Messages
244
Likes
199
I was watching a video by a mixing and mastering engineer the other day, and his opinion was that anything beyond 16-bit/44.1 kHz is unnecessary for playback, but for mixing and mastering, greater bit depths and higher sample rates give you more headroom for manipulation, which is why many studios work with 24-bit/96 kHz files.
I'm not a sound engineer, but perhaps someone who is could elaborate on that.
I used to work with a very experienced engineer who worked on a sound mixing desk, and he said it was difficult explaining the following to some people.
Just because an audio sample has been captured and converted to digital does not make it "safe" and immune to degradation.
Whenever a process is applied to digital audio, the accuracy of the data is degraded by up to half a bit with each process. The more processes, the more degradation.
A process is anything that can cause the data values to change - so pretty much anything except an integral-sample delay.
The reason is that when a sample is changed, half the time there will be a rounding error and half the time there won't.
If the processing pipeline runs at the same bit depth as the capture resolution, say 16 bits, those rounding errors will build up, half a bit at a time.
The answer is to perform all the storage and processing at a greater bit depth than the required fidelity, so the degradation only affects the noise, not the wanted signal.
This will also apply to any processing that's performed in the replay equipment.
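
As a quick numerical check of the "half a bit" figure (a sketch assuming rounding errors land uniformly and independently between steps):

```python
import numpy as np
rng = np.random.default_rng(1)

# A value landing at a random point between quantization steps is off by
# at most half a step when rounded; the RMS error is 1/sqrt(12) ~ 0.29 LSB.
frac = rng.uniform(0, 1, 1_000_000)
err = np.round(frac) - frac
print(np.abs(err).max(), err.std())        # ~0.5, ~0.289

# Independent processing steps add their rounding noise in power, so the
# accumulated error grows roughly as sqrt(N) * 0.29 LSB.
for n in (1, 10, 100):
    print(f"{n:3d} steps: ~{np.sqrt(n / 12):.2f} LSB RMS")
```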
 
Last edited:

Philbo King

Addicted to Fun and Learning
Joined
May 30, 2022
Messages
669
Likes
877
I used to work with a very experienced engineer who worked on a sound mixing desk, and he said it was difficult explaining the following to some people.
Just because an audio sample has been captured and converted to digital does not make it "safe" and immune to degradation.
Whenever a process is applied to digital audio, the accuracy of the data is degraded by up to half a bit with each process. The more processes, the more degradation.
A process is anything that can cause the data values to change - so pretty much anything except an integral-sample delay.
The reason is that when a sample is changed, half the time there will be a rounding error and half the time there won't.
If the processing pipeline runs at the same bit depth as the capture resolution, say 16 bits, those rounding errors will build up, half a bit at a time.
The answer is to perform all the storage and processing at a greater bit depth than the required fidelity, so the degradation only affects the noise, not the wanted signal.
This will also apply to any processing that's performed in the replay equipment.
Yep. This is why DAW software uses 32- or 64-bit floating point. Even multiplying a 16-bit value by 1.01 (+0.0864 dB) requires 32-bit FP or 24-bit integer to hold the result.
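
A tiny illustration of that last point (the sample value is hypothetical):

```python
x = 20001        # a legal 16-bit sample value
print(x * 1.01)  # ~20201.01 -> the exact result no longer fits a 16-bit grid
print(x * 101)   # 2020101   -> as fixed point (x*101/100), even the intermediate
                 #              product already needs ~21 bits plus sign
```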
 
Last edited:

Axo1989

Major Contributor
Joined
Jan 9, 2022
Messages
2,902
Likes
2,954
Location
Sydney
I can't find the video I originally watched, but here's one that covers the topic:


That was great. I enjoy this guy's videos; the explanations are both detailed and straightforward, and his tone is free of annoying YouTube-isms and stylistic noise/overload. Much pointless discussion in forums (including here) could be obviated by watching a few of them.
 

Ambient384

Member
Joined
Mar 20, 2022
Messages
66
Likes
26
Re-quoting this because I can't tell LAME MP3 at V2 from lossless 99% of the time, and the 1% is encoded to V0 instead. It's insane what the LAME encoder has gotten out of MP3, quality-wise.
I also forgot Helix MP3 at V150 (256 kbps VBR), which is even better. lol
 