
Could you help me set up a workflow to rip CDs into FLAC?

Be aware that the HiRes versions of old albums are often from the worst remaster available. Lots of them sound horrible without any dynamics. Just check out the "DR database" and you'll see that the HiRes versions are usually among the worst. I usually look for the first release when I get (used) CDs. Not all remasters are bad, but I'm alarmed when I read "remaster".
I have found some of them; two years ago I was astonished to hear an opera in 24/192 that sounded worse than my CD.
 
That's true in the OP's case: he is asking for advice on ripping his CDs to FLAC to use them on a DAP.

In which cases do you need 24/96 FLAC for playback (whether professional or not)?

Even for the most exacting and aggressive multichannel equalization, you can upsample your master in software. I think there are DACs that systematically convert every file to 24/192; if I'm not wrong, even the WiiM Ultra does that, to preserve dynamic range and headroom when applying EQ or changing the digital volume.

There is no need to rip at any resolution other than CD; it is nonsense.
Ripping a CD into a 24/96 FLAC file? This procedure makes no sense. An actual hi-res file comes from the original master; it is created specifically by the label and sold on music portals.
 
Be aware that the HiRes versions of old albums are often from the worst remaster available. Lots of them sound horrible without any dynamics. Just check out the "DR database" and you'll see that the HiRes versions are usually among the worst. I usually look for the first release when I get (used) CDs. Not all remasters are bad, but I'm alarmed when I read "remaster".
Most of my hi-res FLACs are from new releases, so no problem. Yes, I know the issue with remastering of old albums very well; fortunately not every release is messed up by this procedure: a lot of my old-school hi-res files sound as they should.

Here's one link to buy hi-res FLAC and DSD: https://www.hdtracks.com/
 
Thanks for the explanation; I could have guessed it myself, because reducing the volume digitally will itself dramatically reduce the dynamic range at 16 bits. I'm in the process of learning audio processing; I'm not very confident in my knowledge yet, but I'm beginning to orient myself :)
It's about 1 bit per 6 dBFS, am I right? So reducing by 24 dB leaves 12 bits to encode signal strength and raises the quantization noise floor by the same 24 dB…?
The quantization noise floor is relatively fixed, similar to the noise floor of an analog system. Analog or digital, when you reduce signal amplitude the SNR is reduced, though distortion also usually falls. There is no "augmentation" of the noise floor as you reduce volume; the smallest step (least-significant bit, lsb) is unchanged in amplitude and remains related to the bit depth (resolution) and full-scale output.

There are several articles discussing data conversion in the list linked in my signature.
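As a minimal numeric check of the point above, one can quantize a synthetic test tone at two levels and compare SNRs (the 997 Hz tone, one-second length, and 48 kHz rate below are arbitrary illustration choices):

```python
import numpy as np

def quantized_snr_db(level, bits=16):
    """SNR of a sine at `level` (fraction of full scale) after uniform quantization."""
    steps = 1 << (bits - 1)
    t = np.arange(48000) / 48000.0
    x = level * np.sin(2 * np.pi * 997 * t)   # 997 Hz test tone, 1 second
    q = np.round(x * steps) / steps           # round each sample to the nearest code
    noise = q - x
    return 10 * np.log10(np.mean(x ** 2) / np.mean(noise ** 2))

# The quantization noise floor stays put; lowering the signal lowers the SNR dB-for-dB.
print(quantized_snr_db(1.0))                # ~98 dB for a full-scale 16-bit sine
print(quantized_snr_db(10 ** (-24 / 20)))   # ~74 dB after a 24 dB digital cut
```

The noise power stays near one LSB's worth in both runs; only the signal above it changes.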
 
I have found some of them; two years ago I was astonished to hear an opera in 24/192 that sounded worse than my CD.
Mixing and mastering dominate the sound you hear, far more than the trade-off between 16/44 and 24/96 bit depth and sampling rate. Remember that records (LP "vinyl") top out around 60 dB and tape around 80 dB, so Red Book CD at 96 dB offers much more dynamic range than either of those media.
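Those dynamic-range figures follow directly from the bit depth; a quick sketch of the arithmetic (20·log10(2^bits)):

```python
import math

def dynamic_range_db(bits):
    # Each bit doubles the number of levels: 20*log10(2**bits) ~ 6.02 dB per bit
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))   # 96.3 dB (Red Book CD)
print(round(dynamic_range_db(24), 1))   # 144.5 dB (24-bit, theoretical)
```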
 
The quantization noise floor is relatively fixed, similar to the noise floor of an analog system. Analog or digital, when you reduce signal amplitude the SNR is reduced, though distortion also usually falls. There is no "augmentation" of the noise floor as you reduce volume; the smallest step (least-significant bit, lsb) is unchanged in amplitude and remains related to the bit depth (resolution) and full-scale output.

There are several articles discussing data conversion in the list linked in my signature.
Thanks, I intended to express the relative rise of the 1-bit noise floor with respect to the 12-bit maximum: if I reduce the volume digitally and the available bit depth is 12 instead of 16, then after conversion this produces a signal whose SNR has been reduced by 24 dB.
I think I expressed it well this time, or perhaps made it even messier :)

But if almost all DACs internally convert to 24 bits, the available bit depth will be 20 and the SNR still 120 dB after analog conversion.

In that case, the claim that 24/96 is needed for “audiophile quality” has no justification in the playback domain, or am I wrong?

I mean, native 24/96 adds no advantage over CD native resolution upsampled.
 
It's about 1 bit per 6 dBFS, am I right?
Correct.

If you play 16-bit audio on a 16-bit DAC and reduce the volume by half, only 8 bits are left
MSB LSB
1111111111111111
0000000011111111

If you do the same on a 24 bit DAC
111111111111111100000000
000000001111111111111111
You have reduced the volume by half and still have the full 16 bits in the register.

In practice, of course, there is a limitation: the noise floor of the playback chain. Modern power amps can have an SNR close to 120 dB, so they are able to resolve 20 bits. Likewise, recording chains produce noise. A true 24-bit recording might contain musical information down to -120 dBFS, but don't be surprised if a lot of them don't reach this level. In my experience -100 dBFS is common.
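The register pictures above can be mirrored in a few lines of Python; here an 8-bit right shift stands in for a roughly 48 dB attenuation (a sketch of the idea, not any particular DAC's behavior):

```python
s16 = 0xFFFF                  # full-scale 16-bit pattern

# 16-bit register: an 8-bit right shift discards the bottom 8 bits for good
print(f"{s16 >> 8:016b}")     # 0000000011111111 -- only 8 bits of signal remain

# 24-bit register: the same 16-bit data sits in the top of the word...
s24 = s16 << 8                # 111111111111111100000000
# ...so the same shift keeps all 16 original bits
print(f"{s24 >> 8:024b}")     # 000000001111111111111111
```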
 
Thanks, I intended to express the relative rise of the 1-bit noise floor with respect to the 12-bit maximum: if I reduce the volume digitally and the available bit depth is 12 instead of 16, then after conversion this produces a signal whose SNR has been reduced by 24 dB.
I think I expressed it well this time, or perhaps made it even messier :)

But if almost all DACs internally convert to 24 bits, the available bit depth will be 20 and the SNR still 120 dB after analog conversion.

In that case, the claim that 24/96 is needed for “audiophile quality” has no justification in the playback domain, or am I wrong?

I mean, native 24/96 adds no advantage over CD native resolution upsampled.
Correct on the SNR reduction. Note you also reduce the SNR if you reduce the volume in an analog system.

Upsampling a 16-bit source will not generally add any information, so while the DAC may provide 24 bits, if the source started as 16 bits, that is what you can obtain. That is still well beyond what we can hear in terms of dynamic range as related to musical sources.

IME the biggest advantage to higher resolution (like 24-bit ADCs and processing) is for live recording or in the studio where recording, mixing, and mastering multiple tracks is easier if you have more dynamic range to play with.
 
Correct.

If you play 16-bit audio on a 16-bit DAC and reduce the volume by half, only 8 bits are left
MSB LSB
1111111111111111
0000000011111111

If you do the same on a 24 bit DAC
111111111111111100000000
000000001111111111111111
You have reduced the volume by half and still have the full 16 bits in the register.

In practice, of course, there is a limitation: the noise floor of the playback chain. Modern power amps can have an SNR close to 120 dB, so they are able to resolve 20 bits. Likewise, recording chains produce noise. A true 24-bit recording might contain musical information down to -120 dBFS, but don't be surprised if a lot of them don't reach this level. In my experience -100 dBFS is common.
Half the number of bits, perhaps, instead of half the volume? One-half the signal level (in volts) would be a factor of two, so only 6 dB or one bit. One-half the perceived volume (loudness) in the midrange is about 10 dB so a little less than two bits. Going from 16 bits (~98 dB SNR) to 8 bits (~50 dB SNR) is one-half the number of dB but a much larger reduction in signal level (and perceived loudness). This stuff gets sooo confusing...

And yes I know you know this, feeling petty this morning, sorry Vincent!
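For reference, the three different "halves" side by side, as plain arithmetic:

```python
import math

# Half the VOLTAGE is a factor of two: 20*log10(0.5) ~ -6 dB, about one bit
print(round(20 * math.log10(0.5), 2))   # -6.02

# Half the perceived LOUDNESS is roughly -10 dB, a little under two bits
print(round(10 / 6.02, 1))              # ~1.7 bits

# Half the BITS (16 -> 8) is ~48 dB less SNR: a factor of ~256 in voltage
print(round(6.02 * 8, 2), 2 ** 8)       # 48.16 256
```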
 
Correct on the SNR reduction. Note you also reduce the SNR if you reduce the volume in an analog system.
Thanks for your explanations! I’ve just read an article about the signal chain and how noise and distortion propagate through the different blocks of the chain.

It showed that while increasing the digital volume increases the SNR, THD increases as well. In the article’s example, the optimum point for maximum SINAD was at 50-60% of full scale. Of course, other blocks modify this optimum by adding their own noise and distortion; I’m just referring to the DAC line out.

One thing I don’t know, though the answer can probably be deduced by logic: can a well-recorded track cause clipping of the DAC if the digital volume is at its maximum?

By logic the answer should be “No, the recording is made in such a way that it never reaches 0 dBFS, and the digital level control only reduces from there,” but I have my doubts.
 
Correct on the SNR reduction. Note you also reduce the SNR if you reduce the volume in an analog system.

Upsampling a 16-bit source will not generally add any information, so while the DAC may provide 24 bits, if the source started as 16 bits, that is what you can obtain. That is still well beyond what we can hear in terms of dynamic range as related to musical sources.

IME the biggest advantage to higher resolution (like 24-bit ADCs and processing) is for live recording or in the studio where recording, mixing, and mastering multiple tracks is easier if you have more dynamic range to play with.
I'm using a handheld 24-bit recorder in a not-so-professional context, making recordings of an elementary school band (playing at assemblies) off a mixing board. I allow plenty of headroom for peaks and apply compression and normalization in post-production. The end results are instructional recordings of individual songs in MP3 format to distribute to musicians. The sound quality coming off the board isn't really all that good to begin with, as everything is going through Shure 58s or microphones of similar quality. So the compressed MP3 files actually sound better than the 24-bit master. But the point is having intelligible recordings of these songs. This is a case where having high resolution (or at least high bit depth) is useful for the initial recording even if the end product is in MP3 format.
 
Thanks for your explanations! I’ve just read an article about the signal chain and how noise and distortion propagate through the different blocks of the chain.

It showed that while increasing the digital volume increases the SNR, THD increases as well. In the article’s example, the optimum point for maximum SINAD was at 50-60% of full scale. Of course, other blocks modify this optimum by adding their own noise and distortion; I’m just referring to the DAC line out.

One thing I don’t know, though the answer can probably be deduced by logic: can a well-recorded track cause clipping of the DAC if the digital volume is at its maximum?

By logic the answer should be “No, the recording is made in such a way that it never reaches 0 dBFS, and the digital level control only reduces from there,” but I have my doubts.
Do not get hung up on "digital". Increasing the signal level will increase the SNR in any system, analog or digital, as the signal is increasingly larger than the noise floor. At the same time, analog circuits' distortion increases as the signal level increases, for a variety of reasons.

Depends on what "well recorded" means and what you mean by "DAC clipping". The DAC's output is mostly limited by the input codes, so no input can exceed full-scale unless the source was clipped (not well-recorded). In that case, the input (bits, numbers) cannot exceed full-scale for the DAC, but the output will be clipped because the input was clipped.

But inter-sample overs, peaks over full scale even when the input codes are within full-scale range, are also possible. I do not address inter-sample overs in my simple articles, so you might want to search; Benchmark has a good article on them, I believe, but I have not looked. If you are at full scale and have two samples near the peak of a sine wave, then the DAC's reconstructed analog output can actually exceed the full-scale range, even without clipping. The analog circuits must have headroom to handle that, or the digital processing must ensure it never happens. That is far from a simple discussion.
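A small numpy sketch of the classic worst case: an fs/4 tone phased so every sample lands at about 70% of the true peak. Normalizing those samples to full scale pushes the reconstructed waveform roughly 3 dB over; FFT upsampling below stands in for a reconstruction filter (a simplified illustration, not a model of any real DAC):

```python
import numpy as np

fs, n, up = 44100, 1024, 4
t = np.arange(n) / fs
# fs/4 tone with 45-degree phase: every sample lands at ~70.7% of the true peak
x = np.sin(2 * np.pi * (fs / 4) * t + np.pi / 4)
x = x / np.abs(x).max()       # "normalize": the largest SAMPLE now hits 0 dBFS

# FFT-based 4x upsampling approximates the reconstructed analog waveform
X = np.fft.rfft(x)
Xp = np.zeros(n * up // 2 + 1, dtype=complex)
Xp[: X.size] = X
y = np.fft.irfft(Xp, n * up) * up

print(np.abs(x).max())   # 1.0 -- no digital sample exceeds full scale
print(np.abs(y).max())   # ~1.414 -- the reconstructed peak is ~3 dB OVER full scale
```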
 
One thing I don’t know, though the answer can probably be deduced by logic: can a well-recorded track cause clipping of the DAC if the digital volume is at its maximum?

By logic the answer should be “No, the recording is made in such a way that it never reaches 0 dBFS, and the digital level control only reduces from there,” but I have my doubts.
A well-recorded track would most likely start as a high-bit-depth recording with peaks well under 0 dBFS, subsequently compressed (that's what usually happens), then normalized. Usually the end result is normalized to 0 dBFS, but sometimes a little under. Problems emerge when the file is excessively compressed prior to being normalized; this is what is known as "brickwalling". While obvious forms of distortion are not apparent, the lack of dynamics is fatiguing.
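For illustration, a bare-bones peak normalizer along those lines (the -0.3 dBFS target is an arbitrary example margin; real mastering tools do far more, and compression is a separate step):

```python
import numpy as np

def normalize_peak(x, target_dbfs=-0.3):
    """Scale `x` so its highest sample peak sits at `target_dbfs` (just under 0 dBFS)."""
    target = 10 ** (target_dbfs / 20)
    return x * (target / np.abs(x).max())

x = 0.25 * np.sin(np.linspace(0, 20 * np.pi, 4096))   # quiet test signal
y = normalize_peak(x)
print(20 * np.log10(np.abs(y).max()))                 # ~ -0.3 dBFS
```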
 
I took it upon myself to remind you of two matters that are NOT trivial:
#1 >> Back up (and/or archive) right upon transcoding.*
#2 >> Start by creating your own Dewey Decimal system for labeling each CD's output, and each folder/subfolder you will put them in.
*Extra points for starting such a system on a NAS, with at least 10x the capacity you currently think you need.

This sounds like what I am working on right now, especially since dBpoweramp names files and folders in a different way. I'm testing the perfect workflow to be more efficient and descriptive.
I'm also deciding at the same time to go back and rip a true 320 kbps MP3, for whatever reason in the future if I need a plain MP3; figured it's good to have.
dBpoweramp makes me feel like I'm flying through CDs.


I have a question, has anyone ever heard of "ADES" - Advanced Digital Encoding?
I have a CD with a small description on the back that says "CD is ADES Encoded, this recording has improved audio rendering and sonic quality of this original Master disc."

"For best performance and audio quality the original pressing of this disc must be played. Any copies or imitations of this compact disc will produce an inferior audio experience.
ADES (Advanced Digital Encoding Systems) all rights reserved."


I included the CD in question. Curious what this means: as in, will there be less punch or range from a ripped copy vs. the physical CD?
I tried playing the two back to back and I wore myself out from deciding back and forth that I could hear a difference, then no longer could.
 
I'm also deciding at the same time to go back and rip a true 320 kbps MP3
If you do, you have to maintain 2 libraries.
A better solution is transcoding.
You rip to lossless and maintain a single library.
If you need a version for a device with limited storage, e.g. a mobile, you make a playlist and convert on the fly to a lossy format, e.g. AAC or MP3.
A lot of media players support this. An example: https://www.thewelltemperedcomputer.com/SW/Players/MusicBee/MusicBee_Sync.htm
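A sketch of that rip-lossless-once, transcode-on-demand idea, assuming ffmpeg with libmp3lame is available on the PATH (the paths and bitrate below are placeholder examples, not part of any particular player's workflow):

```python
from pathlib import Path

def mp3_transcode_cmd(flac_path, out_dir, bitrate="320k"):
    """Build an ffmpeg command that converts one FLAC to MP3 for a portable device."""
    out = Path(out_dir) / (Path(flac_path).stem + ".mp3")
    return ["ffmpeg", "-i", str(flac_path),
            "-codec:a", "libmp3lame", "-b:a", bitrate, str(out)]

cmd = mp3_transcode_cmd("rips/Album/01 Track.flac", "portable/Album")
print(cmd)
# execute it with: subprocess.run(cmd, check=True)
```

The lossless rip stays the single source of truth; lossy copies are disposable and regenerated on demand.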
 
If your CDs play without skipping on a CD player, using EAC is a waste of time. Just use dBpoweramp or MediaMonkey.

Way back when, I tested 12 CDs in various apps; all gave identical rips. I only use EAC for damaged discs.
I use EAC most of the time; however, for playback I use JRiver (v31 currently), which works just as well for ripping accurately.
Sometimes (this is an exception!) when EAC didn't work and couldn't read a CD, JRiver managed it without problems. And yes, dBpoweramp is also fine.
 
I have a question, has anyone ever heard of "ADES" - Advanced Digital Encoding?
"For best performance and audio quality the original pressing of this disc must be played. Any copies or imitations of this compact disc will produce an inferior audio experience."

Some sort of copy protection?

I tried playing the two back to back and I wore myself out from deciding back and forth that I could hear a difference, then no longer could.
If you can't reliably hear any difference, it's fine.
 
This is the first time I've read this statement. How come? I mean, it doesn't make sense.
It's because the recording really needed compression and normalization, being as it is a live recording with erratic dynamics. I know most folks 'round these parts consider compression to be a mortal sin, but many times it can render an otherwise unlistenable recording acceptable. In any case, the compression and normalization was far more important than the 24 bit depth, as the recording was made in a noisy environment from an audio source that wasn't good sounding to start with.
 