danadam
Ever heard of ReplayGain? And moreover, I don't need to run and decrease the volume when these tracks are in the playlist.
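For context: ReplayGain stores a per-track (or per-album) gain tag that the player applies at playback, so nobody has to ride the volume knob. A minimal sketch of the playback side, assuming the gain tag has already been computed by a scanner:

```python
import numpy as np

def apply_replaygain(samples: np.ndarray, gain_db: float) -> np.ndarray:
    """Scale float samples in [-1, 1] by a ReplayGain tag value in dB."""
    return np.clip(samples * 10 ** (gain_db / 20), -1.0, 1.0)

# A track tagged REPLAYGAIN_TRACK_GAIN = -8.3 dB gets turned down
# automatically when it comes up in the playlist.
quiet = apply_replaygain(np.array([0.9, -0.5]), -8.3)
```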
Let’s step back and define what we mean by high-resolution audio. There is no formal definition, so I will resort to my own: anything above CD’s 16-bit/44.1 kHz is, in my book, high-resolution audio. The most common step above that, used frequently in video production, is 16- or 24-bit samples at 48 kHz.
Fans of high-resolution audio no doubt want to see much bigger numbers than 48 kHz. But 48 kHz with the original 16- or 24-bit samples can still be beneficial.
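In code form, that working definition amounts to a one-line check (my own illustration of the rule above, not anyone's official standard):

```python
def is_high_res(bits: int, rate_hz: int) -> bool:
    """Anything above CD's 16-bit/44.1 kHz counts as high resolution."""
    return bits > 16 or rate_hz > 44_100

print(is_high_res(16, 44_100))   # False: plain CD
print(is_high_res(16, 48_000))   # True: the common video-production step
print(is_high_res(24, 192_000))  # True
```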
Would my Ray Charles album be much better in 24-bit/192 kHz, and Linda Ronstadt be much worse in 16-bit/44.1 kHz?
I have measured a number of DACs whose steep reconstruction filters are designed in such a way that tones near 20 kHz and above create imaging artifacts (for example, at a 48 kHz sample rate, a 20 kHz tone might also produce an image at 28 kHz). This spurious image tone can then interact badly with the original 20 kHz tone in devices downstream of the DAC (amplifier, transducer) to create intermodulation distortion that lands squarely in the audio band, well below 20 kHz. This can trick the listener into thinking they can hear a 20 kHz tone, while what they're actually hearing are non-linear distortion artifacts that happened to land in the audible band.
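To make the arithmetic concrete, here is a minimal sketch (my own illustration, not from the measurements themselves) of where the image and the resulting difference tone land:

```python
fs = 48_000        # sample rate, Hz
f_tone = 20_000    # original test tone, Hz

# A leaky reconstruction filter leaves an image mirrored about Nyquist.
f_image = fs - f_tone       # 28 kHz: ultrasonic, inaudible on its own

# A non-linear stage downstream (amp, tweeter) mixing both tones produces
# intermodulation products at the sum and difference frequencies.
f_diff = f_image - f_tone   # 8 kHz: squarely in the audible band
f_sum = f_image + f_tone    # 48 kHz: ultrasonic

print(f"image: {f_image} Hz, IMD difference tone: {f_diff} Hz (audible)")
```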
I got that article by email yesterday and wondered if this forum might be able to share a few thoughts about the test setup (in a separate thread?).
This is the correct approach for someone involved in audio who thinks there is an audible difference when no previous study has turned up a positive result: do the work diligently to find the truth.
Mark Waldrep acknowledges that there will always be criticism of the test, but agreeing in advance about the setup might reduce that to a minimum. For starters, he proposes to use audio material from his own catalogue. Whatever he selects, I hope it will cover a variety of music styles, recording teams/equipment (mics, converters), and post-production equipment.
Just wondering: stereo only, or also multi-channel formats?
If it were captured using the former, then the simple answer is that we are deprived of knowing so. I don't know what algorithm was used for that down-conversion. At the extreme, such as the resampler used in Windows XP, yes, it does degrade. At best, it was a good algorithm with proper dither. I just don't see a reason to have someone perform this type of processing for me. I now have gigabit Ethernet access, and when I pay good money for a download, I like it to be in the resolution and sampling rate of the project.
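For illustration, a minimal sketch of what a competent down-conversion involves, assuming scipy; the polyphase resampler and TPDF dither here are my example choices, not whatever any particular label actually used:

```python
# Sketch: 96 kHz float samples -> 44.1 kHz / 16-bit with TPDF dither.
import numpy as np
from scipy.signal import resample_poly

def downconvert(x_96k: np.ndarray) -> np.ndarray:
    """x_96k: float samples in [-1, 1] at 96 kHz; returns int16 at 44.1 kHz."""
    # Polyphase resampling 96000 -> 44100 (ratio 147/320); the resampler's
    # built-in low-pass filter handles anti-aliasing.
    y = resample_poly(x_96k, up=147, down=320)
    # TPDF dither at +/- 1 LSB of the 16-bit target decorrelates
    # quantization error from the signal.
    lsb = 1.0 / 32768.0
    dither = (np.random.uniform(-0.5, 0.5, y.shape) +
              np.random.uniform(-0.5, 0.5, y.shape)) * lsb
    y = np.clip(y + dither, -1.0, 1.0 - lsb)
    return np.round(y * 32768.0).astype(np.int16)
```

Done this way, the conversion is transparent; done with a sloppy resampler and no dither, it isn't, and the downloader has no way to tell which they got.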
Well, I did pass a previous test he put forward on AVS, as he references.
"They were very challenging but passable on a laptop with IEM."
Maybe that's exactly the reason they were passable. Playing them on equipment of that sort could introduce artifacts or distortion of some kind, so you may have managed to identify them only because they objectively sounded worse than standard-res audio.
My point is that it depends on the quality of the original recording, and I couldn't explain it better than this:
"Provenance is defined as 'the place of origin or earliest known history of something.' And according to Dr. Mark Waldrep, CEO of AIX Records, provenance, with regard to music, '…refers to the technology and techniques used during the original sessions for a particular track or album. Knowing the provenance of a particular music production provides useful information on what a user can expect with regard to its fidelity. …knowing that a classic rock album from the 1960s was tracked on a Studer 4-track, mixed to an Ampex 440 2-track deck, mastered to another tape copy and then cut to the ultimate vinyl master prior to pressing commercial copies should mean something too.' In other words, if the sound quality of the music in question couldn't ever have been called 'high definition' in the first place, then a lossless, 'high resolution' version of it isn't going to sound any better. To put it yet another way: a turd in high definition is, at the end of the day, still a turd."
https://www.digitaltrends.com/home-theater/when-high-resolution-audio-isnt-hd/
This has been my point about classic albums released in HiDef. When an album was recorded in the 1960s or early '70s on a tape machine with an S/N ratio of 60 dB (pushed to 70 dB with Dolby), with a frequency response of 20 kHz at the very best, with 3% distortion on peaks, and using microphones that droop above 18 kHz, there's no point whatsoever in cheating the public by claiming that a 24-bit/96 kHz or, even more ludicrous, 192 kHz hi-res release can possibly be better.
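To put rough numbers on those tape specs (my own back-of-envelope sketch, using the standard ~6.02 dB-per-bit rule for ideal PCM dynamic range):

```python
import math

# Ideal PCM dynamic range ~= 6.02 * bits + 1.76 dB (full-scale sine).
def bits_needed(snr_db: float) -> int:
    """Smallest bit depth whose ideal dynamic range covers a source's S/N."""
    return math.ceil((snr_db - 1.76) / 6.02)

for label, snr in [("raw tape", 60), ("Dolby-assisted tape", 70), ("16-bit CD", 96)]:
    print(f"{label}: {snr} dB -> ~{bits_needed(snr)} bits")
# raw tape: 60 dB -> ~10 bits
# Dolby-assisted tape: 70 dB -> ~12 bits
# 16-bit CD: 96 dB -> ~16 bits
```

Even 16 bits already covers such a source with headroom to spare; the extra bits of a 24-bit release mostly digitize tape hiss.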
Even those recordings which have been remixed and remastered going back to the original session tapes can't be any better than the original Red Book release. Even early digital recordings were made at a 50 kHz sample rate, so again, no benefit in an HD release.
Just a cynical exploitation of a market willing to believe any old guff.
S.
Unless someone captures these tapes at 16-bit/44.1 kHz, I would rather have the capture resolution/bit depth than 16-bit/44.1 kHz. You never know what was used to down-convert the higher-resolution format used in the capture/mastering.
"…there's no point whatsoever in cheating the public by claiming that a 24-bit/96 kHz or, even more ludicrous, 192 kHz hi-res release can possibly be better."
If the record labels were serious about quality, they'd offer good masters rather than the over-compressed crap they push. I would much rather have a low-bitrate, well-recorded MP3 than a whizzo hi-res bit of crap. But that's just me.
Not just you.