
Decoder for material with latent DolbyA encoding

OP
J

John Dyson

Active Member
Joined
Feb 18, 2020
Messages
172
Likes
90
Update: I am working my a** off on a last 'development' release of the decoder, trying to make a truly final release that is easier to use. The earlier versions had a combination of problems, but were still based on a proper DolbyA-compatible decoder -- I just never quite divined what the heck was going on in the recordings. This is all about almost ALL consumer recordings being very evilly compressed. The compression pumping effects are minimal because of the extremely fast attack/release, but also because of Ray Dolby's ingenious attack/release scheme, which does a lot to avoid direct intermodulation distortion between the audio signal and the gain control. There are still all kinds of distortion mechanisms, but the structure of the detector scheme really does minimize the brute-force problems.

Eventually, with a lot of patience and help from some friends over on 'Style' (in PMs and chat), I am now 100% confident that the correct and accurate EQ and configuration have been found.

Basically, I was always correct about the involvement of DolbyA, but I kept getting strange and sometimes misdirecting feedback from some people in the industry -- and, along with my own ineptness, I was always wavering around the correct EQ while understanding better how the DolbyA was used. For a while, I thought that the EQ was CD based, and I was close, but wrong about that. Most of the EQ is based on multiple single-pole filters at approx 1kHz and 3kHz for a middle-frequency dip of approx 9 to 10dB (I have the exact number, I just forget), plus some EQ above 3kHz, where there is a multiple single-pole boost at 3kHz, and then subsequent EQ at 3kHz and 9kHz. Bottom line, I have been able to cancel out all of the distortions that happen because of incorrect EQ going into a DolbyA decoding mechanism. If the EQ is even slightly off, there is a sense of distortion from the mismatch. Many of my misguided attempts used 2-pole filters instead of 1-pole filters -- so they never really matched the precise behavior until recently.

Additionally, the decoding scheme that I am talking about (the EQ & DolbyA-compatible decoding) is applied in multiple layers. Those multiple layers and the calibration levels (the DolbyA thresholds) are cascaded, usually in 3 or more layers at 10dB increments. This layered approach has the effect of doing compression mostly in 10dB chunks, and also gives different EQ at different levels in the signal. (A minimal sketch of the layering idea follows below.)
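
To make the layering concrete, here is a minimal C++ sketch of cascading decode passes at 10dB calibration increments. The function decodeDolbyALayer is a hypothetical stand-in for one DolbyA-compatible expander pass; the real DHNRDS internals are not public:

#include <utility>
#include <vector>

std::vector<float> decodeDolbyALayer(std::vector<float> in, double threshold_dB)
{
    // Placeholder: a real pass would band-split, track envelopes against
    // 'threshold_dB', and apply the inverse of the encoding gain curve.
    return in;
}

std::vector<float> decodeFeralA(std::vector<float> signal, double base_dB, int layers)
{
    // Cascade the passes 10 dB apart: each undoes roughly one
    // 10 dB "chunk" of the encoding-side compression.
    for (int i = 0; i < layers; ++i)
        signal = decodeDolbyALayer(std::move(signal), base_dB + 10.0 * i);
    return signal;
}

With base_dB = -46.0 and layers = 4, this walks the -46/-36/-26/-16 sequence described later in the thread.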

All of this layered DolbyA ENCODING does things like screwing up the natural stereo image, increasing the hiss very audibly on older analog recordings, and giving the sense of all kinds of intermodulation in the sound. Bottom line, the compression really squishes the signal.

The current developmental version (not yet released) works super well, actually revealing details that I have never clearly heard before, because of the resolution of the distortions and compression on the encoding side of the 'FeralA' (coined term) scheme. I am not even sure the signal was ever intended to be decoded, because true DolbyA units would tend to create a lot of distortion even when decoding/TRYING to recover the original DolbyA -- the *natural* hardware design cannot do some cool things that the DHNRDS (my decoder) can do in its fully unfolded decoding design. (The true DolbyA uses a few nested feedback loops, with associated unintended delays in the decoding configuration that screw up totally accurate decoding.) Unfolding the feedback loops precisely is a tedious affair, but the decoder is very accurate (more so than a true DolbyA) for decoding purposes. It also avoids creating significantly more modulation distortion, unlike the true HW design, and even has an option that can undo (demodulate) some of the original high-frequency modulation distortion from the encoding process. At the lower frequencies (below 1kHz), the decoder must remove a lot of the modulation distortion anyway, because of the audibility of waveform distortions below 1kHz, esp. at 120Hz and below. Undecoded FeralA signals have significant actual low-frequency modulation distortion -- not normally recognized, because most of us are used to it.
The new version of the multi-layer decoder (all in one run of the software) is beautifully clean sounding -- passing all of my previous tests and demos. Where it previously only sounded a little 'better', it now hits that point where, intuitively, the recording sounds 'correct'.

Right now, the decoder does have a complete syntax, much simplified from the previous one, that allows simplified experimentation and testing to find the correct configuration, but it is still TOO COMPLICATED, even for me to use all of the time.
I am beating my head against the wall trying to find the best and easiest-to-use command line syntax, trying to make the command line easier to type and easier to conceptualize. I am testing and experimenting with my friends over on 'Style', but when the decoder is in its true final version -- hoping and praying that will be the next release in a few days -- I will also announce it here once it has been tested/verified by a few other people.

I have already released a copy of the DolbyA-compatible recording loop in C++, but it isn't the complete decoder. I will eventually be updating the source example (exactly as used in the decoder) to include more of the decoder, but it will be missing the input bandpass filters for the 4 DolbyA-compatible bands, the simplified advanced anti-modulation-distortion code, and the input Hilbert detectors for the precision DolbyA-compatible attack/release calculations. The vector SIMD code will also be missing, but that can be reasonably easily replaced by a good C++ programmer. However, the precision attack/release calculations and the feedback loop for the HF0/HF1 balance will be included. The source is not easily adaptable to languages like Java, JavaScript, Perl and others, because the SIMD support in C++ really makes the program much more practical to use, because of speed issues.
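
For orientation, the very simplest form of an attack/release calculation is a one-pole envelope follower with separate attack and release time constants -- a textbook stand-in only, NOT the nonlinear DolbyA scheme itself:

#include <cmath>

// Textbook envelope follower: full-wave rectify, then smooth with an
// attack coefficient when the level rises and a release coefficient
// when it falls.  Shorter time constants give faster tracking.
class EnvelopeFollower {
public:
    EnvelopeFollower(double fs, double attack_s, double release_s)
        : a_(std::exp(-1.0 / (fs * attack_s))),
          r_(std::exp(-1.0 / (fs * release_s))) {}

    double process(double x) {
        const double rect = std::fabs(x);            // full-wave rectify
        const double coef = (rect > env_) ? a_ : r_; // attack vs release
        env_ = coef * env_ + (1.0 - coef) * rect;
        return env_;
    }
private:
    double a_, r_, env_ = 0.0;
};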

The final release of the running code should be available in approx 1 week, maybe a little longer or shorter. The source code will be available in updated form perhaps 1 week later; I still have to edit out the C++ classes and the supporting emulation code. I am hoping the source code will give hints to plug-in writers to be able to create a true DolbyA decoder for casual use (not necessarily the precision, super-high-quality use of the DHNRDS) -- so much better than the FeralA encoder, which is effectively what other DolbyA plugins actually are. DolbyA can SEEM to be decoded by strategic EQ -- thereby making DolbyA into a multi-band compressor instead of properly decoding the recording.

Oh well -- still finishing up the usability features of the decoder... Will report in the next week or two, hopefully with really good news.

John
 
OP
J

John Dyson

Active Member
Joined
Feb 18, 2020
Messages
172
Likes
90
There is a new decoder release available, lots of little strange things are corrected...

V1.6.6D

Not only is the sound much more 'studio like', but there are also some compressors added, with very minimal audio defects... They work surprisingly well, with 1-band, 2-band and 3-band versions available, with 3 instances each so that various kinds of gain curves can be created. The compressors aren't really an important motivation for the decoder, but it is sometimes helpful to do a bit of post-decoding mastering. Also, there is a limiter and an anti-sibilance capability built in...
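
As a point of reference for what a post-decode compressor does, here is a bare-bones hard-knee gain computer (illustrative only; the DHNRDS compressors and their gain curves are not public):

// Returns the gain change in dB for a given input level: no change
// below the threshold, and 'ratio'-to-1 reduction of anything above it.
double compGain_dB(double level_dB, double thresh_dB, double ratio) {
    const double over = level_dB - thresh_dB;     // dB above threshold
    return (over > 0.0) ? over * (1.0 / ratio - 1.0) : 0.0;
}

For example, a 2:1 ratio with a -20dB threshold turns a -10dB input into -15dB (5dB of gain reduction).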

The decoder is now producing profoundly cleaner results -- and some of the other 'secrets' about the ubiquitous compression process have now been uncovered. The results previously were 'better' than the nonsense quality on almost every consumer digital recording, but now the results truly sound like they are coming from a board during recording -- the results are THAT much better!!!

Since decoding is SO MUCH SIMPLER than before, the docs need an update, but there is much less documentation needed now. Decoding/undoing the nasty compression on almost every CD and download is no longer a science project at all... The decoder is totally free to use (DolbyA decoding mode isn't free, but I generally give out licenses to curious consumers if desired) -- the DolbyA decoding mode is NOT needed to uncover the veil on consumer recordings...

Here are some decoded short, legal-length examples -- from the typical fuzzy CDs & downloads that are so common:

https://www.dropbox.com/sh/mjmdfxu8gdweoc2/AACE7AQA1kZ0AIFNxar_sZoJa?dl=0

The decoder, for both Linux & command-line Windows, is available here. The usage manual is currently inadequate, but if you know how to install simple executables, it shouldn't be a major problem. I used to have an installation guide, but it is so old as to probably be invalid. Without docs, the Linux version might be easier to use:

https://www.dropbox.com/sh/1srzzih0qoi1k4l/AAAMNIQ47AzBe1TubxJutJADa?dl=0


One really strange thing is that when unfolding a feedback control system to feedforward, various delays and filters create jitter unless they are compensated. Well, not really jitter, but they effectively become modulated by the loop gain: when the loop gain changes, the effective delay changes. I mean, it is a LOT more complex than that. I knew about these issues, but finally had to sit down and work through all of the delays that need to be corrected. This thing might seem to be 'easy', but it is far, far from it!!! (A toy illustration of the unfolding idea follows below.)
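
To illustrate what 'unfolding' means here: in the feedback hardware, the detector sees the decoder's own output, so the gain must satisfy a self-referential equation. One simple, standard way to make that feedforward is fixed-point iteration -- shown below as a hedged sketch (the real decoder adds the per-band delay compensation discussed above, which this toy ignores):

#include <cmath>

// In the feedback topology the gain g is derived from the output
// y = g * x, so g must solve g = f(|g * x|), where f maps detector
// level to gain.  A feedforward "unfolding" can approximate the
// solution by iterating toward a fixed point.
double unfoldGain(double x, double (*levelToGain)(double)) {
    double g = 1.0;
    for (int i = 0; i < 8; ++i)              // a few iterations converge
        g = levelToGain(std::fabs(g * x));
    return g;
}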

John
 

Pluto

Addicted to Fun and Learning
Forum Donor
Joined
Sep 2, 2018
Messages
990
Likes
1,633
Location
Harrow, UK
So, I downloaded your musical example and it happens to be a well known piece with which I am familiar. What have you done? The piano has lost much of its HF i.e. it's very muffled. The vocal quality varies dramatically throughout the 50" example and the vocal sibilance has become rather more pronounced in your processed version.

Now don't get me wrong here – I think you're doing great work. 5 years ago I'd have given my left nut for a software Dolby A decoder module. If I were given this material to work with in the knowledge of what might have happened to it, I would have guessed that the Dolby operating point was wrongly set; the quiet bits sound too muffled but the loud parts are better. All in all, it rather sounded as though this material originated on a vinyl record and was subjected to some de-clicking and possibly other ‘vinyl-improvement’ treatment.

I can upload my comparison material somewhere, if you wish. Now I have no idea of the provenance of your recording, but there is absolutely no indication on mine that it has remained inappropriately Dolby A encoded. It may well have been subjected to deliberate partial encoding to some extent, as was the fashion at that time to liven-up a track that was perceived to be a bit dull-sounding, but there is certainly no evidence in support of the idea that someone simply forgot to enable the decoding processor in the mastering suite.
 
OP
J

John Dyson

Active Member
Joined
Feb 18, 2020
Messages
172
Likes
90
So, I downloaded your musical example and it happens to be a well known piece with which I am familiar. What have you done? The piano has lost much of its HF i.e. it's very muffled. The vocal quality varies dramatically throughout the 50" example and the vocal sibilance has become rather more pronounced in your processed version.

Now don't get me wrong here – I think you're doing great work. 5 years ago I'd have given my left nut for a software Dolby A decoder module. If I were given this material to work with in the knowledge of what might have happened to it, I would have guessed that the Dolby operating point was wrongly set; the quiet bits sound too muffled but the loud parts are better. All in all, it rather sounded as though this material originated on a vinyl record and was subjected to some de-clicking and possibly other ‘vinyl-improvement’ treatment.

I can upload my comparison material somewhere, if you wish. Now I have no idea of the provenance of your recording, but there is absolutely no indication on mine that it has remained inappropriately Dolby A encoded. It may well have been subjected to deliberate partial encoding to some extent, as was the fashion at that time to liven-up a track that was perceived to be a bit dull-sounding, but there is certainly no evidence in support of the idea that someone simply forgot to enable the decoding processor in the mastering suite.

Clarification: the source material is the nasty, hissy, hyper-compressed CD or download off the shelf. Sometimes I use the decoder at too strong a level (it works in discrete layers), but anyone who knows how to use a command-line computer and has a good ear for the recordings can do a MUCH BETTER job than me. I am not offering my horrible mastering ability, but instead the beautiful decoder for people to use -- FOR FREE!!!

If you know how it is supposed to sound -- the FA decoder can make the recording sound that way -- if the recording that I started with is the same as your reference. The operation done by the software is NOT enhancement, but is instead *DECODING*, plus the needed EQ facilities.

Sometimes, also, compression in the reference recording will mislead the listener into hearing 'detail'. Compression only distorts the relative levels -- often making otherwise lower-level details artificially stronger. So, you might consider that as an artifact of the reference material. On the other hand, as noted later on -- I might have removed one (or two) layers too many.

The FA decoding is NOT DolbyA by itself, but is an array of DolbyA decoding mechanisms layered 10dB apart with the appropriate compensatory EQ. The DHNRDS decoding is also capable of returning detail that is long gone with the normal DolbyA decoding mechanism (lots of flaws in the DolbyA HW decoding technique.)

* Unlike pure DolbyA decoding, there is variation in the encoding scheme -- so I can only guess what it originally was, based upon the interference effects of an improperly set-up DolbyA unit. If you know how it is supposed to sound, then you can make the recording sound that way. I am NOT a mastering engineer, but instead a very good DSP/CompSci/EE person, which doesn't make me good at mastering!!!! I am very used to the sound of a maladjusted DolbyA unit (or an array of such), but that doesn't make my EQ correct.

Tell me which recording you are speaking of and describe EXACTLY what you don't like. There are various layers of compression in the recordings -- usually starting (given the reference levels based upon the shape of the DolbyA compression curve) at -46dB, -36dB, -26dB, -16dB, then wrapping back around to -56dB, -46dB, etc. Usually I try to decode the material to 6 or 7 layers, which usually returns the recording to where it originally was. Sometimes I push the decoding too far and start actually expanding the recording -- because the decoder is very good at avoiding gating.

* My error, if any, is either too many layers being removed -- then going into expansion, or incorrect EQ.

However, since I don't know exactly how the *original* recording sounded, without compression of any kind at mixdown, I can only guess while listening for decoding artifacts (gating, etc.). In fact, gating is one reason why I added the compressors, so that it could be more easily heard with the tinnitus/etc. that I have.

Almost all consumer recordings have been compressed as I described, but there is some audible variability in the settings. So, if you tell me generally what the original, pre-compression recording sounded like, we can probably reproduce it very closely.

The DHNRDS is MUCH more accurate nowadays than any old DolbyA unit, and in fact scrubs modulation effects also.

If YOU know how they are supposed to sound, the decoder can reproduce whatever existed on the recording before the compression scheme. Using the decoder is NO LONGER a science project.

John
 
OP
J

John Dyson

Active Member
Joined
Feb 18, 2020
Messages
172
Likes
90
So, I downloaded your musical example and it happens to be a well known piece with which I am familiar. What have you done? The piano has lost much of its HF i.e. it's very muffled. The vocal quality varies dramatically throughout the 50" example and the vocal sibilance has become rather more pronounced in your processed version.

Now don't get me wrong here – I think you're doing great work. 5 years ago I'd have given my left nut for a software Dolby A decoder module. If I were given this material to work with in the knowledge of what might have happened to it, I would have guessed that the Dolby operating point was wrongly set; the quiet bits sound too muffled but the loud parts are better. All in all, it rather sounded as though this material originated on a vinyl record and was subjected to some de-clicking and possibly other ‘vinyl-improvement’ treatment.

I can upload my comparison material somewhere, if you wish. Now I have no idea of the provenance of your recording, but there is absolutely no indication on mine that it has remained inappropriately Dolby A encoded. It may well have been subjected to deliberate partial encoding to some extent, as was the fashion at that time to liven-up a track that was perceived to be a bit dull-sounding, but there is certainly no evidence in support of the idea that someone simply forgot to enable the decoding processor in the mastering suite.

ADD-ON: I just realized that some people didn't know that I am starting with the hissy, hyper-compressed common CD or digital download. Sometimes I use the decoder too strongly, because I am always testing it. The decoder is EASY to use now, so if anyone doesn't like the results -- just grab a copy, and if you can deal with the Windows or Linux command line, you can do a full remaster from a CD. The CDs have all of the info, it is just terribly compressed.

Regarding the 'piano': after reviewing the examples again, it might be the Carly Simon song, 'Nobody Does It Better'. I have just uploaded the original 50 seconds of the CD from where the example was made. The CD version is hissy and hyper-compressed -- the decoder undoes that.

On the 'FEWERLAYERS' version, I only ran the decoder down through the -46, -36, -26 and -16 dB levels, which leaves a kind of 'normal consumer' level of compression. The scheme is a 'Russian doll' thing, almost like an old-fashioned MQA, but done starting in the middle 1980s. The 'fewerlayers' version is still improved over the horribly hissy CD, but isn't studio quality -- it still has lots of compression.

Note that I do NO mastering, other than the adjustment necessary to undo the encoding process. When I do 'mastering', I make a clear statement -- I avoid it, because I have untrustworthy judgement.

All examples are shortened to fair length!!!

Here is the lesser cleaned-up version -- still has some compression: (totally unmastered)
FORGET THE FAKE-BOOSTED HIGHS on the CD original -- you are hearing something closer to a real voice on the decoded versions!!!
I actually added a lot of (standard) LPF EQ at 3kHz -> 21kHz, 9kHz -> 21kHz and 12kHz -> 21kHz to get the REAL frequency balance. I can turn loose the highs if you want, but it wouldn't be correct...
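
The '3kHz -> 21kHz' style EQ reads like a first-order pole/zero shelf (pole at the first frequency, zero at the second). Here is a hedged sketch of such a section via the bilinear transform -- my reading of the notation, not the DHNRDS's actual filter code:

#include <cmath>

// First-order shelf, H(s) = (1 + s/w_zero) / (1 + s/w_pole), discretized
// with the bilinear transform.  Unity gain at DC, shelving down toward
// high frequencies; pole at 3 kHz and zero at 21 kHz approximates the
// "3kHz -> 21kHz" rolloff named above.
struct FirstOrderShelf {
    double b0, b1, a1, z = 0.0;

    FirstOrderShelf(double fs, double f_pole, double f_zero) {
        const double pi = 3.14159265358979323846;
        const double wp = std::tan(pi * f_pole / fs);  // prewarped pole
        const double wz = std::tan(pi * f_zero / fs);  // prewarped zero
        const double k  = wp / wz;
        b0 = (wp + k) / (wp + 1.0);
        b1 = (wp - k) / (wp + 1.0);
        a1 = (wp - 1.0) / (wp + 1.0);
    }
    double process(double x) {   // transposed direct form II
        const double y = b0 * x + z;
        z = b1 * x - a1 * y;
        return y;
    }
};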


https://www.dropbox.com/s/fxz697xy17vo218/02. Nobody Does It Better-FEWERLAYERS-DECODED.flac.flac?dl=0

Here is the original CD:

https://www.dropbox.com/s/g44ftiusg28xw6l/02. Nobody Does It Better-rawcd.flac?dl=0

Here is the exhaustively decoded version (maybe a step too far, maybe not) -- most likely the next version is better (DECODED2)...

https://www.dropbox.com/s/jpws2rcl39r9g8g/02. Nobody Does It Better-DECODED.flac.flac?dl=0

Since the above was probably decoded too far, here is another version which is less decoded (maybe correct?)

https://www.dropbox.com/s/gsr5qh9f19hwrr0/02. Nobody Does It Better-FEWERLAYERS-DECODED2.flac.flac?dl=0

John
 

mansr

Major Contributor
Joined
Oct 5, 2018
Messages
4,685
Likes
10,705
Location
Hampshire
Here is the lesser cleaned-up version -- still has some compression: (totally unmastered)
FORGET THE FAKE-BOOSTED HIGHS on the CD original -- you are hearing something closer to a real voice on the decoded versions!!!
I actually added a lot of (standard) LPF EQ at 3kHz -> 21kHz, 9kHz -> 21kHz and 12kHz -> 21kHz to get the REAL frequency balance. I can turn loose the highs if you want, but it wouldn't be correct...


https://www.dropbox.com/s/fxz697xy17vo218/02. Nobody Does It Better-FEWERLAYERS-DECODED.flac.flac?dl=0

Here is the original CD:

https://www.dropbox.com/s/g44ftiusg28xw6l/02. Nobody Does It Better-rawcd.flac?dl=0

Here is the exhaustively decoded version (maybe a step too far, maybe not) -- most likely the next version is better (DECODED2)...

https://www.dropbox.com/s/jpws2rcl39r9g8g/02. Nobody Does It Better-DECODED.flac.flac?dl=0

Since the above was probably decoded too far, here is another version which is less decoded (maybe correct?)

https://www.dropbox.com/s/gsr5qh9f19hwrr0/02. Nobody Does It Better-FEWERLAYERS-DECODED2.flac.flac?dl=0
What's going on at 0:15-0:17?
 
OP
J

John Dyson

Active Member
Joined
Feb 18, 2020
Messages
172
Likes
90
What's going on at 0:15-0:17?

I wonder -- I listened to the 'FEWERLAYERS-DECODED.flac.flac' version and there was some raspiness on the playout when using the direct dropbox player... Is that what you are noticing? The Dropbox player is *really bad quality*, but is convenient for quick reviews.

There is another, KNOWN problem with decoding that song -- and it is strange: the sibilance seems to shift. I have tried using 'classical' mode (which is a straight M+S decode), and also the normal 'pop' mode (which is M+2S), one of which usually fixes the stereo image and sibilance shift. I haven't been able to satisfactorily fix it. Such a shift sometimes appears to come from generation loss (multiple DolbyA encode/decode cycles without proper/accurate calibration levels.) I have noticed the sibilance problem on some ABBA disks, yet on the exact same recording (but with some hints of having less generation loss) the sibilance problem doesn't exist.

I have narrowed the problem down to what they did to Carly's voice on that specific recording. There is a '--wof' switch (widen output final), which changes the width of the stereo image, and a few days ago (before doing these demos), I found a balance that straightened up the sibilance, but never really got a satisfactory result. I don't believe that the decoder settings should be so very fragile to get a clean decode of a vocal.

'Nobody Does It Better' is one of my challenging test cases, like ABBA used to be (I can now get perfect results (A+) on all ABBA albums except the 'Super Trouper' title song (B) and 'Arrival' (C).) The sibilance on the Carpenters albums is now very well controlled, except for 'Top of the World' getting a bit too aggressive at times. The 2nd track on the 1977 Carpenters album also used to have problems with the sibilance tripping up -- but it appears that the decoder now closely tracks that characteristic also.

* Note about using the decoder -- it is now much easier. The pre-decoding EQ has, over 99% of the time, been simplified to an EQ code of 'fcx=G+' or 'fcx=G+*', and all of the real EQ differences are done AFTER decoding. So, except for the calibration threshold and the number of decoding layers, there is NO EQ until after the long decoding process!!! This means that various EQ tests can be done super quickly after getting the basic decoded material. The decoder has an --equalizer mode, which disables both FA and DA decoding, supporting the 1st-order, 2nd-order, anti-sibilance, limiter and three styles of compressors for after decoding. So, the decoder *sort of* has two modes: 'FA/DA' decoding, and 'simplified mastering'.
The beauty of doing the decoding first, then EQ afterwards, is that running 4, 5, 6, 7, 8 or 9 DolbyA decoders at each 10dB band takes lots of CPU. The decoder starts having trouble with realtime at 7 layers (DolbyA decoding operations) at the same time. Once the decoding is done, then all the EQ, tweaking, etc. can be done afterwards. There is an apparent standard EQ that works 2/3 of the time, and it is three 1st-order EQs.
I can get a 'sounds good' on the first attempt in most cases. 'Perfect' sometimes eludes me, because I have a bad 'ear' for mastering.
A typical decoding command, with realtime play, on Linux or Windows, just for decoding without any running status, is this:
~/ap/nrs/dabuild/da-avx --cdd --fcs="6,-49,fcx=G+" --pif3k=-1 --pix6k=-1 --pix9k=-1 --input=<infile.wav> --splay
(the --cdd means CD de-emphasis; the fcs command says '6 layers', 'base calibration of -49dB', and 'decoding EQ -- defined in the docs'; and the post-decoding EQ, which can be done after decoding: 3k 1st-order LPF, 6k/21k 1st-order LPF, 9k/21k 1st-order LPF. Obvious input file, and 'splay' means play the result using 'SoX'.) * Using CD de-emphasis has nothing to do with the CD de-emphasis bit; it is a ubiquitous emphasis scheme used on digital files and CDs. It happens to have the same characteristics as CD emphasis/de-emphasis though.
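
Since --cdd is said to match the CD emphasis curve, the standard Red Book 50/15 µs de-emphasis is a useful reference point. A small sketch of where its corners come from (my assumption from the 'same characteristics' remark above; not the DHNRDS code):

#include <cstdio>

int main() {
    const double pi = 3.14159265358979323846;
    // CD emphasis is specified by its 50 us / 15 us time constants;
    // the corner frequencies follow from f = 1 / (2 * pi * tau).
    const double f_pole = 1.0 / (2.0 * pi * 50e-6);   // ~3183 Hz (de-emphasis pole)
    const double f_zero = 1.0 / (2.0 * pi * 15e-6);   // ~10610 Hz (de-emphasis zero)
    std::printf("de-emphasis shelf: pole %.0f Hz, zero %.0f Hz\n", f_pole, f_zero);
    // Same pole/zero shelf form as the FirstOrderShelf sketched earlier,
    // e.g. FirstOrderShelf(fs, f_pole, f_zero).
    return 0;
}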


Most normal albums, like the 2009 Beatles remasters, the Bread album, and Yes (when carefully set up), really see a major improvement -- even Linda Ronstadt's recordings, with their abuse of the Aphex phase distorter, sound pretty darned good.

Since (as you know -- many others don't) I am an engineer -- I tend to 'test' my software instead of 'optimizing my demos'. Most of my demos are either 'worst cases' or 'something I really enjoy'. I very seldom cherry-pick excellent decoding results, unless I happen to especially enjoy the music. None of these examples were cherry-picked; all were music that I enjoy.

THANKS FOR THE INTEREST. As you know, the FA mode of the decoder is always free to use!!!!
The DA mode is the most clean-sounding decoder for DolbyA material -- and can be even more clean than normally possible. There are some real hard-core algorithms in it -- the Hilbert detector stuff is child's play in comparison... (There are also Hilbert detectors used in conjunction with precision detectors that calculate the DolbyA attack/release.) The anti-MD is a 'demodulation' method against the modulation distortion, and works wonders on true DolbyA recordings.

John
 

mansr

Major Contributor
Joined
Oct 5, 2018
Messages
4,685
Likes
10,705
Location
Hampshire
I wonder -- I listened to the 'FEWERLAYERS-DECODED.flac.flac' version and there was some raspiness on the playout when using the direct dropbox player... Is that what you are noticing? The Dropbox player is *really bad quality*, but is convenient for quick reviews.
At first, I thought it sounded like the decoder had cut out briefly. Then I listened to the original again, and there is a similar effect there, though less pronounced. The same thing happens in a few more places too.
 
OP
J

John Dyson

Active Member
Joined
Feb 18, 2020
Messages
172
Likes
90
At first, I thought it sounded like the decoder had cut out briefly. Then I listened to the original again, and there is a similar effect there, though less pronounced. The same thing happens in a few more places too.

If you hear a defect on the original, it will be magnified by any expansion, including the FA decoding. There are between 4 and 6 DolbyA-equivalent decoders doing expansion, with gains moving around 0 to -10dB in several bands up to 7-9kHz, then 0 to -15dB from 9kHz on up (approx.) This is on EACH of the 4 to 6 (more or less) decoders/expanders. The actual effect of an error on the material is hard to predict, because of all of the EQ that is needed and the state of each of the 4-6 (sometimes 2, sometimes 8) expanders. (A rough sketch of those gain ranges follows below.)
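
For a rough feel of those numbers, one band's decode gain in one layer has this kind of shape -- purely illustrative; the real DolbyA curve and the DHNRDS detector are much more subtle:

#include <algorithm>

// One band's decode gain, per the ranges quoted above: 0 dB for signals
// well above the layer's calibration threshold, sliding down toward a
// floor (-10 dB for the lower bands, about -15 dB for the 9 kHz-up band)
// for signals well below it.
double bandGain_dB(double env_dB, double cal_dB, double floor_dB) {
    const double under = cal_dB - env_dB;     // how far below threshold
    return std::clamp(-under, floor_dB, 0.0);
}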

(As I noted -- the Carly Simon track 2 is one of the more irritating pieces of test material. It seems to me (could be wrong) that the material was previously encoded/decoded too many times, or inaccurately -- copying from tape to tape -- and some of the dynamics information built up errors.) The DHNRDS DA decoding itself is extremely accurate, but is also carefully slowed down (without really slowing things down), bending the attack/release so that minimal intermod and modulation are created while also having the correct dynamics. So, the DHNRDS on clean material will definitely be more clean than a DolbyA, but errors will cause trouble, no matter what is used. I truly (honestly) don't know what is fully wrong with the Carly Simon stuff, but I keep on testing with it, maybe someday finding a problem, maybe not. I can show 100's of recordings that are very good also... Very few fail like the CS stuff, but that is why it is in my testing group.

There is a lot of expansion activity going on, and each expander (decoder) has to match the original DolbyA mis-used as a compressor, both in dynamics (a given) and calibration level (making sure things are on the correct part of the curve.) Dropouts are especially heinous.

Any error can be magnified, which is why it took me so long to make the DA decoders (which process DolbyA material super accurately and much more cleanly than the HW), and also to find the 'magic sauce' EQ needed in the connection between each one. The decoder can easily fill up 4 cores of a Haswell when running 6 layers of decoding. This is one reason why I am thinking about getting an i9-10900X, so that I can run more decoding operations through the computer!!!

John
 

Pluto

Addicted to Fun and Learning
Forum Donor
Joined
Sep 2, 2018
Messages
990
Likes
1,633
Location
Harrow, UK
I'm bemused by where we're going with this. In short, all your processed versions sound worse, in varying degrees, than the original; I really do not accept your assertion that there is something fundamentally wrong with the way the original sounds due to the misapplication of Dolby processing and that this can, and should, be undone.

I'm reasonably confident that my own listening environment operates within an acceptable tolerance of accuracy; it certainly does not stick out after a career spent in reasonably decent rooms with high quality speakers, so I would not readily accept an argument based on the premise that “your listening conditions are wrong”. This is the root of my bemusement. I cannot, right now, accept an argument that you are attempting to fix a common but stealthy problem. Of course the occasional un-decoded Dolby A master has escaped, but I always found the sound of un-decoded Dolby quite obvious, so I don't think this is a widespread problem.

As I said before, I really believe the idea of a software Dolby (de)coder is great, but one that has possibly missed the boat. There are certainly specialized uses as ubiquitous Dolby 360-class units in good condition become less available, but I'm not sure that chasing the ghosts of old incorrectly Dolby'd masters is the way to go.
 
OP
J

John Dyson

Active Member
Joined
Feb 18, 2020
Messages
172
Likes
90
I'm bemused by where we're going with this. In short, all your processed versions sound worse, in varying degrees, than the original; I really do not accept your assertion that there is something fundamentally wrong with the way the original sounds due to the misapplication of Dolby processing and that this can, and should, be undone.

I'm reasonably confident that my own listening environment operates within an acceptable tolerance of accuracy; it certainly does not stick out after a career spent in reasonably decent rooms with high quality speakers, so I would not readily accept an argument based on the premise that “your listening conditions are wrong”. This is the root of my bemusement. I cannot, right now, accept an argument that you are attempting to fix a common but stealthy problem. Of course the occasional un-decoded Dolby A master has escaped, but I always found the sound of un-decoded Dolby quite obvious, so I don't think this is a widespread problem.

As I said before, I really believe the idea of a software Dolby (de)coder is great, but one that has possibly missed the boat. There are certainly specialized uses as ubiquitous Dolby 360-class units in good condition become less available, but I'm not sure that chasing the ghosts of old incorrectly Dolby'd masters is the way to go.

What material sounds like to you is what you are used to listening to. Most likely, you are used to listening to consumer recordings that sound like NOTHING that I have ever heard coming directly from a recording studio (without being excessively processed.) The decoding results come MUCH closer.

If you cannot hear the difference between what is on most CDs and what comes out of a mixing board -- you might as well listen to a boom box. The DHNRDS FA (not DA) mode sounds closer to what existed before the extreme processing was done. The DA mode is literally better than perfect -- which you have not heard demoed.

* Anyone not totally clueless knows that the online Dropbox player sucks -- if that is the problem that you are hearing. It is more difficult to distinguish the Dropbox distortion while playing flac files -- without DHNRDS FA decoded material.

Almost every recording is very meticulously and methodically corrupted -- I had to quit listening to the so-called 'better' digital recordings back in the late 1980s, because I had REALLY done recordings and know what they sound like. You are being cheated by the distributors and don't even realize it -- you obviously don't have hearing as accurate as you think.


You are VERY confused, or expectedly misinformed, if you think that I am doing simple DolbyA decoding... Sorry about not informing you -- but I suspect that you haven't read what I have written.


The DolbyA cat 22, A301, and other units are far inferior to the DHNRDS DA decoder, but that isn't what I am demoing. There is no way that a DolbyA unit can withstand doing the decoding that the DHNRDS FA is doing... What I am demoing is running the equivalent of 4-6 DolbyA units in decoding mode, set at different calibration levels... If you think that I don't use DolbyA units for reference, you are also clueless about that.

I am decoding material that was likely intended to stay as corrupted as the typical consumer recording -- the distributors didn't realize that properly complex software can decode the mess.

The results from a DolbyA would be so foggy as to totally miss any detail.
Anyone who really understands/knows DolbyA units knows they don't decode the material very well AT ALL. The encoding is clean, but accurate decoding cannot be done with the design being used.


John
 
OP
J

John Dyson

Active Member
Joined
Feb 18, 2020
Messages
172
Likes
90
Later in the weekend, I'll be doing a new release of the decoder, with slightly improved DA decoding characteristics (still FAR FAR superior to the old hardware that could decode DolbyA material), an upgraded limiter (it is still too soft), and the 1-band, 2-band and 3-band compressors meant for cleaning up decoding results that end up with dynamics too excessive for casual listening. We are all used to the heinously compressed material on almost every recording out there (including even some Telarcs -- I just decoded some of them -- sad, squashed percussive blasts in the recordings. When decoding the material, the results can be extreme... The compressors, esp. even my defective limiter, are really helpful for those recordings.)

I've been getting very good feedback on the V1.6.6H release, but I am trying to do final cleanups to get the licensed professional/forensic-quality DA decoder ready for its release. I keep the releases in sync because the codebase is the same, but during the documentation for the consumer FA mode, and during some rather extreme testing, I found a few nits that I wanted to clean up.

I'd expect the new release will be ready tomorrow morning.

John
 

Pluto

Addicted to Fun and Learning
Forum Donor
Joined
Sep 2, 2018
Messages
990
Likes
1,633
Location
Harrow, UK
What material sounds like to you is what you are used to listening to
Well, having had a lifetime working in studios and mastering suites I hope that I do have a reasonable grasp on how things ought to sound. If you are not asserting that many records have been published with erroneous Dolby A encoding (? feral Dolby) which, presumably, implies that the appropriate Dolby A decoding was not carried out in the mastering shop, humor me and tell us again: exactly what error are you correcting with your processing?
 
OP
J

John Dyson

Active Member
Joined
Feb 18, 2020
Messages
172
Likes
90
Well, having had a lifetime working in studios and mastering suites I hope that I do have a reasonable grasp on how things ought to sound. If you are not asserting that many records have been published with erroneous Dolby A encoding (? feral Dolby) which, presumably, implies that the appropriate Dolby A decoding was not carried out in the mastering shop, humor me and tell us again: exactly what error are you correcting with your processing?

Most recordings that need DolbyA decoding are decoded with DolbyA HW units, with their poor feedback design -- too much propagation delay, and very inaccurate. A DolbyA decoding will result in a significant loss of transient detail, and an extreme loss of detail in vocal chorus.

On the other hand, what you are hearing on consumer CDs is compressed by a mechanism that is easily decodable by DolbyA units (if they were accurate enough.) To do a first-level result requires 4 units set in 10dB increments of calibration (using the DHNRDS scaling), usually starting at -44.5 or -46dB.

Did you do it in a studio? Probably not, you probably don't have the equipment configuration to easily do the compression. Somewhere the compression is done, and it is an algorithmic technique that can be undone. That undoing of the algorithmically designed compression is a secondary mode of the DHNRDS.

It would probably be impossible for an array of DolbyA units to get a reasonable result, because of the morass of feedback delays and modulation distortions. The DHNRDS doesn't suffer either at the same extreme level as a true DolbyA, but matches the curves and dynamics very nicely.

I have some master tapes, and the DolbyA-decoded versions are very mushy in the cymbals, and the vocal chorus is mixed up into a big blob. The DHNRDS doesn't do that -- the results are clean and distinct, but it maintains the correct dynamics (as well as DolbyA units match unit to unit.)

John
 

Pluto

Addicted to Fun and Learning
Forum Donor
Joined
Sep 2, 2018
Messages
990
Likes
1,633
Location
Harrow, UK
OK – so you seem to be saying that the poor design of Dolby A units results in a suboptimal audio experience, which is a fair argument until you get to the point that, suboptimal or not, the Dolby decoding of the master has – like it or not – already been done by the time the material is delivered to the end user. Are you claiming to correct the alleged flaws of that Dolby decoding after the fact?
what you are hearing on consumer CDs is compressed by a mechanism that is easily decodable by DolbyA units
Here you have me perplexed. What is the compression mechanism to which you refer? There is no inherent compression in the CD mastering process. I know from experience that a CD is fully capable of being indistinguishable from its source material.
Did you do it in a studio? Probably not, you probably don't have the equipment configuration to easily do the compression.
What compression are you referring to? Everything I did was in a studio! I have probably spent, quite literally, 40% of my life in studios so I remain bemused about what, exactly, you are trying to achieve with this, undoubtedly, very clever project. It is worth bearing in mind that after automated switching of Dolby units via the console remotes became the norm from about 1975 onwards, it was decidedly difficult to get the Dolby switching wrong. Sure, the line-up could be out of whack but that is something most conscientious studios checked whenever a new reel went on the multitrack.

So I ask once again for you to explain in plain language the origin and nature of the flaws you believe you are correcting with your software, 20 years after the records in question were declared “done”.
 
OP
J

John Dyson

Active Member
Joined
Feb 18, 2020
Messages
172
Likes
90
OK – so you seem to be saying that the poor design of Dolby A units results in a suboptimal audio experience, which is a fair argument until you get to the point that, suboptimal or not, the Dolby decoding of the master has – like it or not – already been done by the time the material is delivered to the end user. Are you claiming to correct the alleged flaws of that Dolby decoding after the fact?

Here you have me perplexed. What is the compression mechanism to which you refer? There is no inherent compression in the CD mastering process. I know from experience that a CD is fully capable of being indistinguishable from its source material.

What compression are you referring to? Everything I did was in a studio! I have probably spent, quite literally, 40% of my life in studios so I remain bemused about what, exactly, you are trying to achieve with this, undoubtedly, very clever project. It is worth bearing in mind that after automated switching of Dolby units via the console remotes became the norm from about 1975 onwards, it was decidedly difficult to get the Dolby switching wrong. Sure, the line-up could be out of whack but that is something most conscientious studios checked whenever a new reel went on the multitrack.

So I ask once again for you to explain in plain language the origin and nature of the flaws you believe you are correcting with your software, 20 years after the records in question were declared “done”.

There is compression applied to almost every consumer digital release, and it is very consistent and algorithmic.
I have massive existence proof.
The decoder undoes the compression, and it is not a normal expansion process.

Please refer to my examples... If you think that they are 'worse', then perhaps you are truly accommodated to the ubiquitous compression system.

I did recordings many, many years ago, and heard the compression on CDs back in the late 1980s (the CDs never got better.) It disgusted me, but after working at Bell Labs for decades, I had time to research the matter in the early 2010s.

I found that almost every CD had a strange, approx 2:1 (or nearly so) compression, and after a lot of study, assumed that it was DolbyA compression -- which it IS, but not quite the normal scheme used in tape recording. It is a segmented compression scheme, probably done at the distributors. I have NEVER heard the output of a console sound like (for example) the Al Stewart CD. It has taken a long time to reverse engineer the mechanism, but now I think I have the structure and algorithms VERY CLOSE to correct. (As a TRUE engineer -- not just an audio guy calling himself an engineer, like the trash pickup guys -- I never assume perfection, but in casual discussion I sometimes get overly positive emotions.)

If, as part of the industry, you don't publicly accept the fact of the poor, compressed quality of consumer digital recordings -- I understand. However, the problem exists, and the decoder comes very close to undoing the damage. The compression damage is NOT pure DolbyA, but uses DolbyA as part of the process. I choose not to expose all of the details except to those whom I trust. Bottom line -- you can agree or disagree that the compression is ubiquitous, but that is YOUR opinion, and ignoring the existence proof does damage your credibility.

I admit that I haven't done adequate before-and-afters, because I thought that with reasonable hearing memory the improvement would be obvious. I have added a before and after for an ABBA song ('Take a Chance on Me'). Sadly, only a snippet.

Look for ABBAdemo for the examples. Remember, the Dropbox player is poor quality -- it doesn't give a fair comparison:

https://www.dropbox.com/sh/mjmdfxu8gdweoc2/AACE7AQA1kZ0AIFNxar_sZoJa?dl=0

The Carpenters examples have a minor known flaw, but the improvement is still pretty darned profound.

John
 
OP
J

John Dyson

Active Member
Joined
Feb 18, 2020
Messages
172
Likes
90
Release message sent on the other forum, where the project is more active:
New release V1.6.6P...

This release message is all about the FA mode, but the DA update is being sent to my project partner post haste. He hasn't gotten an update in quite a while, but I don't pester him about this stuff unless something really good has happened. This is a major quality transition for DA mode also.

Major command line bugfix for FA mode: the fcx or fcd command without any arguments (like --fcs="4,-46,fcx") didn't work, but should have. Basically, without any args, fcx will roll off fewer high frequencies, and might be an alternative for classical recordings that don't need the 'classical' stereo image manipulation command. This bug inhibited doing something that should work correctly, and is the last-minute motivation for this release. Would be nice to have a formal system test team (this would have likely been an integration test bug though) like back at Bell Labs!!! Stoopid bug!!!

Addition of --pi4p5k, --pix4p5k and --pif4p5k, for 4.5kHz 1st-order EQ. These work just like the 3k EQ, where --pi4p5k stops at 9k just like --pi3k does. (Higher frequencies at 9k or above stop at 18k, as for --pi9k.)

Better compressor behavior, a little better limiter behavior. Compressor sounds more like it should. (I previously didn't use the techniques that I had learned over the last 40yrs -- tried something new, but that was a bad decision.)

Added limiter running gains display, similar to the decoding running gains display and the compressor running gains display.

Another try at getting the HF0/HF1 cancellation more perfect. At this point, I cannot distinguish the tonality of our difficult test cases on our DolbyA master test cases. There is always more 'clarity', but the tonality is now totally identical. The decoder has always been MORE CLEAN sounding than a true DolbyA, but sometimes too clean. The difference is now so subtle that an A/B is needed, and even then, the only way you can tell that the results are from a DolbyA is that there is less detail in vocal chorus and cymbals, and a 'saturation' effect because of the inability to track the dynamics. The DHNRDS does a much better job. There is less variation between the DHNRDS and my DolbyA test cases than what I have heard between DolbyA units. Of course, the DA decoding-level stuff is deep in the guts of FA decoding, so consumers don't really care about the intricacies of what happens in the DolbyA access stuff.

The included 'Usage' guide on the repository will give some rough usage hints and an idea about doing simple FA decoding and the very minimal 'mastering' features in the decoder. The more complete manual has been delayed because of distractions, which are now hopefully over. Ask me directly (private messaging if you want) for help installing/etc. The real manual hasn't been updated for so very long that I don't bother including it in the repo right now. Updates for the complete manual coming soon!!!

By default, I don't distribute a license file for decoding old DolbyA tapes, but I usually make a license file available to consumers gratis -- it is very seldom (almost never) needed to decode DolbyA-encoded tapes at home!!!

https://www.dropbox.com/sh/1srzzih0qoi1k4l/AAAMNIQ47AzBe1TubxJutJADa?dl=0

John
 

Pluto

Addicted to Fun and Learning
Forum Donor
Joined
Sep 2, 2018
Messages
990
Likes
1,633
Location
Harrow, UK
There is compression applied to almost every consumer digital release
No argument there: have you ever tried to manage anything beyond the most simplistic of recordings without some degree of compression/limiting?
and it is very consistent and algorithmic.
This I would dispute. The very best, well-controlled, well-mixed studio recordings require very little dynamic adjustment at the mastering phase. At the other extreme lies something that requires intense and careful control of dynamics at mastering in order to produce a result acceptable to the buyer. Naturally, there is every point in between these two extremes.

There are two major ‘strategies’ when it comes to dynamic control when making a record. One is to compress individual sources and mix them, the other is to leave the sources largely uncompressed and control everything at the mix by manipulating the faders of those individual sources and, of course, the master outputs. The chosen strategy will be the decision of the production team based on their collective experience, the musical genre and the calibre of the available equipment.

What I am trying to say here is that the approach to recording and mixing is so varied that I seriously doubt whether an algorithmic un-compressor is possible.

Furthermore, is it desirable? You may not like Monet's brushwork but what is the point of attempting to undo it so that it looks like a Van Gogh?
 
OP
J

John Dyson

Active Member
Joined
Feb 18, 2020
Messages
172
Likes
90
No argument there: have you ever tried to manage anything beyond the most simplistic of recordings without some degree of compression/limiting?

This I would dispute. The very best, well-controlled, well-mixed studio recordings require very little dynamic adjustment at the mastering phase. At the other extreme lies something that requires intense and careful control of dynamics at mastering in order to produce a result acceptable to the buyer. Naturally, there is every point in between these two extremes.

There are two major ‘strategies’ when it comes to dynamic control when making a record. One is to compress individual sources and mix them, the other is to leave the sources largely uncompressed and control everything at the mix by manipulating the faders of those individual sources and, of course, the master outputs. The chosen strategy will be the decision of the production team based on their collective experience, the musical genre and the calibre of the available equipment.

What I am trying to say here is that the approach to recording and mixing is so varied that I seriously doubt whether an algorithmic un-compressor is possible.

Furthermore, is it desirable? You may not like Monet's brushwork but what is the point of attempting to undo it so that it looks like a Van Gogh?

I can accept that it is reasonable to be incredulous that there is a phase of compression beyond what happens in the creative-arts portion of development, but SOMETHING is happening. Out of ANY respect for me and the music that is being sold to people -- please be patient and read this post. I am being totally civil, and have my engineering-history hat on -- not that of an enthusiast of any kind... The 'feelings' are all put aside, and I am chronicling the history of this endeavor below:


There is NOBODY with ANY responsible attitude about how music sounds who would produce the MUSH on consumer CDs. The compression on CDs is not the typical compression: I did divine such things as tens-of-msec release times, yet it didn't sound incredibly distorted -- something special was going on. The typical 250msec through 10second release time is NOT being used on the CDs -- something VERY intense is being used, but NOT high tech.

Initially, I was very skeptical about my early findings -- but there was an unmistakable near-2:1 compression on almost every consumer recording, with hyper-fast release times. At first, I wrote a quick expander -- ON A LARK. This expander was far from studio quality, and wasn't even acceptable for any use except testing. I adjusted the expander, changed its characteristics, etc. -- and finally started settling on some aspect of 2:1 compression, mostly M+S stereo image, and really fast attack/release. Initially, I didn't believe the M+S stereo image, so I re-discovered that much later on.

Once I had recognized that 'special sound' of the dynamics being partially corrected with my ad-hoc expander designs, I started chasing a combination of rabbits down rabbit holes and playing a frustrating game of 'whack-a-mole'. This started at about 2012, after getting REALLY irritated again with an ABBA CD in my possession. I had previously given up on the hi-fi hobby in the late 1980s, recognizing that the sound of CDs would never get better!!!

I am a truly super-experienced programmer, with DSP background, OS background, analog EE background, etc. -- so I had all of the tools immediately accessible to me to study the problem. There are VERY VERY few individuals with the intense mix of skills that I have -- that is why I got some really (mega) cool projects at AT&T Bell Labs forward-looking products development -- there are few disciplines that I cannot deal with, or don't know how to access the technology for. (BTW -- my ego is very mild, but strong -- not proving anything here, just a statement of fact.) If I do have a weakness, it is that I am a bit sloppy in my development. I have more of a forward-looking attitude in my work -- not so much 'products', even though I have participated in doing some -- even the more recent Thomson/Technicolor DirecTV set-top boxes have my fingerprints on them.
So, again, on a lark, I started putting together expanders which DID NOT ESPECIALLY TARGET DOLBYA -- but which tried to reverse the compression, not do any enhancement. Off and on, I made progress in spurts and starts -- but finally started realizing, after some research, that there MIGHT BE DolbyA in there somewhere.

When I started, I was not sensitive to the precision needed for audio reproduction... There were so many compressors/expanders/phase twisters in the audio pathway that I wasn't sure what was important, and what wasn't. I started my DolbyA phase several years into the off/on project, reading the specs and looking at the schematics in a CURSORY fashion -- not really reading them in the detail needed.

In about 2015-2016, I started touting that this DolbyA decoder could also make improvements in consumer CD recordings. It was never perfect, but did produce an aspect of cleanup. I was so happy with those results, I started talking about it publicly. That got me some notice in professional circles, so a person in the industry started working with me. His standards for 'success' were more like doing better than a certain plugin (I forget the name -- but it is a plugin that fakes DolbyA decoding with EQ more than actual decoding) for decoding actual DolbyA master tapes. My standards for success were a little more lofty: absolute accuracy, and improved quality in cleaning up the consumer stuff -- but still, progress had been made.

We started getting really close to accurate DolbyA decoding once I realized that the crazy diode array on the cat22 schematic wasn't just a diode detector, but a nonlinear attack/release calculation scheme. This came a few years into my attempts at 'improved' DolbyA decoding (and some of the results were really good sounding, but not clinically accurate) -- but all along, it still seemed to do something REALLY GOOD for the consumer stuff.

The DolbyA tape decoding phase started becoming REALLY plausible about 1.5-2 yrs ago, and just recently became dead-nuts accurate, to the point of sounding precisely like a DolbyA except very noticeably more clean. The decoder had almost always been very noticeably more clean sounding than DolbyA units, but not always clinically accurate.

The FA phase, which was always in the background and is the effort that has been seen publicly, started being near-perfection about 1 yr ago, and also became dead-nuts perfect fairly recently. I have been getting help from very advanced consumer types over the last 6 mos to 1 yr. Recognize that the pure DolbyA decoding software is now mostly a tool that is always being incrementally perfected, as I find nits in the basic decoding plus ideas for more advanced techniques. So, let's put aside talking about DolbyA per se right now. If you have a tape, I can make it sound more clean and distinct, but still usable for subsequent encode/decode cycles.

FA ONLY now:
Up until 6 mos ago, the decoder could only make progress on a portion of the compression, always leaving a lingering compression. Also, it was only about 1 yr ago that I found that the audio channel arrangement is NOT L+R, but is instead M+S as processed by the compressor. Also, as in your previous criticism and my own perception, there was always a left-over pumping and stereo image distortion that came from other signal processing -- even after a partial recovery of the recording by an M+S-configured decoding operation. Some of the worst damage might have been caused by my initial processing of the audio in L+R mode, when most pop was compressed in M+2S mode, and classical and HD recordings often in pure M+S. (That is, ch1 on DolbyA is MID, ch2 is SIDE or SIDE+6dB.)

Instead of making a long historical discussion about how the CDs/digital downloads are encoded -- here is the resulting block diagram (btw, I cannot draw at all -- so I am stuck with limited ASCII art). To convert CD mush to music (normal pop-music mode -- there is another mode, often used for classical, with a slightly different manipulation of the stereo image):



CD -> convert to M+2S -> EQ -> DolbyA decode @-46dB -> EQ -> DolbyA decode @-36dB -> EQ -> DolbyA decode @-26dB -> EQ -> DolbyA decode @-16dB -> EQ -> convert to L+R from M+2S -> final steps of 1st order EQ -> music
The calibration level of -16dB is about 3dB lower than normal on a properly level-calibrated machine/DolbyA-unit tape.
Don't get turned off by the concatenated DolbyA decoding -- it doesn't work like one might initially think. Because of how the nonlinear detectors work, the result is a rather smooth attack/release behavior that concatenates very well. If you had 4 DolbyA units and the correct EQ (which is much easier to wire together in SW than to create as a HW device), you could probably do SOME recovery of the signal, but DolbyA units themselves are very sloppy WRT dynamics timing, and the concatenation would likely result in more mush.
More complete decoding has another set of steps; instead of 4 steps, apparently 6 or 7 steps are sometimes used. 5 or 6 steps give very plausible results, where the calibrations might be: -46, -36, -26, -16 and then -56, -46. There are all kinds of reasons for the multiple steps, but much better NR and more correction of the low-level dynamics happen when you do the second set of layers. There is a 'compression of low levels only' effect if the second set of layers isn't used.
The various EQs are HIGHLY standardized, and the calibration levels can start at -50, -46, or -44.5dB -- and can also start at differing levels, but these are by far the most common. THESE NUMBERS DO NOT WORK ON CDS THAT HAVE BEEN NORMALIZED.
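
As a reading aid for the chain above, here is a hedged C++ sketch of the stereo-image handling around the decode layers. decodeLayer is a hypothetical stand-in for one "EQ -> DolbyA decode @cal" stage; the real pass is a full multiband expander:

#include <vector>

struct StereoMS { std::vector<float> m, s; };  // MID and scaled SIDE

// Placeholder for one EQ + DolbyA-compatible decode pass at 'cal_dB'.
void decodeLayer(StereoMS& ms, double cal_dB)
{
    (void)ms; (void)cal_dB;   // real multiband expander goes here
}

// One reading of the chain: L/R -> M+2S, four decode layers at
// -46/-36/-26/-16 dB calibration, then back to L/R.
void decodeChainM2S(std::vector<float>& L, std::vector<float>& R)
{
    StereoMS ms;
    ms.m.resize(L.size());
    ms.s.resize(L.size());
    for (size_t i = 0; i < L.size(); ++i) {
        ms.m[i] = 0.5f * (L[i] + R[i]);   // MID
        ms.s[i] = L[i] - R[i];            // SIDE + 6 dB ("M+2S" mode)
    }
    for (double cal : { -46.0, -36.0, -26.0, -16.0 })
        decodeLayer(ms, cal);
    for (size_t i = 0; i < L.size(); ++i) {
        const float side = 0.5f * ms.s[i];  // undo the +6 dB side scale
        L[i] = ms.m[i] + side;
        R[i] = ms.m[i] - side;
    }
}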

The previous mistakes I made: 1) trying to use 2nd-order EQ instead of smart combinations of 1st-order EQ; 2) initially not recognizing the special EQ between 1kHz and 3kHz done on EVERY recording; 3) not recognizing that the staggered decoding layers were needed; 4) numerous other foobars.

I think that the first time you heard the test demos, a year or so ago (I sure wish more people helped me -- the really good results would have come more quickly), I was using 2nd-order EQ, not recognizing the need for some special EQ in the 1k->3kHz range, only processing ONE step instead of at least 4 steps, and with a less accurate pure DolbyA decoding capability (but it was still plausible.)

As soon as I started making real progress, I kept getting pushback from certain kinds of people, some seeming more defensive than skeptical -- but I am experienced enough to know reality -- I could probably teach most EEs about analog, and most programmers about programming -- I really KNOW this stuff. Industry people gave the most pushback -- it is almost like they knew about this mess all along, preferring that people not know that they had been cheated... How can someone who does a mix actually believe that the crap being sold as a CD isn't badly distorted by the distribution world?

Once the multi-layer issue had been reverse-engineered -- something close to the original mixdown before the FA encoding (compression) can be recovered. If you want to ignore the idea of DolbyA methods are being used -- it is now one hell of a good expander/recording recovery software program.

Answering the question about being skeptical... Yes, it is hard to believe that the encoding is so standardized, but it REALLY IS!!!

John
 

audiopile

Active Member
Joined
Jun 27, 2019
Messages
162
Likes
127
This is the most interesting audio thread I've read in the last few YEARS! Could explain a lot about why we few are still hooked on old LP recordings.
 