
Decoder for material with latent DolbyA encoding

tmtomh

Major Contributor
Forum Donor
Joined
Aug 14, 2018
Messages
2,761
Likes
8,108
Keep in mind that there are two different solutions that John is working towards. One is the Dolby A decoder, for use in decoding true Dolby A. The other is to undo a lot of the compression ("damage") applied to (mainly pop/rock/jazz) music from the 80s onward, especially when remastering in later years.

It would be really useful to have available a selection of "original masters" and their released CDs to compare the decoding results to. I know John has access to some and has used them for this purpose. The hard part is determining the master provenance. It needs to be one that was approved for distribution, including any compression or effects used by the original mix and mastering engineers, but before any subsequent re-mastering.

I disagree with John on the amount of processing on early (80s) CD material. Granted, early CDs could sound harsh and grainy, but given the state of the technology at the time I would put this more down to tonal balance for vinyl mastering than to failing to Dolby A decode when making the CD master. I do feel that failing to decode Dolby A tapes would be more likely some years later when retrieving masters from the vault for a CD (re)release. This would be true Feral A. Then there is the EQ applied to try and make such tapes sound tolerable, which appears to be where John has been spending much of his efforts to undo. Finally, there is the deliberate (mis)use of Dolby A or other effects in the mixing/mastering to get a specific desired sound. He feels these last two were a widespread practice.

As for Taylor Swift and other modern recordings, I don't think John is saying that they've been Dolby A encoded. To me he seems to be saying that much of the (in many opinions excessive) processing has characteristics that remind him of the effects of Dolby A encoding (mis)used as a compressor and effects unit, and can be undone using a modified form of Dolby A decoding.

Thanks for this clarification, Don - makes sense.

Still, though, it would seem that John's "FeralA" tool is based on an underlying conviction that the compression and processing applied in the mastering stage of many recordings is sufficiently similar to the effects of Dolby A encoding that his Dolby A decoder can be used on non-Dolby A recordings, or at the very least that his "decompression/un-processing" solution can be closely derived from his Dolby A decoder.

I would respectfully disagree with that founding assumption, and I would further respectfully suggest that any software tool meant to be applied to non-Dolby A recordings in the way John says is not an "undo-er" tool but rather just another form of processing. Based on folks' comments about how his tool appears to significantly change the character of the recordings - and often not for the better - I would suggest that a basic first step would be to compare compressed recordings run through John's tool with those same recordings run through Izotope's - or even Audacity's - de-clipper tool. As a first step, John's tool should at least be able to produce better results than those de-clippers. If it can't, I'd question the point of it.
 
OP

John Dyson

Active Member
Joined
Feb 18, 2020
Messages
172
Likes
90
Thanks for this clarification, Don - makes sense.

Still, though, it would seem that John's "FeralA" tool is based on an underlying conviction that the compression and processing applied in the mastering stage of many recordings is sufficiently similar to the effects of Dolby A encoding that his Dolby A decoder can be used on non-Dolby A recordings, or at the very least that his "decompression/un-processing" solution can be closely derived from his Dolby A decoder.

I would respectfully disagree with that founding assumption, and I would further respectfully suggest that any software tool meant to be applied to non-Dolby A recordings in the way John says is not an "undo-er" tool but rather just another form of processing. Based on folks' comments about how his tool appears to significantly change the character of the recordings - and often not for the better - I would suggest that a basic first step would be to compare compressed recordings run through John's tool with those same recordings run through Izotope's - or even Audacity's - de-clipper tool. As a first step, John's tool should at least be able to produce better results than those de-clippers. If it can't, I'd question the point of it.

(For FA stuff, EVERY TIME that I mention DolbyA, I mean the DolbyA with slightly modified layout and some EQ.)

The decoder undoes a layer of processing that was explicitly applied. I don't know how else to describe it. For example:

1) If I use any attack/release different from DolbyA's, the result sucks. For the HF bands: dynamic attack 2-40msec, dynamic release 40-80msec, with an exponential calculation based on signal level differences (very complex.) Believe me -- with any slight error the result is damaged, and anything significantly different doesn't sound good at all.

2) If the calibrations use increments different from 10dB, the result sucks.

3) The lowest calibration always needs to be lower than the normal calibration used on tape (-13dB or so, or below.) -13dB is magic because it basically leaves 0dB of extra headroom beyond the norm on DolbyA tapes. Anything below that helps with headroom, but decreases SNR. Anyway, that is the reason for the typical highest calibration of -14.5dB.

4) The ideal lowest calibration numbers are -1.0dB, -1.5dB, -3.0dB, or -6dB relative to the -13dB. Sometimes they start 10dB lower. This is only valid when the CD isn't normalized -- normalization screws things up. (Corrected previously: initially I wrote -9dB above instead of -6dB; I was thinking of the exact calibration number, not the difference.)

Also, the decoder has a running gain display. When starting with a calibration between -44dB and -50dB, the gains just start varying; on recordings at normal signal levels, anything below that is wasteful. If you start between -44dB and -50dB and increment by 10dB per step, the results are kind of normal. If you start at -14 to -20dB and work downwards, ugly stuff results.
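The level-dependent attack/release behavior described in (1) can be sketched as a one-pole exponential smoother. The 2-40msec attack and 40-80msec release ranges are from the post above; the rule for mapping the level difference onto a time constant is purely my illustrative assumption, not the decoder's actual (very complex) calculation:

```python
import math

def envelope_follower(levels, fs=44100.0,
                      attack_ms=(2.0, 40.0), release_ms=(40.0, 80.0)):
    """One-pole exponential smoother whose time constant depends on whether
    the level is rising (attack) or falling (release).

    Assumes `levels` are normalized to roughly 0..1. The rule used here
    (bigger level jumps pick faster time constants inside each range) is a
    guess for illustration only.
    """
    env = 0.0
    out = []
    for x in levels:
        x = abs(x)
        lo, hi = attack_ms if x > env else release_ms
        diff = min(1.0, abs(x - env))          # larger difference -> faster
        tau_ms = hi - (hi - lo) * diff
        coeff = math.exp(-1.0 / (fs * tau_ms / 1000.0))
        env = coeff * env + (1.0 - coeff) * x  # exponential smoothing step
        out.append(env)
    return out
```

With a constant full-scale input the envelope rises quickly at first (short attack constant) and settles toward the input; when the input drops, it decays with the slower release constants.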

There are so many things that point to *modified* DolbyA compression on consumer recordings that it is the only answer I can think of.
On the other hand, they might have a special jig with cat22 cards, or maybe specially built HW, but it is very DolbyAish.

One really good thing that has come of this -- the FA material has created good tests for my decoder for DolbyA material. A myriad of quality issues have been resolved by using the FA test material. The latest version is great (I will be sending the latest/best version to my project partner soon.) It blows every previous DA decoder version away, except for the immediately previous one.


John
 
OP

John Dyson

Active Member
Joined
Feb 18, 2020
Messages
172
Likes
90
Thanks for this clarification, Don - makes sense.

Still, though, it would seem that John's "FeralA" tool is based on an underlying conviction that the compression and processing applied in the mastering stage of many recordings is sufficiently similar to the effects of Dolby A encoding that his Dolby A decoder can be used on non-Dolby A recordings, or at the very least that his "decompression/un-processing" solution can be closely derived from his Dolby A decoder.

I would respectfully disagree with that founding assumption, and I would further respectfully suggest that any software tool meant to be applied to non-Dolby A recordings in the way John says is not an "undo-er" tool but rather just another form of processing. Based on folks' comments about how his tool appears to significantly change the character of the recordings - and often not for the better - I would suggest that a basic first step would be to compare compressed recordings run through John's tool with those same recordings run through Izotope's - or even Audacity's - de-clipper tool. As a first step, John's tool should at least be able to produce better results than those de-clippers. If it can't, I'd question the point of it.


I wanted to follow up with another message on a slightly different topic. Many of the previous evaluations were based on mistakes from several months ago. This has been a long, long walk.

Also, there is material that has been EQed or otherwise changed. For everyone who has had trouble using the decoder, I can find someone who has gotten good results. On the other hand, some material isn't FA, or is compressed differently.

John
 
OP

John Dyson

Active Member
Joined
Feb 18, 2020
Messages
172
Likes
90
*Please* indulge me on this comparison.
This is intended as a small piece of evidence towards the FA encoding being ubiquitous.
None of this is cherry-picked -- mostly just some music that I like a little.

There is *varying* improvement -- I am not trying to convert anyone, but to show what I
have been seeing in my experiments.

These are snippets from the songs:

Rhiannon / Fleetwood Mac
There's a Kind of Hush / Carpenters
'That's Me' / ABBA

Each one shows varying improvement -- perhaps Rhiannon the most, There's a Kind of Hush is middling (better image, less hiss), and That's Me is just weird. That's Me was only decoded up to -24dB instead of -14dB because the result was a bit rough sounding, so there is still some FA left in the recording. The FA layering is like the proverbial Russian doll.

The differences: listen carefully to the grain in the rawCD versions, while the DECODED version is clear/clean. It isn't perfect, and probably demands a bit of mastering/EQ, but I truly don't see how the DECODED version isn't an improvement, or at least headed towards one.

Whether or not one likes the sound of the DECODED version, isn't it apparent that the rawCD version has been terribly damaged? This damage is on almost EVERY consumer CD. Also, if the decoding process didn't come very close to what was needed, the DECODED result would be much worse.


https://www.dropbox.com/s/yb1pwwzsfz3mgns/04 - Rhiannon-rawCD.flac?dl=0
https://www.dropbox.com/s/1ynvbex5wkkcx5t/04 - Rhiannon-DECODED.flac?dl=0

https://www.dropbox.com/s/wdl1dtk4twkmk9k/01 - There's A Kind of Hush-rawCD.flac?dl=0
https://www.dropbox.com/s/v6j9ng0lvzmu6at/01 - There's A Kind of Hush-DECODED.flac?dl=0

https://www.dropbox.com/s/gbd4llv63s1kbtj/ThatsMe-rawCD.flac?dl=0
https://www.dropbox.com/s/p993xfusma64elb/ThatsMe-DECODED.flac?dl=0

John
 

krabapple

Major Contributor
Forum Donor
Joined
Apr 15, 2016
Messages
3,193
Likes
3,755
Keep in mind that there are two different solutions that John is working towards. One is the Dolby A decoder, for use in decoding true Dolby A. The other is to undo a lot of the compression ("damage") applied to (mainly pop/rock/jazz) music from the 80s onward, especially when remastering in later years.

It would be really useful to have available a selection of "original masters" and their released CDs to compare the decoding results to. I know John has access to some and has used them for this purpose. The hard part is determining the master provenance. It needs to be one that was approved for distribution, including any compression or effects used by the original mix and mastering engineers, but before any subsequent re-mastering.


What would be at least as useful if not more so would be the testimony of an actual 'name' audio engineer who did or does mixing and/or mastering of major label recordings of the era in question, verifying these speculations about industry-wide production practices that seem to have heretofore been kept secret.
 
OP

John Dyson

Active Member
Joined
Feb 18, 2020
Messages
172
Likes
90
What would be at least as useful if not more so would be the testimony of an actual 'name' audio engineer who did or does mixing and/or mastering of major label recordings of the era in question, verifying these speculations about industry-wide production practices that seem to have heretofore been kept secret.

I am VERY VERY curious what is happening also. I don't think that it is in 'mastering', because I have had some minor discussions, hinting around, with people who might be doing remasters. My guess is that the damage happens between mastering and somewhere towards distribution. It IS real -- if I tried to run my process on material that wasn't so encoded, the result would be atrocious rather than sometimes imperfect.

So 1) Existence proof 2) where is it coming from?

I would see success as 1) the industry quits cr*pping on the recordings, or 2) the process that I have truly, tediously reverse engineered *without prejudice, and with lots of mistakes* becoming more available. I plan to make both the technology that decodes DolbyA material and the FA decoding available gratis, with simple attribution. It isn't 100% ready yet because of some cr*p code that I would be embarrassed for people to see (the raw .wav file I/O code, the band-splitting code and the .wav file parsers -- desperate hackerware.) The actual DA decoding is as clean as it can be.



The raw DolbyA decoding is much better than even two months ago. Even though it previously worked reasonably well -- much better than any plugin -- there were problems with the HF0/HF1 split, because the overlapped HF0/HF1 bands are tricky to emulate in feed-forward. The errors weren't in the architecture as much as in the precise calculations for the split. It has gotten to the point where I cannot tell the difference without an immediate A/B, listening for differences in clarity between the DHNRDS/DA and true DolbyA. Also, the testing with FA (the consumer stuff) has forced much better matching of the parameters for the DolbyA attack/release curve. The vast amount of CONSUMER test material has markedly forced an improvement on the DA side of things. (I am sending a new release with the improvements forced by FA and verified against true DolbyA master tapes in a few days -- Wednesday at the latest.)

The FA decoding is a different story, since it is derived from multiple passes of DA. The testing with multiple passes on consumer material has forced a great increase in DA precision. This iterative improvement is another indicator of the FA encoding being ubiquitous: the result of the test iterations on FA has converged precisely to DolbyA behavior.

In recent months, the problem with FA decoding hasn't been with the architecture (again), but with usage. I started with some prejudices from day one that were disproven. That is, I am not pre-ordaining a solution, but I definitely hear (and detect by test) similar processing on almost every consumer recording.

Here are my mistaken prejudices:

1) The 'compression' was leakage of EQed DolbyA: WRONG.
   Explanation: the encoding is ch1: Mid, ch2: Side, and not the normal L/R.
2) There is only one layer of modified DolbyA compression: WRONG.
   Explanation: even though removing one layer helps, I kept getting complaints about left-over compression.
3) There was no EQ before the compression process: WRONG.
   Explanation: recordings from some sources have had significant EQ, esp. sometimes depressing the 4.5k to 9k region.
4) The steps between each DA-compatible decoding were 2nd order: WRONG.
   Explanation: each step required compensatory EQ because of response biases in DolbyA. My initial experiments assumed 2nd-order EQ, and even though that came close, it never sounded perfect. It appears that the 1k to 3k EQ and the 3k-on-up EQ are all 1st order (1k to 3k is multiple 1st order).

ALL of the above prejudices had to be reversed -- my mind is open to facts, including the possibility that the encoding does NOT exist.
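The Mid/Side observation in point (1) is easy to state precisely. This is just the standard M/S matrix (general audio math, not the decoder's actual code):

```python
def lr_to_ms(left, right):
    """Standard Mid/Side encode: mid = (L+R)/2, side = (L-R)/2."""
    return (left + right) / 2.0, (left - right) / 2.0

def ms_to_lr(mid, side):
    """Inverse: L = M + S, R = M - S (exact round trip)."""
    return mid + side, mid - side
```

The point is that if ch1/ch2 of the encoded material are actually M and S rather than L and R, a decoder that treats them as L/R will expand the wrong signals.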

Once I had determined that there were multiple layers of encoding with DolbyA cards, I tried the following:

1) Multiple steps at the same calibration: WRONG -- caused over-expansion.
2) Start with the highest calibration first (given that a normal DA tape is about -13dB: -14.5, -16, -19, -20, or 10dB less): WRONG.
   Doing a calibration sequence like -14.5, -24.5, -34.5, etc. sounded very wrong.
3) Start with the lowest calibration first: -44.5, -34.5, -24.5 (sometimes stopping there), -14.5: sounds best.
4) Try steps other than 10dB: 10dB is correct after lots of testing.
------
Before each decoding step, a compensatory EQ is needed -- anyone who knows DolbyA knows that it creates frequency response biases, and the EQ undoes those. I forget the exact numbers; I'd need to refer to the code for more information. It is all 1st order: a dip between 1k and 3k is needed, with a boost/dip at 3k or 3.18k (depending), and the tail-off for the EQ is either 9k, 10.5k, 18k, 22k or inf.
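For reference, a generic 1st-order shelf of the kind such compensatory EQ would be built from can be derived with the bilinear transform. The corner frequencies mentioned here (1k, 3k, 3.18k, the various tail-offs) come from the post; the filter itself is textbook DSP, not the decoder's implementation:

```python
import math

def first_order_high_shelf(fs, f0, gain_db):
    """1st-order high-shelf coefficients (bilinear transform, prewarped).

    gain_db > 0 boosts above f0; gain_db < 0 gives a dip. Apply with the
    direct form: y[n] = b0*x[n] + b1*x[n-1] - a1*y[n-1].
    Generic textbook DSP for illustration only.
    """
    G = 10.0 ** (gain_db / 20.0)       # linear high-frequency gain
    rg = math.sqrt(G)
    k = math.tan(math.pi * f0 / fs)    # prewarped corner frequency
    b0 = (G + k * rg) / (1.0 + k * rg)
    b1 = (k * rg - G) / (1.0 + k * rg)
    a1 = (k * rg - 1.0) / (1.0 + k * rg)
    return b0, b1, a1
```

By construction the response is exactly unity at DC and exactly `gain_db` at Nyquist, with the transition centered on `f0`.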

The diagram is approximately this (the calibration numbers are examples only -- usually a standard set if the material is not normalized):

CD -> EQ -> decode at -44.5dB -> EQ -> decode at -34.5dB -> EQ -> decode at -24.5dB (sometimes stops here) -> EQ -> decode at -14.5dB -> final EQ

The EQs in between are usually the same for each step. The final step uses a different scheme.

Some recordings have a 2nd pass, which starts at 10dB lower than the previous pass, and only goes for two steps max. Some even have 3 passes.
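Structurally, the chain in the diagram reduces to a simple loop over the calibration ladder. Here `eq` and `dolbya_decode` are hypothetical placeholders for the real compensatory-EQ and DolbyA-compatible expansion stages; only the ladder values come from the post:

```python
def decode_fa(audio, eq, dolbya_decode,
              calibrations=(-44.5, -34.5, -24.5, -14.5)):
    """Sketch of the multi-pass chain: EQ -> decode at the lowest
    calibration -> EQ -> decode 10dB higher -> ... (lowest, i.e. most
    negative, calibration first).

    `eq` and `dolbya_decode` are hypothetical stand-ins; the real final
    EQ step, which the post says uses a different scheme, is omitted.
    """
    for cal in calibrations:              # lowest calibration first
        audio = eq(audio, cal)            # compensatory EQ before each step
        audio = dolbya_decode(audio, cal) # DolbyA-compatible expansion
    return audio
```

A second pass, when present, would just be another call with a ladder shifted 10dB lower and truncated to two steps.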

Aside from 'Brothers In Arms', which I am still trying to get to sound right, the ABBA recordings have been very difficult, because their sound is not natural -- but they are EXCELLENT test material precisely because they are so difficult.

Below are my best results so far on the soon-to-be-released FA decoder (using the latest DA technology) on an ABBA piece. Almost all consumer recordings see this kind of improvement.

1) DECODED has much much less hiss than rawCD.
2) DECODED is less grainy sounding than rawCD.
3) DECODED has more natural ambience decay than rawCD.


* If the material hasn't been 'encoded', then I have created the best possible expander/single-ended NR that has ever existed (but that isn't true.)

The big problem that I can tell so far: I suck at final EQ (and also at choosing the correct EQ.) God (respectfully) didn't intend for me to be able to mix or master recordings -- but I do program and research very well (albeit slowly!!!)

All of the normal Polar ABBA albums can be COMPLETELY decoded with the following command -- most often, you can get by with just the first '--fcs' to clean up most of the FA sound. This results in 7 passes through the DA decoding process -- if the material wasn't compressed, the result would be UNLISTENABLE, as anyone who knows DolbyA knows!!! I do complete decodes for testing, and the ABBA recordings were just sitting around today:

export AGD=' --fcs=3,-40,fcf=classical --fcs=2,-60,fcf=classical --fcs=2,-70,fcf=classical --wia=1.414 --woa=0.8409 '

This is one typical (NOT EXTREME/CHERRY-PICKED) example:
(Don't use the direct Dropbox player for accurate playout.)


original/rawCD:
https://www.dropbox.com/s/vfzm9w27mycrajd/07 - Two For The Price Of One-rawCD.flac?dl=0
decoded:
https://www.dropbox.com/s/u4xb5wazbmokkxo/07 - Two For The Price Of One-DECODED.flac?dl=0

Currently working HARD on the next release!!!

John
 

krabapple

Major Contributor
Forum Donor
Joined
Apr 15, 2016
Messages
3,193
Likes
3,755
I am VERY VERY curious what is happening also. I don't think that it is in 'mastering', because I have had some minor discussions, hinting around, with people who might be doing remasters. My guess is that the damage happens between mastering and somewhere towards distribution. It IS real -- if I tried to run my process on material that wasn't so encoded, the result would be atrocious rather than sometimes imperfect.

What is downstream of mastering is the digital equivalent of 'disc cutting' the lacquer master for an LP.

Leaving aside proper head alignment and Dolby A decoding during cutting -- because AFAICT you are suggesting this is *not* necessarily the issue (nor is there testimonial evidence from engineers that improper Dolby A decoding of source tapes was widespread):

In the old LP days this was the art of 'fitting' the content of the master tape to the limits inherent in scraping valleys into a platter meant to be played back by a stylus/cartridge/preamp system (so, e.g., summing bass to mono, adjusting for sibilance, etc.). It could take several tries to get it 'right'. There could be a digital stage (digital delay) involved, esp. from the 1980s onward. Here's a pretty in-depth look at cutting lathe tech:

http://pspatialaudio.com/lathes.htm

In addition, because master discs could wear out, requiring a new master disc to be cut, the 'fitting' adjustments made for cutting were often captured on a tape -- a 'production master (tape)' or 'disc/cutting master (tape)' -- during the first cutting. So the next time, instead of needing a cutting engineer there to carefully monitor the process, you could just play the production master -- i.e., an altered, compromised version of the original master tape, optimized for vinyl -- and cut that to disc.

None of that is necessary to make CDs. However, it is widely believed that production masters were mistakenly used as sources for some CD releases, especially early on, when due diligence wasn't practiced when pulling source tapes. Companies wanted to get product out fast back then, and the most handy tape copy, rather than the original master tape stored away or marked 'do not use', may have been used as a CD master. It was this purported bad practice that justified the later wave of 'CDs from the original master tapes'. And who knows how often it may have happened since then; no one is even definitive on how often it happened in the first place.

The intricacies of CD 'cutting' (glass mastering) and replication are another story, though there is far less scope for significant audible differences being introduced there, and nothing to my knowledge that would account for what you claim to hear in so very many releases.
 

tmtomh

Major Contributor
Forum Donor
Joined
Aug 14, 2018
Messages
2,761
Likes
8,108
In addition, because master discs could wear out, requiring a new master disc to be cut, the 'fitting' adjustments made for cutting were often captured on a tape -- a 'production master (tape)' or 'disc/cutting master (tape)' -- during the first cutting. So the next time, instead of needing a cutting engineer there to carefully monitor the process, you could just play the production master -- i.e., an altered, compromised version of the original master tape, optimized for vinyl -- and cut that to disc.

None of that is necessary to make CDs. However, it is widely believed that production masters were mistakenly used as sources for some CD releases, especially early on, when due diligence wasn't practiced when pulling source tapes. Companies wanted to get product out fast back then, and the most handy tape copy, rather than the original master tape stored away or marked 'do not use', may have been used as a CD master. It was this purported bad practice that justified the later wave of 'CDs from the original master tapes'. And who knows how often it may have happened since then; no one is even definitive on how often it happened in the first place.

You've hit on a key, and sad, issue with digital masters used to make some (many?) CDs of albums originally released during the LP era. It is an unfortunate irony that the "do not use" tapes were marked as such because they were the true masters and therefore could not be cut to vinyl without first being modified (compromised, really) - and then that "do not use" notation became viewed as a rule, without context, by folks who were not aware of the circumstances behind it, and so many CDs were cut from EQ'd LP cutting tapes even though the true master was right there in the vaults.

It's particularly unfortunate that a portion of the more recent reissues/remasters of such albums, which have used the true master tapes, have been negatively impacted by digital peak-limiting and other forms of compression. Hence the common mastering dilemma for consumers concerned about the best sound: a dynamic, minimally processed digital version of a copy tape or LP cutting tape, or a dynamically compressed, more processed digital version of the true master tape.

For some people this is not a problem since they actually like and prefer the "LP" sound of the LP cutting master - it's what they're accustomed to from years/decades of listening to LPs, so they identify it as "organic," "mellow," and "like the original first-press LP, which is the best I ever heard this album." While it is true that some older CDs made from cutting tapes on the whole sound better than new remasters made from the master tape but ruined with peak-limiting, I think the balance is off among a lot of audiophiles - too much emphasis on full dynamics, not enough emphasis on source-tape quality.
 
OP

John Dyson

Active Member
Joined
Feb 18, 2020
Messages
172
Likes
90
You've hit on a key, and sad, issue with digital masters used to make some (many?) CDs of albums originally released during the LP era. It is an unfortunate irony that the "do not use" tapes were marked as such because they were the true masters and therefore could not be cut to vinyl without first being modified (compromised, really) - and then that "do not use" notation became viewed as a rule, without context, by folks who were not aware of the circumstances behind it, and so many CDs were cut from EQ'd LP cutting tapes even though the true master was right there in the vaults.

It's particularly unfortunate that a portion of the more recent reissues/remasters of such albums, which have used the true master tapes, have been negatively impacted by digital peak-limiting and other forms of compression. Hence the common mastering dilemma for consumers concerned about the best sound: a dynamic, minimally processed digital version of a copy tape or LP cutting tape, or a dynamically compressed, more processed digital version of the true master tape.

For some people this is not a problem since they actually like and prefer the "LP" sound of the LP cutting master - it's what they're accustomed to from years/decades of listening to LPs, so they identify it as "organic," "mellow," and "like the original first-press LP, which is the best I ever heard this album." While it is true that some older CDs made from cutting tapes on the whole sound better than new remasters made from the master tape but ruined with peak-limiting, I think the balance is off among a lot of audiophiles - too much emphasis on full dynamics, not enough emphasis on source-tape quality.

What does un-encoded material (even if compressed, but non-DolbyA) sound like when you try to decode it with DolbyA? Listenable? Huh? I am doing it at least 4 times, with careful use of EQ -- and getting a BETTER result.

If this is from the LP era, then why are 'Shake It Off' and 'Call Me Maybe' also FA? (Taylor Swift, Carly Rae Jepsen -- certainly not LP era.)

Admittedly, I do have LP rips that are also FA, and some that aren't. Maybe the process was not coincident with CDs, but might have started as an earlier practice. Cat22 cards have been around for a long time, and putting together a small array of 4 cat22s to produce FA isn't very hard. It could be a simple little final-processing box in the corner somewhere? I mean, it sounds stupid, but SOMETHING BAD IS HAPPENING, and I can produce a precise technical description of it.

With the extreme amount of expansion at multiple layers, the dynamics would be severely expanded -- even on this material. Listen -- contemporary pop/rock (ca. 2010s), NOT LP:
https://www.dropbox.com/s/eifah5q773vyooo/CallMeMaybe-decoded-snippet.flac?dl=0
(My EQ might be in error -- but the dynamics would have been EXTREME if not encoded)

John
 

tmtomh

Major Contributor
Forum Donor
Joined
Aug 14, 2018
Messages
2,761
Likes
8,108
What does un-encoded material (even if compressed, but non-DolbyA) sound like when you try to decode it with DolbyA? Listenable? Huh? I am doing it at least 4 times, with careful use of EQ -- and getting a BETTER result.

If this is from the LP era, then why are 'Shake It Off' and 'Call Me Maybe' also FA? (Taylor Swift, Carly Rae Jepsen -- certainly not LP era.)

Admittedly, I do have LP rips that are also FA, and some that aren't. Maybe the process was not coincident with CDs, but might have started as an earlier practice. Cat22 cards have been around for a long time, and putting together a small array of 4 cat22s to produce FA isn't very hard. It could be a simple little final-processing box in the corner somewhere? I mean, it sounds stupid, but SOMETHING BAD IS HAPPENING, and I can produce a precise technical description of it.

With the extreme amount of expansion at multiple layers, the dynamics would be severely expanded -- even on this material. Listen -- contemporary pop/rock (ca. 2010s), NOT LP:
https://www.dropbox.com/s/eifah5q773vyooo/CallMeMaybe-decoded-snippet.flac?dl=0
(My EQ might be in error -- but the dynamics would have been EXTREME if not encoded)

John

I don't want to go in circles here, John - and you are responding to my comment as if I were making an argument that I was not in fact making.

But since you brought it up, your claim that material from all eras, regardless of whether or not it was compressed with Dolby A, sounds better when decoded with your Dolby A decoder, gets to the heart of what I - and many others here - are trying to say to you, but which you seem intent on ignoring: The results sound better to you because... they sound better to you. Subjectively, it seems clear that you prefer the effects of the decompression your "FeralA"/Dolby A decoder applies to commercial masterings, regardless of how compressed they are, and regardless of whether or not that compression is actually Dolby A compression.

From the perspective of most folks at this forum, the problem with your claim is that if your decoder is not actually undoing Dolby A, apples to apples, then your "decoder" is not actually a decoder but rather an effects processor - and so the results you're getting might be more dynamic, but they are not necessarily any more high fidelity, in the sense of being more faithful to the original master tape/source, than the commercial release is. In fact, if you are applying specific Dolby A decompression and EQ to a source that was compressed or EQ'd with a different scheme - or even worse, a source that was compressed but not EQ'd at all as part of the compression - then you are producing an altered, lower-fidelity version (again with "fidelity" meaning "faithful to the source" rather than "what John thinks sounds good").

So as I noted before, if you're applying your tool to non-Dolby A-encoded sources, then it's just a version of a de-clipper filter with, apparently, some EQ built in that you find euphonic. That's fine - but it's not what you claim it is.
 
OP

John Dyson

Active Member
Joined
Feb 18, 2020
Messages
172
Likes
90
tmtomh said:
I don't want to go in circles here, John - and you are responding to my comment as if I were making an argument that I was not in fact making. [...]

No -- I am not claiming that everything is ALL FA ENCODED. In fact *SOME* FA recordings DO sound better when left alone. I guess you (rightfully) don't know my entire context and thought process, and I SUCK BADLY at communicating sometimes.

I guarantee you -- I have really good reasons for giving up on Brothers In Arms, even though I tried and tried and tried. It isn't that I couldn't do a REALLY REALLY good decode; it just doesn't have the same sound that we all expect. The FA version sounds BETTER to me, even though I don't like the gritty, grimy FA sound in general.

I guess my vehemence has gotten in the way of my ability to communicate, and I apologize for creating the impression that I am arguing.

Frankly, my strongest emotion is frustration -- because I waited a long time in the '80s for CDs to start sounding better, and that is the reason why I walked away from the 'hi-fi' side of the hobby by 1990. Hi-fi is anathema when starting with poor-quality material. (In the decade before that, I REALLY used to do real stereo recordings -- with real paired condenser microphones -- and got really good results, no FA.) FA ruined it for me back in the late '80s. It just so happens that I ran into an opportunity to research it -- and at least partially solved the technical side of the problem.

John
 
OP
J

John Dyson

Active Member
Joined
Feb 18, 2020
Messages
172
Likes
90
As you probably know, I have been working on the FA decoder for a LONG time. Finally, I am starting to get results that really do sound like master tapes instead of a simple improvement. I have gotten beaten up over the midrange in the output; I believe part of it is the overly strong midrange in most consumer recordings (with distorted, intermodulated highs.)

I have some demos that I just made, ready JUST BEFORE a new release. I should have done the release yesterday, but found a problem with the LF range (some recordings are done differently than others.) The new release is coming out on Thursday 17 Dec. The decoding controls are INFINITELY simpler than before, and normally require no extra EQ (you don't even need to think about frequencies or shelving EQ at all.) There are standard modes (two basic modes, and about 3-4 common variants each.) Even if you don't choose the perfectly correct settings, the results are still plausible. Previously, the processing didn't line up perfectly from step to step; now, I think that it does.

You have not heard 'Mrs Robinson' or even 'Reason to Believe' from the Carpenters as cleanly as these examples (unless you have the master tapes.) I have some other short examples -- even a copy of An American in Paris (sorry, truncated -- I can give access to full demos with a promise not to redistribute.)

The midrange is fully expressed, and the stereo image is MUCH MUCH more stable than on most consumer recordings (most of the distributed copies have a fuzzy stereo image.) VERY LITTLE FUZZ in these examples.

https://www.dropbox.com/sh/mjmdfxu8gdweoc2/AACE7AQA1kZ0AIFNxar_sZoJa?dl=0

John
 

tmtomh

Major Contributor
Forum Donor
Joined
Aug 14, 2018
Messages
2,761
Likes
8,108
John, thanks for sharing these samples - very helpful.

I have listened to all of them, and I have to say, to my ears, on my system, they don't sound very good and they certainly don't sound like what I would imagine the master tapes sound like. Of course I have no idea what the master tapes sound like, but I have a very difficult time seeing how any mastering engineer could have produced most of the better-sounding commercial releases of these albums if the master tapes sound like what your samples sound like.

All of them sound excessively rolled-off in the treble, and most of them sound slightly bloated in the bass (although that could just be a perceptual effect of the rolled-off treble). In addition, there is quite audible distortion in the ABBA sample, and while I would certainly agree that the imaging is not fuzzy, it doesn't seem any more precise than on the better official releases, and the soundstage width appears to be slightly, but quite noticeably, narrowed compared to most officially released versions.
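That treble impression can be sanity-checked numerically rather than by ear. A minimal sketch (pure Python with a naive DFT; the function and test tones are my own illustration, not anything from John's tool) that reports how much of a signal's energy sits at or above a cutoff frequency:

```python
import cmath
import math

def band_fraction_db(x, fs, f_lo):
    """Return 10*log10 of the fraction of signal energy at or above f_lo Hz.

    Naive O(N^2) DFT over bins 0..N/2 -- fine for short snippets,
    illustration only.
    """
    n = len(x)
    total = 0.0
    high = 0.0
    for k in range(n // 2 + 1):
        # k-th DFT bin, centre frequency k*fs/n
        X = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        e = abs(X) ** 2
        total += e
        if k * fs / n >= f_lo:
            high += e
    return 10.0 * math.log10(high / total + 1e-12)

fs = 48000
n = 240  # bin spacing fs/n = 200 Hz, so the test tones land on exact bins
tone_12k = [math.sin(2 * math.pi * 12000 * t / fs) for t in range(n)]
tone_1k = [math.sin(2 * math.pi * 1000 * t / fs) for t in range(n)]

print(band_fraction_db(tone_12k, fs, 9000))  # near 0 dB: almost all energy above 9 kHz
print(band_fraction_db(tone_1k, fs, 9000))   # strongly negative: energy sits below 9 kHz
```

Run frame-by-frame over an official release and the corresponding decoded sample, a consistent deficit above roughly 9 kHz in the decoded file would confirm the roll-off objectively.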

I don't know what you are doing, but I would be shocked if your decoder was not applying some treble-cutting EQ. I also see in the notes on several of the files some mention of a de-esser, which could perhaps be attenuating the treble too.

In any event, I'm sorry but these samples have only increased my prior suspicion that these sources are either not undecoded Dolby A, or if they are, your tool is not yet there as a decoder - it is functioning as a euphonic processor that seems to follow the sonic preferences of those who profess to seek "warm, analogue" sound - and with all respect it's not doing a particularly good job of producing that effect.

I'm truly sorry to be writing this - I can imagine how disheartening, or frustrating, or irritating it must be to get this kind of feedback. I considered not commenting at all - but given the ethos of this site and the fact that you have put these samples out there so that folks can listen and comment, I thought on balance it was better to post an honest impression. Thank you for your time and your work. I will be interested to see how the decoder develops in the future.
 
OP
J

John Dyson

Active Member
Joined
Feb 18, 2020
Messages
172
Likes
90
I just checked something else -- I did NOT master the examples, and there might need to be a -6dB 9kHz shelf (single pole.) I tend not to do those things, because I am more worried about the delay distortions and intermodulation that sit in so much consumer material... Sorry about that -- I just noticed the Rumours examples needed the shelving, but I simply do NOT hear it. The decoder can remedy that problem with a SINGLE CHARACTER; I just didn't do it. (Again, I don't hear frequency-balance issues very well -- I do hear distortion, though.)

I just added the missing mastering step. I normally enjoy the treble boost to hear distortion. Sorry about that. The recordings might have sounded a bit intense. (They were missing a 9k to 21k shelf -- it is built into the decoder -- but I disabled it for my testing.) I don't hear the error.
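For anyone who wants to apply the missing step themselves, one way to sketch a single-pole high shelf is as a complementary one-pole split: a lowpass part that passes unchanged, plus an attenuated highpass residual. This is only my minimal interpretation of "-6dB 9kHz shelf (single pole)", not the decoder's actual filter:

```python
import math

def single_pole_high_shelf(x, fs, f_c=9000.0, gain_db=-6.0):
    """First-order high shelf: content well above f_c is scaled by
    gain_db, content well below f_c passes unchanged."""
    g = 10.0 ** (gain_db / 20.0)                    # -6 dB -> ~0.501 linear
    a = 1.0 - math.exp(-2.0 * math.pi * f_c / fs)   # one-pole lowpass coefficient
    y, lp = [], 0.0
    for s in x:
        lp += a * (s - lp)      # lowpass part (below the shelf corner)
        hp = s - lp             # complementary highpass residual
        y.append(lp + g * hp)   # shelve the highs, keep the lows
    return y

fs = 44100
dc = [1.0] * 2000
out = single_pole_high_shelf(dc, fs)
print(out[-1])  # ~1.0: unity gain at DC; only the highs are cut
```

Being single-pole, the cut is gentle: the full -6 dB is only approached well above the corner, which matches the mild character of the shelf described.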

* THIS IS A CORRECTION TO A PREVIOUSLY FRUSTRATED -- PERHAPS RUDE RESPONSE. I KNOW that the recordings are extremely clean and couldn't understand the comments -- until I noticed the frequency balance issue.

John
 
Last edited:
OP
J

John Dyson

Active Member
Joined
Feb 18, 2020
Messages
172
Likes
90
All of the decoder's major problems are fixed. I am a master of premature releases -- this isn't one.

The examples are online (with a hissy 'RAW' before and a 'DEC' after decoding). On other groups I explained about accommodation: it is easy to learn to suppress the hiss and ignore the compression on most digital material. However, these examples should be proof to 'virgin' hearing, and those who have become accommodated will come around.

Note that the examples have not been 'mastered' or 'modified', and the results are now almost totally mechanical. Sometimes there are recordings that need different POST DECODING EQ, but nothing really needs to be changed before decoding except the calibration level (where the gain curves line up.) The decoder has built-in facilities for turning off some of the EQ that is normally needed by most recordings, but not needed by some. The calibration is usually -42.675, which correlates reasonably with the usual -12.80 on DolbyA tapes. (The -42.675 comes from 30dB lower than -12.80.) The offset comes from a mistaken built-in offset and a historical misalignment of 0.10dB in the decoder, but it works well to use the DolbyA level on tapes. In some cases, some 'touch-ups' can be helpful (like a 2nd-order -1.5dB cut at 9kHz or 6kHz) -- but I didn't make any changes.
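If I'm reading the calibration paragraph right, the relationship between the DolbyA tape level and the FA calibration value is plain arithmetic; here is a sketch (the 0.125 dB residual is simply the subtraction, attributed above to the built-in offset and misalignment):

```python
DOLBYA_TAPE_LEVEL_DB = -12.80   # usual DolbyA reference level on tape
FA_OFFSET_DB = -30.0            # FA calibration sits 30 dB below that

nominal = DOLBYA_TAPE_LEVEL_DB + FA_OFFSET_DB
print(round(nominal, 2))        # -42.8, the nominal FA calibration

DECODER_DEFAULT_DB = -42.675    # the value the decoder actually uses
residual = DECODER_DEFAULT_DB - nominal
print(round(residual, 3))       # 0.125 dB left over, per the offset explanation
```

Note that 30 dB below -12.80 is -42.80, not -42.675 exactly; the 0.125 dB gap is what the built-in offset and 0.10 dB misalignment account for.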

Yes, I know it seems like I am confusing DolbyA and the FA decoder, but they are highly interrelated. Rather than blather on: there are some examples in the repository (before and after.) There is also a video based on a slightly earlier version of the decoder, along with a pointer to the site. The next version will have a simplified command line, since it has become ALMOST as easy to use as for DolbyA!!!

The examples include very recent recordings (well, now just one -- the space holding the other recording is currently offline.) Still re-organizing for the upgrade to my Intel 'X' machine while still using the 4-core for the UI.

Video: (I have explained it elsewhere; most of the values are gains -- you can see the huge amount of distortionless processing, sometimes 40dB, bouncing several times within a second)

https://www.dropbox.com/s/xy5oz7c1tywuacg/example4-2020-12-20_01.04.02.mkv?dl=0
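The bouncing gain traces described above are the classic signature of attack/release gain riding. A generic envelope follower of that kind (a textbook sketch with assumed time constants, not John's actual algorithm) looks like:

```python
import math

def envelope_db(x, fs, attack_ms=1.0, release_ms=50.0):
    """Track signal level in dB with a fast attack and slow release --
    the kind of gain trace a DolbyA-style expander rides on."""
    atk = math.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = math.exp(-1.0 / (fs * release_ms / 1000.0))
    env, out = 1e-6, []
    for s in x:
        mag = abs(s)
        coef = atk if mag > env else rel    # rise quickly, fall slowly
        env = coef * env + (1.0 - coef) * mag
        out.append(20.0 * math.log10(env + 1e-12))
    return out

fs = 48000
burst = [1.0] * 4800 + [0.001] * 4800   # 100 ms at 0 dB, then a -60 dB tail
trace = envelope_db(burst, fs)
print(round(trace[4799], 1))   # ~0.0 dB at the end of the loud burst
```

Feeding program material through this and plotting the trace would show the same multi-dB swings within a second that the video visualises.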

Examples:
https://www.dropbox.com/sh/v90m7q56g64tfgo/AACao_I34J7x2ZJu91qpKG4wa?dl=0

Decoder:
https://www.dropbox.com/sh/1srzzih0qoi1k4l/AAAMNIQ47AzBe1TubxJutJADa?dl=0

John
 

Paianis

Member
Joined
Nov 2, 2018
Messages
25
Likes
19
Just a moment ago I discovered this thread, plus similar ones on other sites, and I'm rather startled because since 2018 I've been on a quest for a set of optimal filters for the 1992/1993 ABBA CD releases. This includes not only Gold and More Gold but some maxi-singles with tracks not on the comps: Eagle (album version) and Happy New Year. The Aussie Gold had Rock Me as well but rumour has it the master tapes older than Dancing Queen have been lost for years, so this track probably came from an album or single production master.

Anyway, I used the SoX program for the filters. They are intended to work on the CD audio resampled to 88.2kHz. I recall several tracks did not need the bandreject filter (Under Attack, I Am The City and Our Last Summer, from memory), and several tracks didn't need the stereo channels to be swapped, but most will be improved with the command below:

Code:
sox "Eagle_88.2.wav" "Eagle_mm4.wav" bass -4 0.15k 0.8k bandreject 30k 100k equalizer 0.5k 1k +2 swap

I'm not a technical expert on analog tape recordings, but this sounds good to me. How does it compare to your samples? Your links seem to have stopped working, so I can't compare myself.
 

Caligari

Member
Joined
Apr 24, 2019
Messages
14
Likes
8
John Dyson said:
All of the decoder's major problems are fixed. I am a master of premature releases -- this isn't one. [...]
Congrats on making a program that makes every recording sound WORSE than an AM radio.
 

tmtomh

Major Contributor
Forum Donor
Joined
Aug 14, 2018
Messages
2,761
Likes
8,108
Caligari said:
Congrats on making a program that makes every recording sound WORSE than an AM radio.

A bit harsh IMHO, but I do get what you're saying: I too have noticed that most of the audio samples he's shared that have been processed with his "decoder" are rolled off in the highs, often to the point of being rather muffled-sounding, and a bit heavy in the midrange, with a limited soundstage in some cases. So not necessarily worse than AM radio - but the sound does remind me a bit of AM stereo (briefly a thing in the '80s), if you had perfect reception and played it back on hi-fi equipment.
 

sganti

Member
Joined
Dec 19, 2020
Messages
7
Likes
2
John Dyson said:
I just made a new release of the decoder; it works better and more easily, and the --play command works on Windows. There are two realtime play options available: one that uses SoX and one that uses a simple little adjunct program.
The previous demos are gone, but there have been major improvements since then. The decoder is in the final polishing phase, as it is coming close to perfection. No matter what, even when not perfect, getting rid of the 'woody' sound and HF compression is nice.

Here is a pointer to the forum, where there is a pointer to the download location along with some helpful info. Usage of the decoder is free, and it really does work. The 'avx' version, though, really needs AVX2, as found on Haswell, Zen or Excavator CPUs. The 'win' version is slower because it is limited to SSE3, and the program is very highly optimized with SIMD code.

https://audiophilestyle.com/forums/topic/58668-ferala-decoder-free-to-use/

John
I just started ripping vinyl using an RME DAC2 Pro, and while looking at posts about sampling rates I came across this post on the decoder. Is there a current link for downloading the decoder and trying it out?
thanks
Sridhar
 