Finally, the 'FA' decoder really is working... Well, it has 'worked' for a while now, but the final EQ was always botched -- not so much anymore.
The magic answer was the 'descrambler', which really does some magic associated with EQ, HF signal expansion and things like that. The 'descrambler' allows sane presentation of the HF above about 4.5kHz.
Status: working reasonably well; sorely needs a documentation update; perhaps 8-10 people know how to use it in its current form. SW is available, and source will soon be available without asking. More useful docs are coming in days, not weeks -- maybe even in a single day. If you try the decoder, remember to use the '--fa' switch; otherwise it runs in DolbyA mode, which doesn't sound very good for most recordings.
Location:
https://www.dropbox.com/sh/i6jccfopoi93s05/AAAZYvdR5co3-d1OM7v0BxWja?dl=0
The URLs given here generally don't change, and haven't changed in probably over a year.
(ALL DEMOS ARE SNIPPETS -- IF YOU EVER SEE A FULL RECORDING IN MY PUBLIC REPOS, PLEASE TELL ME ASAP!!!)
Subdir for snippet demos: flacsnippet-REL9-V7.0B+5-0
Equivalent input snippets: same directory as above, but with INPUT in the name.
Decoder binary (matching up-to-date source coming soon):
https://www.dropbox.com/sh/5xtemxz5a4j6r38/AADlJJezI9EzZPNgvTNtcR8ra?dl=0
Speed: not fast, but easily faster than realtime in the highest-quality mode on my 10-core. (Lots of precision Hilbert stuff going on for fog reduction.)
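The decoder's internals aren't public, so as a generic illustration only: "Hilbert stuff" in dynamics processing usually means building the analytic signal, whose magnitude gives a smooth instantaneous envelope instead of a ripply rectified one. A minimal sketch (naive O(n^2) DFT for clarity; nothing here is the decoder's actual code):

```python
import cmath
import math

def analytic_signal(x):
    """Analytic signal via the frequency-domain method:
    DFT, zero the negative-frequency bins, double the positive
    ones, then inverse DFT.  Naive O(n^2) transforms for clarity."""
    n = len(x)
    X = [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
         for k in range(n)]
    w = [0.0] * n                 # per-bin weights
    w[0] = 1.0                    # keep DC
    if n % 2 == 0:
        w[n // 2] = 1.0           # keep Nyquist
    for k in range(1, (n + 1) // 2):
        w[k] = 2.0                # double positive frequencies
    Z = [X[k] * w[k] for k in range(n)]
    return [sum(Z[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

# The magnitude of the analytic signal is the instantaneous envelope;
# for a pure sine (4 cycles in 64 samples) it is numerically a constant 1.0:
tone = [math.sin(2 * math.pi * 4 * t / 64) for t in range(64)]
envelope = [abs(z) for z in analytic_signal(tone)]
```

The point of the smooth envelope is that gain-control decisions track it without the modulation "fog" a rectifier-based detector would add.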
CAVEAT ABOUT THE DEMOS: don't expect anything major. The decoding result is subtle, direct A/B comparisons notwithstanding.
What are the final results of the working SW model?
Response balance: similar, almost indistinguishable; short-term frequency balance is sometimes a little different (esp LF).
Cleaner, more precise sound (HF, etc.). Corrects 'fuzz' and 'blur' on consumer recordings.
Along with the temporally more precise sound (less 'wobble'/'fuzz'), the stereo images, esp on classical, are less 'blurred', with better locality.
Less hiss, esp on older recordings.
Different bass -- it is often different, but it appears to be actually correct. LF mixes differently with MF, which changes the higher-frequency sound associated with the LF.
If there is a calibration offset (error), sibilance can sometimes be a little twisted/distorted, though not usually horribly. In extreme cases, the errors can cause a little 'garble'.
How come it sounds so similar to the input, despite so much gain control & processing?
Whoever designed the scheme was a genius. It produces just enough damage to protect IP, but not so much that most listeners are irritated by it.
Caveats:
The threshold of the layered gain curve is somewhat critical for the absolutely best, cleanest sound. Most recordings are created with the threshold close to the standard settings; however, some Supertramp, for example, uses a different/uncommon calibration level (e.g. a 6 or 10dB difference, which is huge). Most normal calibration deviations are tiny: 0.005dB or so. My recent unmastered ABBA decodes were happiest with a calibration offset of approx -0.008dB. (On the other hand, the associated EQ in my attempt was ludicrous and embarrassing. I should NOT have even tried to do any EQ.)
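To put those dB figures in perspective, the standard amplitude conversion gain = 10^(dB/20) shows how lopsided they are (the helper name here is just illustrative, not part of the decoder):

```python
def db_to_gain(db):
    """Convert an amplitude offset in dB to a linear gain ratio."""
    return 10.0 ** (db / 20.0)

# A -0.008dB calibration offset is a change of less than 0.1% in
# amplitude, while a 6dB offset roughly doubles (or halves) it:
tiny = db_to_gain(-0.008)   # ~0.99908
huge = db_to_gain(6.0)      # ~1.99526
```

That is why a 6 or 10dB calibration difference is "huge" next to typical deviations of a few thousandths of a dB: the former changes the level by a factor of two or three, the latter by a fraction of a percent.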
Is the decoder worth the trouble? Maybe...
Less so than I had hoped, but much of the time it can be worthwhile... Even Taylor Swift's 'Shake It Off' is improved in a technical sense, though otherwise the recording is not worthwhile.
Whoever designed the 'FA' scheme -- I assume R Dolby -- was truly a genius to do such a precise kind of damage without the result irritating most people. This 'FA' encoding is most likely a component of the 'Digital Sound' from the 1980s. One can think of this 'FA' scheme, used since the 1980s, as similar to MQA today, but more egregious and apparently totally secret. I have gotten nothing but pushback from the industry people who should know about it, but finally the decoder *IS* an existence proof.
Will John (myself) be able to benefit from using the decoder?
Yes, because most of my hearing problem is associated with varying response balance, and I can easily hear the damage that the decoder corrects. In past 'demos', I could easily hear the corrections in the dynamics & hiss; unfortunately for me, most other people were more distracted by the crazy response balance of the output. This is one reason I was so unpleasantly embarrassed in the past: I heard the corrective actions, while many others focused on the crazy response balance. It would have been better to communicate with others more effectively, but the project has so much context and a lot of built-up frustration.
-----
Followup blather...
I don't think that too many people will be disgusted by the results now.
Traditional development tools & methods are not always very applicable here. This is NOT a simple fixed-gain device, nor does it have a fixed frequency response, so sine waves don't usually give useful results -- which created quite the challenge. Near the end, I started figuring out methods for 'estimating' the response curves, better avoiding the dependency on my lousy hearing. Still, my mastering attempts are crazed folly; I am just sometimes over-optimistic.
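The actual estimation methods aren't described, but the general idea of measuring a response curve instead of trusting one's ears can be sketched as a per-bin magnitude ratio between a processed output and its input (function names hypothetical; naive DFT for brevity; a real measurement would average many frames of program material):

```python
import cmath
import math

def dft_magnitudes(x):
    """Naive DFT magnitude spectrum (O(n^2), fine for a sketch)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n)]

def estimate_response(inp, out, floor=1e-9):
    """Per-bin gain estimate |OUT(f)| / |IN(f)|; None where the
    input bin has no measurable energy."""
    mi, mo = dft_magnitudes(inp), dft_magnitudes(out)
    return [mo[k] / mi[k] if mi[k] > floor else None for k in range(len(inp))]

# A 'device' that simply attenuates by half should measure ~0.5 in any
# bin where the test signal has energy (here: 3 cycles in 32 samples):
x = [math.sin(2 * math.pi * 3 * t / 32) for t in range(32)]
y = [0.5 * v for v in x]
resp = estimate_response(x, y)
```

For a level-dependent processor like this one, the same comparison has to be repeated at many input levels, since a single curve only captures the behavior at one operating point -- which is exactly why sine sweeps alone were not useful.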
If I could hear well, this would have been working 2-3yrs ago AT LEAST.
Things got a lot easier recently, after I found that some kind of 'descrambler' step was needed. I could tell long stories about how tricky this project was/is. At first, I didn't even know that DolbyA was involved -- THIS PROJECT is what spurred on the DolbyA project. Until recently, I didn't even think about 'descrambling', and had to do a lot of research to find a reasonable solution to the HF EQ problems. Simple EQ techniques can handle about 90% of the solution, but not much more. Eventually, figuring out the need for a 'descrambler' came from studying the spectral density and using my strange hearing.
BTW, the decoder for DolbyA-encoded recordings is now MUCH MUCH more accurate. The FA project would still border on embarrassing without the internal DA decoder fixes.
The source will be available in days, but it would take a person with double my IQ to figure it out. Given the significant complexity and the truly unique math/algorithms, the source needs to be cleaned up and documented. To get past the first page or two, one would also need to be a C++ expert. My estimate is that 75% of the lines of code are SIMD operations in the guise of 'looks like normal math to me'.
Have at it...