
More MQA Controversy

Fitzcaraldo215

Major Contributor
Joined
Mar 4, 2016
Messages
1,440
Likes
634
By formulating a strategy for arriving at a minimal filter, they did sidestep the issue of whether complex filters are harmful or not.



But they are very clear on this: the temporal extent of the filter has to be minimal, while the resulting aliasing in the audible band has to be below the programme's innate noise distribution.

Any competent mastering engineer now can apply the same strategy.

As I tried to say earlier, I believe they are saying strongly, but implicitly, that for filters to be better, they need to be more sophisticated and, inevitably, more complex. But I do not think they or anyone else are saying that complexity in and of itself is a virtue, any more than simplicity is. They are not in any way denying that a filter, or anything else, should be engineered to be as simple as possible to get the job done. The question is: how do you define the job to be done? A more comprehensive definition of that job's scope might inevitably require a more complex implementation than a narrower one would.

As I also tried to say, the history of digital audio has shown a definite trend toward increasingly complex filters and other machinery, accompanying the obvious improvements in digital's ability to deliver much better sound. The same is true in other fields. The Model T was a great, simple car in its day, but today's mind-bogglingly more complex cars are much more reliable and, of course, offer performance, comfort and durability that is light-years better. Ergo, complexity itself is not the enemy. It is an inevitable part of the technological progress that delivers benefits to users. It is the path of civilization, though we wishfully like to think they don't make 'em like they used to. Our minds are quite naturally wired to resist change and greater complexity.

Greater conceptual simplicity, especially in filtering, can be had with DSD vs. the PCM that MQA relies on. But, if you look inside a PS Audio Directstream DAC, for example, it is one very complex piece of circuitry. And, is converting a PCM stream to a DSD stream really ultimately an improved answer with fewer downsides that is better than a more sophisticated native PCM filter?

My other question is: if the temporal issues are so easily addressed by any engineer, why has that not already been done universally with PCM? Why is there still considerable pre- and post-ringing in measurements of the output of almost any PCM ADC or DAC? Some may say no problem, because it has already been reduced to inaudibility, even if not totally eliminated measurably. Others say no problem, we will just inefficiently keep stepping up the sampling rates of new recordings further into the ultrasonic range, to let the filters ring where we think no one can hear it. And we will just remaster all our old CDs in ever-higher hi-rez, ignoring the ringing already baked into the old digital masters by the original ADC.
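To make that ringing concrete, here is a small numpy/scipy sketch. It uses a generic Hamming-windowed brickwall FIR as a stand-in (not any particular converter's actual filter) and shows the symmetric pre- and post-ringing of a linear-phase lowpass:

```python
import numpy as np
from scipy import signal

fs = 44100
# A generic steep linear-phase lowpass, similar in spirit to the
# brickwall anti-alias/anti-image filters used in many PCM converters.
taps = signal.firwin(255, 20000, fs=fs)

# Feed it a single-sample impulse: the output is the impulse response,
# and the sinc-like tail on BOTH sides of the peak is the pre-/post-ringing.
impulse = np.zeros(512)
impulse[256] = 1.0
out = signal.lfilter(taps, 1.0, impulse)

peak = int(np.argmax(np.abs(out)))
thresh = 1e-4 * np.abs(out[peak])
pre = int(np.sum(np.abs(out[:peak]) > thresh))
post = int(np.sum(np.abs(out[peak + 1:]) > thresh))
print(f"samples ringing above threshold before peak: {pre}, after: {post}")
```

Because the filter is linear phase, the two counts come out essentially identical; a minimum-phase version of the same magnitude response would instead place all of that ringing after the transient.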

MQA is saying they have a better solution based on their research, so they offer an answer that they believe corrects these issues. They propose doing so very efficiently, without clogging global communications bandwidth with the transmission of mostly useless ultrasonic noise, to meet the growing demand of potentially gazillions of future users in the gradual but inevitable shift to hi-rez via Internet distribution. They are gambling really huge sums on demonstrating that to listeners, recording producers and web music providers. In sheer, mind-numbing scale and in technical ideas, this has absolutely no parallel, of course, to what tiny, cottage-industry audio snake-oil purveyors do with nice, simple, easily fabricated cables, cones, boxes of dirt, etc.

I do not know for sure whether or not MQA's ideas have merit, but I see a good chance that they do. We probably will have the opportunity in the marketplace to decide whether their approach is worthwhile to us individually or not. Time, not rushes to judgement before much more of the evidence is in, will tell. I think it is still way too early in the life cycle to dismiss it.
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,747
Likes
37,560
It's very annoying to have the massive disparity in HF energy between recordings; it seems to be affected by how damped the recording/mastering environments were.

This should not be, imo. Of course, sometimes it might be artistic licence...

You then have the issue that recording engineers have to 'guess' an approximation of the listener's acoustic environment for playback. If they could work to a 'known' value, surely this would revolutionise recording in the studio and playback in the home...

We would all then have what was intended. After that it's just a matter of personal taste, but at least it's intentional, not random as is the case now.

Much more important than digital formats and infinite sampling rates, imo.

In my amateur recording, when multi-miking is used, I find I can do a really nice mix for speakers and a really nice mix for headphones. Each sounds rather poor on the other playback transducer. So I do something in between, which subjectively I rate as 80% as good as a mix optimized for either of the two ways of playback. It works about equally well on phones or speakers. That is a generalization compared to making choices about more specific playback gear/locations/rooms. There really is no universal answer.

Off topic here, but that's why I am surprised by Chesky using binaural recording for everything now. He is applying some other processing so it still sounds okay over speakers. All of which points out there is no universal answer (yes I am repeating myself).
 
Last edited:

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,747
Likes
37,560
As I tried to say earlier, I believe they are saying strongly, but implicitly, that for filters to be better, they need to be more sophisticated and, inevitably, more complex. ... I think it is still way too early in the life cycle to dismiss it.

I have one ADC that uses conventional 'ringing' filters, and one that uses apodizing filters with post-'ringing' only. I know it doesn't mean no one can, but I cannot hear a dam*ed bit of difference between them. And that has been on gear with bandwidth, even out to the speakers/headphones, well into the ultrasonic region. I also know that isn't the only thing different about MQA. The times I have heard apodizing filters be mildly audible were with the kind that lets a bit more aliasing through and rolls off the top octave. I would guess it is mainly the frequency rolloff being heard, not the difference in when the 'ringing' occurs.
 

Werner

Active Member
Joined
Mar 2, 2016
Messages
109
Likes
135
Location
Europe
As I tried to say earlier, I believe they are saying strongly, but implicitly, that for filters to be better, they need to be more sophisticated and, inevitably, more complex.

Simple filters are short. Complex filters are long. Industry tradition uses long filters. MQA and Meridian call for short filters. Do the math yourself.

And to answer your 'why not' question ... there is today not a single shred of objective and accepted evidence that any of this matters a jot with real music, real systems, and real ears. So why would one bother?

Years ago I distributed a set of files to a bunch of audiophile listeners: one 96k source, two 44.1k downsampled versions, one with massive pre-ringing (and I mean MASSIVE), one with massive post-ringing. No-one could tell the difference. It is not proof, but really, this puts the night-and-day reports from the press and the fans in some perspective. Not?
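For anyone wanting to reproduce that kind of test file, here is a sketch of my own construction (not Werner's actual files): a steep minimum-phase lowpass rings only after a transient, and running the same filter over the time-reversed signal puts all of the ringing before it instead:

```python
import numpy as np
from scipy import signal

fs = 96000
# A steep minimum-phase elliptic lowpass at 20 kHz (8th order, 80 dB stop band).
sos = signal.ellip(8, 0.05, 80, 20000, fs=fs, output='sos')

x = np.zeros(2000)
x[1000] = 1.0  # a one-sample "transient"

post_ring = signal.sosfilt(sos, x)              # causal pass: rings only after
pre_ring = signal.sosfilt(sos, x[::-1])[::-1]   # reversed pass: rings only before

# The ringing really is entirely one-sided in each case:
print("energy before transient (post-ring file):", float(np.sum(post_ring[:1000] ** 2)))
print("energy after transient (pre-ring file):", float(np.sum(pre_ring[1001:] ** 2)))
```

Both versions have identical magnitude responses; only the time-domain placement of the ringing differs, which is exactly the variable such a listening test isolates.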
 

Don Hills

Addicted to Fun and Learning
Joined
Mar 1, 2016
Messages
708
Likes
464
Location
Wellington, New Zealand
... Then drop an apodiser on the recording. ...

Bob Stuart was quite emphatic in his Q&A responses that MQA does not use apodising filters. He's probably using the same sort of definition of "apodising" as he uses for "DRM" as applied to MQA.
 

Werner

Active Member
Joined
Mar 2, 2016
Messages
109
Likes
135
Location
Europe
Bob Stuart was quite emphatic in his Q&A responses that MQA does not use apodising filters. He's probably using the same sort of defintion of "apodising" as he uses for "DRM" as applied to MQA.

There are many filters in MQA. The most notorious one is the one used in downsampling from quad rate to dual rate. This is the filter that should give us magic deblurring. It is leaky, which is the opposite of apodising. So BS was right.

Another aspect is the claimed removal of the ADC's sins. This entails filtering. Under specific conditions this may mean apodising.

There is no contradiction.
 

Fitzcaraldo215

Major Contributor
Joined
Mar 4, 2016
Messages
1,440
Likes
634
Simple filters are short. Complex filters are long. Industry tradition uses long filters. MQA and Meridian call for short filters. Do the math yourself. ... this puts the night-and-day reports from the press and the fans in some perspective. Not?
Werner - it is clear you may know a lot more about Stuart's filtering, and filtering in general, than I do. There may also be some semantic differences between you and me over "complex" vs. "simple" and "long" vs. "short". I do not think we know all the details, which seem mostly proprietary. But, circumstantially, from what I have seen, I agree: Stuart's emphasis on the time domain suggests he is most likely using short filters. At the same time, they may be complex in the sense of not being easily implemented. For example, there may be an array of different techniques with parametric variability to enable adaptation to a variety of devices built by licensees, or to masters obtained from existing, legacy ADCs.

I may also be overemphasizing pre- and post-ringing elimination in the MQA scheme. I am a little surprised at Don's comment about there being no apodizing filters in MQA, since Meridian was among the pioneers of apodizing in audio, to much critical acclaim, if that is worth anything. Perhaps they have moved on from it, or they are just doing a dance to avoid that tag and easy comparisons to existing technology.

I do not think there is really clear objective evidence yet available to us, pro or con. JA just did some measurements recently in Stereophile, positive I believe, but I have not read it thoroughly yet. However, now that actual product is starting to appear we might start to see more actual and useful measurements, for better or for worse. At the same time, the hand waving may increase. I plan to follow the controversy with interest.
 

Werner

Active Member
Joined
Mar 2, 2016
Messages
109
Likes
135
Location
Europe
Stuart's emphasis on the time domain suggests he must likely be using short filters.

That short filters are used is stated explicitly in one of the papers. The rules used to obtain these filters are hinted at in quite some detail.

At the same time, they may be complex, in the sense of not easily implemented.

The filter kernels are a handful of coefficients. Even while you need a strategy for generating these, evaluating them in the frequency and time domain is trivial. I don't see much potential for complexity.
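As a concrete illustration of how little work a handful of coefficients takes to evaluate (these taps are invented for the example; MQA's actual coefficients are not public):

```python
import numpy as np

# A hypothetical 5-tap kernel, invented purely for illustration.
h = np.array([0.03, 0.25, 0.44, 0.25, 0.03])

# Time domain: the impulse response IS this coefficient list, 5 samples long.
# Frequency domain: a single zero-padded FFT gives the whole response.
H = np.fft.rfft(h, 1024)
freqs = np.fft.rfftfreq(1024, d=1 / 96000)  # assuming 96 kHz dual rate
mag_db = 20 * np.log10(np.maximum(np.abs(H), 1e-12))
print("DC gain (dB):", round(float(mag_db[0]), 3))
print("gain at 30 kHz (dB):", round(float(mag_db[np.searchsorted(freqs, 30000)]), 1))
```

The point stands in code: a short kernel can be exhaustively characterized in a couple of lines; any "complexity" lives in the strategy that chose the coefficients, not in evaluating or implementing them.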

For example, there may be an array of different techniques with parametric variability to enable adaptation to a variety of other devices built by licensees or to masters obtained from existing, legacy ADCs.

Yes, but all of these boil down to a few archetypes. Where is the ringing? Where is the aliasing? How much of it?


there being no apodizing filters in MQA, since Meridian were among the pioneers of that in audio,

Stuart says as much himself. And rightly so, in a full MQA flow there is no need for apodising ('apodising' in the Meridian sense: undercutting all previous filters with a steep minimum phase filter).
 
Last edited:

Ken Newton

Active Member
Joined
Mar 29, 2016
Messages
190
Likes
47
The following comments of mine are mostly conjecture based on my reading of a 2014 MQA-related AES paper authored by Stuart and others. I have absolutely no insider or industry-based knowledge regarding MQA. The paper presents the basics of an integrated system (recording through playback) which, among a number of other things, minimizes transient blurring or spreading. Stuart makes quite clear in the paper his belief that brickwall digital decimation/anti-alias and anti-image filters audibly blur transient information. Such transient blurring or spreading becomes more severe as an FIR filter's sinc impulse response is lengthened (meaning, as the filter more ideally approaches a brickwall frequency response) and/or as a signal transient's duration becomes shorter. Objectively, this is an understandable and measurable phenomenon. Subjectively, I don't know whether this effect has yet been demonstrated as audible.

What Stuart proposes is the utilization of non-ringing anti-alias and anti-image filters. The example shown in the paper is a filter having a triangular impulse response, which has no ringing. The triangular impulse response of the decimation/anti-alias filter at recording will inherently convolve with the triangular impulse response of the anti-image filter at playback to produce (according to the paper) an overall 3rd-order B-spline, non-ringing impulse response (essentially, a quasi-Gaussian response). The problem with this is that a triangular-function filter is not nearly as frequency-selective as a sinc-function filter, which means that the MQA filters will pass some alias products if not afforded sufficient extra bandwidth.
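The B-spline claim is easy to check numerically. In this discrete sketch (an approximation for illustration, not the paper's actual kernels), a triangle is the convolution of two rectangles, and convolving the recording-side triangle with the playback-side triangle yields the smooth, non-negative, ringing-free cubic B-spline pulse described above:

```python
import numpy as np

box = np.ones(8) / 8               # rectangular (zero-order) kernel
tri = np.convolve(box, box)        # triangle = box * box (1st-order B-spline)
bspline3 = np.convolve(tri, tri)   # triangle * triangle = 3rd-order (cubic) B-spline

# Unlike a sinc, the result never goes negative, so there is no ringing at all.
print("min value:", float(bspline3.min()))
print("symmetric:", bool(np.allclose(bspline3, bspline3[::-1])))
```

The price, as noted above, is frequency selectivity: the triangle's spectrum falls off only as sinc squared, so the system needs extra bandwidth before alias products are adequately suppressed.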

To utilize non-ringing filters as MQA does, with their inherently slow rolloff slope, the system bandwidth has to be dramatically expanded so that those filters have spectrum in which to reach their stop-band frequency and sufficiently suppress any would-be alias products. This may be why the extra ultrasonic bandwidth can be encoded with low dynamic range and folded into the noise floor of the audio band: it very probably isn't captured for the fidelity of any ultrasonic content there, but rather simply to prevent aliasing products until the slow-slope filter reaches its stop band. In this way, MQA adds little to no temporal smearing while also effectively suppressing recording aliases and playback images (which are opposing system constraints), all without wasting bandwidth and file storage space.

I do not know how MQA proposes to reduce the 'transient blurring' of existing music files. I do, however, have a conception of how they might go about doing it. MQA might apply deconvolution to a pre-existing digital music file resulting from a conventional ringing brickwall FIR anti-alias filter. Deconvolution is a DSP technique that apparently can reproduce the original signal samples essentially as they were prior to having been run through the recording-side FIR filter - essentially, an inverse process. Apparently, this is very possible to do if one knows the kernel coefficients of that FIR filter, which may be among the recording-side details the system optimally requires to 'correct' a pre-existing music file. Once the original (pre-brickwall-filtering) samples have been recovered, a different anti-alias filter function can be applied - such as MQA's non-ringing filters. My highly superficial knowledge of deconvolution pretty much ends here.
 
Last edited:

Fitzcaraldo215

Major Contributor
Joined
Mar 4, 2016
Messages
1,440
Likes
634
Thanks, Ken, for an excellent analysis. I am not sure I can quite get my arms around all of it. But, to my earlier points, it does not appear to be "simple", and it relies on a lot of psychoacoustic assumptions, which are not traditional and might not yet be generally accepted.

Others might not share my view at all, but I think Bob Stuart has overwhelmingly demonstrated in the past that he is no charlatan. He has gone his own way on a lot of things at Meridian, often boldly defying tradition, usually with considerable success. Personally, I think he is one of the great minds and original thinkers in audio today. Again, that does not guarantee the technical or commercial success of this audacious, far reaching, outside the box MQA idea.
 

Werner

Active Member
Joined
Mar 2, 2016
Messages
109
Likes
135
Location
Europe
I do not know how MQA proposes to reduce the 'transient blurring' of existing music files. ... MQA might apply deconvolution to an pre-existing digital music file resulting from a conventional ringing brickwall FIR anti-alias filter. A DSP technique called deconvolution apparently can reproduce the original signal samples essentially as they were prior

Think this through and you'll see it won't fly. Remember that the existing recording is the result of first filtering and then decimation (or sampling, in case the input was analogue).

For a first approach let's ignore the decimation/sampling step, pretending we have access to the signal after the filtering. In other words we have the pass band, the transition band, and the stop band. For a good filter the stop band is essentially empty (that's an AA filter's job). So in order to deconvolve we would have to apply a very high gain to a very feeble (~zero) signal. Failure number one.

Now factor in decimation/sampling. We don't even have access to the stop band anymore. Any non-zero signal in the stop band has been aliased into the passband, where it is covered by the payload signal. We want to deconvolve a filter whose stop band, and part of the transition band, are missing? Failure number two.

The one thing that can be done is compensating for AA filter shortcomings in the pass band. Some older filters might have ripple. This could be fixed. Some might have significant non-linear phase distortion. Idem. But you can't undo and then redo the anti-aliasing.
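Failure number one can be put in numbers. In this sketch, a generic Hamming-windowed brickwall FIR stands in for a legacy AA filter, and we look at the gain a naive 1/H deconvolution would demand in each band:

```python
import numpy as np
from scipy import signal

fs = 44100
h = signal.firwin(255, 20000, fs=fs)  # stand-in for a legacy brickwall AA filter

H = np.fft.rfft(h, 4096)
freqs = np.fft.rfftfreq(4096, d=1 / fs)

# Naive deconvolution multiplies the spectrum by 1/H; this is the gain required:
inv_gain_db = -20 * np.log10(np.maximum(np.abs(H), 1e-12))
passband = freqs < 18000
stopband = freqs > 21500
print(f"inverse gain needed in passband: <= {inv_gain_db[passband].max():.2f} dB")
print(f"inverse gain needed in stopband: >= {inv_gain_db[stopband].min():.1f} dB")
```

Tens of dB of gain applied to a band that holds essentially nothing but noise; and after decimation, as noted above, that band is not even there to amplify.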
 

JoeWhip

Active Member
Joined
Mar 7, 2016
Messages
150
Likes
32
Location
Wayne, PA
Just read Ron Resnick's report on The Show at Monoandstereo. He notes that one of the exhibitors told a friend that they were barred from doing A/B comparisons between MQA and any CDs or digital files. Makes one wonder.....
 

RayDunzl

Grand Contributor
Central Scrutinizer
Joined
Mar 9, 2016
Messages
13,250
Likes
17,182
Location
Riverview FL
If I remember correctly, John Atkinson sent his own files to be MQA'd, so maybe there will be some direct comparisons forthcoming.

I guess somewhat correctly:

"The only way of testing MQA's time-domain correction is through listening, so I sent Bob Stuart the 24/88.2 masters of some of my recordings, for him to produce MQA versions. I will be reporting on the sound-quality differences I hear between the original and MQA files in a future issue."

Read more at http://www.stereophile.com/content/inside-mqa#trSpdX4XkvFQrZmh.99
 

Fitzcaraldo215

Major Contributor
Joined
Mar 4, 2016
Messages
1,440
Likes
634
Thanks very much.

I do not claim to understand every word, but I find it most impressive in its scope, depth and very sophisticated original thinking, extending even beyond traditional psychoacoustics into neuroscience. I remain cautiously optimistic that there may be something of lasting value here, as opposed to the "brute force" method of ever-higher "lossless" sampling rates employed in traditional implementations of hi-rez audio.

It sure does not sound like BS or snake oil to me.
 

NorthSky

Major Contributor
Joined
Feb 28, 2016
Messages
4,998
Likes
945
Location
Canada West Coast/Vancouver Island/Victoria area