
SINAD Measurements

OP
March Audio

March Audio

Master Contributor
Audio Company
Joined
Mar 1, 2016
Messages
6,378
Likes
9,321
Location
Albany Western Australia
Absolutely. With one caveat. I get to pick the amplifiers under test. I say this because with some amplifiers, there is no detectable difference. With others, there are marked sonic signatures. And the test will be done at our lab / studio using our blinded AB technique and reference chain. One neutral observer attending.

Amir, I'm not sure why you think this test will fail. You've already presented evidence of an AES amplifier listening study in which at least some ABX differences were in fact repeatably detected by blinded listeners. This is all the evidence you should need. The answer is in your lap. Differences were detected. Mission accomplished. Done. Science wins. The premise that "IC differences cannot be detected" was proven invalid by one lone peer-reviewed AB test. Why do you continue to insist otherwise in the face of peer-reviewed evidence?

What would be even more instructive is to put together 2 or 3 top-end -130EIN / -120dBTHD industry micamps, do some DPA-based recording of difficult sources (rattling keys, bell tree, etc.), align the tracks in the AB-test DAW (Sequoia), and do the same blinded comparison testing. Easily detected differences, likely due to input capacitor transient behavior rather than periodic / harmonic distortions of the active stages.

In fact, we recently did a similar test for the most advanced micamp we've ever developed (premiering at NAMM in 2 weeks!). The test was to AB the sonic signature of various candidate input capacitors. At least two or three were rejected almost immediately, 10/10. A number were in the middle, OK but iffy. And two parts were clearly more transparent to the source, at least 8/10. We are using those qualified parts in the new product. (Keep in mind, none of the capacitors under test impacted the baseline noise or THD performance of the signal path -- the source of detectable distortion is something other than THD -- which is another conversation that goes to the root of why -120dB THD does not "guarantee transparency".)

Anyway, if you let me pick the amplifiers myself, the AB differences will be even more pronounced than those detectable ICs in the AES Journal test. It's sort of not fair (to you).

But I'm puzzled about the $1,000 reward. If I do get a reward, I'll donate it back to the ASR community.

Now to the more important issue.

None of this is about me. Testing my AB chain, claims, ears ... is a silly side-show. Audio claims aren't settled by one test on one person. Rather, it takes a large population of samples and trials and participants to achieve peer-review acceptance (like the AES paper you highlighted). But you know that.

So, I'll make this deal. If I can score minimum 8/10 on a simple blinded comparison of two amplifiers, you will make my suggested changes to the ASR community site.

Night night. I'll address all your other recent comments tomorrow.


...and there are the conditions...........

No chance. :)

The test is to compare op amps: similarly performing op amps which you said you could hear the difference between.

Test circuit to be designed collectively by the various experts here (or an existing design approved). Then tested on Amir's AP.

Tested using a collectively agreed technique, not yours (unless of course yours is seen as OK by the collective).

Your listening room/equipment -- OK, but it has to be fully detailed and OK'd by the collective.

You need to be able to identify which op amp is which to statistically significant levels.


The test is not to compare complete mic amp designs.
 
Last edited:

solderdude

Grand Contributor
Joined
Jul 21, 2018
Messages
16,059
Likes
36,457
Location
The Neitherlands
Just null the various op-amps (using another opamp), amplify the null results and record the differences 24/192.
This way you can use real music and can hear (and see/analyze) results pretty easily.
If there are any distortions (phase, amplitude, frequency) they will all be made audible/visible.

You can even use the null circuit to check interconnect cables, and with some attenuation even speaker amps with actual loads.

The differences you hear from a null result will not be the same as the differences actually heard, though. The actual audible differences will be smaller/less than the null, because small and gentle phase shifts aren't as audible. Amplitude and distortion products will be audible.

To assess the null results and compare them to the actual signal, all one needs to do is undo the amplification.
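
A minimal sketch of that procedure (Python, with numpy and soundfile assumed; the file names and the 40 dB residual gain are placeholders):

Code:
# Null-test sketch: subtract recording B from recording A, measure the
# residual, amplify it, and save it for listening/analysis.
# Assumes both passes were captured sample-aligned at 24/192 from the
# same source material.
import numpy as np
import soundfile as sf

a, fs = sf.read("opamp_A_24_192.wav")
b, _ = sf.read("opamp_B_24_192.wav")
n = min(len(a), len(b))
a, b = a[:n], b[:n]

# Match overall level first: a residual dominated by a plain gain
# difference tells you nothing about distortion.
b = b * np.sqrt(np.mean(a**2) / np.mean(b**2))

residual = a - b
null_db = 10 * np.log10(np.mean(residual**2) / np.mean(a**2))
print(f"null depth: {null_db:.1f} dB")

# Amplify the residual so it is easily audible; undo this gain later to
# judge its true level against the actual signal.
gain_db = 40
sf.write("null_residual_plus40dB.wav", residual * 10**(gain_db / 20), fs)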
 
OP
March Audio

March Audio

Master Contributor
Audio Company
Joined
Mar 1, 2016
Messages
6,378
Likes
9,321
Location
Albany Western Australia
Just null the various op-amps (using another opamp), amplify the null results and record the differences 24/192. …

This is about signalpath's ability to hear the differences, so all of that takes us away from the objective and increases the variables.
 

solderdude

Grand Contributor
Joined
Jul 21, 2018
Messages
16,059
Likes
36,457
Location
The Neitherlands
His claim is that he hears differences in blind tests with music (he even claims obvious ones) and that these are unrelated to distortion levels or other measurements.

What is puzzling is that 'they' have been at this for 30 years yet have produced no papers or evidence, and have not said which ones sound better to them and which ones sound poor. With so many tests done, one would expect they would have figured out what is 'not measurable', or found a correlation. After all, it is just 2 voltages changing over time.

The claim one sees everywhere is that people hear differences (even when there are none), yet measurements using tones suggest otherwise.
The claim is that there is no (or little) correlation.

The solution is to use music and nulling. Then there is no debate and we do not need 'expert ears'. Nulling can do this well.
It is scientific as hell: all amplifiers with overall negative feedback (which is often blamed) work this way, so the method is well proven.
The only 'counterargument', that the negative feedback itself is the cause, can be countered by nulling a feedforward design instead.

I agree with the original statement from signalpath (about audibility levels) but disagree on the engineering bit and the desire to reproduce the signal as faithfully as possible.
Also, Scott Wurcer's remarks about preference make a lot of sense to me (I share the idea).

A higher SINAD is not equal to better audible performance; it says something about noise level and signal integrity, but not about whether one prefers it or not.
 
Last edited:
OP
March Audio

March Audio

Master Contributor
Audio Company
Joined
Mar 1, 2016
Messages
6,378
Likes
9,321
Location
Albany Western Australia
His claim is that he hears differences in blind tests with music (he even claims obvious ones) and that these are unrelated to distortion levels or other measurements. …

Technical exercises such as nulling won't convince audiophiles. For that matter, blind tests won't convince audiophiles either; however, this specifically addresses his claims. It should be easy for him, given his statements so far, so there is little wriggle room.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,684
Likes
241,205
Location
Seattle Area
You've already presented evidence of an AES amplifier listening study in which at least some ABX differences were in fact repeatably detected by blinded listeners.
There was no ABX test in that report. Here is the type of test they used:

[attached image: excerpt from the paper describing the listening test procedure]


Translating: the listener played A and then B, and voted on which one sounded better. We have no idea whether one actually did.

In sharp contrast, when you run an ABX test, we know with 100% certainty whether you are able to identify X as A or B. We keep changing the role of X, and with that we can eventually see how accurately you can determine which op-amp is which.

How have you run ABX tests so far? What is the fixture? Or are you just running the AB preference test above in the paper and thinking that is ABX?
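
For reference, the logic of an ABX trial is simple enough to sketch (hypothetical code; play() and ask() stand in for whatever switching fixture and answer prompt are used):

Code:
# ABX sketch: X is randomly A or B on every trial; the listener only has
# to say which one X is, so scoring is unambiguous -- unlike an A/B
# preference vote. play() and ask() are hypothetical fixture hooks.
import random

def abx_run(play, ask, trials=10):
    correct = 0
    for _ in range(trials):
        x = random.choice(["A", "B"])
        play("A"); play("B"); play(x)   # listener may re-audition freely
        if ask("Is X = A or B? ").strip().upper() == x:
            correct += 1
    return correct

# e.g. abx_run(play=fixture_select, ask=input) -> number correct out of 10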
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,684
Likes
241,205
Location
Seattle Area
Amir, I'm not sure why you think this test will fail.
Because I have tested countless people on detecting small non-linearities. Unless you have formal training, which you don't seem to have, you will have no ability to hear small differences. Being an audiophile or a pro-audio engineer does not help you. Neither does designing electronics. You either know how to search for small differences, or you don't. If you don't, your threshold of detection will be quite high, and preferences will NOT be based on distortion levels.

What makes me even more sure is that we are not even talking about small differences that can be shown to be audible. You are saying that distortions below any threshold of audibility are audible to you. This is impossible.

I reckon once distortions get below -40 dB or so, you will not be able to tell the source from a buffer that distorts it by that much. Imagine what shape you will be in when we are talking about -120 dB!

Here is my suggestion. Set up two amps that have identical frequency response and levels but with the different distortions which you think sound different. Then have someone else test you at least 10 times and see if you can identify them. You need to get at least 8 right. Given what you are saying in this thread, I actually suggest you had better get 10 out of 10 right. Make sure whoever is running the test sometimes switches A and B, and sometimes pretends to do so but keeps them the same. Make sure there is no glitch or "tell" that can give away the answer.

I think once you run the simple test above, this discussion will end.
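
As a sanity check on the criterion, the odds of hitting it by guessing are easy to compute (a quick sketch using only Python's standard library):

Code:
# Probability of k or more correct out of n trials by pure guessing
# (p = 0.5 per trial): 8/10 is ~5.5%, 10/10 is ~0.1%.
from math import comb

def p_by_chance(k, n, p=0.5):
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(p_by_chance(8, 10))    # 0.0546875
print(p_by_chance(10, 10))   # 0.0009765625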
 

solderdude

Grand Contributor
Joined
Jul 21, 2018
Messages
16,059
Likes
36,457
Location
The Neitherlands
Technical exercises such as nulling won't convince audiophiles. For that matter, blind tests won't convince audiophiles either; however, this specifically addresses his claims. It should be easy for him, given his statements so far, so there is little wriggle room.

The only thing subjective audiophiles look for is confirmation. They find this in other audiophiles 'finding' the same (not surprisingly).
There is no technical explanation that will satisfy them unless a device is created that spews out a number corresponding to what they find to be true at that point in time.

Nulling is the only technical way that uses actual recordings and should be closest to their reality.
Of course we know already that nulling won't show what they would have liked it to show (for obvious reasons) either.

It would be nice if Mr. signalpath would make a list of good- and poor-sounding op-amps (to him / his company) so we know what he is talking about. So far there are only claims and nothing substantial (and talk of how good-sounding, but poor-measuring, his 50V discrete JFET amp is), with no measurements (also for obvious reasons).
 

vitalii427

Senior Member
Forum Donor
Joined
Dec 19, 2017
Messages
386
Likes
531
Location
Kiev, Ukraine
117 dB every day of the week and twice on Sunday. :) No one knows what to do with a microvolt number. But everyone can understand that if you play at a peak of 120 dB, your noise floor will be -3 dB SPL, so at the threshold of audibility. What on earth can you do with 2.82 microvolts?

For psychoacoustic analysis, dB is the right number. This is why I don't like to use percentages for THD+N either. Who is going to remember 0.05% versus 0.009%?

For the same reason I hate using dBV, dBu, etc.
I support @restorer-john, because my JBL 4367 hisses with almost any amp. Damn Campfire Andromeda too.

I really appreciate that hint from Benchmark:
NOISE VOLTAGE
Output noise voltage, A-weighted, inputs shorted
  • -103 dBV, -101 dBu, 7.1 uVrms, Stereo Mode
  • -100 dBV, -98 dBu, 9.8 uVrms, Mono Mode
Use dBV to calculate the SPL of the noise produced by your speaker/amplifier combination. Use the following formula: Amplifier output noise voltage in dBV + speaker sensitivity at 2.83V - 9 dB. Example: Mono mode driving very high efficiency speakers: (-100 dBV) + (104 dB SPL @ 2.83V 1m) - 9 dB = -5 dB SPL at 1 meter. This means that the system noise will be 5 dB below the threshold of hearing when driving speakers with a very high 104 dB efficiency.
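
Benchmark's formula is easy to wrap up so you can plug in your own amp/speaker numbers (a sketch in Python; the inputs shown are just their mono-mode example figures):

Code:
# Noise SPL of an amp/speaker pair per Benchmark's formula:
#   SPL = amp noise (dBV) + sensitivity (dB SPL @ 2.83 V, 1 m) - 9 dB
# The -9 dB converts the 2.83 V sensitivity reference down to 1 V,
# since 20*log10(2.83) is about 9 dB.
import math

def noise_spl(noise_dbv, sens_db_at_2v83):
    return noise_dbv + sens_db_at_2v83 - 9.0

def uv_to_dbv(uv):
    return 20 * math.log10(uv * 1e-6)

print(noise_spl(-100, 104))   # -5 dB SPL: Benchmark's mono example
print(uv_to_dbv(9.8))         # ~-100.2 dBV, i.e. their 9.8 uVrms figure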
 

restorer-john

Grand Contributor
Joined
Mar 1, 2018
Messages
12,730
Likes
38,942
Location
Gold Coast, Queensland, Australia
I really appreciate that hint from Benchmark:

They (to Benchmark's credit) throw in referenced (dB) numbers in addition to the raw uV values. It's about as good as it gets these days. In studio situations with very high near-field levels, the residual noise will be patently obvious and will differentiate the quiet amplifiers, preamplifiers, and D/As from the also-rans.
 

Schackmannen

Active Member
Forum Donor
Joined
Aug 6, 2018
Messages
167
Likes
225
Location
Tucson, Arizona
If I may throw in my novice opinion, I think SNR tests are quite useless other than for confirming manufacturers' specs, and SINAD isn't a perfect indication of a device's performance either. If a device has a high output voltage it will of course have a higher SNR, without necessarily being quieter than a device with a lower SNR. For example, the ADI-2 DAC was measured to have a SINAD of just under 112 dB at a 4 volt output and an SNR of 120 dB @ 0 dBFS; while excellent, that is clearly not SOTA compared to many of the recent DACs reviewed, which seem to perform better at a lower price, like the SONCOZ SGD1. However, if you look a bit more carefully you will notice that the ADI-2 DAC has an SNR of 117 dB referenced to an output level of 1 dBu (balanced output), or a noise floor of 1.2 uV, compared to the SGD1, which has a noise floor of 3.3 uV (122 dB SNR referenced to 4.15 Vrms).

In this case the ADI-2 DAC is almost 9 dB quieter than the SGD1, which is very hard to notice unless you dig in and do the math yourself. It's also why I think a lot of people believe digital volume control is worse than analog: if a DAC has an SNR of 120 dB at 0 dBFS and you lower the volume digitally by 30 dB, it's true that the SNR drops to 90 dB, but it doesn't matter, because if the noise wasn't audible at 0 dBFS it won't be audible when you lower the volume digitally either. Otherwise the ADI-2 DAC's IEM port wouldn't sit on top of the 50 mV SNR chart despite using a digital volume control. I guess all I'm trying to say is that I would very much welcome an unweighted 20 Hz - 20 kHz test of the noise floor of the DUT becoming a standard test (like in the HPA4 review), even if it's possible to work back from the current ones. Sorry if my comment is OT (or if any of my math was incorrect); I thought it was fitting given the discussion above.
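
For anyone who wants to check the math, here is a sketch of the conversion (the SNR figures and reference levels are the ones quoted above):

Code:
# An SNR spec only becomes comparable once converted to an absolute
# noise floor at the output.
def noise_floor_uvrms(snr_db, ref_vrms):
    return ref_vrms * 10 ** (-snr_db / 20) * 1e6

def dbu_to_vrms(dbu):
    return 0.775 * 10 ** (dbu / 20)

print(noise_floor_uvrms(117, dbu_to_vrms(1)))   # ADI-2 DAC: ~1.2 uV
print(noise_floor_uvrms(122, 4.15))             # SGD1:      ~3.3 uV
# Ratio: 20*log10(3.3/1.2) ~ 8.8 dB -> "almost 9 dB quieter".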
 

scott wurcer

Major Contributor
Audio Luminary
Technical Expert
Joined
Apr 24, 2019
Messages
1,501
Likes
2,822
IMO the challenge is still poorly specified. There has been no discussion of stimulus, levels, loads, associated equipment, etc. I feel the Randi cable challenge was doomed as soon as the claimant was allowed to pick all the equipment; it is known that some amplifier, cable, and speaker combos oscillate.

My idea of a challenge would be their FET "op-amp" vs. 4 or more IC op-amps of our choosing in a simple (maybe 20 dB) line stage at standard line-level signals. The associated power supplies, chassis, and build quality are to be the same for all.

On another point, I don't think the -60 dB SINAD idea is smart. If the distortion is fully concentrated in a discontinuity such as crossover distortion, IIRC Earl Geddes found that level easily audible. It's just another door to a cheat. No one would ever use an op-amp with notable crossover distortion in an audio circuit aiming for SOTA performance.
 

signalpath

Active Member
Forum Donor
Joined
Aug 1, 2019
Messages
126
Likes
109
And that is all it says. It is so common to post graphs online and leave it to the poor reader to figure out if taller is better, or shorter. The indication on the graph says exactly that: as a number, higher SINAD is better than lower SINAD. It always is. It is a metric just like voltage is.

So you're now saying "better" is used as a purely quantitative description? We're moving in the right direction. I like the moving target approach :). Whatever gets us closer to real science.

Would suggest that most readers, especially those less technically equipped, assume "better" to be a qualitative statement of "transparency," especially in light of ASR's other stated THD assumptions and review anecdotes. For these readers, the designation "better" is misleading. Would suggest removing the word "better" and replacing it with a scientifically valid and unassailably objective description, such as "higher number = lower THD+N," without editorializing your opinion that lower THD necessarily indicates "better assured transparency."

Bottom line, the foundational premises of a science-based forum should be grounded in good science and objectively clear language, not personal opinion. If you really care about being scientifically accurate, this simple change will be a no-brainer, and in fact partially fulfills one of my two simple requests. Thank you.
 
Last edited:

pozz

Слава Україні
Forum Donor
Editor
Joined
May 21, 2019
Messages
4,036
Likes
6,827
So you're now saying "better" is used as a purely quantitative description? …
In the next version of the graph, all colours and other elements will remain, but the explanatory subtitles "Higher/Lower Better" will be removed. A major victory for science.
 
OP
March Audio

March Audio

Master Contributor
Audio Company
Joined
Mar 1, 2016
Messages
6,378
Likes
9,321
Location
Albany Western Australia
."

Bottom line, the foundational premises of a science-based forum should be grounded in good science and objectively clear language, not personal opinion. .

Said the man who provided not one jot of scientific evidence to back up his claims that similarly performing op amps sound radically different.
 

signalpath

Active Member
Forum Donor
Joined
Aug 1, 2019
Messages
126
Likes
109
Because I have tested countless people on detecting small non-linearities. Unless you have formal training, which doesn't seem like you do, you will have no ability to hear small differences.

I've made no claims to being an expert listener, or even that my AB testing is accurate. I have simply stated my findings as a designer / manufacturer of top-end audio recording gear over the last 30 years. The fact that we are the leading analog front-end for the world's most critical recording applications (film scoring, classical music, sample houses, archiving, etc.) could be a wild coincidence. I really don't know. Think what you wish.

This might be helpful. I'll share the story of how the company got started. It's tightly aligned with AB testing.

I started building audio gear when I was 12 years old. Guitar amps and pedals mostly (I have played instruments since I was 5 years old). By the time I left high school, I had built a near-complete recording rig: preamps, EQ, mixers, etc., with mentors such as the Burr-Brown op-amp handbook and Walter Jung. After Cal Poly, I spent close to 10 years in Silicon Valley. I was one of the 3 original startup employees at Multitech USA, and grew my division to be the world leader in OEM PCs -- licensed to IBM, NEC, etc. We changed the name to Acer America, and I left the company in 1989.

After Acer, I wanted to return to my audio passion, so I started producing classical music recordings around Northern California. Did this for 25 years, mostly symphony orchestra and chamber music recordings. Easily 500 symphony recordings (I lost count), probably closer to 1000 total recordings. Very early on (1990), I realized that my purchased recording equipment was not giving me the highest sonic performance, so I set about to design my own.

I spent about 1.5 years experimenting with micamp designs, constantly iterating, trying new active and passive devices, layout topologies, architectures, and so forth. This is when I first developed an AB listening test strategy. I used a B&K high-voltage microphone (mono) and recorded "difficult sources" such as bell tree, jingling keys, and my own voice. I recorded each trial test onto my well-aligned multi-track tape machine at 30 ips, no noise reduction. Its usable FR extended to 40 kHz. I then played back each trial through a box that I designed that allowed me to "self-blind" the test. I didn't know which signal I was listening to w/o looking back at the box. I learned fast that AB levels must be -precisely- aligned (no more than a 0.02 dB differential) to do a valid AB test.

After 1.5 years of circuit iterations and AB testing, I got the circuit as neutral and timbre-correct as I could make it. It turned out to be an extensive variation on Grahame Cohen's seminal "double balanced" design at Philips Microelectronics. I've shared his papers with countless souls over the years -- his was a brilliant breakthrough in audio signal path. He recently passed away.

Anyway, I kludged up 8 channels of my final design and started using it for my remote classical recording work. It was like night and day compared to my former "store bought" signal path. Just gorgeous. I was elated.

Around this time, my friend who recorded the San Francisco Symphony (Jack Vad) heard about my micamp design. He asked to borrow some channels, so I lent them. He called back a couple weeks later, asking me to build him 8 channels. So I did, in my garage. Not long after, I got another call from an L.A. classical music producer asking to build him some channels, simply on Jack's advice. Around this same time, Jack told the editor of a top pro audio magazine (Nick Batzdorf) about my micamp design. The editor called me and said, "we're doing a comparison shootout of top outboard mic preamps and Jack said we should check yours out." He continued, "don't worry, if it doesn't do well in the test, we'll not mention it."

So a couple months went by, and then he called back. "You know, your mic preamp was, hands down, the best sounding preamp of the 8 others we tested." That review was written up in Recording Magazine, and I immediately started getting orders, opening dealers, and starting a real company. Our first dealer was Sam Ash Music in NYC (David Prentice). Today we have over 100 dealers and distributors, worldwide. That same HV-3 microphone preamplifier was recently inducted into the Technical Hall of Fame (2019) ...... Write-up in Forbes.

That's the foundational story, and that's how I learned to do blind AB testing. I've continued to refine my art of AB testing over the last 30 years. Today's method is far better than the old multi-track days. And, like you, I'm still learning, still growing, often being humbled by my lack of knowledge, or being proven wrong by real science. Heck, just the other day I was adjusting a 14k EQ bump for a little extra air (DAW). I turned the knob slightly and heard the air I wanted. Then I looked down at the screen and realized the EQ band was OFF.

Audio can fool you. Science never lies.
 
Last edited:

flipflop

Addicted to Fun and Learning
Joined
Feb 22, 2018
Messages
927
Likes
1,240
I've made no claims to being an expert listener, or even that my AB testing is accurate. I have simply stated my findings as a designer / manufacturer of top-end audio recording gear over the last 30 years. …
 

pozz

Слава Україні
Forum Donor
Editor
Joined
May 21, 2019
Messages
4,036
Likes
6,827
I've made no claims to being an expert listener, or even that my AB testing is accurate. I have simply stated my findings as a designer / manufacturer of top-end audio recording gear over the last 30 years. …
You know, one of the topics that's been discussed in the past is how more than just the end-listener is susceptible to thinking that this or that piece of equipment sounds better. Pros have to be included, too.

One of our members, @Fluffy, who does broadcast recording if I remember correctly, very clearly pointed out that when dealing with recording or playback equipment we should be discussing the effect on the signal, not the sound. Sound is far down the line, in an undefined playback chain and listening circumstances.

Given your experience I'd like to know your perspective on gear fetishism in the professional industry, especially whether or not you think that it has done more harm than good. Harm being that many have been convinced to spend more money than they should, or that many have simply given up because they feel that great sound is out of reach for those without the capital or access.

Do you also mind identifying the gear you designed?
 

signalpath

Active Member
Forum Donor
Joined
Aug 1, 2019
Messages
126
Likes
109
Here is my suggestion. Set up two amps that have identical frequency response and levels but with the different distortions which you think sound different. Then have someone else test you at least 10 times and see if you can identify them. You need to get at least 8 right. … I think once you run the simple test above, this discussion will end.

Let me describe our current AB technique (we generally don't do ABX -- I don't think it's necessary for valid perceptual testing at the individual, non-peer-review level). If you think our test is valid, that's great. We'll go for it. If you think it's invalid, then perhaps you can help us with any scientifically fatal errors you perceive. You've set yourself up as an AB authority here. I do not make that claim.

These days, I no longer use analog tape. We use a DAW called Sequoia (we license our bit-reduction software "POW-R" to Sequoia, but that's another conversation). We keep our reference program in the DAW (if there's interest, I could put our reference reel on Dropbox). We use both headphones and room speakers for monitoring. We have a world-class reference room designed by George Newburn and built from the ground up with 9" concrete walls and wall / ceiling decoupling; it has a noise floor around 15 dB SPL, is flat to 24 Hz, and has an ideal RT60 (Dunlavy SC5 + Nelson Pass 350). We monitor both mono and stereo sources. We use three different AB paths, depending on the type of circuit we're testing -- DAC, ADC, or pure analog path. We do both reference-to-DUT testing and DUT-against-DUT testing, the latter being qualitatively more difficult.

Our reference material is recorded in 10s snippets, each of which repeats over and over about 10X. We've chosen reference material that we have come to know intimately. The material was chosen for an ultra-wide range of sonic gymnastics: wide spatial, narrow center, fast transients throughout the spectrum, ultra-low freqs, ultra-high freqs, high DR, low DR, voice, pop, acoustic, big band, large hall ambience, dry room, dense, sparse, and so forth. We have around 25 different program sources, but only tiny, repeating snippets of the most critical aspect of each program are employed.

Each snippet is repeated over and over so that the brain can begin to focus on just one very particular sonic aspect. AB snippets are adjusted for better than 0.02dB level matching (which is challenging). Once the listener has suitably focused, the AB selection begins. We use the DAW's "solo" and "mute" functions to facilitate a self-blinded AB selection, using mouse clicks. The mouse is clicked repeatedly until the user does not have a clue if the source is "A" or "B". My personal method is to click the mouse rapidly while counting out numbers randomly. After 3-4 seconds of that, there is zero self-correlation. I guess you could argue otherwise. Until you try it, I suggest suspending judgement. It works. It's random.

But first we do it sighted. The user clicks back and forth between reference and DUT (or DUT-recorded program, or DUT-DUT), to simply detect any timbre shift, transient variation, etc. If a difference is detected sighted, then we go blinded. If detectable differences remain, we try to consistently identify a specific tell and stop there. We look at the screen and note the track. Wash, rinse, repeat.

If our "tell" is identified 8/10, we consider it a positive, and do it again. Another 8/10 is qualified. It means that the DUT is not giving us a correct image. Anything less than 2X 8/10 is a null.

Then we move to the next program material and do it all over again. We do this throughout the entire program cycle. What's interesting is that different types of DUTs may perform perceptually identical for most of the program sources, but when you get to that one "particular gymnastic routine" -- a tell is detected. This is why it's important to use the broadest range of program material possible, covering every aspect of sonic hurdle a circuit will be asked to deliver.

Wurcer mentioned doing AP tests of any DUTs in advance, which is of course de rigueur (without it, the test is effectively invalid and a waste of time). And I'm happy to have the "neutral party" clicking the mouse; in fact, I would love that. It's a bother. Beyond that, I think our test methodology is scientifically sound.
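
For what it's worth, both the two-consecutive-8/10 criterion and the 0.02 dB match are easy to sanity-check numerically. A quick sketch (assuming the two runs are independent; the RMS figures are made up):

Code:
# Chance of passing "8/10, then 8/10 again" by pure guessing, plus a
# helper for the 0.02 dB level-match check described above.
from math import comb, log10

def p_at_least(k, n, p=0.5):
    # Probability of k or more correct out of n trials by chance.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(p_at_least(8, 10) ** 2)   # ~0.003 for two independent 8/10 runs

def level_diff_db(rms_a, rms_b):
    return 20 * log10(rms_a / rms_b)

print(abs(level_diff_db(1.0000, 1.0023)) < 0.02)   # True: within 0.02 dB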

Thoughts?
 
Last edited: