
Do we crave distortion?

That aside, I would think that the ONLY humans who can unequivocally answer that question (re: good distortion) are those on the production side of the music. Anything else becomes a subjective criterion of added personal "preference".
I like what you said there.

I know a great deal about the production side of music (without it being my profession) for a number of reasons, and I can tell you that on the production side very little, if anything, is chosen based on science, objectivity, or measurements (except possibly monitors). With the best of the best it was trial and error, watching and learning, and their own individual preferences. Time and again you will hear the top people in production say things like "I like a Neumann for this; for that I always use the Shure." Ask "Why do you like [fill in the blank: microphone, compressor, limiter, reverb, technique]?" and the answer is nearly always "Well, I heard a [fill in the blank] when I recorded/mixed [previous project] and I liked the way it sounded," sometimes followed by "audiophile"-type adjectives.

For distortion, the good ones will know specifically how each type of distortion alters the waveform and at which harmonics, and they have been blessed with ears such that what they think sounds good is generally thought to sound good by the artist and the public.

It certainly is a salt-and-pepper preference, assuming it isn't analogous to what is measured on amps.
 
Is that a G? Open string, or fretted?


Open string (so I could use my left hand to take the screenshot) on a fretted, inexpensive 5-string Ibanez bass.

Plucked with the meat of my finger relatively far from the bridge to emphasize the fundamental. Second harmonic wanted to dominate.

 
It is safe to say H4, H5, H6, and H7 can be set to -200 dB, as these are all inaudible.
That leaves 0.2% H2 and 0.0035% H3, where the H3 is not really considered audible, so you could probably set that to -200 dB as well.

You can test this by creating two files, one with only the 0.2% H2 and the other as described above, and running them through an ABX test in foobar2000.
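If you'd rather script those two files than build them in a DAW, here is a minimal sketch, assuming Python with NumPy. The 1 kHz test tone, duration, and file names are arbitrary choices; both files are scaled by the same factor so they stay level matched for the ABX comparison:

```python
import wave
import numpy as np

SR = 44100          # sample rate
F0 = 1000.0         # fundamental of the test tone (arbitrary choice)
DUR = 5.0           # seconds

t = np.arange(int(SR * DUR)) / SR
clean = np.sin(2 * np.pi * F0 * t)

# 0.2% second harmonic (about -54 dB) and 0.0035% third harmonic (about -89 dB)
distorted = clean + 0.002 * np.sin(2 * np.pi * 2 * F0 * t) \
                  + 0.000035 * np.sin(2 * np.pi * 3 * F0 * t)

def write_wav(name, x):
    # Same scale factor for every file, so relative levels are preserved
    pcm = np.int16(x / np.max(np.abs(distorted)) * 32000)
    with wave.open(name, "w") as w:
        w.setparams((1, 2, SR, len(pcm), "NONE", "not compressed"))
        w.writeframes(pcm.tobytes())

write_wav("h2_only.wav", clean + 0.002 * np.sin(2 * np.pi * 2 * F0 * t))
write_wav("h2_h3.wav", distorted)
```

Load the two WAVs into foobar2000's ABX comparator and see whether the H3 at that level makes any detectable difference.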
What impact or significance does the THD of his amps have on any of this?

Hypothetically, let’s say that ABX testing shows that for the OP and the 10 people tested, 0.2% H2 is the be-all, end-all number with the vast majority of classical recordings, or even all recordings. That’s the magic number.

Is that cumulative with the distortion that his amps are generating? In other words, if there were such a magic number, does it still hold with average-SINAD amps and with the best-SINAD amps? Would the number vary: more added distortion needed for lower-distortion amps, less or none with higher-distortion amps?
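One way to frame the "cumulative" question: harmonic components at the same frequency add as phasors, so the total H2 depends on the relative phase of the deliberately added distortion and the amp's own. A small sketch, where the amp H2 figures are purely hypothetical stand-ins for a high-SINAD and an average amp:

```python
added_h2 = 0.002      # 0.2% H2 dialed in deliberately

# Hypothetical amp H2 levels, as a fraction of the fundamental
amp_h2_levels = {"high-SINAD amp": 0.00001, "average amp": 0.0005}

for name, amp_h2 in amp_h2_levels.items():
    in_phase = added_h2 + amp_h2            # worst case: phases align
    anti_phase = abs(added_h2 - amp_h2)     # best case: partial cancellation
    print(f"{name}: total H2 between {anti_phase:.4%} and {in_phase:.4%}")
```

Under these assumed numbers, a very clean amp barely moves the total, while an average amp could shift a 0.2% "magic number" by a quarter in either direction, which is one reason the result might not transfer between systems.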
 
Plucked with the meat of my finger relatively far from the bridge to emphasize the fundamental. Second harmonic wanted to dominate.
The second harmonic dominates the fundamental on open strings on double bass, and I think it does as well on electric. On an open E (41 Hz), H2-H6 should dominate on an electric bass.
 
Some sources:

Thank you for posting these. I wasn’t familiar with the Lofft paper; it has answered a lot of questions in my mind about all of this:

Conclusion

Axiom's tests of a wide range of male and female listeners of various ages with normal hearing showed that low-frequency distortion from a subwoofer or wide-range speaker with music signals is undetectable until it reaches gross levels approaching or exceeding the music playback levels. Only in the midrange does our hearing threshold for distortion detection become more acute. For detecting distortion at levels of less than 10%, the test frequencies had to be greater than 500 Hz. At 40 Hz, listeners accepted 100% distortion before they complained. The noise test tones had to reach 8,000 Hz and above before 1% distortion became audible, such is the masking effect of music. Anecdotal reports of listeners' ability to hear low frequency distortion with music programming are unsupported by the Axiom tests, at least until the distortion meets or exceeds the actual music playback level. These results indicate that the "where" of distortion (at what frequency it occurs) is at least as important as the "how much" (its overall level). For the designer, this presents an interesting paradox to beware of: Audible distortion may increase if distortion is lowered at the price of raising its occurrence frequency.

 
The interesting part about the OP, to me, was that it raises some fundamental questions. The most basic: Is this correlation correct? For a significant majority, a 90+ percent correlation? To get to that point, other things have to be answered.

It is easy to perform the experiment yourself. In fact, I am hoping that others on ASR will try to replicate it. Although I have 10 samples, ultimately it's one sound system, and who knows what other deficiencies are present in my system, or what errors I have made in my methodology, that may cause people to prefer the added distortion.

Download PKHarmonic (it is donationware; you can use it as many times as you like) or Thrillseeker XTC (also free). Note that Thrillseeker XTC has an output volume adjustment so you can level match if you need to. Tune it by ear, then invite some friends over. Switch the VSTs on and off and ask your friends whether they prefer A or B. Report back.
 
0.2% 2nd harmonic should not alter tonality much, let alone 0.004% 3rd harmonic.
Edit: SPL or voltage is physical; loudness is the sensation. A louder sound has more energy than a quieter sound, and that energy can come from nonlinear distortion or from higher output, and can be compensated by offsetting one against the other. You've controlled for one factor (output, assuming you've been careful about the signal chain) but not the other (subjective loudness). I would bet that the qualitative comments will no longer hold once the level adjustment is made.
Agreed. It would be interesting to continue these tests, but with the distorted signal slightly audibly quieter to see if the results flip.

Something I've wondered about, but don't have the training to answer.
  1. How much of an amp's available power is consumed when it has to "amplify" non-audible and (relatively) high-level 2nd-, 3rd-, and 4th-order harmonics?
  2. Can this "diversion" (for lack of a better word) of an amp's available power into amplifying inaudible distortion affect the sound of the audible spectrum?
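On question 1, at least a ballpark answer is simple arithmetic: electrical power scales with amplitude squared, so harmonics at the levels discussed in this thread draw a vanishing fraction of the amp's output. A quick check, using the 0.2% H2 and 0.0035% H3 figures from the measurements above:

```python
import math

# Amplitude of each harmonic as a fraction of the fundamental
harmonics = {"H2": 0.002, "H3": 0.000035}

for name, amp in harmonics.items():
    frac = amp ** 2   # power fraction is amplitude squared
    print(f"{name}: {frac:.2e} of the fundamental's power "
          f"(= {20 * math.log10(amp):.0f} dB)")
```

The H2 works out to four millionths of the fundamental's power, so on this back-of-envelope basis the "diversion" of amp power into harmonics is negligible (speaker-generated distortion, of course, is a separate matter).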
 
Thank you for posting these. I wasn’t familiar with the Lofft paper; it has answered a lot of questions in my mind about all of this.
I hadn't seen it either, but what it reports chimes with a comment that I've heard from three different speaker designers over 30 years: that the most important part of any speaker is the tweeter.
 
The second harmonic dominates the fundamental on open strings on double bass, and I think it does as well on electric. On an open E (41 Hz), H2-H6 should dominate on an electric bass.
This is broadly the result for most instruments in the bass register. Instruments have evolved on the basis of being louder and having more consistent tone, and we hear more of that from the harmonics of the very lowest notes (though we may feel the fundamental with big instruments like organs). One of the most important innovations in the history of Western music was the invention of winding wire around bass strings, and guess what that does?
 
But isn't the tweeter the least likely to cause distortion products that speaker measurements attest to?
Yes, but we're more sensitive to what a tweeter does; that's what the paper says. Thinking further, it's not just the tweeter but the general high-frequency behaviour of the speaker, of course. And there's also ensuring that you have a fairly closely matched pair. I vaguely remember some manufacturers in the 1980s being reported as treating tweeter matching as much more important than matching bass/mid in their budget speakers.

It's also possible that tweeter quality is higher now than it used to be. I still say "source first" (guess where, and in which decade, I first took notice of hi-fi), but digital sources are largely solved. If tweeters are largely solved now too, then their importance matters less in practice, in the same way.
 
I am skeptical that so many people are even able to hear a difference, much less that they all have the same preference. Since the two versions are not level matched, the most likely reason is that the DSP high-distortion version is louder.
 
Let's take a simple example:
A Jazz record, including double bass.
Some notes are down to 40 Hz.

If you try to play the music back on small speakers, of course, you won't hear the fundamental.

Add a serious level of H2 (80 Hz) and H3 (120 Hz), and you'll hear more bass. For real.

So:
Of course you'll prefer the sound with distortion (in that case)!

Play the same music on a full range system, and the distortion will bother you.

That's why we want distortion settings to be optional.
To taste.
Just like bass, treble, or (lack of) loudness compensation.
Nice angle. Gave me an "aha moment". Thanks..:)
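The small-speaker case described above is essentially the classic "missing fundamental" effect, and it is easy to synthesize: a signal containing only 80 Hz and 120 Hz (the H2 and H3 of a 40 Hz note) has no energy at all at 40 Hz, yet the ear infers a 40 Hz pitch from the harmonic spacing. A sketch, with arbitrary harmonic levels:

```python
import numpy as np

SR = 44100
t = np.arange(SR) / SR                     # one second of samples

f0 = 40.0                                  # low double-bass note
fundamental = np.sin(2 * np.pi * f0 * t)   # what small speakers can't reproduce

# Only the "distortion products" a small speaker can actually play:
h2 = 0.5 * np.sin(2 * np.pi * 2 * f0 * t)  # 80 Hz
h3 = 0.3 * np.sin(2 * np.pi * 3 * f0 * t)  # 120 Hz
implied = h2 + h3                          # no 40 Hz content at all

# With a 1 s window, FFT bin k corresponds to k Hz exactly
spectrum = np.abs(np.fft.rfft(implied))
print(f"energy at 40 Hz: {spectrum[40]:.2e}, at 80 Hz: {spectrum[80]:.0f}")
```

Play `implied` on any system and you still hear a bass note around 40 Hz, which is exactly why added H2/H3 reads as "more bass" on bandwidth-limited speakers.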
 
Thank you for posting these. I wasn’t familiar with the Lofft paper; it has answered a lot of questions in my mind about all of this.
This is a controlled, published study showing the contrary: Subwoofer Performance for Accurate Reproduction of Music https://secure.aes.org/forum/pubs/journal/?elib=5147



There is no perceptual model for distortion. The subjective reports in such studies are rarely explainable, and I've really tried to get into the LF literature out there.

To show the reasonableness of the levels in the chart above: at 50 Hz, 80 dB, 100 dB, and 110 dB SPL are roughly equal in loudness to 40 dB, 80 dB, and 100 dB SPL at 1 kHz.

 
Again, timbre perception is a mix of the spectral content (whether harmonic or inharmonic), the internal phase relationships, and the envelope. It is not just harmonics.
Well, you are correct; it would be more accurate to say that the harmonics of an instrument play a part (probably a large part) in the perception of timbre. I focused on that because it was the focus of the thread. But it's all out there for anyone who wants to dive into the other factors.

But they don’t know exactly why we can say “that’s a violin, and that’s a piano” when they are playing the exact same fundamental note. Or why, when three vocalists sing the exact same note, we know instantly: that’s Joan Sutherland. They tried to see whether timbre would fit a model of how we perceive color (subtle hues as combinations of the three primary colors: the tristimulus model), but it doesn’t quite fit (yet). They know many of the factors: first is the frequency spectrum (fundamental note and overtones), and they have identified other important factors (brightness, bite, synchronicity, rise time, etc.), but what they have identified doesn’t explain it fully.

Interestingly enough, most of this research grew out of a desire to know how an infant can recognize its mother’s voice among a dozen other female voices.
 
There is no perceptual model for distortion. The subjective reports in such studies are rarely explainable, and I've really tried to get into the LF literature out there.
Thanks for posting that. Those are the studies I’m most familiar with, and I have gotten to participate in controlled blind testing of low-frequency distortion; my preference was always the lower-distortion speaker in those tests. But a sizable percentage, a quarter to a third, preferred the higher-distortion system. There was always a clear preference. The explanation was partially attributed to what people had grown up on, what they were used to: you tend to prefer what you know. That is consistent with Harry Olson’s studies of movie sound going back to the 1940s, when most theaters and movies had a high-end cutoff of 6,000 Hz. People preferred that for a while because the distortion above those frequencies could be 25%.

Highly recommend Olson if you haven’t come across it yet.
 