
Time domain and frequency domain relation and measurements

Is it clear that this is the response of an amplifier with a frequency response of 3 Hz - 100 kHz (-3 dB)?

  • Yes

    Votes: 4 12.9%
  • No

    Votes: 7 22.6%
  • I do not know

    Votes: 20 64.5%

  • Total voters
    31
  • Poll closed.
The confusion in such threads usually comes from mixing up engineering with audibility.
I know we use that as a shortcut in threads like the bin one, but in technical threads it's plain wrong.

Audibility should not have a place here (and that's me saying it, someone who puts it first after reliability, etc.).

Such threads should be simple:
We measured it - we verified it as a phenomenon - can it be fixed?
Purely engineering-wise.

Cost, audibility, you name it, are irrelevant.
We've had gear that can be faithful for a long, long time; mixing that up with the engineering pursuit of better charts is a whole different thing.
Why fix only one (inaudible) aspect when we can fix more?
 
The confusion in such threads usually comes from mixing up engineering with audibility.
I know we use that as a shortcut in threads like the bin one, but in technical threads it's plain wrong.

Audibility should not have a place here (and that's me saying it, someone who puts it first after reliability, etc.).

Such threads should be simple:
We measured it - we verified it as a phenomenon - can it be fixed?
Purely engineering-wise.

Cost, audibility, you name it, are irrelevant.
We've had gear that can be faithful for a long, long time; mixing that up with the engineering pursuit of better charts is a whole different thing.
Why fix only one (inaudible) aspect when we can fix more?

I’m sorry, but this is a grossly overstated opinion. Audibility is the foundation upon which our engineering standards are built.

Of course we don’t want “it’s barely below the audibility threshold” or “it’s above the audibility threshold but will probably be fine in most cases” to be the benchmark. I - and I assume everyone else here - agree with you on that.

But linear bandwidth out to 100kHz vs, say, 45kHz is by itself an irrelevant distinction for audio gear, precisely because both are well beyond human audibility. And if the wider ultrasonic bandwidth is claimed to be a proxy or secondary indicator for higher build/engineering quality, then whoever makes that claim must be able to point to what the primary indicator would be - in other words, something that would make an audible difference, and/or a likely difference in reliability/longevity.

Absent some basic check referenced to audibility, you end up with certain folks saying certain entire amp topologies are junk because you can’t run an unfiltered 20kHz square wave into them.
 
That’s just an example, but the point is that “more tests, more robustness” is a good thing only to the extent that the additional and/or more demanding tests are designed and carried out in a way that avoids what Levimax describes.
Agreed - it’s fine to call out flaws in each specific case, but, and I am trivializing here, making sweeping claims in the style of “which is superior, time or frequency domain” and then dismissing one or another doesn’t make much sense.
 
Agreed - it’s fine to call out flaws in each specific case, but, and I am trivializing here, making sweeping claims in the style of “which is superior, time or frequency domain” and then dismissing one or another doesn’t make much sense.
True.
 
I’m sorry, but this is a grossly overstated opinion. Audibility is the foundation upon which our engineering standards are built.

Of course we don’t want “it’s barely below the audibility threshold” or “it’s above the audibility threshold but will probably be fine in most cases” to be the benchmark. I - and I assume everyone else here - agree with you on that.

But linear bandwidth out to 100kHz vs, say, 45kHz is by itself an irrelevant distinction for audio gear, precisely because both are well beyond human audibility. And if the wider ultrasonic bandwidth is claimed to be a proxy or secondary indicator for higher build/engineering quality, then whoever makes that claim must be able to point to what the primary indicator would be - in other words, something that would make an audible difference, and/or a likely difference in reliability/longevity.

Absent some basic check referenced to audibility, you end up with certain folks saying certain entire amp topologies are junk because you can’t run an unfiltered 20kHz square wave into them.
I'm on both boats, for entirely different reasons.
First is music: everything there has been solved for decades; high-SPL linear speakers are probably the last frontier (and I'm talking purely about size).

The other boat, the hobby itself, is like any other hobby.
I see my nephew and his friends building $10k PCs for absolutely no practical reason; they don't even play those heavy damn games on them (I will put one of them to the test, though, running multitone 4M FFT DSD512 tests :) )

And in that hobby, the bare minimum won't cut it.
When Halcro made that miracle amp, it was his first; he didn't know a thing about audio, he was just making the best metal detectors in the world.

But he said "hey, let's see what can be done" and did it, thinking entirely outside the box.
My kind of guy, 100%
 
My brain hurts after trying to comprehend most of the above....
 
My brain hurts after trying to comprehend most of the above....
Square waves are often misunderstood as well as misrepresented. The link in post #35 above explains things clearly, in layman's terms, with lots of pictures, and can really help make this thread comprehensible... at least some of it, anyway.
 
LOL, yes, it got sorted, but it was a long, strange trip from post #1.
 
When the input waveform is significantly faster than the amplifier stage, the leading and trailing edges will no longer be vertical, because the amplifying circuit has a limited bandwidth. It is very easy to perform a square wave test and end up with an entirely wrong answer if you're not careful. Much of the brouhaha that developed regarding TIM (transient intermodulation distortion) and/or SID (slew-induced distortion) was due to the very fast risetime of the test signal. When testing any audio device, you must be aware of the simple fact that music does not contain very fast risetime signals, and most media (vinyl, CD, etc.) are actually not very demanding. This is because the amplitude of the musical harmonics is reduced by at least 6dB/octave from no higher than 2kHz or so. This means that the actual level at 20kHz will typically be 20dB lower than the level at midrange frequencies.

Therefore, an amplifier that can provide ±35V peaks will only be required to provide around ±3.5V peaks at 20kHz when operating just below full power with music as the input. This dramatically changes the required slew rate, but it's very common (and advisable) to ensure that an amplifier can reproduce no less than 50% output voltage at 20kHz to ensure an acceptable safety margin. TIM may have been discredited (along with its siblings), but it doesn't make any sense to limit an amplifier if it's not necessary. It also doesn't make sense to go to a great deal of additional effort to design an amplifier that can reproduce full power at 100kHz (or even 20kHz), because it will never be needed.
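As a quick back-of-the-envelope check of that last point (my own numbers, just the standard sine slew-rate formula SR = 2πfV, not anything from the quoted article):

```python
# Back-of-the-envelope slew-rate comparison (my own illustration, not from the
# quoted article): a 20 kHz sine at the full ±35 V peak vs. the roughly ±3.5 V
# that music actually demands at that frequency.
import math

f = 20e3                              # highest audio frequency, Hz
for v_peak in (35.0, 3.5):            # full-power peak vs. music-like 20 kHz level
    sr = 2 * math.pi * f * v_peak     # maximum slew rate of a sine, V/s
    print(f"±{v_peak:4.1f} V at 20 kHz -> {sr / 1e6:.2f} V/µs")
```

This gives about 4.4 V/µs for full power at 20 kHz versus roughly 0.44 V/µs for the music-like case, which is the "dramatic change" referred to above.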

This is a great start! But I would go further:

Similar to your example, assume a ±35 V waveform with a 6 dB/octave sloping amplitude spectrum. The 6 dB/octave slope means that the amplitudes of the harmonics are simply A(n) = A1/n, where n is the harmonic number and A1 is the amplitude of the fundamental.

It also follows that a 6 dB/octave slope results in the fundamental and all the harmonics having exactly the same maximum slew rate, 2π·A1·f, where f is the fundamental frequency.

So, if I have up to the 10th harmonic, then the slew-rate requirement for the sum of perfectly phase-correlated sinusoids at the zero crossing, at least initially, is 10× the slew rate of any of the harmonics or of the fundamental.

Let's compute the slew rate. A = Σ(A1/n) for n = 1..10, so A ≈ 2.9·A1.
Therefore, A1 = 35 V / 2.9 ≈ 12 V, and the per-harmonic slew rate S1 ≈ 2π · 12 V · 2000 Hz ≈ 150 kV/s = 0.15 V/µs.

So the slew rate for the waveform at the initial zero crossing is 1.5 V/µs - again, the required slew rate is 10× that of the 20 kHz harmonic alone.

And, interestingly, if I wanted to test my amp and exercise the same slew rate with just a 20 kHz sinusoid, it would need to be ±12 V. That's not quite the initial ±35 V, but it's not small either (it is even more for a waveform with a 6 dB/octave sloping power spectrum, but that's a bit more involved, so I won't share it as I am not sure I did it correctly).

So, the "no less than 50% output voltage at 20kHz" rule of thumb clearly holds.

I’ve checked my math a couple of times, but if I messed up anywhere - please don't be mad :)

Takeaway: even moderate high-frequency content can demand an order-of-magnitude more slew rate when phase-aligned with lower harmonics.
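For anyone who wants to double-check the arithmetic, here is a tiny script (mine, not part of the original post) that reproduces the numbers above under the same worst-case assumptions, i.e. that the harmonic amplitudes add up at the waveform peak and the per-harmonic slopes add up at the zero crossing:

```python
# Quick arithmetic check of the numbers above, under the post's worst-case
# assumptions: harmonic amplitudes add at the peak, per-harmonic slopes add
# at the zero crossing. A hypothetical helper script, not from the thread.
import math

f0 = 2000.0                                # fundamental, Hz
N = 10                                     # harmonics included
V_peak = 35.0                              # required waveform peak, volts

amp_sum = sum(1.0 / n for n in range(1, N + 1))    # ≈ 2.93
A1 = V_peak / amp_sum                               # ≈ 12 V fundamental

sr_single = 2 * math.pi * f0 * A1          # slew rate of any one harmonic, V/s
sr_total = N * sr_single                   # slopes add at the zero crossing

print(f"A1 ≈ {A1:.1f} V")                              # ≈ 12 V
print(f"per-harmonic slew ≈ {sr_single / 1e6:.2f} V/µs")  # ≈ 0.15 V/µs
print(f"total slew ≈ {sr_total / 1e6:.2f} V/µs")          # ≈ 1.5 V/µs
```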
 
Yes, they are, but it may not be easy to realize what that means in the real world. To me, the time domain is primary: we live in the 3D world + time, we live in a space-time and not in a space-frequency. The frequency domain was mathematically derived to make our view of certain phenomena easier. The frequency domain is definitely secondary to the time domain.
Oh, c'mon.
  1. Energy and time spread are complementary, linked by the quantum uncertainty relation: ΔE·Δt ≥ ℏ/2.
  2. On the other hand, energy and angular frequency are related by E = ℏω.
  3. Substituting gives: Δω·Δt ≥ 1/2.
QED - Frequency and Time are Siamese twins.
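The same trade-off shows up with nothing more exotic than an FFT: the shorter an event is in time, the wider the band it occupies. A small NumPy sketch of my own (the burst widths are arbitrary illustrative choices) that makes the point numerically:

```python
# Classical (non-quantum) illustration of the same trade-off: Gaussian bursts
# of different durations and the RMS widths of their spectra.
import numpy as np

fs = 48000.0
t = np.arange(-0.5, 0.5, 1 / fs)            # 1 s of time, centred on zero

for sigma_t in (0.01, 0.001, 0.0001):       # burst duration (std dev), seconds
    x = np.exp(-t**2 / (2 * sigma_t**2))    # Gaussian envelope
    X = np.abs(np.fft.rfft(x))
    f = np.fft.rfftfreq(len(x), 1 / fs)
    sigma_f = np.sqrt(np.sum(f**2 * X**2) / np.sum(X**2))   # RMS spectral width
    # the duration-bandwidth product stays roughly constant: narrow in time
    # forces wide in frequency, and vice versa
    print(f"sigma_t = {sigma_t * 1e3:6.2f} ms -> sigma_f = {sigma_f:7.1f} Hz, "
          f"product = {sigma_t * sigma_f:.3f}")
```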
 
QED - Frequency and Time are Siamese twins.
Precisely. The only real question is which particular perspective best shows the phenomena we care about. Declaring the time domain "primary" relative to the frequency domain seems like a weird statement to me.
 
we live in a space-time
Maybe not the best analogy... space-time is a single thing. If you think you are at rest and only time is passing, think about the planet spinning and moving through space-time. ;)


JSmith
 
This is because the amplitude of the musical harmonics is reduced by at least 6dB/octave from no higher than 2kHz or so. This means that the actual level at 20kHz will typically be 20dB lower than the level at midrange frequencies.

Therefore, an amplifier that can provide ±35V peaks will only be required to provide around ±3.5V peaks at 20kHz when operating just below full power with music as the input.
This is not correct, in terms of short-term peak levels. On modern digital productions, high-frequency content above the tonal range (> 4 kHz) is never sustained, but short bursts on the order of milliseconds can be just as loud as in any other frequency range and can actually reach full scale. Of course it depends on genre. With a string ensemble playing chamber music there is no relevant HF content, but with modern music, especially EDM produced for maximum loudness, it's different.

It's only the integrating nature of measurements, especially FFT-based ones, that makes high-frequency content appear to drop significantly in level vs. low and mid frequencies.

This tendency already starts in the tonal range, by the way. Take piano notes, for example: the highest notes decay very fast, faster than typical measurement windows, whereas low frequencies often extend well beyond the window, which gives falsely low readings for the high frequencies. Several notes within a window may even cancel, for a given frequency, when they are in opposite phase with respect to the FFT window start point.

In other words, lower average levels in the HF come mostly from the content being more sparsely spaced in time, not from actually lower levels.

How do I know, you may ask? As a former active speaker designer I had to analyze what amplifiers are needed for each range to avoid clipping. If you normalize the required voltage to driver sensitivity, it turns out you need about the same voltage headroom for every octave, with only a small reduction of a dB or two in the top two octaves, 5-10 kHz and 10-20 kHz.
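To make the kind of analysis described above concrete, here is a rough sketch of how a per-octave peak check could be done. The filename, band edges and filter order are my own assumptions, not the author's actual tooling:

```python
# Rough sketch of a per-octave headroom check: band-split a track and compare
# each band's peak level to the full-band peak. Assumes a WAV file named
# "track.wav"; filename, band edges and filter order are illustrative.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

fs, x = wavfile.read("track.wav")
x = x.astype(np.float64)
if x.ndim > 1:                          # fold stereo to mono for simplicity
    x = x.mean(axis=1)
x /= np.max(np.abs(x))                  # normalise the full-band peak to 1.0

edges = [20, 40, 80, 160, 315, 630, 1250, 2500, 5000, 10000, 20000]
for lo, hi in zip(edges[:-1], edges[1:]):
    sos = butter(4, [lo, min(hi, 0.49 * fs)], btype="bandpass",
                 fs=fs, output="sos")
    band = sosfilt(sos, x)
    peak_db = 20 * np.log10(np.max(np.abs(band)) + 1e-12)
    print(f"{lo:>5}-{hi:<5} Hz: peak {peak_db:6.1f} dB re full-band peak")
```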
 
This is not correct, in terms of short-term peak levels. On modern digital productions, high-frequency content above the tonal range (> 4 kHz) is never sustained, but short bursts on the order of milliseconds can be just as loud as in any other frequency range and can actually reach full scale. Of course it depends on genre. With a string ensemble playing chamber music there is no relevant HF content, but with modern music, especially EDM produced for maximum loudness, it's different.

It's only the integrating nature of measurements, especially FFT-based ones, that makes high-frequency content appear to drop significantly in level vs. low and mid frequencies.

This tendency already starts in the tonal range, by the way. Take piano notes, for example: the highest notes decay very fast, faster than typical measurement windows, whereas low frequencies often extend well beyond the window, which gives falsely low readings for the high frequencies. Several notes within a window may even cancel, for a given frequency, when they are in opposite phase with respect to the FFT window start point.

In other words, lower average levels in the HF come mostly from the content being more sparsely spaced in time, not from actually lower levels.

How do I know, you may ask? As a former active speaker designer I had to analyze what amplifiers are needed for each range to avoid clipping. If you normalize the required voltage to driver sensitivity, it turns out you need about the same voltage headroom for every octave, with only a small reduction of a dB or two in the top two octaves, 5-10 kHz and 10-20 kHz.

Apologies if this is a dumb question, but to your final paragraph here, are you saying that the lower wattage required for tweeters in multi-amp actives is mostly because of the high sensitivity of tweeter drivers rather than lower power-output requirements for high frequency content?
 
This is not correct, in terms of short-term peak levels. On modern digital productions, high-frequency content above the tonal range (> 4 kHz) is never sustained, but short bursts on the order of milliseconds can be just as loud as in any other frequency range and can actually reach full scale. Of course it depends on genre. With a string ensemble playing chamber music there is no relevant HF content, but with modern music, especially EDM produced for maximum loudness, it's different.
Thanks a lot for that, I did not know. Even worse, I actually thought high-frequency content was low in energy not only on average but with respect to peaks, too. It did not occur to me that this is (only) an effect of the signal being sparse in time.
Good to have people to learn from!
 
This is not correct, in terms of short-term peak levels. On modern digital productions, high-frequency content above the tonal range (> 4 kHz) is never sustained, but short bursts on the order of milliseconds can be just as loud as in any other frequency range and can actually reach full scale. Of course it depends on genre. With a string ensemble playing chamber music there is no relevant HF content, but with modern music, especially EDM produced for maximum loudness, it's different.

It's only the integrating nature of measurements, especially FFT-based ones, that makes high-frequency content appear to drop significantly in level vs. low and mid frequencies.

This tendency already starts in the tonal range, by the way. Take piano notes, for example: the highest notes decay very fast, faster than typical measurement windows, whereas low frequencies often extend well beyond the window, which gives falsely low readings for the high frequencies. Several notes within a window may even cancel, for a given frequency, when they are in opposite phase with respect to the FFT window start point.

In other words, lower average levels in the HF come mostly from the content being more sparsely spaced in time, not from actually lower levels.
This is a useful insight, but I’d expect it to be a routine problem with a well-established solution. Yes, a standard fixed-window FFT will inevitably smooth over such bursts, and it’s not the ideal tool for real-time analysis in any case. I would think there should be a commonly available tool that, for any frequency band - regardless of its bandwidth - can accurately capture the true maximum total amplitude and the maximum slew rate observed, whether the content is broadband or extremely narrow, and display it as part of the spectrum. In other words, something that shows, for each band, the actual peak demands placed on the system, no matter whether the burst lasts only a few cycles or extends over many.
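I'm not aware of a single off-the-shelf tool that does exactly this, but something close can be sketched with short, overlapping FFT windows and a per-bin max-hold, so millisecond bursts aren't averaged away. The window and hop sizes below are arbitrary choices of mine:

```python
# Peak-hold spectrum sketch (my own, not an existing named tool): very short,
# overlapping FFT windows with a per-bin maximum hold, so brief bursts are not
# averaged away. `x` is assumed to be a mono float signal at sample rate `fs`.
import numpy as np

def peak_hold_spectrum(x, fs, win=256, hop=64):
    w = np.hanning(win)
    hold = np.zeros(win // 2 + 1)
    for start in range(0, len(x) - win + 1, hop):
        frame = x[start:start + win] * w
        hold = np.maximum(hold, np.abs(np.fft.rfft(frame)))
    return np.fft.rfftfreq(win, 1 / fs), hold

# A per-band maximum slew rate could be added on top by band-pass filtering
# the signal (e.g. scipy.signal.butter/sosfilt) and taking
# max(abs(np.diff(band))) * fs for each band.
```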
 
After reading https://sound-au.com/articles/squarewave.htm I have the following questions (so I voted "I do not know" in the poll), which I have added to the image of the impulse response in the OP for clarity.

[attached image: the OP's response plot with the questions annotated]
 
After reading https://sound-au.com/articles/squarewave.htm I have the following questions (so I voted "I do not know" in the poll), which I have added to the image of the impulse response in the OP for clarity.

[attached image: the OP's response plot with the questions annotated]
The artefact (peak) is a result of the blue scale (input, 0.6 V) and the red scale (amp output, 33 V) not matching the gain of the amp exactly.
The downward slope is a result of the second-order high-pass filter with the 3.6 Hz corner frequency that @pma described.
The overshoot is another result of this filter, coming from the filter order and Q (a first-order filter does not have this overshoot, AFAIK).
The downward peak is asymmetrical relative to the upward one because the "overshoot" is still negative at the moment the downward step happens.
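For anyone who wants to watch the tilt and the edge overshoot appear, here is a small simulation of a second-order 3.6 Hz high-pass acting on a low-frequency square wave. The Butterworth alignment (Q ≈ 0.707) and the 20 Hz test frequency are my own assumptions for illustration, not values from the thread:

```python
# Simulate a second-order high-pass (fc = 3.6 Hz) acting on a 20 Hz square
# wave: the flat tops tilt downward and the peak right after each edge exceeds
# the input amplitude. Butterworth alignment and 20 Hz are illustrative.
import numpy as np
from scipy.signal import butter, sosfilt, square

fs = 48000                              # sample rate, Hz
t = np.arange(0, 0.5, 1 / fs)           # 500 ms of signal
x = square(2 * np.pi * 20 * t)          # ±1 V, 20 Hz square wave

sos = butter(2, 3.6, btype="highpass", fs=fs, output="sos")
y = sosfilt(sos, x)

y_settled = y[len(y) // 2:]             # look at the settled half of the record
print(f"input peak : {np.max(np.abs(x)):.2f} V")
print(f"output peak: {np.max(np.abs(y_settled)):.2f} V  (edge peaks above 1 V, tops tilt)")
```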
 