
Yes, anyone, even old people can hear 21 kHz (test attached)

Status
Not open for further replies.
Honest question: Do you really mean this, or do you mean that any wave can be reconstructed from sine waves? The accuracy of such a reconstruction would seem to be limited by rise time, which cannot be zero. Your statement (which I'm honestly trying to understand) also seems to presume that any waveform exists only as the sum of its harmonics, from which it might be re-synthesized, rather than being sui generis.

Be kind in your answer; my mathematical skills are more than rusty.
I think he's referring to Fourier series: Fourier demonstrated that any periodic waveform can be constructed from multiple sine waves.

So a square wave with zero rise time is possible with an infinite bandwidth of sines! In practice, real-world systems are band-limited, which is why we get the Gibbs effect.
 
Do you really mean this, or do you mean that any wave can be reconstructed from sine waves?
Is there any practical difference between "wave is made up of sines" and "wave can be reconstructed from sines"?

The accuracy of such would seem to be limited by rise time, which cannot be zero.
Can a wave with zero rise time physically exist? To my knowledge, no.
 
Can a wave with zero rise time physically exist? To my knowledge, no
In the real world, no, since bandwidth is never infinite.
 
It took me some time to realise that a 1 Hz square wave with zero rise time is not possible in the real world, where bandwidth is not infinite.
 
Maybe he changed the file in the meantime, but it doesn't look so bad here (gain 0, range 120):

View attachment 430857


Your "square" wave seems to have a 2nd harmonic (and quite a lot of DC):

View attachment 430858


What I can hear are the discontinuities at the moment of switching. When the test file is prepared properly, like the one in attachment, I can hear only a single tone:

View attachment 430860

I also included 4 kHz so you can hear what it sounds like when you can hear the higher harmonic (well, at least those who are still able to hear 12 kHz :) )
Having (near) perfect files and analyzing them is one thing.
Reproducing them at loud enough levels just as perfectly and the drivers behaving perfectly (and not produce other sounds) is totally another thing. :)
Chances are that most people are not using 'good enough' transducers for these tests and might be hearing 'tells' other than just the 21k + 7k.
 
any wave can be reconstructed from sine waves?
Any steady / periodic wave, yes, according to Fourier.

Rise time in any band-limited system is never zero (zero rise time requires infinite bandwidth). You can kinda represent it as such in a digital waveform, but that's not what the reconstructed waveform will look like.

In this context, talking about pure, synthetic square vs. sine tones, it's pretty absurd to be talking about the waves consisting of anything other than harmonics. It's pretty trivial to produce an honest-to-god (band limited) square wave just by adding the harmonics up by hand in a synthesizer.

As for whether "a sum of harmonics" is a good way to think of natural sound waves in general, I'm not so sure. It works well for some things, less well for others. But it is sort of uncanny how our ear seems to work like a little FFT analyzer sometimes...

For me, I consider the fact that lossy audio codecs pretty much operate on the principle of being able to break any sound down into sine waves, and I start to wonder if maybe that isn't actually the physical reality of it, too. They seem to work pretty well. And as we know, singularities / infinities don't really occur in nature. (let's leave black holes out of this one) - so if nature doesn't have infinite bandwidth for instantaneous rise time or changes in slope either... maybe reality really can be broken down into harmonic series? I'm out of my depth here, it's just me wondering.
 
As for whether "a sum of harmonics" is a good way to think of natural sound waves in general, I'm not so sure.
I'm fairly sure it isn't for most musical instruments. There are just too many sounds not directly related to the fundamental, and, if you ignore such weakly correlated sounds, the harmonics for some instruments (the piano most notably) are not simple ratios of the fundamental -- too many exceptions to build a strong generalization. Computational models are better, but they have their own problems (or benefits*).

* Shaggy dog story omitted for the common good.
 
I'm fairly sure it isn't for most musical instruments. There are just too many sounds not directly related to the fundamental, and, if you ignore such weakly correlated sounds, the harmonics for some instruments (the piano most notably) are not simple ratios of the fundamental -- too many exceptions to build a strong generalization.
You are just getting multiple fundamentals, each with multiple harmonics.
 
I'm fairly sure it isn't for most musical instruments. There are just too many sounds not directly related to the fundamental, and, if you ignore such weakly correlated sounds, the harmonics for some instruments (the piano most notably) are not simple ratios of the fundamental -- too many exceptions to build a strong generalization. Computational models are better, but they have their own problems (or benefits*).

* Shaggy dog story omitted for the common good.
Yes, sorry, I should have said "partials" here, i.e. pure tones not at any particular frequency or in any particular relation to a fundamental.

Anything you can put in an MP3 file is represented with a series of partials, that's how it works. So I am not sure what sound can't be represented with a series of tones one way or another.
 
You are just getting multiple fundamentals, each with multiple harmonics.
Not really. Look at spectrograms of real-world instruments. Or if you want a nightmare, look at a spectrogram of the human voice.

I mentioned the piano, because its overtones (harmonics) become progressively sharp owing to real world string gauge and the manner in which strings are terminated.
 
Not really. Look at spectrograms of real-world instruments. Or if you want a nightmare, look at a spectrogram of the human voice.

I mentioned the piano, because its overtones (harmonics) become progressively sharp owing to real world string gauge and the manner in which strings are terminated.
Yes really. Nothing you wrote above describes anything other than multiple fundamentals (each with their own harmonics) being played from multiple sources, set in motion at different times, all set in motion by the notional act of 'playing a single note'.
 
Yes, sorry, should be saying "partials" here, i.e. pure tones but not at any particular frequency or in relation to any particular frequency.

Anything you can put in an MP3 file is represented with a series of partials, that's how it works. So I am not sure what sound can't be represented with a series of tones one way or another.
Accurately modeling a real-world piano that way, even if you ignore the non-string sounds, would be monstrous, requiring multiple samples per string, each one consisting of a fundamental and a collection of increasingly sharp partials. Then you'd need to compute sympathetic excitation while tracking the positions of the dampers. OY! Really non-trivial stuff.
 
Yes really. Nothing you wrote above describes anything other than multiple fundamentals (each with their own harmonics) being played from multiple sources, set in motion at different times, all set in motion by the notional act of 'playing a single note'.
No! The partials of a real-world terminated string are increasingly sharp, owing to the effect of string stiffness close to the tie-down points. In effect, the string becomes shorter with each higher partial. Look up stretched tuning.
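For the curious, the standard stiff-string formula captures this: partial n of a real string sits at f_n = n * f1 * sqrt(1 + B * n^2), where B is the inharmonicity coefficient. A tiny numeric sketch (the f1 and B values here are purely illustrative; real pianos vary string by string):

```python
import math

def stiff_string_partial(n, f1, B):
    """Frequency of partial n of a stiff string: f_n = n * f1 * sqrt(1 + B * n**2)."""
    return n * f1 * math.sqrt(1.0 + B * n * n)

f1 = 220.0   # nominal fundamental (A3-ish), chosen for illustration
B = 4e-4     # illustrative inharmonicity coefficient, not a measured value
for n in (1, 2, 4, 8, 16):
    fn = stiff_string_partial(n, f1, B)
    cents_sharp = 1200 * math.log2(fn / (n * f1))  # deviation from the ideal harmonic
    print(f"partial {n:2d}: {fn:9.2f} Hz ({cents_sharp:5.1f} cents sharp)")
# Each higher partial lands progressively sharper than n * f1 -- hence stretched tuning.
```

The sharpness grows roughly with n squared, which is why the upper partials drift so far from the ideal harmonic series.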

 
Accurately modeling a real-world piano that way, even if you ignore the non-string sounds, would be monstrous, requiring multiple samples per string, each one consisting of a fundamental and a collection of increasingly sharp partials. Then you'd need to compute sympathetic excitation while tracking the positions of the dampers. OY! Really non-trivial stuff.
Well, I think we're mixing ideas about different technologies when you say 'modeling'...

MP3 doesn't model anything, it represents a recording.

Likewise, ROMPler pianos are mostly like you say, a whole bunch of recordings per key, but they don't (at least, last time I looked) bother with a lot of resynthesis or models. They're basically a library of hundreds or thousands of individual samples that play more or less appropriately depending on MIDI input.

Even white noise can be reproduced with enough sine waves.

Adding up partials by hand is tedious and was mostly left behind decades ago. But there are plenty of automated ways to do it; that's what FFT-based tools are all about.
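As a sanity check on the "even white noise is just sines" claim, here's a small numpy sketch: take white noise, FFT it, then rebuild the signal by literally summing the resulting cosines (one amplitude and phase per bin). The reconstruction matches the original to floating-point precision.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)            # white noise: no periodic structure at all

X = np.fft.rfft(x)                       # expresses x as 2049 sinusoidal components
amps, phases = np.abs(X), np.angle(X)

# Rebuild the waveform by literally summing those sinusoids, one per FFT bin
N = len(x)
n = np.arange(N)
rebuilt = np.zeros(N)
for k in range(len(X)):
    scale = 1.0 if k in (0, len(X) - 1) else 2.0   # DC and Nyquist bins count once
    rebuilt += (scale * amps[k] / N) * np.cos(2 * np.pi * k * n / N + phases[k])

print(np.max(np.abs(rebuilt - x)))       # essentially zero: the noise IS a sum of sines
```

Of course this only holds exactly for the finite chunk you analyzed; that caveat is the whole game in codec design.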
 
No! The partials of a real-world terminated string are increasingly sharp, owing to the effect of string stiffness close to the tie-down points. In effect, the string becomes shorter with each higher partial. Look up stretched tuning.

Thanks, I looked at it, and also at 'Inharmonics', and I don't see anything there other than what I said, i.e., the complex physical structure of a musical instrument causes a number of fundamentals (and their associated harmonics) to be kicked off at various times and at various frequencies. Whatever starts happening at (say) 2.17f after a notional note f is initiated represents new sources of vibration interacting with earlier sources and creating a net result.

If your point is that a musical instrument's tone is not just the result of fundamentals and perfect harmonic resonances of each fundamental, that's fine at the macro level.

But you are introducing this into a discussion of capture and playback of music, and if you are doing so in order to suggest that the complexity of musical waveforms escapes capture via a process built on deconstruction and reconstruction principles, then I don't think you have established that.

cheers
 
Thanks, I looked at it, and also at 'Inharmonics', and I don't see anything there other than what I said, i.e., the complex physical structure of a musical instrument causes a number of fundamentals (and their associated harmonics) to be kicked off at various times and at various frequencies. Whatever starts happening at (say) 2.17f after a notional note f is initiated represents new sources of vibration interacting with earlier sources and creating a net result.

If your point is that a musical instrument's tone is not just the result of fundamentals and perfect harmonic resonances of each fundamental, that's fine at the macro level.

But you are introducing this into a discussion of capture and playback of music, and if you are doing so in order to suggest that the complexity of musical waveforms escapes capture via a process built on deconstruction and reconstruction principles, then I don't think you have established that.

cheers
This came up peripherally to a response from kemmler3D. Sorry for the digression.

BTW, you might look at stretched tuning focusing on the behavior of a single string, comparing its output with that of an idealized string, where the 1st partial is always equal to twice the fundamental pitch, rather than slightly higher, as it is in physical instruments. Each subsequent partial is sharper still. All of which impacts tuning: thus you tune strings not to the immediately lower notes of the same nominal pitch but to the overtone of some lower string, based on the scale (size of the piano frame or cast-iron plate) and the gauge of the strings. Now, consider the interaction between 88 or so of those puppies - map out all the potential resonant traps with the dampers lifted. The piano is not as simple and straightforward as it appears at first glance.
 
Maybe he changed the file in the meantime, but it doesn't look so bad here (gain 0, range 120):

View attachment 430857


Your "square" wave seems to have a 2nd harmonic (and quite a lot of DC):

View attachment 430858


What I can hear are the discontinuities at the moment of switching. When the test file is prepared properly, like the one in attachment, I can hear only a single tone:

View attachment 430860

I also included 4 kHz so you can hear what it sounds like when you can hear the higher harmonic (well, at least those who are still able to hear 12 kHz :) )

No, I cannot hear this 7 kHz as alternating. Has anyone else tried?
 
Your "square" wave seems to have a 2nd harmonic (and quite a lot of DC):

View attachment 430858

Yes, I see now, well done. It's because I took the 'aliased' square wave in Audacity. But even then I cannot hear the difference; I can't hear >12 kHz haha.
 
I also included 4 kHz so you can hear what it sounds like when you can hear the higher harmonic (well, at least those who are still able to hear 12 kHz :) )

Your files are perfect! Smart to do a fade in/out of the harmonics...
 
Yes, in fact, the video host actually shows us the spectrum:

View attachment 430708

No wonder it is easily distinguishable from a pure 7 kHz sinusoidal tone! This is not the native spectrum of a 7 kHz square wave. This is what happens when we sample high levels of ultrasonic content at a low Nyquist of only 22 kHz, i.e. a sampling rate of 44 kHz. The host used a 7 kHz square wave to generate the necessary ultrasonics to create lots of aliasing at audible frequencies and make his point. His actual point being as I explained in post #16.

Some people are misinterpreting what is really happening.
The above post explains what is going on perfectly. But another way to look at it without much mathematical thinking:

It is not possible to generate a square wave at 7000 Hz with a 48 kHz sample rate. Instead, the software thinks you want a signal with only 2 values whose polarity flips every 1/7000 s, and so it generates the following, represented by the dots in the plot. This is not a square wave; it is really far from it. You can see that the individual 7000 Hz cycles are not symmetric timewise. When interpolated by a DAC, the waveform actually created will be as in the bottom curve. You can easily see the 1000 Hz component (clearly heard), or notice how the pattern repeats every 0.001 s. Even with a filterless DAC, which would square off the output at each sample, you would still have the lower-frequency components, because these squared + and - pulses would have varying durations.

So you hear the lower frequency tones because the software put them there to begin with. All is as it should be except the software is calling this a square wave when it is not.

View attachment 1740315779660.png (plot: the sampled values as dots, with the DAC-interpolated waveform in the bottom curve)
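A quick numeric check of the above (my own sketch): since gcd(7000, 48000) = 1000, the naive two-valued "7 kHz square" repeats every 48 samples (1 ms), so its entire spectrum sits on a 1 kHz grid, with strong components at 1, 3, 5 kHz and so on; exactly the audible tones people report hearing.

```python
import numpy as np

fs, f0 = 48_000, 7_000
n = np.arange(48)                        # gcd(7000, 48000) = 1000 -> period 48 samples (1 ms)
phase = (f0 * n / fs) % 1.0
s = np.where(phase < 0.5, 1.0, -1.0)     # naive two-valued "7 kHz square"

X = np.abs(np.fft.rfft(s))               # bins of a 48-sample FFT are exactly 1 kHz apart
for k, mag in enumerate(X):
    if mag > 1e-6:
        print(f"{k} kHz: {mag:6.2f}")
# Prints strong components at 1, 3, 5, 7, ... kHz -- not a clean 7 kHz tone at all.
```

The 7 kHz bin is the largest, but the 1 kHz and other odd-kHz components are well above anything a reconstruction filter would remove, so the software really did put those audible tones into the file.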


Edit: Interestingly, Online Tone Generator restricts its 7000 Hz "square wave" to the 7000 and 21000 Hz components. So if you use Online Tone Generator and toggle between 7000 Hz sine and square, there is no lower-frequency stuff, and all you hear in both cases is the 7000 Hz tone. You don't hear the 21000 Hz unless you have exceptional ears that can hear that high.
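By contrast, a properly band-limited 7 kHz "square" at a 48 kHz sample rate can only keep the odd harmonics below the 24 kHz Nyquist, i.e. 7 kHz and 21 kHz, which appears to be what Online Tone Generator does. A sketch of what that looks like spectrally:

```python
import numpy as np

fs, f0 = 48_000, 7_000
t = np.arange(fs) / fs                    # one second of audio at 48 kHz
# Band-limited square: odd harmonics of 7 kHz below Nyquist (24 kHz) -> 7 k and 21 k only
s = (4 / np.pi) * (np.sin(2 * np.pi * f0 * t)
                   + np.sin(2 * np.pi * 3 * f0 * t) / 3)

mags = np.abs(np.fft.rfft(s)) / (fs / 2)  # 1 Hz bins; normalized to sine amplitude
print(np.flatnonzero(mags > 1e-3))        # only the 7000 and 21000 Hz bins survive
```

With nothing below 7 kHz in the signal, sine and "square" are audibly identical unless you can genuinely hear 21 kHz, which is the whole point of the test.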
 