"WTF. That's probably the strangest definition of science I've ever heard."
This definition comes from a Nobel Prize winner in Physics.
"you are confusing live sound with reproduced sound."
Find a piano or a keyboard. Play your favorite chord 10 times in a row as identically as you can. Same attack, same timing, same sustain. It should sound remarkably similar.
Now realize that every time you played that chord you heard a "coherent" result, but the relative phases between your notes were COMPLETELY random because you don't have enough control over the timing to get the same result twice. If our brains were sensitive to the type of phase distortion you're concerned with, then the relative phase of the notes in a chord would make the chord sound different every time we heard it, and songs would be unpredictable because musicians would be incapable of consistently generating the sound they want.
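To make that concrete, here is a minimal sketch (my own, with made-up note frequencies): the same three-note chord is synthesized twice, each time with a random starting phase per note, standing in for the random relative timing of a human player. The waveforms come out different on every run, yet the magnitude spectrum is the same.

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs                       # one second of signal
chord = [262.0, 330.0, 392.0]                # roughly C4, E4, G4, rounded so they land on FFT bins

def strike(rng):
    """Sum the chord notes, giving each note a random starting phase (random relative timing)."""
    return sum(np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi)) for f in chord)

rng = np.random.default_rng(0)
a, b = strike(rng), strike(rng)              # two "performances" of the same chord

spectrum = lambda x: np.abs(np.fft.rfft(x))
# The waveforms differ from run to run ...
print("relative waveform difference:", np.max(np.abs(a - b)) / np.max(np.abs(a)))
# ... while the magnitude spectra are identical to within rounding error.
print("relative spectrum difference:", np.max(np.abs(spectrum(a) - spectrum(b))) / np.max(spectrum(a)))
```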
"you are confusing live sound with reproduced sound."
I don't see how that's relevant in the context of what they said.
Simply that the example of the played musical note is not a good illustration of the importance of phase coherence in reproduced sound.
My understanding of the example is that the phase relationship between different sounds — like notes in a piano chord — is basically random. It's impossible to strike keys with perfect timing, so the frequency components from different notes don’t have any consistent phase relationship. And since we don’t hear a noticeable difference when the same chord is played again, it seems like faithfully reproducing that randomness isn’t critical.
That distinction between science and engineering is a funny, self-ironic piece. Regarding your observation above: true. Then the boss chimes in: mathematics. What is the concept of a tone, of frequency? It is not 'natural' to begin with. Frequency as such doesn't exist; it is a descriptive simplification that holds only under strict limits on its use. Fourier's theorem was proven recently; even human perception has to obey it, and so has the recording industry. Keywords: 'integration time', 'information content'. When you talk about a tone 'beginning', it is not a tone at all. The moment it 'begins', it is a continuous spectrum.
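A quick numerical illustration of the integration-time point (my own sketch, not anything from the thread): the shorter a 440 Hz tone burst is, the wider the band of frequencies it actually occupies. The durations and the half-amplitude criterion below are arbitrary choices.

```python
import numpy as np

fs = 48000
f0 = 440.0           # nominal tone frequency (Hz)
nfft = 1 << 20       # long, zero-padded FFT approximates the continuous spectrum of the burst

def bandwidth_hz(duration_s):
    """Approximate -6 dB spectral width of a 440 Hz sine that simply starts and stops."""
    n = int(duration_s * fs)
    t = np.arange(n) / fs
    burst = np.sin(2 * np.pi * f0 * t)
    mag = np.abs(np.fft.rfft(burst, nfft))   # zero-padding interpolates the spectrum
    freqs = np.fft.rfftfreq(nfft, 1 / fs)
    keep = freqs[mag >= mag.max() / 2]       # frequencies within half of the peak amplitude
    return keep.max() - keep.min()

for dur in (0.005, 0.05, 0.5):
    print(f"{dur*1000:5.1f} ms burst -> main lobe ~{bandwidth_hz(dur):6.1f} Hz wide")
```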
But it’s a different story for each individual piano key. The harmonics within a single note start out with a well-defined phase relationship. That can drift over time — due to slight detuning or string dispersion — but the initial alignment is always there. If a playback system disrupts that, it might change how the note sounds.
Of course, practical playback systems can’t tell whether frequencies belong to the same sound or not. If they alter phase in a frequency-dependent way, they affect everything. But that doesn’t change the basic idea.
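To make the single-note case concrete, here is a small sketch (my own; the 1/k harmonic amplitudes and the random phase shift are just stand-ins): the same ten harmonics are summed once with aligned phases and once after a frequency-dependent phase shift. The magnitude spectrum does not change, but the waveform shape, summarized here by its crest factor, does. Whether that is audible is exactly the open question.

```python
import numpy as np

fs = 44100
t = np.arange(int(0.2 * fs)) / fs
f0 = 220.0                                  # fundamental (Hz)
amps = 1.0 / np.arange(1, 11)               # 10 harmonics with 1/k amplitudes (illustrative)

def note(phases):
    """Sum the harmonics of one note with the given per-harmonic phases."""
    return sum(a * np.sin(2 * np.pi * (k + 1) * f0 * t + p)
               for k, (a, p) in enumerate(zip(amps, phases)))

aligned = note(np.zeros(10))                          # idealized phases as generated at the string
rng = np.random.default_rng(1)
shifted = note(rng.uniform(0, 2 * np.pi, 10))         # frequency-dependent phase shift in playback

crest = lambda x: np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2))
print("crest factor, aligned phases :", round(crest(aligned), 2))
print("crest factor, shifted phases :", round(crest(shifted), 2))
# Identical magnitude spectra despite the different waveforms:
print("spectra match:",
      np.allclose(np.abs(np.fft.rfft(aligned)), np.abs(np.fft.rfft(shifted)), atol=1e-6))
```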
Exactly. The sound of a note in live music is always in phase, and reproduced music should be in phase as well. Some loudspeaker designers have always studied this phenomenon, and their first goal is to limit the damage.
Look, those are the two points: a wild speculation, and the claim that if it isn't followed some damage is done, of course to some perfectly unknown original. The proof being that some third person follows your science.
You say the human auditory system identifies the origin of overtones/harmonics by their phase relation to the base tone.
"Keep in mind, the ear is an amplitude sensor of frequency. It is not a phase sensor. It is the same with light sensors, biological, or physical. The only way to detect phase with an amplitude sensor is through interference with another signal. I have spent a lot of time in photonics, and the principle is the same."
But there must be something in the auditory system that is somehow sensitive to phase? Otherwise what is the explanation for the "binaural beats" illusion?
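For reference, here is how the binaural-beats stimulus mentioned above is usually constructed (a minimal sketch; the exact frequencies are my own choice). Each ear receives a steady pure tone at a slightly different frequency, so neither ear alone sees any amplitude modulation; only the interaural phase difference drifts, at 444 - 440 = 4 Hz, which is the rate of the beat people report hearing.

```python
import numpy as np

fs = 48000
t = np.arange(2 * fs) / fs                    # 2 seconds
left  = np.sin(2 * np.pi * 440.0 * t)         # 440 Hz to the left ear
right = np.sin(2 * np.pi * 444.0 * t)         # 444 Hz to the right ear
stereo = np.stack([left, right], axis=1)      # write to a 2-channel file and listen on headphones

# Each channel on its own has a flat envelope: the RMS per 100 ms block is constant,
# so there is no beating at either eardrum alone, only between the two ears.
block_rms = np.sqrt(np.mean(left[:fs].reshape(10, -1) ** 2, axis=1))
print("left-channel RMS per 100 ms block:", np.round(block_rms, 3))
```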
There is a lot of work today, and there are products, that separate the instruments in a mix in software. They might be based on harmonic structure; I would look at patents and scientific papers on that. It is plausible the auditory cortex can do something similar, just as it can, to an extent, follow more than one voice at a time. There may also be some prediction at work in a brain trained on music.
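As a toy version of that idea (entirely my own sketch, not how any particular product works): if two harmonic sources share one channel and you know, or can estimate, one source's fundamental, a simple mask that keeps only the spectral bins near that source's harmonics pulls it out of the mix almost perfectly, at least for synthetic, perfectly harmonic tones.

```python
import numpy as np

fs = 48000
n = fs // 2                                  # one 0.5 s frame; 2 Hz bin spacing
t = np.arange(n) / fs

def harmonic_tone(f0, n_harm=8):
    """A crude harmonic 'instrument': partials at integer multiples of f0."""
    return sum(np.sin(2 * np.pi * f0 * k * t) / k for k in range(1, n_harm + 1))

mix = harmonic_tone(200.0) + harmonic_tone(330.0)     # two "instruments" in one channel

# Harmonic mask: keep only spectral bins that sit near integer multiples of 200 Hz.
freqs = np.fft.rfftfreq(n, 1 / fs)
mask = np.zeros_like(freqs)
for k in range(1, 9):
    mask[np.abs(freqs - 200.0 * k) <= 4.0] = 1.0      # +/- 4 Hz around each harmonic

separated = np.fft.irfft(np.fft.rfft(mix) * mask, n)

target = harmonic_tone(200.0)
err = np.sqrt(np.mean((separated - target) ** 2)) / np.sqrt(np.mean(target ** 2))
print(f"relative error of the extracted 200 Hz 'instrument': {err:.3f}")
```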
Sorry, but I don't understand what I should explain.
Your speculations are lacking in several areas. First, there are (normally) two ears, linked by the head-related transfer function (see Dr Gunther Theile's PhD thesis). Your central premise is assumed up front, and the inference that follows shows wide gaps. You could fill those with experimental data, but none is provided, nor do you say how such data could be obtained, not even at a purely physical level.
Seems you quoted another post of mine. As for how the claim you made could be proven right or wrong, see post #30.
"If you want to be pedantic, …"
I'm not pedantic …
"However — despite lacking expertise in psychoacoustics — even with my rudimentary understanding of ear anatomy …"
The operation of the cochlea is still not understood. Some say it is a waveguide with self-amplification. Seeing it as a filterbank is a simplification that doesn't hold. At least, so I'm told.
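For context, the "filterbank view" being questioned here is usually sketched roughly like this (a gammatone filterbank; the channel count, spacing, and bandwidth formula below are textbook-style choices of mine, not a claim about what the cochlea actually does):

```python
import numpy as np

fs = 16000
t = np.arange(int(0.05 * fs)) / fs          # 50 ms gammatone impulse responses

def gammatone_ir(fc, order=4, b=1.019):
    """Classic gammatone impulse response; ERB bandwidth per Glasberg & Moore."""
    erb = 24.7 + 0.108 * fc
    return t ** (order - 1) * np.exp(-2 * np.pi * b * erb * t) * np.cos(2 * np.pi * fc * t)

centers = np.geomspace(100, 4000, 16)       # 16 channels, log-spaced (illustrative)
signal = np.sin(2 * np.pi * 440.0 * np.arange(fs) / fs)   # 1 s, 440 Hz probe tone

energies = [np.sum(np.convolve(signal, gammatone_ir(fc), mode="same") ** 2)
            for fc in centers]
best = centers[int(np.argmax(energies))]
print(f"channel with the most energy is centered at ~{best:.0f} Hz")
```

Right or wrong as a model of the organ itself, this is the simplification that most auditory models and perceptual codecs build on.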
"Thing is, who cares, if we see the recording not as a mirror of an original but as an artifact of human decision-making, more a pencil sketch than a photograph? More some collected hints at what happened than virtual reality. It's not 'the brain', automated, that listens, but the mind, some say even the soul. We as consumers impose our understanding onto the physical sensory input. We need more education in that regard, me thinks."
No argument here. Still, it's always nice when technology can create an ever more convincing illusion of being there, for our sensory apparatus, and without much expense. (Symphony tickets are expensive, especially if you want to go regularly, not just once in a while. And don't even get me started on opera ticket prices.)
"But thanks, I've learned we've got artificial cochleas, great!"
I was excited at first, but the more I learned, the more disappointed I became. They do work, but hardly for music. Hopefully that will improve over time.
And what should I explain to you? That the sound captured by a microphone must be reproduced with its timing and phase intact? Please read post #4; I realized they had already written the same thing!
"I lean towards thinking there's some truth to it."
That was post #4, in the part most relevant to me.
And do you agree with him, or have you done scientific studies in neurobiology and discarded everything?
How well does high-fidelity audio align with human perception?
Coming from a background in physical acoustics and engineering, though not specifically in audio, I've been wondering why high-fidelity audio still seems so focused on signal accuracy, "waveform fidelity" for lack of a better term. By signal, I mean what microphones capture and what gets delivered near the ears.
To be fair, modern high-fidelity systems already account for a lot of the basics: that we hear with two ears, the limits of human hearing in frequency and dynamic range, how we perceive loudness, acceptable levels of various distortions, and so on. Perceptual insights have clearly shaped things like lossy compression, spatial audio, and room acoustics. But beyond that, I haven’t seen much that really engages with the more subtle ways we actually perceive sound.
Take harmonics, for example. Real-world sounds aren’t just single tones - harmonics are a universal property of oscillations, and any hearing system that evolved to make sense of sound likely developed around this structure. From very early on, brains - ours and those of many other animals - are wired to hear harmonically related frequencies as one sound, or as belonging together. It plays a huge role in how we recognize voices, distinguish instruments, and perceive musical harmony. So I can’t help but wonder - could that be used more directly in how we design audio systems?
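As a toy illustration of how that harmonic structure could be exploited (my own sketch, not a description of any existing system): with two synthetic harmonic sources mixed into one channel, a simple "harmonic template" score, the summed spectral magnitude at multiples of a candidate fundamental, peaks at the two true fundamentals. Grouping partials by which series they fit is the kind of cue a perceptually informed system could lean on.

```python
import numpy as np

fs = 16000
t = np.arange(fs) / fs                           # 1 s of signal

def voice(f0, n_harm=10):
    """A crude harmonic 'voice': partials at integer multiples of f0."""
    return sum(np.sin(2 * np.pi * f0 * k * t) / k for k in range(1, n_harm + 1))

mix = voice(155.0) + voice(220.0)                # two harmonic sources in one channel

mag = np.abs(np.fft.rfft(mix))
freqs = np.fft.rfftfreq(len(mix), 1 / fs)
bin_hz = freqs[1]

def harmonic_score(f0, n_harm=10):
    """Sum of spectral magnitude at the first n_harm multiples of a candidate fundamental."""
    idx = np.round(np.arange(1, n_harm + 1) * f0 / bin_hz).astype(int)
    return mag[idx].sum()

candidates = np.arange(80.0, 400.0, 1.0)
scores = np.array([harmonic_score(f) for f in candidates])
top = candidates[np.argsort(scores)[-2:]]
print("strongest harmonic-series candidates (Hz):", np.sort(top))   # ~155 and ~220
```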
I’d be curious to learn if there are any interesting efforts along those lines, or what the main challenges are.
Thanks.