Coming from a background in physical acoustics and engineering, though not specifically in audio, I’ve been wondering why high-fidelity audio still seems so focused on signal accuracy - "waveform fidelity," for lack of a better term. By "signal" I mean what the microphones capture and what gets delivered near the ears.
To be fair, modern high-fidelity systems already account for a lot of the basics: that we hear with two ears, the limits of human hearing in frequency and dynamic range, how we perceive loudness, acceptable levels of various distortions, and so on. Perceptual insights have clearly shaped things like lossy compression, spatial audio, and room acoustics. But beyond that, I haven’t seen much that really engages with the more subtle ways we actually perceive sound.
Take harmonics, for example. Real-world sounds aren’t just single tones - harmonic overtones are a near-universal feature of vibrating sources, and any hearing system that evolved to make sense of sound likely developed around that structure. From very early on, brains - ours and those of many other animals - are wired to hear harmonically related frequencies as one sound, or at least as belonging together. That grouping plays a huge role in how we recognize voices, distinguish instruments, and perceive musical harmony. So I can’t help but wonder - could it be used more directly in how we design audio systems?
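To make that concrete, here's a rough toy sketch (plain NumPy, nothing audio-specific; the fundamental, partial count, and mistuning amount are arbitrary numbers I picked for illustration). It compares a complex tone whose partials are exact integer multiples of one fundamental against a slightly mistuned version, using a crude autocorrelation-based "harmonicity" score as a stand-in for the kind of regularity a hearing system could latch onto.

```python
# Toy illustration of "harmonic structure": a complex tone with partials at
# exact integer multiples of a fundamental vs. one with randomly mistuned
# partials. The normalized autocorrelation at the lag of one fundamental
# period is a crude stand-in for harmonicity - high for the periodic
# (harmonic) case, lower for the mistuned one. All parameters are arbitrary.
import numpy as np

fs = 44100          # sample rate in Hz
dur = 0.5           # duration in seconds
t = np.arange(int(fs * dur)) / fs
f0 = 220.0          # fundamental frequency in Hz

def complex_tone(partial_freqs):
    """Sum of equal-amplitude sine partials at the given frequencies."""
    return sum(np.sin(2 * np.pi * f * t) for f in partial_freqs)

harmonic_partials = [f0 * k for k in range(1, 9)]      # exact integer multiples
rng = np.random.default_rng(0)
inharmonic_partials = [f * (1 + rng.uniform(-0.04, 0.04))   # ~4% random mistuning
                       for f in harmonic_partials]

def harmonicity(x, f0, fs):
    """Normalized autocorrelation at one fundamental period.
    Close to 1 for a strictly periodic signal, lower otherwise."""
    lag = int(round(fs / f0))
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

print("harmonic  :", round(harmonicity(complex_tone(harmonic_partials), f0, fs), 3))
print("inharmonic:", round(harmonicity(complex_tone(inharmonic_partials), f0, fs), 3))
```

The point isn't this particular measure - real models of pitch and auditory grouping are far more sophisticated - just that the harmonic case has an obvious regularity the mistuned one lacks, and that's the structure I'm wondering whether playback systems could exploit more directly.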
I’d be curious to learn if there are any interesting efforts along those lines, or what the main challenges are.
Thanks.