edahl
Senior Member
- Joined
- Jan 18, 2021
I'm interested in what "detail" means in sound reproduction. There's clearly a qualitative, subjective aspect to it, but what interests me right now is how it might relate to something measurable. One candidate: detail should relate to information retention (and, inversely, to entropy added) along a signal chain, since if a detail is not reproduced, we cannot possibly hear it.

We can look at information divergence at any point in the chain. Roughly: knowing the "true" signal (the audio source), how much does the current signal diverge from it? Harmonic distortion measures the extent to which higher harmonics are added to a signal, muddying it, and can likely be interpreted as a form of added entropy (I haven't done the work to show how, so please let me know if you know of a way).

I'd be interested in research on how notions of entropy might be useful for evaluating high-fidelity audio reproduction: e.g., combinatorial notions of entropy through a DAC/amp/headphone/mic/AD-converter chain, and so on. Are frequency response and THD+N enough to capture everything we need to evaluate high-fidelity reproduction, or are there concepts from information theory we have yet to fully employ?

These thoughts are rather fresh to me, and though my background is in mathematics, there's no point in me redeveloping at a snail's pace what must already be a well-understood subject. So I ask rather naïvely and simply: what do we know?
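To make the question concrete, here is a minimal numpy sketch of the two quantities I have in mind. The function names (`thd`, `spectral_kl`) and the KL-over-power-spectra construction are my own illustration, not a standard metric; the "amplifier" is just hard clipping of a test tone, which adds odd harmonics:

```python
import numpy as np

def thd(signal, fs, f0, n_harmonics=5):
    """Total harmonic distortion of a tone at f0: ratio of the RMS of
    harmonics 2..(n_harmonics+1) to the fundamental, read off the FFT."""
    window = np.hanning(len(signal))
    spectrum = np.abs(np.fft.rfft(signal * window))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)

    def peak(f):
        # magnitude at the bin nearest frequency f
        return spectrum[np.argmin(np.abs(freqs - f))]

    harmonics = np.sqrt(sum(peak(k * f0) ** 2 for k in range(2, 2 + n_harmonics)))
    return harmonics / peak(f0)

def spectral_kl(ref, test):
    """A crude 'information divergence' between source and reproduced
    signal: KL divergence between their normalized power spectra."""
    p = np.abs(np.fft.rfft(ref)) ** 2
    q = np.abs(np.fft.rfft(test)) ** 2
    eps = 1e-12  # avoid log(0) in empty bins
    p = p / p.sum() + eps
    q = q / q.sum() + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

fs, f0 = 48_000, 1_000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * f0 * t)
clipped = np.clip(clean, -0.8, 0.8)  # toy "amplifier": symmetric hard clipping

print(f"THD, clean tone:          {thd(clean, fs, f0):.2e}")
print(f"THD, clipped tone:        {thd(clipped, fs, f0):.2e}")
print(f"KL(clean || clipped):     {spectral_kl(clean, clipped):.2e}")
```

The point of the toy: clipping raises both numbers at once, so in this tiny example the "entropy added" view and the classical THD view agree. What I don't know is whether a divergence like this, chained through a whole DAC/amp/transducer path, tells us anything that THD+N and frequency response don't.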