
Entropy and information; the devil and details

edahl

Senior Member
Joined
Jan 18, 2021
Messages
398
Likes
328
I'm interested in what "detail" is in sound reproduction. On the one hand there's a qualitative and subjective aspect to it, but what interests me right now is how it might relate to something measurable. One candidate is that detail should relate to information retention (inversely, entropy) along a signal chain: if a detail is not reproduced, we could not possibly hear it. We can look at the information divergence at any given point in the chain: roughly, knowing the "true" signal (the audio source), how much does the signal at that point diverge from it?

Harmonic distortion measures the extent to which higher harmonics are added to a signal, muddying it, and can likely be interpreted as a form of entropy (I haven't done the work to show how, so please let me know if you know how). I'd be interested in research on how notions of entropy might be useful for evaluating high-fidelity audio reproduction, e.g. combinatorial notions of entropy through a DAC/amp/headphone/mic/AD-converter chain, and so on. Are frequency response and THD+N enough to capture everything we need to evaluate high-fidelity reproduction, or are there concepts from information theory we've yet to fully employ?

These thoughts are rather fresh to me, and though my background is in mathematics, there's no point in me redeveloping at a snail's pace what must already be a well-understood subject. So I ask rather naïvely and simply: what do we know?
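To make the divergence idea concrete, here is a minimal sketch of one possible formalization (assuming numpy/scipy are available; the 1 kHz tone, the added third harmonic, and the noise level are invented for illustration, and KL divergence between normalized spectra is just one of many candidate measures):

Code:
# Sketch of "divergence from the true signal" (assuming numpy/scipy):
# treat the normalized Welch power spectra of the source and of the
# signal at some point in the chain as probability distributions and
# compare them with the Kullback-Leibler divergence. Zero bits means
# the spectral detail survived intact.
import numpy as np
from scipy.signal import welch

FS = 48_000  # sample rate in Hz (invented for the example)

def spectral_kl_bits(source, output, fs=FS, nperseg=4096):
    _, p_src = welch(source, fs=fs, nperseg=nperseg)
    _, p_out = welch(output, fs=fs, nperseg=nperseg)
    p = (p_src + 1e-12) / (p_src + 1e-12).sum()  # avoid log(0)
    q = (p_out + 1e-12) / (p_out + 1e-12).sum()
    return np.sum(p * np.log2(p / q))

# Toy "chain": the reproduction adds a 3rd harmonic and a noise floor.
t = np.arange(FS) / FS
source = np.sin(2 * np.pi * 1000 * t)
output = source + 0.05 * np.sin(2 * np.pi * 3000 * t) \
                + 0.01 * np.random.randn(FS)
print(f"D(source || output) = {spectral_kl_bits(source, output):.4f} bits")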
 

JeffS7444

Major Contributor
Forum Donor
Joined
Jul 21, 2019
Messages
2,363
Likes
3,546
I'm interested in what "detail" is in sound reproduction. On the one hand there's a qualitative and subjective aspect to it, but what interests me right now is how it might relate to something measurable.
Mostly frequency response, I think. In the case of maximizing the intelligibility of the human voice, you can actually have a sound which is low-fidelity yet sufficiently "detailed" to convey the meaning of the spoken word. Here, a narrow bandwidth with a peak at around 5 kHz may be very useful, and indeed, this is what you may find in many a small AM radio with a "crisp" sound. This can be taken to even further extremes when tuning into single-sideband shortwave and ham radio broadcasts.
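If you want to play with that kind of voicing, a single peaking filter gets you surprisingly close (a sketch assuming scipy; the centre frequency, Q, and sample rate are arbitrary choices, not measurements of any actual radio):

Code:
# Illustration only (assuming scipy): a second-order peaking filter
# centred near 5 kHz, in the spirit of the "crisp" AM-radio voicing
# described above.
import numpy as np
from scipy.signal import iirpeak, freqz

fs = 44_100
b, a = iirpeak(5000, Q=2.0, fs=fs)   # narrow band-pass peaked at 5 kHz

w, h = freqz(b, a, worN=4096, fs=fs)
print(f"gain peaks at about {w[np.argmax(np.abs(h))]:.0f} Hz")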
Harmonic distortion is a measure of the extent to which higher harmonics are added to a signal, muddying the signal, and can likely be interpreted as a form of entropy (I haven't done the work to show how, so please let me know if you know how). I’d be interested in research pointing to how notions of entropy might be useful to evaluating high fidelity audio reproduction.
I'm not aware of entropy figuring into sound reproduction.
 

dc655321

Major Contributor
Joined
Mar 4, 2018
Messages
1,597
Likes
2,235
I was looking into the information-theoretic content of music signals around a year ago (?)...
At the time, I was looking at power spectral estimation as a tool for comparing musical energy distributions.
Found this thesis somewhat useful, as well as this formulation of spectral entropy at the bottom of the linked page.
@j_j may know more about this topic.
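For concreteness, the usual normalized form of spectral entropy is something like this (a minimal sketch assuming numpy/scipy; the filtered-noise example and the parameters are arbitrary):

Code:
# Sketch of normalized spectral entropy (assuming numpy/scipy): turn
# the PSD into a probability distribution over frequency bins and take
# its Shannon entropy, scaled by log2(nbins) so that 1.0 means a
# perfectly flat spectrum and values near 0 mean the energy sits in a
# few bins.
import numpy as np
from scipy.signal import welch, lfilter

def spectral_entropy(x, fs, nperseg=4096):
    _, psd = welch(x, fs=fs, nperseg=nperseg)
    p = psd / psd.sum()
    p = p[p > 0]                       # drop empty bins
    return -np.sum(p * np.log2(p)) / np.log2(len(psd))

fs = 48_000
x = lfilter([1.0], [1.0, -0.9], np.random.randn(fs))  # low-passed noise
print(f"normalized spectral entropy: {spectral_entropy(x, fs):.3f}")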
 

j_j

Major Contributor
Audio Luminary
Technical Expert
Joined
Oct 10, 2017
Messages
2,281
Likes
4,787
Location
My kitchen or my listening room.
I'm not aware of entropy figuring into sound reproduction.

It does in several ways. First, the actual signal entropy can be measured: look at the "Spectral Flatness Measure" as a measure of redundancy, for instance. Coding toward that bound involves determining the redundancy and eliminating it by smart processing; for most waveforms, something like LPC or transform coding attempts to get close to the actual signal entropy. (No, you can't ever get all the way there without loss, nope.)
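As a sketch (assuming numpy/scipy; the segment length is arbitrary and nothing here is standardized):

Code:
# Sketch of the Spectral Flatness Measure (assuming numpy/scipy): the
# ratio of the geometric mean of the power spectrum to its arithmetic
# mean. Near 1.0 = flat, noise-like, little redundancy to exploit;
# near 0.0 = peaky, tonal, highly redundant and very predictable.
import numpy as np
from scipy.signal import welch

def spectral_flatness(x, fs, nperseg=4096):
    _, psd = welch(x, fs=fs, nperseg=nperseg)
    psd = psd[psd > 0]                 # ignore empty bins
    return np.exp(np.mean(np.log(psd))) / np.mean(psd)

For a stationary Gaussian source, the best achievable prediction gain is, roughly, the reciprocal of this number, which is why it works as a redundancy measure.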

There are things like 'zip' and such that can also compress an audio signal (inefficiently, by exploiting some of the redundancy), etc.

Second, it is possible to measure the "perceptual entropy". Wow, that goes back. Way back: https://ieeexplore.ieee.org/document/197157

But in general, everything has an information rate or total content. And it's often quite counterintuitive how that works out.

Which signal do you think has higher entropy, a sine wave or white noise?
 

pozz

Слава Україні
Forum Donor
Editor
Joined
May 21, 2019
Messages
4,036
Likes
6,827
If we're talking gear, frequency response captures areas of energy storage (resonances, either as peaks or cancellations), which could be called entropic.
 

j_j

Major Contributor
Audio Luminary
Technical Expert
Joined
Oct 10, 2017
Messages
2,281
Likes
4,787
Location
My kitchen or my listening room.
If we're talking gear, frequency response captures areas of energy storage (resonances, either as peaks or cancellations), which could be called entropic.

Um, nonlinear modifications reduce information content. Frequency shaping, with noise floor involved, likewise. But information entropy is a very precise measure, please.
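A toy check of the first point (assuming numpy; the 8-bit quantization and the clip level are arbitrary): a deterministic one-to-one map leaves the entropy of a discrete signal unchanged, but a many-to-one map like hard clipping merges previously distinct values and strictly reduces it.

Code:
# Hard clipping is many-to-one, so it strictly reduces the Shannon
# entropy of the empirical sample distribution.
import numpy as np

def entropy_bits(samples):
    _, counts = np.unique(samples, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000) * 0.25           # "analog" signal
q = np.round(np.clip(x, -1, 1) * 127).astype(int)   # 8-bit quantize
clipped = np.clip(q, -32, 32)                       # clip at ~1 sigma

print(f"quantized: {entropy_bits(q):.2f} bits/sample")
print(f"clipped:   {entropy_bits(clipped):.2f} bits/sample")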
 

paulraphael

Active Member
Joined
Dec 18, 2020
Messages
262
Likes
367
Location
Brooklyn, NY
Which signal do you think has higher entropy, a sine wave or white noise?

Well, you've already announced this as a trick question. Let's skip my answer, so you can get straight to the more interesting business of explaining why I'm wrong? :)
 

Inner Space

Major Contributor
Forum Donor
Joined
May 18, 2020
Messages
1,285
Likes
2,938
To me, detail is coherent low-level information intended for reproduction, but which can be masked or obscured by system noise or distortion, and especially by the listening room's noise floor. Thus it's measurable by inference: the higher the system's cumulative SINAD, and especially the lower the room noise, the better your chance of hearing it, if it's there. Uneven FR can either help or hurt, by either pushing forward or burying the region in question. I know nothing about entropy, except the general nature of my life.
 

pozz

Слава Україні
Forum Donor
Editor
Joined
May 21, 2019
Messages
4,036
Likes
6,827
Um, nonlinear modifications reduce information content. Frequency shaping, with noise floor involved, likewise. But information entropy is a very precise measure, please.
So the term can't be used loosely in that sense? Don't informational outcomes describe entropy, given that there's a clear relationship, for example, between the (known) electrical signal and the final (variable) acoustic measurement? Or is that too imprecise?
 

Wes

Major Contributor
Forum Donor
Joined
Dec 5, 2019
Messages
3,843
Likes
3,790
There's a Maxwell's Demon joke in here somewhere...
 

Alice of Old Vincennes

Major Contributor
Joined
Apr 5, 2019
Messages
1,426
Likes
920
Same here. I never thought about it in the context of this thread. Four decades in the practice, and, I suppose, a significant contribution. Physicists should include our efforts in their calculations.
 

posvibes

Senior Member
Joined
Jul 4, 2020
Messages
362
Likes
490
Does the entropy extend as far as our brain and consciousness?
 
OP
edahl

Senior Member
Joined
Jan 18, 2021
Messages
398
Likes
328
White noise is high entropy I do believe.

High entropy passwords are most random, most like noise.

A sine wave is very ordered.

j_j

Major Contributor
Audio Luminary
Technical Expert
Joined
Oct 10, 2017
Messages
2,281
Likes
4,787
Location
My kitchen or my listening room.
White noise is high entropy I do believe.

High entropy passwords are most random, most like noise.

A sine wave is very ordered.

Indeed. A very clean sine wave is very low entropy, because it is very, very redundant. A flat spectrum maximizes the entropy.
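And a quick numeric check of exactly that, reusing the spectral flatness sketch from earlier in the thread (assuming numpy/scipy; redefined here so the snippet stands alone):

Code:
# Geometric mean of the PSD over its arithmetic mean.
import numpy as np
from scipy.signal import welch

def spectral_flatness(x, fs, nperseg=4096):
    _, psd = welch(x, fs=fs, nperseg=nperseg)
    psd = psd[psd > 0]
    return np.exp(np.mean(np.log(psd))) / np.mean(psd)

fs = 48_000
t = np.arange(fs) / fs
print(f"1 kHz sine:  {spectral_flatness(np.sin(2*np.pi*1000*t), fs):.4f}")
print(f"white noise: {spectral_flatness(np.random.randn(fs), fs):.4f}")
# typical result: sine close to 0 (very redundant), noise close to 1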
 