
God of SINAD vs. reality we get from most available music files

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,767
Likes
37,628
How do you change the minimum of the vertical axis when displaying in dB? The default for me was -60 dB and I can't figure out how to change it. Probably an obvious thing, but I can't find the setting.
Under Preferences, change the meter range. Hitting the drop-down gives several choices.
[screenshot: the meter range drop-down under Preferences]
 

j_j

Major Contributor
Audio Luminary
Technical Expert
Joined
Oct 10, 2017
Messages
2,282
Likes
4,790
Location
My kitchen or my listening room.
Quite illuminating; it shows that by far the most limiting factor in our audio systems is most likely the recordings.
Well, you have to include production and the demands of the "make it loud" forces.

But there is a great deal of obscenity involved.
 

Palladium

Addicted to Fun and Learning
Joined
Aug 4, 2017
Messages
666
Likes
816
Usability and price are far more important criteria than the last few dB of SINAD, IMO.
 

j_j

Major Contributor
Audio Luminary
Technical Expert
Joined
Oct 10, 2017
Messages
2,282
Likes
4,790
Location
My kitchen or my listening room.
Usability and price are far more important criteria than the last few dB of SINAD, IMO.

Reality of modern listening rooms:

A quiet room nowadays may hit 30 dBA or so.

Most speakers might reach 120 dB SPL if you're trying to go deaf.

120 − 30 = 90 dB available.

Now, this is a bit misleading, because what one must consider instead is the noise level vs. frequency. PCM is either flat or noise-shaped to fit the upper part of the absolute threshold curve. Room noise is often a lot of LF plus white noise.
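To make the per-frequency point concrete, here is a minimal sketch of that headroom arithmetic extended per octave band. The room-noise numbers are purely illustrative assumptions standing in for measurements, not data:
Code:
# Minimal sketch: per-band headroom = speaker max SPL minus room noise
# in that band. All numbers are illustrative assumptions, not data.
MAX_SPL_DB = 120.0  # "trying to go deaf" speaker limit from above

# Hypothetical room-noise levels per octave band (dB SPL): a lot of
# LF energy plus a roughly flat floor at higher frequencies.
ROOM_NOISE_DB = {
    63: 45.0, 125: 40.0, 250: 35.0, 500: 30.0,
    1000: 27.0, 2000: 25.0, 4000: 24.0, 8000: 24.0,
}

for band_hz, noise_db in ROOM_NOISE_DB.items():
    print(f"{band_hz:>5} Hz: {MAX_SPL_DB - noise_db:5.1f} dB available")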

In the best of all possible worlds, I would have data available. Does anyone want to fund the research project?
 

danadam

Addicted to Fun and Learning
Joined
Jan 20, 2017
Messages
994
Likes
1,545
I don't know how valid this is, but I compared the spectrum of quiet passages with the spectrum of 16-bit dither. (Actually, it's spectral density, but I think it doesn't matter much, as long as we keep the same FFT parameters. Relative differences are what's interesting.)

I took the first track of BIS Sibelius - The Seven Symphonies and converted it from 24/96 to 24/44.1 (because the other examples are from CDs and I wanted the same sampling rate). Here's 1 second at 0:47.7 vs 1 second of 16-bit and 15-bit dither:
[image: d.png — PSD of the quiet passage vs 16-bit and 15-bit dither]
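If anyone wants to reproduce this kind of comparison, here is a rough Python sketch. The file name is hypothetical and the Welch parameters are a guess at "the same parameters of FFT", not the exact settings used above:
Code:
# Sketch: PSD of a quiet excerpt vs the PSD of 16-bit TPDF dither.
# File name and FFT parameters are illustrative assumptions.
import numpy as np
import soundfile as sf
from scipy.signal import welch

x, fs = sf.read("sibelius_quiet_1s.flac")  # hypothetical 1 s excerpt
if x.ndim > 1:
    x = x[:, 0]  # analyze one channel

# TPDF dither at the 16-bit level: sum of two uniforms, +/-1 LSB peak.
lsb = 2.0 / 2**16  # LSB size for a +/-1.0 full-scale signal
rng = np.random.default_rng(0)
dither = (rng.uniform(-0.5, 0.5, x.size) + rng.uniform(-0.5, 0.5, x.size)) * lsb

f, pxx_sig = welch(x, fs=fs, nperseg=8192)
_, pxx_dit = welch(dither, fs=fs, nperseg=8192)
for i in range(0, f.size, f.size // 8):
    print(f"{f[i]:8.0f} Hz  signal {10*np.log10(pxx_sig[i] + 1e-30):7.1f} dB"
          f"  dither {10*np.log10(pxx_dit[i] + 1e-30):7.1f} dB")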


In the attachment you can find a 5-second excerpt with that 1 second in the middle, plus a version that was converted to 16/44.1 (default SoX dither) and back to 24/96. If you crank up the volume, you can hear the difference. I also included 7 seconds of crescendo from 7:50, so you can check whether you could comfortably listen to the whole track at that volume ;) .

Btw, the whole track reaches 0 dBFS:
Code:
Sample peak,          RMS
 -0.00 dBFS,  -24.55 dBFS - the whole track
-63.59 dBFS,  -73.18 dBFS - 1 second at 0:47.7
[image: sibelius.png — level overview of the whole track]
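Stats like these are easy to compute; a minimal sketch (the file name is hypothetical, dBFS relative to ±1.0 full scale):
Code:
# Sketch: sample peak and RMS in dBFS, assuming float samples
# normalized to +/-1.0 full scale. File name is hypothetical.
import numpy as np
import soundfile as sf

x, fs = sf.read("sibelius_track1.flac")
x = x.reshape(-1)  # pool all channels

peak = 20 * np.log10(np.max(np.abs(x)) + 1e-30)
rms = 20 * np.log10(np.sqrt(np.mean(x**2)) + 1e-30)
print(f"Sample peak: {peak:6.2f} dBFS   RMS: {rms:6.2f} dBFS")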


For comparison, some CD recordings (well, Hilary Hahn mostly :) ):
[images: a.png, b1.png, b2.png, c.png, e.png — the same comparison for several CD recordings]
 

Attachments

  • sibelius.zip (4 MB)

j_j

Major Contributor
Audio Luminary
Technical Expert
Joined
Oct 10, 2017
Messages
2,282
Likes
4,790
Location
My kitchen or my listening room.
I don't know how valid this is, but I compared the spectrum of quiet passages with the spectrum of 16-bit dither. […]
The Sibelius is all bass! :)

Now do you have the PSD of the room?
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,767
Likes
37,628
I don't know how valid this is, but I compared the spectrum of quiet passages with the spectrum of 16-bit dither. […]
So your examples don't look much different than Stuart's. It would appear to me that around the 3-5 kHz range you might get down to the 16-bit dither floor, or just below it, in quiet passages. And of course at all other frequencies our ear is less sensitive. So okay, maybe with state-of-the-art recordings you need something like 18 bits just to get that 3-5 kHz range above the noise. I would not be able to say such a thing is inaudible, but I would think it is going to be a minor, small qualitative difference, not a large one. Also, very few recordings will match something like the BIS example you have used. A different BIS release is also the quietest one I have.
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,767
Likes
37,628
An example of a different kind of recording, one from Water Lily Acoustics: Ry Cooder and V.M. Bhatt. Minimalist, with a pair of ribbon microphones into an EAR-modified all-tube reel-to-reel tape machine. A silent portion just before the music, with no filtering, is around -55 dBFS. I am showing it filtered with only 3-5 kHz left in; a little above 80 dB down at that point. It is an excellent-sounding recording.

[image: spectrum of the silent portion with only the 3-5 kHz band left in]
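The band-limiting step can be reproduced with a simple band-pass filter; a sketch (the file name and filter order are assumptions):
Code:
# Sketch: keep only the 3-5 kHz band and measure the RMS level of a
# silent portion there. File name and filter order are assumptions.
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfiltfilt

x, fs = sf.read("water_lily_silence.flac")  # hypothetical excerpt
if x.ndim > 1:
    x = x[:, 0]

sos = butter(4, [3000, 5000], btype="bandpass", fs=fs, output="sos")
y = sosfiltfilt(sos, x)
print(f"3-5 kHz band RMS: {20 * np.log10(np.sqrt(np.mean(y**2)) + 1e-30):.1f} dBFS")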
 

sam_adams

Major Contributor
Joined
Dec 24, 2019
Messages
1,001
Likes
2,444
@j_j, thanks for your contributions to the ASR community. Always a pleasure to have someone onboard with such impeccable credentials. I found the discussion over here quite enlightening and was wondering if you could share your MATLAB scripts with us.
 

jhaider

Major Contributor
Forum Donor
Joined
Jun 5, 2016
Messages
2,874
Likes
4,674
But if it is important to you, why don't you pick a player that can do it?

Because that sounds like punishment, not enjoyment.

To condense and adapt from a previous discussion: a source, especially a streaming source, is IMO the absolute worst place to put EQ. EQ here is defined as processing to mitigate flaws in the playback system, as opposed to tone controls to adjust the program material; tone controls do make sense to include at the source.

EQ is an obvious benefit, but in my view should be localized to each distinct playback chain. [note - this is different from, and I think better than, the earlier phrasing.] To put it in the source rather than in the individual playback chain requires the listener to unnecessarily waste time messing with stupid software, when she just wants to listen to music reproduced well.

Here's a reasonable hypothetical to illustrate why EQ should be as far down the chain as possible. Consider a home with the following:

-Three rooms for attentive listening: a family room with an immersive system, a formal salon with two good "full range" speakers, and a home office with a 2.1 channel nearfield setup as well as headphones. Plus various background audio zones (ceiling speakers, outdoor speakers, HomePods or Sonos, etc.), TWS earbuds, and so on.
-Three categories of content sources for which the audio is available on every system (streaming audio with their apps, YouTube, other streaming video services with their apps)
-Two local-only content sources: vinyl in the 2-channel salon and a digital disk spinner in the immersive family room

Here, if the transducer/room EQ resided in player software, it would not improve the fidelity of music or other content played from 80% of the potential sources. And even if source EQ were available in all five sources, one would have to mess with multiple different EQ UIs in real time, duplicating EQ settings for each listening environment, in order to equalize the transducers.

Let's assume one has low standards and only cares about fidelity for streaming audio. OK... what happens if you were listening on headphones that require drastic EQ in your home office, then decide to flip the AirPlay stream to the 2-channel system so you can share the track with someone else? The answer is PLEASE MAKE IT STOP!!!! Basically you've poisoned a listening session because there's a stupid button you have to click in the software UI to change the EQ. However, you just wanted to share some music, so messing with a stupid effing computer slipped your mind. (I'll grant you this problem could be avoided if the software were sophisticated enough to automatically switch EQ presets for each AirPlay stream, analogous to what the RME ADI-2 does for each input. But does that actually exist?)

By contrast, let's look at a more sensible approach:
-the immersive system and desktop systems have bass management and manual or automated EQ in their respective processor
-the stereo system has manual or automated EQ built into its preamp or integrated amp, and
-the headphones are powered, with correction in their own hardware, or the endpoint/DAC/headphone amp has selectable EQ presets if you have multiple headphones

With the sensible approach, one can seamlessly move music from source to source and system to system, and experience each in the best fidelity it can offer you without once having to remember to mess with software. You just set, forget, and enjoy - unless you want to change the processing for a given system. The problem is, the sensible approach requires sensible gear. And that's gear that does the necessary signal processing.
 

j_j

Major Contributor
Audio Luminary
Technical Expert
Joined
Oct 10, 2017
Messages
2,282
Likes
4,790
Location
My kitchen or my listening room.
@j_j, thanks for your contributions to the ASR community. Always a pleasure to have someone onboard with such impeccable credentials. I found the discussion over here quite enlightening and was wondering if you could share your MatLab scripts with us.

I believe they are out there somewhere. I just have to remember where. I have a C program I can't release that does this and more, sorry.

MATLAB has better plotting, though.

https://www.aes-media.org/sections/pnw/pnwrecaps/2018/oct2018/ talks about it. I'll continue to look for the MATLAB version. I must have it somewhere. :D
 

Thomas_A

Major Contributor
Forum Donor
Joined
Jun 20, 2019
Messages
3,469
Likes
2,466
Location
Sweden
It would be nice if you could show the tracks analyzed in MasVis. It is free and shows quite a nice summary of statistics.

 

Ra1zel

Addicted to Fun and Learning
Joined
Jul 6, 2021
Messages
536
Likes
1,055
Location
Poland
Because that sounds like punishment, not enjoyment. To condense and adapt from a previous discussion: a source, especially a streaming source, is IMO the absolute worst place to put EQ. […]
It would be nice to put those features further down the chain, but sadly, for now some things are still doable only on a PC, in the software domain. The only other solution I can think of is networked audio gear with advanced software to control all of this from a single workstation, but that's probably tedious anyway.
 

pjug

Major Contributor
Forum Donor
Joined
Feb 2, 2019
Messages
1,776
Likes
1,562
Back to the usable dynamic range of the Rickie Lee Jones track... it seems like in the production they gradually reduced the level of the noise in the fadeout by something like 40 dB, maybe more. I estimated it by looking at the magnitude at 60 Hz, which I don't think has anything to do with the music in the fadeout. So, without considering what you need to play the end of the fadeout cleanly, I guess the usable dynamic range would be quite a bit less, no more than 50 dB or so. Maybe that is why some folks are balking at the idea that this is a >70 dB dynamic range track?
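One way to make that estimate repeatable is to track the 60 Hz magnitude frame by frame through the fadeout; a sketch (the file name and frame length are assumptions):
Code:
# Sketch: follow the magnitude of the FFT bin nearest 60 Hz through a
# fadeout to estimate how far the noise is pulled down. File name and
# frame length are illustrative assumptions.
import numpy as np
import soundfile as sf

x, fs = sf.read("rlj_fadeout.flac")  # hypothetical excerpt
if x.ndim > 1:
    x = x[:, 0]

frame = fs  # 1-second frames
k = int(round(60 * frame / fs))  # bin nearest 60 Hz
for start in range(0, x.size - frame, frame):
    seg = x[start:start + frame] * np.hanning(frame)
    mag = np.abs(np.fft.rfft(seg))[k]
    print(f"t = {start / fs:5.1f} s   60 Hz: {20 * np.log10(mag + 1e-30):7.1f} dB")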
 

Marc v E

Major Contributor
Joined
Mar 9, 2021
Messages
1,106
Likes
1,607
Location
The Netherlands (Holland)
Because that sounds like punishment, not enjoyment. To condense and adapt from a previous discussion: a source, especially a streaming source, is IMO the absolute worst place to put EQ. […]
I get your point. In fact, this is exactly what I am struggling with. The decision between software EQ and hardware EQ basically revolves around the question: do I want all sources in a system to benefit from EQ, or just one, the music streamer?

I also understand why for many this would pose less of a problem, as many of us use an AVR for home cinema, which almost always has EQ in it, plus a separate hi-fi solution.

In my situation, however, I want to combine video and audio, which leaves me with 4 options:
1) buy a miniDSP to get EQ for audio, movies, YouTube, etc. 500 to 1500 euros
2) buy Roon and accept EQ only for streaming audio. 600 euros plus 500 euros for a NUC
3) fix EQ in every source possible with software. No solution for YouTube and movies, as I use 2-channel instead of an AVR.
4) buy an AVR with less-than-transparent specs. Probably the best available for 2-channel is the NAD M33 for 6000 euros.

As I look at it now, I'm left with only one sensible solution, option 1, which is a shame. The main problem, I think, is that many hi-fi companies are small and not capable of implementing EQ transparently. Let alone finding audiophiles willing to pay for it...
 

mdsimon2

Major Contributor
Forum Donor
Joined
Oct 20, 2020
Messages
2,515
Likes
3,369
Location
Detroit, MI
3) fix EQ in every source possible with software. No solution for YouTube and movies, as I use 2-channel instead of an AVR.

Why no solution for this (especially for something as simple as two-channel)? I run two systems with software EQ; both have multiple physical inputs (analog, TOSLINK, etc.) and are integrated with video sources (Apple TV).

What specific input / output functionality are you looking for?

Michael
 

Marc v E

Major Contributor
Joined
Mar 9, 2021
Messages
1,106
Likes
1,607
Location
The Netherlands (Holland)
Why no solution for this (especially for something as simple as two-channel)? […] What specific input / output functionality are you looking for?
Inputs: 2x S/PDIF optical, S/PDIF electrical (coaxial), USB
Outputs: RCA
In essence, a transparent DAC with DSP that I can also use as a preamp.

Both for streaming audio, playing Blu-rays and CDs, and occasional YouTube viewing. The last preferably through HDMI, but viewing via the TV app with sound through S/PDIF is more practical, I think. Preferably with a streamer included, but it's not a deal breaker if it's not there.
 

danadam

Addicted to Fun and Learning
Joined
Jan 20, 2017
Messages
994
Likes
1,545
The Sibelius is all bass! :)
Yes, but isn't that expected in the absence of noise shaping? It actually looks like something between pink and brown noise on a log frequency scale:
[image: with.noise.png — quiet-passage spectrum overlaid with pink and brown noise, log frequency scale]
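For reference, pink (PSD ∝ 1/f) and brown (PSD ∝ 1/f²) comparison noise can be synthesized by shaping white noise in the frequency domain; a sketch:
Code:
# Sketch: synthesize pink (PSD ~ 1/f) and brown (PSD ~ 1/f^2) noise by
# shaping the spectrum of white noise, for comparison overlays.
import numpy as np

fs, n = 44100, 44100
rng = np.random.default_rng(0)
spec = np.fft.rfft(rng.standard_normal(n))
f = np.fft.rfftfreq(n, d=1 / fs)
f[0] = f[1]  # avoid dividing by zero at DC

pink = np.fft.irfft(spec / np.sqrt(f), n=n)   # amplitude ~ f^-1/2
brown = np.fft.irfft(spec / f, n=n)           # amplitude ~ f^-1

for name, y in (("pink", pink), ("brown", brown)):
    y = y / np.max(np.abs(y))  # normalize before plotting/export
    print(name, "RMS:", float(np.sqrt(np.mean(y**2))))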


Now do you have the PSD of the room?
No, but I hope it's clear I posted it just as a curiosity, not as support for the idea that it makes any meaningful difference when listening :)
 

j_j

Major Contributor
Audio Luminary
Technical Expert
Joined
Oct 10, 2017
Messages
2,282
Likes
4,790
Location
My kitchen or my listening room.
Yes, but isn't that expected in the absence of noise shaping? It actually looks like something between pink and brown noise on a log frequency scale. […]

No, but I hope it's clear I posted it just as a curiosity, not as support for the idea that it makes any meaningful difference when listening :)

No. Proper dither in the absence of noise shaping should be flat-out white. Most emphatically NOT LF.

But the PSD of the room is the real key. THAT is more likely to be pink/brown/something not described by a simple distribution.

I would suspect the Sibelius, if it's on a quiet part, is A/C noise, then. Wondering where the dither got to.
 

danadam

Addicted to Fun and Learning
Joined
Jan 20, 2017
Messages
994
Likes
1,545
No. Proper dither in the absence of noise shaping should be flat-out white. […] I would suspect the Sibelius, if it's on a quiet part, is A/C noise, then. Wondering where the dither got to.
But this Sibelius, in contrast to my other examples, is a 24-bit recording. Wouldn't that push any dither way down, so that what's shown is the actual noise in the room/concert hall (wherever they recorded it)?
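A back-of-envelope check of that intuition (my arithmetic, not from the thread):
Code:
# Back-of-envelope: noise floor of a TPDF-dithered B-bit channel,
# relative to +/-1.0 digital full scale. TPDF dither has variance
# LSB^2/6 and quantization adds LSB^2/12, so total noise RMS = LSB/2.
import math

def dithered_floor_dbfs(bits: int) -> float:
    lsb = 2.0 / 2**bits
    return 20 * math.log10(lsb / 2)

print(f"16-bit: {dithered_floor_dbfs(16):.1f} dBFS")  # ~ -96 dBFS
print(f"24-bit: {dithered_floor_dbfs(24):.1f} dBFS")  # ~ -144 dBFS
# The quiet passage above measures -73.18 dBFS RMS, roughly 70 dB
# above the 24-bit floor, so the plotted noise is the hall, not dither.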
 