
Audio Bit Rates

JayGilb

Major Contributor
Joined
Jul 22, 2021
Messages
1,371
Likes
2,306
Location
West-Central Wisconsin
Absolutely!

I often wonder a bit when people talk about their "preference" in audio gear. Is it something that's more suited to their personality and/or anatomy? Or is it simply something that feels familiar to them? I wouldn't be surprised if, in a lot of cases, it's the latter.

I think that's also one of the reasons why the high-end industry is so enamoured with the concept of "burn-in". They sell it as a physical phenomenon, but in reality it just gives the customer the time they need to get past the feeling of "This is unfamiliar to me, and therefore I don't like it" (in the cases where there is an actual audible difference).
As well as getting them past the warranty.
 

ribonucleic

Active Member
Joined
Mar 22, 2023
Messages
215
Likes
230
My 55-year-old ears cannot distinguish V0 MP3 from FLAC, even in classical music. Maybe if I had a super-rig and strained my faculties to the utmost... but even then I doubt it.

It will always make sense to archive in lossless. But streaming at anything over 320 kbps AAC/MP3 is just wasted bandwidth.
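
For anyone who wants to test this claim on their own ears, below is a minimal sketch of a self-run ABX trial in Python. It assumes the soundfile and sounddevice packages and two level-matched WAV decodes of the same track; the file names are placeholders. A dedicated tool such as foobar2000's ABX comparator handles switching and logging far better - this just shows the shape of the test and its scoring.

```python
# A rough home ABX trial: play A, then B, then X (randomly A or B),
# ask which X was, and report a one-sided binomial p-value at the end.
# Assumes the soundfile and sounddevice packages; file names are placeholders.
import random
from math import comb

import sounddevice as sd
import soundfile as sf

def play(path):
    data, rate = sf.read(path)
    sd.play(data, rate)
    sd.wait()

def abx(path_a, path_b, trials=16):
    correct = 0
    for i in range(trials):
        x_is_a = random.random() < 0.5
        print(f"Trial {i + 1}: A, then B, then X")
        play(path_a)
        play(path_b)
        play(path_a if x_is_a else path_b)
        guess = input("Was X 'a' or 'b'? ").strip().lower()
        correct += (guess == "a") == x_is_a
    # Probability of scoring this well or better by guessing alone
    p = sum(comb(trials, k) for k in range(correct, trials + 1)) / 2**trials
    print(f"{correct}/{trials} correct, one-sided p = {p:.4f}")

abx("track_lossless.wav", "track_mp3_v0_decoded.wav")
```

With 16 trials, 12 or more correct gives p < 0.05, the usual bar for claiming an audible difference.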
 

eddy555

Member
Joined
May 29, 2021
Messages
24
Likes
13
When playing MP3s over LDAC, is it best to keep it on adaptive or switch to the lowest bitrate setting?
 

Sokel

Master Contributor
Joined
Sep 8, 2021
Messages
5,827
Likes
5,756
This comes from professional test files (Merging or Korg, if I remember correctly; I got them a long time ago).
I wonder about the need for that crazy bit rate:

[attached screenshots: bit rate.PNG, bit rate1.PNG]

(I posted something similar before, but no answer so far.)
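
For context, the bit rate of an uncompressed PCM file is just sample rate × bit depth × channel count, so high-rate test files get enormous quickly. A quick sketch, using assumed DXD-style figures rather than anything read from the screenshots above:

```python
# Uncompressed PCM bit rate = sample rate * bit depth * channels.
# Figures below are an assumed DXD-style example, not taken from the screenshots.
sample_rate = 352_800  # Hz
bit_depth = 24         # bits per sample
channels = 2

bitrate = sample_rate * bit_depth * channels
print(f"{bitrate / 1e6:.3f} Mbit/s")  # -> 16.934 Mbit/s
```

For comparison, a 320 kbps MP3 is about 0.32 Mbit/s.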
 

rickmurphy

Member
Forum Donor
Joined
May 20, 2023
Messages
19
Likes
7
I'm not convinced that blind ABX listening tests fully reveal listeners' ability to differentiate encodings.

Does anyone have a reference to psycho-acoustic research on listening tests where users are first trained on what to listen for, then tested in a controlled environment? I suspect that research would complement the blind tests.

In my own experience when first mixing mid-side stereo in a sound field, the plugin UI allowed me both 1) a bypass, so I could A/B repeatedly; and 2) an interactive click-and-drag display of the width and depth in the sound field, so I could both see and hear the change. By training myself with eyes and ears, I developed an intuition about what I was hearing.

So the research question would be: if listeners are trained in what to listen for, is there a statistically significant difference?

My Google search criteria are not returning results that tell me this research has been conducted.
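
As an aside, the width control rickmurphy describes is presumably some variant of the standard mid-side transform; here is a minimal NumPy sketch of what such a control computes (my own construction for illustration, not the plugin's actual code):

```python
# Standard mid-side width control:
#   mid  = (L + R) / 2,  side = (L - R) / 2
# Scaling side before decoding back to L/R narrows or widens the image.
import numpy as np

def adjust_width(stereo: np.ndarray, width: float = 1.0) -> np.ndarray:
    """stereo: (n, 2) float array. width: 0 = mono, 1 = unchanged, >1 = wider."""
    left, right = stereo[:, 0], stereo[:, 1]
    mid = (left + right) / 2
    side = (left - right) / 2 * width
    return np.column_stack((mid + side, mid - side))
```

With width = 1 the output equals the input, so the bypass A/B rickmurphy mentions is effectively a comparison against width = 1.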
 

antcollinet

Master Contributor
Joined
Sep 4, 2021
Messages
7,408
Likes
12,291
Location
UK/Cheshire
I'm not convinced that blind ABX listening tests fully reveal listeners' ability to differentiate encodings.

Does anyone have a reference to psycho-acoustic research on listening tests where users are first trained on what to listen for, then tested in a controlled environment? I suspect that research would complement the blind tests.

In my own experience when first mixing mid-side stereo in a sound field, the plugin UI allowed me both 1) a bypass, so I could A/B repeatedly; and 2) an interactive click-and-drag display of the width and depth in the sound field, so I could both see and hear the change. By training myself with eyes and ears, I developed an intuition about what I was hearing.

So the research question would be: if listeners are trained in what to listen for, is there a statistically significant difference?

My Google search criteria are not returning results that tell me this research has been conducted.
Our host Amir has participated in at least one such test - though I can't find the reference to it. Perhaps someone else knows where it is.

But here is the thing for me. If you have to be trained to listen for the tells - and even then they are so tiny that you have to listen carefully, switching between A and B and back again to try to find them, and even then it is really difficult to tell...

Does that tiny difference really matter in your everyday listening?
 

voodooless

Grand Contributor
Forum Donor
Joined
Jun 16, 2020
Messages
10,221
Likes
17,798
Location
Netherlands
Our host Amir has participated in at least one such test - though I can't find the reference to it. Perhaps someone else knows where it is.
Even better, @amirm headed the codec division at Microsoft. He can tell you plenty about ABX testing and training for codec evaluation.
 

rickmurphy

Member
Forum Donor
Joined
May 20, 2023
Messages
19
Likes
7
Even better, @amirm headed the codec division at Microsoft. He can tell you plenty about ABX testing and training for codec evaluation.
Looking forward to it. I have enjoyed his hardware reviews over the years.

And I am a retired computer scientist, so we may have something other than an interest in audio science in common.
 

rickmurphy

Member
Forum Donor
Joined
May 20, 2023
Messages
19
Likes
7
Our host Amir has participated in at least one such test - though I can't find the reference to it. Perhaps someone else knows where it is.

But here is the thing for me. If you have to be trained to listen for the tells - and even then they are so tiny that you have to listen carefully, switching between A and B and back again to try to find them, and even then it is really difficult to tell...

Does that tiny difference really matter in your everyday listening?
Thanks for checking for references.

You are introducing subjective judgments into the discussion, which I avoid. I am a retired computer scientist.

Links to research papers are welcome. I look forward to hearing about the test Amir participated in.
 

antcollinet

Master Contributor
Joined
Sep 4, 2021
Messages
7,408
Likes
12,291
Location
UK/Cheshire
Looking forward to it. I have enjoyed his hardware reviews over the years.

And I am a retired computer scientist, so we may have something other than an interest in audio science in common.
His main information about blind testing is here. I think he references the tests he has participated in, but being a video, it is difficult to skim-read to check :).


Also - re your second post. There is nothing wrong with subjective judgements based on objective information. That is how most objective information is used.


(Edited to add video link about 1.5 hours later :( )
 

rickmurphy

Member
Forum Donor
Joined
May 20, 2023
Messages
19
Likes
7
His main information about blind testing is here. I think he references the tests he has participated in, but being a video, it is difficult to skim-read to check :).

Thank you for providing the reference to the Doing it Wrong video.

It's an informal review of experimental design, but it does not address the specific issue of whether there's a statistically significant difference between trained subjects and untrained subjects in codec evaluation.

I also read Amir's commentary on the Carlson et al. case study, "Two Amplifiers Proven to Sound Different."


As published, the case study is not the strongest experimental design.

Good news! I've identified some research papers that may well address subject training in codec evaluation.

I'll need a few weeks to survey some literature, so I may be unavailable for a while.

Also - re your second post. There is nothing wrong with subjective judgements based on objective information. That is how most objective information is used.


(Edited to add video link about 1.5 hours later :( )

I didn't say there was anything wrong with subjective judgments. I said I avoid them.
 

rickmurphy

Member
Forum Donor
Joined
May 20, 2023
Messages
19
Likes
7
Thank you for providing the reference to the Doing it Wrong video.

It's an informal review of experimental design, but it does not address the specific issue of whether there's a statistically significant difference between trained subjects and untrained subjects in codec evaluation.

I also read Amir's commentary on the Carlson et al. case study, "Two Amplifiers Proven to Sound Different."


As published, the case study is not the strongest experimental design.

Good news! I've identified some research papers that may well address subject training in codec evaluation.

I'll need a few weeks to survey some literature, so I may be unavailable for a while.



I didn't say there was anything wrong with subjective judgments. I said I avoid them.

While my research is ongoing, I want to share a few early findings and ask a few questions.

1. This video on Audio Blind Testing and Listener Training provides at least some anecdotal evidence that test results from a "critical" listener who knows what to listen for can differ from test results of naive or untrained listeners:


Of course, anecdotal evidence leaves us short of a sound inference. Does anyone have a reference to papers that provide evidence of a statistically significant difference between naive and critical listeners in a controlled study? (A sketch of the sort of statistic such a comparison involves appears after this post.)

2. Any inferences following from the public listening tests - Kamedo2 Multiformat and Archimago Musings - cited in this thread are subject to limitations of experimental design. We can learn a few things about how the Kamedo2 Multiformat test was conducted here https://listening-test.coresv.net/, but there's no discussion of methodology.

Can anyone provide a reference to further information on the Kamedo2 Multiformat design and methodology?

Archimago does provide some discussion of methodology, with admissions, here: http://archimago.blogspot.com/2013/02/high-bitrate-mp3-internet-blind-test_2.html.

3. ITU-R BS.1116, §3 "Selection of Listening Panels", restricts listening panels to expert listeners only, and they are subject to pre- and post-screening. See also Attachments 1, 2 and 3 to Annex 1 for further qualification of listener expertise.


ITU-R BS.1116 is useful both as a standard for listening tests and also to qualify representations made about listening tests.

I look forward to reporting more findings in the coming weeks.
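
On rickmurphy's point 1, here is a minimal sketch of the comparison being asked about: whether trained listeners' ABX hit rate beats untrained listeners' at a statistically significant level. A Fisher exact test on the 2x2 table of correct/incorrect answers is one standard choice; the counts below are invented for illustration, not taken from any study cited in this thread.

```python
# Do trained listeners outperform untrained ones on the same ABX material?
# Fisher exact test on a 2x2 table of correct/incorrect counts.
# All counts are hypothetical, for illustration only.
from scipy.stats import fisher_exact

trained_correct, trained_wrong = 42, 22      # hypothetical
untrained_correct, untrained_wrong = 29, 35  # hypothetical

table = [[trained_correct, trained_wrong],
         [untrained_correct, untrained_wrong]]
odds_ratio, p = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, one-sided p = {p:.4f}")
```

A proper study would also control for material, playback chain, and listener screening per BS.1116; this only shows the arithmetic of the group comparison.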
 

HarmonicTHD

Major Contributor
Forum Donor
Joined
Mar 18, 2022
Messages
3,326
Likes
4,829
While my research is ongoing, I want to share a few early findings and ask a few questions.

1. This video on Audio Blind Testing and Listener Training provides at least some anecdotal evidence that test results from a "critical" listener who knows what to listen for can differ from test results of naive or untrained listeners:


Of course, anecdotal evidence leaves us short of a sound inference. Does anyone have a reference to papers that provide evidence of a statistically significant difference between naive and critical listeners in a controlled study?

2. Any inferences following from the public listening tests - Kamedo2 Multiformat and Archimago Musings - cited in this thread are subject to limitations of experimental design. We can learn a few things about how the Kamedo2 Multiformat test was conducted here https://listening-test.coresv.net/, but there's no discussion of methodology.

Can anyone provide a reference to further information on the Kamedo2 Multiformat design and methodology?

Archimago does provide some discussion of methodology, with admissions, here: http://archimago.blogspot.com/2013/02/high-bitrate-mp3-internet-blind-test_2.html.

3. ITU-R BS.1116, §3 "Selection of Listening Panels", restricts listening panels to expert listeners only, and they are subject to pre- and post-screening. See also Attachments 1, 2 and 3 to Annex 1 for further qualification of listener expertise.


ITU-R BS.1116 is useful both as a standard for listening tests and also to qualify representations made about listening tests.

I look forward to reporting more findings in the coming weeks.
To 1): see the studies by Toole and Olive. They tested all kinds of listeners (and much more).
 

rickmurphy

Member
Forum Donor
Joined
May 20, 2023
Messages
19
Likes
7
To 1): see the studies by Toole and Olive. They tested all kinds of listeners (and much more).
Thanks for the reference. There already appears to have been discussion of Olive's study of differences between trained and untrained listeners here:


I was able to find a limited email discussion of the Kamedo2 Multiformat public listening test here:


Notice that the author avoids answering the question "How are listeners selected for a listening test?" Rather, he redirects the discussion to low anchors. There's also some additional discussion of limitations in experimental controls.
 

rickmurphy

Member
Forum Donor
Joined
May 20, 2023
Messages
19
Likes
7
Notice that the Joshua Reiss paper concludes both that there is a statistically significant ability to discriminate hi-res audio sources and that training confers a "far more significant" ability to discriminate. See section 4.1.


Because the paper is a meta-analysis, it's a useful survey of prior studies, and there's plenty of discussion of experimental design.
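
One way to see the force of the meta-analytic approach: small ABX studies that are individually inconclusive can be jointly significant once trials are pooled. A toy illustration with invented counts follows (this is naive pooling for illustration only, not the formal machinery an actual meta-analysis like Reiss's uses):

```python
# Toy illustration of pooling: three small ABX studies, each inconclusive
# on its own, reach joint significance once trials are combined.
# (Naive pooling for illustration; counts are invented, not Reiss's data.)
from math import comb

def binom_p(correct: int, trials: int) -> float:
    """One-sided probability of scoring >= correct under 50% guessing."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2**trials

studies = [(9, 14), (10, 16), (8, 12)]  # (correct, trials), hypothetical
for c, n in studies:
    print(f"{c}/{n}: p = {binom_p(c, n):.3f}")   # each around p ~ 0.2

pooled_c = sum(c for c, _ in studies)            # 27
pooled_n = sum(n for _, n in studies)            # 42
print(f"pooled {pooled_c}/{pooled_n}: p = {binom_p(pooled_c, pooled_n):.3f}")  # ~ 0.04
```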
 
Joined
Oct 8, 2018
Messages
60
Likes
34
Location
Singapore
My 55-year-old ears cannot distinguish V0 MP3 from FLAC, even in classical music. Maybe if I had a super-rig and strained my faculties to the utmost... but even then I doubt it.

It will always make sense to archive in lossless. But streaming at anything over 320 kbps AAC/MP3 is just wasted bandwidth.
Your equipment's resolution matters too...

For me: Qobuz at up to 192 kHz on the paid tier, and SuperStereo internet stations at 96 kHz free of charge, when I demo my home and car audio. Cheers
 