
Can the Android 48 kHz sample-rate conversion be measured in the form of distortion+noise? Has anyone done this?

I've been reading a lot about how people use bit-perfect drivers or players when using a phone as a DAC/amp, or as a transport feeding an external dongle DAC.

My understanding is that this is to avoid 44.1 kHz and other rates being converted to 48 kHz, as people say this degrades the quality of the signal.

As this is a science forum, is this something that anyone has measured? E.g. a distortion measurement with a bit-perfect driver vs conversion to 48 kHz?

The reason I ask is that I use 44.1 kHz FLACs, and if the difference is not measurable then I'm wondering whether it's worth using a bit-perfect driver/player app. I have heard many reviews/forum posts saying there is a difference, but I have not seen any measurements to show how bad it is.
 
I don't know if you can measure the conversion without going through the DAC, and of course there are no "bits" coming out of the DAC.

There WILL, of course, be digital differences that you MIGHT be able to measure as "distortion". But it's usually not audible.

The best experiment would be a blind ABX test, but you'd need a way of getting the unconverted 44.1 kHz audio out of the same hardware.

You can convert your FLACs to 48 kHz (with Audacity, or whatever) and do an ABX test, but that's different conversion software. If you keep the new 48 kHz FLACs to play on your phone, they will be about 10% larger.
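If you'd rather script the conversion than click through Audacity, here is a minimal sketch using the soundfile and python-soxr packages (both assumptions on my part; the filenames are hypothetical):

```python
import soundfile as sf  # assumption: the pysoundfile package is installed
import soxr             # assumption: the python-soxr package is installed

# Read a 44.1 kHz FLAC, resample to 48 kHz with a high-quality SRC,
# and write it back out as FLAC for the ABX comparison.
audio, rate = sf.read("track_44k1.flac")  # hypothetical filename
resampled = soxr.resample(audio, rate, 48000, quality="VHQ")
sf.write("track_48k.flac", resampled, 48000)
```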

I have heard many reviews/forum posts saying there is a difference, but I have not seen any measurements to show how bad it is.
You're right that this is one of the few scientific resources and most of these guys "don't believe in" blind listening tests. ;)

It's true that you should avoid "unnecessary" conversions, but sometimes it's useful or more convenient. Any digital volume change or EQ changes the bits. Bit perfection is "nice" because you know that nothing is altering the data.
 
Thanks, I agree a listening test is the easiest way.

I was thinking of a standard test where a sine wave is played from a phone; of course it would have to go through a DAC, either an external DAC or the phone's internal DAC. Then measure the distortion:
1st: with no driver, the phone's standard music player doing the 44.1 to 48 kHz sample-rate conversion;
2nd: "bit-perfect", with the UAPP driver or HiBy player playing 44.1 kHz.

I can see an immediate problem: if it turns out that using bit-perfect software makes almost no difference, it will damage the sales of the bit-perfect software.

I have a Motu M4 interface; I wonder if that is good enough to measure the distortion, or if it needs something more sensitive?
 
Are there Android-compatible digital interfaces? As in OTG/class-compliant USB-to-S/PDIF converters, for example. I seem to remember seeing one or two of these.

That way you could easily compare the two use cases: play a 44.1 kHz test file thru Android's native resampling, then thru a bit-perfect playback app, both into the audio analyser's digital input, then compare.
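Once you have the captured stream on disk, the comparison itself is easy to script. A sketch, assuming the numpy and soundfile packages and hypothetical filenames, that aligns the capture to the source and checks the bit-perfect path (for the resampled path the lengths and rates won't match, so there you'd look at the spectrum instead):

```python
import numpy as np
import soundfile as sf  # assumption: pysoundfile is installed

# Align a digital-loopback capture to the original test file, then
# check whether every sample matches (the bit-perfect case).
orig, fs = sf.read("test_44k1.wav", dtype="int32")  # hypothetical filenames
capt, _ = sf.read("capture.wav", dtype="int32")

# Estimate the capture latency by cross-correlating the first second
# of the source against the capture (assumes stereo files).
ref = orig[:fs, 0].astype(float)
corr = np.correlate(capt[:, 0].astype(float), ref, mode="valid")
offset = int(np.argmax(np.abs(corr)))

n = min(len(orig), len(capt) - offset)
diff = capt[offset:offset + n].astype(np.int64) - orig[:n].astype(np.int64)
print("bit-perfect" if not diff.any()
      else f"max sample error: {np.abs(diff).max()}")
```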
 
I hadn't thought of doing it in the digital domain. I think there are interfaces that have a digital out; one I found is the SMSL PO100 Pro.

What interests me is the claim that using a bit-perfect driver on a phone, with either the internal DAC or a USB-C DAC, improves the audio quality when listening to headphones.

I just spent a while googling for distortion analysis of sample-rate conversion and found nothing. The closest thing I found is this YouTube video of a guy resampling a file 400 times, then inverting the result against the original and finding virtually no audible differences. He is using good software on a Mac, though.


What I am finding is posts on forums recommending a certain bit-perfect driver, with other users replying "wow, what a difference" (which may or may not be genuine), then posts with people saying they can't tell any difference at all.

Here is one I thought was interesting:
"In theory changing sample rates can lead to some audio degradation, particularly near the cutoff frequency, due to the interpolation and filtering required to do it and if you're using very high speed/low quality resampling algorithms. It would be good to avoid multiple rounds of resampling to different sample rates, but a single resampling step on playback is going to be irrelevant."

This is the sort of analysis I was thinking of: doing it once with the Android 44.1→48 kHz conversion and once with a bit-perfect driver.
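That inversion test from the video is easy to reproduce as a round-trip null test. A sketch, again assuming the python-soxr and soundfile packages and a hypothetical filename; strictly it measures two conversions rather than one, and any residual will sit mostly near the cutoff:

```python
import numpy as np
import soundfile as sf  # assumption: pysoundfile is installed
import soxr             # assumption: python-soxr is installed

# Round trip 44.1 kHz -> 48 kHz -> 44.1 kHz, subtract from the
# original, and report the residual ("null") level in dBFS.
x, fs = sf.read("track_44k1.flac")  # hypothetical filename
y = soxr.resample(soxr.resample(x, fs, 48000), 48000, fs)

n = min(len(x), len(y))
residual = x[:n] - y[:n]
rms = np.sqrt(np.mean(residual ** 2))
print(f"residual level: {20 * np.log10(rms + 1e-12):.1f} dBFS")
```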


[Attached image: JCALLY JM20MAX headphone DAC portable phone adapter dongle measurement]
 
I suppose that although my Motu M4 will not be able to measure very low levels, it is quite a good interface, so if I cannot measure any difference between the sample-converted output and bit-perfect playback using the M4, the difference must be quite small. I might try that myself and see what happens.
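For the analogue route, the M4 capture can be boiled down to a rough THD+N figure with a simple FFT. A sketch, assuming numpy/soundfile and a recorded ~1 kHz sine (hypothetical filename); a proper analyzer does this far more carefully, but it should show whether the two playback paths differ at all:

```python
import numpy as np
import soundfile as sf  # assumption: pysoundfile is installed

# Crude THD+N: window the captured sine, FFT it, and compare power in
# the fundamental's main lobe against everything else up to 20 kHz.
x, fs = sf.read("m4_capture.wav")  # hypothetical filename
if x.ndim > 1:
    x = x[:, 0]                    # use the first channel
spec = np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2
freqs = np.fft.rfftfreq(len(x), 1 / fs)

fund = int(np.argmax(spec))                       # fundamental bin
signal = spec[max(fund - 3, 0) : fund + 4].sum()  # main lobe +/- 3 bins
noise = spec[freqs <= 20000].sum() - signal
print(f"THD+N: {10 * np.log10(noise / signal):.1f} dB")
```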
 
I'm pretty sure that with Neutron you can choose direct hardware access to the DAC or go via the usual audio path. So you could connect an external DAC, switch between the two profiles in Neutron, and see if you can hear a difference.

That might be hard to do blind unless you have someone willing to do the switching for you.
 
The closest thing I found is this YouTube video of a guy resampling a file 400 times, then inverting the result against the original and finding virtually no audible differences.
That's interesting. It also confirms the good old theory of multiple conversion cycles back and forth: in the first conversion from the better to the worse format (here: 48 to 44.1 kHz), there is information loss, which is inevitable. After that, the lost information obviously can't be retrieved, but what's more important is that additional steps/cycles don't and can't remove *further* information beyond what was initially lost.

It's very similar with lossy audio codecs such as MP3. Encoding removes certain parts of the original information, in that case frequency content above 15 kHz (IIRC). That's lost with the initial encoding step, but if you decode and encode again, the encoder again tries to remove the same info that is now already gone, and as a result removes nothing more.
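A toy numeric illustration of that idempotence idea, using coarse quantization rather than an actual MP3 codec just to keep it self-contained (a sketch of the principle, not a claim about any particular encoder):

```python
import numpy as np

# The first lossy pass discards information; repeating the *same*
# lossy step on its own output changes nothing further (idempotence).
x = np.random.default_rng(0).uniform(-1, 1, 8)
q1 = np.round(x * 127) / 127    # lossy: quantize to ~8-bit levels
q2 = np.round(q1 * 127) / 127   # same step again: output is identical
print(np.array_equal(q1, q2))   # True
```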

I tested this once. Took a lossless file of my own making (hobbyist Electro with big bass, sharp claps and sizzling hi-hats, which I was intimately familiar with because I made it), encoded it to 128 kbps MP3, then took the same file and ran it through ten encode-decode cycles. Then ABX-compared the 1x and 10x encoded versions.

Result, both with speakers and headphones: couldn't tell any difference whatsoever. It answered, for me personally, the (back then) rampant question in producer circles of whether you should or could use MP3 sources as samples in music production. Yes, you absolutely can, and shouldn't bother. If I, as the, ahem, "musician", can't tell a difference between one and ten coding cycles of a horribly lossy format, then neither can the average listener tell a lossless sample from an MP3-encoded one used somewhere in the whole mix, with several processing steps and endless amounts of effects.
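If anyone wants to reproduce that cycle test, here is a rough sketch that automates the encode/decode loop with ffmpeg (assumptions: ffmpeg with an MP3 encoder is on the PATH, and the filenames are hypothetical):

```python
import subprocess

# Run ten MP3 encode/decode generations: WAV -> 128 kbps MP3 -> WAV,
# feeding each decoded WAV into the next encode.
src = "original.wav"  # hypothetical input file
for i in range(10):
    subprocess.run(["ffmpeg", "-y", "-i", src,
                    "-b:a", "128k", f"cycle{i}.mp3"], check=True)
    src = f"cycle{i}.wav"
    subprocess.run(["ffmpeg", "-y", "-i", f"cycle{i}.mp3", src], check=True)
# cycle9.wav is the 10x-encoded version for the ABX comparison.
```

ABX-ing cycle0.wav against cycle9.wav would then reproduce the 1x-vs-10x comparison.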
 
That's a good explanation. I agree that once the file goes through one cycle, it makes sense that the modified file will not be affected again by the same process.

This has made me realise the limits of my understanding of how these things work. I'm sure there is a way to look at the software and mathematically work out what happens to the file, but that's too much for me, hence the idea of using standard analogue measurements: if the audio has low harmonic distortion, my understanding is that it has not been altered much from the original. The thing is, I bet there is more to it than that.

MP3 encoding must be a much more destructive process than sample-rate conversion. I just read a little about MP3 coding: it is based on the idea that the human ear is only sensitive to some of the information within audio, so it keeps the information we need to hear and strips out data that we cannot easily hear.

I would think that if this sample-rate conversion produces artefacts the human ear cannot hear, e.g. above 15 kHz, then it should be hard to tell whether the signal has been modified. My hearing seems to be pretty much gone above 10 kHz, so maybe it's not something I should worry about!

Drifting off topic here, but I know a lot of old gear had very compressed samples, like the first Roland digital synths, and the early samplers had terrible quality, so I don't think MP3 is any worse for sample use. With electronic music, my view is that it does not exist in the real world, so a hi-fi does not need to try to make it sound realistic; it's more about sounding good or interesting, and that can vary by taste. E.g. a bass-heavy speaker stack at a festival can sound very good even though the frequency response is far from flat.

I'm awaiting a dongle DAC to arrive in the post, maybe I will have a go at doing some measurements when it turns up.
 
I was thinking of a standard test where a sine wave is played from a phone, of course it would have to go through a DAC. Then measure the distortion.
I don't know if you can measure the conversion without going through the DAC, and of course there are no "bits" coming out of the DAC.
You can if the "DAC" is a USB-to-S/PDIF converter, which is then used as the analyzer's input signal.
 
IMO measuring resampler "impact" via DAC -> standard ADC will not show anything meaningful, as the analog chain will corrupt the results.

As @KSTR suggests, a USB-S/PDIF converter would keep the samples in the digital domain. Any bit-perfect S/PDIF receiver chain could record the stream, and this stream can then be either analyzed or compared with the original source for bit-perfection, etc.

As for the resampling: as noted before, we are at an audio science forum, hence some deeper insight would be handy.

The stock Android audio stack is open source and nicely searchable, for any Android version. A quick look at version 15 reveals that there are several configurable resamplers. The default resampler is DYN_MED_QUALITY (https://cs.android.com/android/platform/superproject/+/android15-qpr2-release:frameworks/av/media/libaudioprocessing/AudioResampler.cpp;l=166-174?q=DYN_MED_QUALITY&ss=android/platform/superproject), unless af.resampler.quality is set by the vendor when customizing their Android build. The meanings of the quality parameters: https://cs.android.com/android/platform/superproject/+/android15-qpr2-release:frameworks/av/media/libaudioprocessing/AudioResampler.cpp;l=217-257?q=DYN_MED_QUALITY&ss=android/platform/superproject

Parameters of DYN_MED_QUALITY https://cs.android.com/android/platform/superproject/+/android15-qpr2-release:frameworks/av/media/libaudioprocessing/AudioResamplerDyn.cpp;l=445?ss=android/platform/superproject

As for the property af.resampler.quality: setting it via adb setprop seems to work, but only on a rooted device. My rooted emulator did set it; my unrooted phone returned "access denied".
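For anyone scripting the comparison, those adb commands are easy to wrap. A sketch; the numeric quality value is an assumption based on the src_quality enum in the AudioResampler source linked above, where the dynamic resamplers sit at the top of the range:

```python
import subprocess

# Set the AOSP mixer resampler quality over adb (rooted device
# required, as noted above). New tracks should pick the property up
# when their resampler is created.
def set_resampler_quality(value: int) -> None:
    subprocess.run(["adb", "root"], check=True)  # restart adbd as root
    subprocess.run(["adb", "shell", "setprop",
                    "af.resampler.quality", str(value)], check=True)

# Assumption: 7 corresponds to DYN_HIGH_QUALITY in AudioResampler.h.
set_resampler_quality(7)
```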

The resampler quality could also be downgraded under large CPU load (https://cs.android.com/android/platform/superproject/+/android15-qpr2-release:frameworks/av/media/libaudioprocessing/AudioResampler.cpp;l=176-213?ss=android/platform/superproject), though this is unlikely given https://cs.android.com/android/platform/superproject/+/android15-qpr2-release:frameworks/av/media/libaudioprocessing/AudioResampler.cpp;l=146?q=qualityMHz&ss=android/platform/superproject

TL;DR: IMO every Android device can yield different results, due to different Android versions and different vendor setups.

Note: every Android playback app can ask for an "exclusive" bit-perfect mode, as explained in https://source.android.com/docs/core/audio/preferred-mixer-attr . IMO it's similar to WASAPI exclusive mode on Windows or ALSA hw:x access on Linux. Discussion e.g. https://forums.whathifi.com/threads/android-bit-perfect.130639/
 