
Using a Raspberry Pi as equaliser in between a USB source (iPad) and USB DAC

DeLub

I wrote this 'guide' in the hope it will be useful to others.

Introduction

I would like to connect my DAC and amp to my iPad so I can listen to Apple Music on my HD600. However, I want to EQ the sound, and that is not possible on iOS. That's where the Pi comes in: I put it in between the iPad and my DAC, take the sound from the iPad, EQ it, and output it to the DAC. This is possible on a Raspberry Pi 4, as it supports OTG mode on its USB power input port.

Required hardware

Obviously you need a Raspberry Pi 4 (it might also work on a Pi Zero). Earlier Pis don't support OTG, so they won't work.

I also bought this thingie:

[image: USB power/data splitter board]


On one side you plug in the Pi's power supply and a data cable that connects to the iPad; on the other side, a single cable carries data and power combined to the Pi. This way the Pi stays powered on when no sound source is connected and doesn't draw any power from the iPad when one is. (You can buy one here or at other Pi shops.)

Finally, you need a bunch of cables :).

Setup the Pi

In the rest of this description I'm using DietPi as the Linux distribution, but it should work quite similarly on other distributions.

So, first install DietPi as per the instructions on their site. Make sure audio is enabled in the configuration program that is automatically run the first time you boot into DietPi.

Make the Pi function as a USB audio gadget

  1. Add the line dtoverlay=dwc2 to /boot/config.txt.
  2. Add two lines to /etc/modules:

    Code:
    dwc2
    g_audio
  3. Create file /etc/modprobe.d/usb_g_audio.conf:

    Code:
    #load the USB audio gadget module with the following options
    options g_audio c_srate=48000 c_ssize=4
  4. Reboot.
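For reference, steps 1–3 above can be sketched as a small, idempotent shell helper. This is only a convenience sketch, not part of the original guide; it assumes the standard DietPi/Raspberry Pi OS file locations, and is demoed here on throwaway files so nothing on the system is touched:

```shell
# Sketch of steps 1-3 as an idempotent helper; running it twice adds nothing twice.
# On a real Pi you would run it as root with the actual paths:
#   setup_gadget /boot/config.txt /etc/modules /etc/modprobe.d/usb_g_audio.conf
setup_gadget() {
  boot_config=$1; modules_file=$2; modprobe_conf=$3
  # 1. Enable the dwc2 USB controller overlay
  grep -q '^dtoverlay=dwc2$' "$boot_config" || echo 'dtoverlay=dwc2' >> "$boot_config"
  # 2. Load the dwc2 and g_audio modules at boot
  for m in dwc2 g_audio; do
    grep -qx "$m" "$modules_file" || echo "$m" >> "$modules_file"
  done
  # 3. Fix the gadget's sample rate (48 kHz) and sample size (4 bytes = 32 bit)
  echo 'options g_audio c_srate=48000 c_ssize=4' > "$modprobe_conf"
}

# Demo on temporary files instead of the real system paths
BOOT=$(mktemp); MODS=$(mktemp); CONF=$(mktemp)
setup_gadget "$BOOT" "$MODS" "$CONF"
setup_gadget "$BOOT" "$MODS" "$CONF"   # second run is a no-op
cat "$CONF"
```

After running it against the real paths (as root) and rebooting, the Pi comes up in gadget mode.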

After the reboot, the Pi functions as a USB audio gadget that accepts audio at a sample rate of 48 kHz (change c_srate if you would like another sample rate) and a sample size of 32 bits (c_ssize=4 means 4 bytes, which equals 32 bits). A new ALSA device has been created for this. In my case it's hw:0 (and hw:1 is my DAC), but you can find out by running arecord -l (and aplay -l to find your DAC).

Setup equalisation

I'll use ffmpeg for equalisation. Install ffmpeg, and create /etc/systemd/system/ffmpeg.service:

Code:
[Unit]
Description=Equalizer
Requires=sound.target

[Service]
Type=simple
ExecStart=/usr/bin/ffmpeg -loglevel panic -nostats -nostdin -f alsa -acodec pcm_s32le -i hw:0,0 -af "volume=volume=-5.8dB,bass=f=25:t=q:w=0.71:g=6.0,equalizer=f=150:t=q:w=0.5:g=-2.9,equalizer=f=1330:t=q:w=1.4:g=-2.0,equalizer=f=3150:t=q:w=2.2:g=-3.1,equalizer=f=3500:t=q:w=4.0:g=-1.0,equalizer=f=5000:t=q:w=6.0:g=-1.2,equalizer=f=5850:t=q:w=4.5:g=-2.2,equalizer=f=5600:t=q:w=0.71:g=2.0,equalizer=f=7710:t=q:w=5.0:g=-2.5,treble=f=13000:t=q:w=0.71:g=-3.0" -f alsa -acodec pcm_s32le hw:1,0
Restart=always

[Install]
WantedBy=multi-user.target

This creates a service that is automatically started after the Pi boots. I've put in the equalisation values for the HD600 by Oratory1990, but you should put in your own filters, of course.

Start it with systemctl start ffmpeg. To have the service start at boot, type systemctl enable ffmpeg.
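If you want to translate a parametric-EQ table (frequency / Q / gain triplets, the way Oratory1990 publishes them) into the -af string used in the service above, a small shell loop can assemble it. This is a hypothetical convenience helper, not part of the setup; the preamp value and the two filters below are just examples:

```shell
# Build an ffmpeg -af filter string from "freq q gain" triplets (example values).
PREAMP="-5.8"                      # preamp in dB, to leave headroom for any boosts
AF="volume=volume=${PREAMP}dB"
while read -r freq q gain; do
  AF="$AF,equalizer=f=$freq:t=q:w=$q:g=$gain"
done <<'EOF'
150 0.5 -2.9
1330 1.4 -2.0
EOF
echo "$AF"
```

The resulting string can be pasted into the -af "…" argument of the ExecStart line.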

Going further: dynamic sample rate switching

Ideally, you want the Pi to support multiple sample rates and dynamic switching. The current kernel modules don't support that, however. Luckily P.Hofman forked the Linux kernel source and applied a couple of patches to support dynamic sample rate switching. This does, however, require compiling the kernel.

I've got the kernel to compile, but it's not working stably yet. I'll update this post with the extra steps necessary to support dynamic sample rate switching once I've got it figured out :)

References

I've pulled this together based on several sources (not all of which I wrote down), but mainly from:
 
Has g_audio been solved for Windows yet? Or vice versa?

Just a disclaimer for the people out there who aren't using a Mac.
 
Has g_audio been solved for Windows yet? Or vice versa?

Just a disclaimer for the people out there who aren't using a Mac.

I believe it is in the patched kernel, but as I'm no Windows user I'm not sure.
 
@DeLub do you think anything similar to g_audio exists for Windows x86 machines?

PEQ is really a solved problem at this point, but a Dirac Live engine with multi-channel support and bass control (with USB in, USB out) would be phenomenal.

Edit: I feel stupid right now since you just said you don't use Windows.
 
@DeLub do you think anything similar to g_audio exists for Windows x86 machines?

I'm not sure if there are alternatives. Most devices (PCs, NUCs, ...) can only function as a USB host, therefore I doubt a g_audio alternative exists for Windows.
 
I'm not sure if there are alternatives. Most devices (PCs, NUCs, ...) can only function as a USB host, therefore I doubt a g_audio alternative exists for Windows.

I thought USB-C could act as both host and device? Which role it takes is determined when the device is plugged in.
 
Are you using the Apple Camera Adapter to get USB out from the iPad or are you connecting directly to the Lightning port?

Tom
 
No. PCs are by default in host mode. That's why you need the dwc2 overlay and module.

It's just that you make it sound as if it is a hardware limitation, when as far as I know a USB-C port can act as both type A and type B (host or device).

Not trying to sound rude, just trying to better understand the situation.
 
It's just that you make it sound as if it is a hardware limitation, when as far as I know a USB-C port can act as both type A and type B (host or device).

Not trying to sound rude, just trying to better understand the situation.
I'm no expert either. Just googling my way around and re-combining all the info I find :)

I actually had the same thought as you. But it just doesn't work on a NUC with Linux. So perhaps it is a hardware limitation... not sure...
 
USB-C does not mean Dual-Role-Data (formerly USB OTG) automatically. The USB controller must support the feature.

AFAIK very few x86 devices feature a USB controller with gadget mode. I know of some embedded Intel SoCs (Apollo Lake etc.) for which Linux should have gadget support, but I have not seen a device with a physically proper connector for OTG yet.

Chaining the USB gadget (clocked by the USB host) and USB output (clocked by the soundcard clock or the USB controller of the gadget device) requires either adaptive resampling or async feedback to control the pace at which the USB host sends audio data. Using ffmpeg would require a separate handling of this. CamillaDSP provides both equalization and the feedback support (or adaptive resampling).

Win10 is getting fully supported (async mode, rate switching). Still some issues to iron out but generally working OK. Patching and recompiling RPi kernel is still required though.
 
Chaining the USB gadget (clocked by the USB host) and USB output (clocked by the soundcard clock or the USB controller of the gadget device) requires either adaptive resampling or async feedback to control the pace at which the USB host sends audio data. Using ffmpeg would require a separate handling of this. CamillaDSP provides both equalization and the feedback support (or adaptive resampling).
That's what I found out in the meantime as well; I actually switched back to the stock kernel with a fixed sample rate and switched to CamillaDSP :).
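For anyone curious what the CamillaDSP route looks like: a minimal config along these lines covers the same chain. This is only a sketch, assuming the ALSA device names from the first post and CamillaDSP's YAML schema (field names vary a bit between CamillaDSP versions); the single peaking filter is an example, not the full HD600 set:

```yaml
devices:
  samplerate: 48000
  chunksize: 1024
  capture:
    type: Alsa
    channels: 2
    device: "hw:0"    # the g_audio gadget
    format: S32LE
  playback:
    type: Alsa
    channels: 2
    device: "hw:1"    # the DAC
    format: S32LE

filters:
  peq_150:
    type: Biquad
    parameters:
      type: Peaking
      freq: 150
      q: 0.5
      gain: -2.9

pipeline:
  - type: Filter
    channel: 0
    names: [peq_150]
  - type: Filter
    channel: 1
    names: [peq_150]
```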

Perhaps I should add this to my initial post. But I'd prefer to wait a couple of weeks (months?) until your patches have been accepted into the kernel sources.

I know you have been asking for feedback but did not get any response. Unfortunately I cannot provide you with any feedback, but if you think I can assist you in another way, please let me know. Happy to help.
 
Thanks for sharing, @DeLub. I have a similar need in that I would like to experiment with a dedicated RPi4 that does only the DSP. It takes 2x stereo input, hopefully on an RPi Linux distro with dynamic sample rate; what is the status, @phofman?

Then it does its DSP and sends the result out in quadraphonic format, i.e. two times 2x stereo.

As I understand (not much) of the RPi, the HifiBerry GPIO configuration for I2S is stuck at 2x stereo output. Is this correct?

And if correct, would a future Linux RPi variant that accepts stereo I2S input via GPIO or otherwise be imaginable? Would that be a huge task, or more of a Linux configuration issue involving changes to the GPIO pins, etc.? (I wonder why it hasn't been there for ages...)

OK, assuming for now that RPi I2S input is just a wet dream, that DSP RPi will have to act as the source as well, working as USB host and outputting its result, for example to a MiniDSP USB streamer, which can distribute multichannel I2S and is then connected to two DACs, each with 2x stereo input. I am using the Audiophonics 90x8 DACs, which I like very much for their excellent audio quality. They are connected to high-end gear. So that is a solution, however limited. And it requires that the source (Roon, for example) "sees" the DSP as a viable output. The DSP then takes care of the output further on.

Then comes, therefore, the dream to feed an external source into this DSP RPi. That could be another RPi, for example an RPi3 with RoPieee that I have here (recommended). BUT, how to transfer I2S or whatever from the output of one RPi to the ditto input of another? Is there something in the OTG feature that I can use here? That would mean the DSP RPi must act as a USB "receiver" towards the source, and as a USB host towards the DACs. Is that possible?

It is probably easier to use one RPi for both source and DSP, but along these lines I hope for the possibility to use an RPi+RoPieee, another external digital source, yet another digital source, etc., and then I imagine an autonomous DSP RPi may be a good thing, since the DSP feature is the same regardless of source. If it can take on the double USB role mentioned.

So my question, does all this seem like a good idea? Or some of it?

Thanks beforehand!
 
RPi has an I2S input which works fine, up to large samplerates (my ESS ADC board samples at 512kHz/32bit OK, 768kHz/32bit ADC fails, but direct I2S loopback works OK up to 1536/16bits https://forums.raspberrypi.com/viewtopic.php?f=44&t=8496&start=1000#p1865884 https://forums.raspberrypi.com/viewtopic.php?f=44&t=8496&start=1000#p1865717 ). Of course 2ch only, like its I2S output.

As for the USB gadget - the patches are in the process of being accepted by the kernel devs. I hope kernel 5.17 will have most of them.
 
RPi has an I2S input which works fine, up to large samplerates (my ESS ADC board samples at 512kHz/32bit OK, 768kHz/32bit ADC fails, but direct I2S loopback works OK up to 1536/16bits https://forums.raspberrypi.com/viewtopic.php?f=44&t=8496&start=1000#p1865884 https://forums.raspberrypi.com/viewtopic.php?f=44&t=8496&start=1000#p1865717 ). Of course 2ch only, like its I2S output.

As for the USB gadget - the patches are in the process of being accepted by the kernel devs. I hope kernel 5.17 will have most of them.
Thanks for the info; I'll study it. One question on the fly: would it be possible to enhance the Linux I2S output to multichannel, at least 4 channels? By reconfiguring GPIO pins or whatever, as a custom Linux solution if necessary? It would be more "practical" than using a USB unit in between the RPi and the DACs that converts to multichannel I2S, like the MiniDSP USB streamer. I imagine a Linux configuration utility with the choices: "Use GPIO as default HifiBerry or as multichannel I2S?". Is it possible to guess-timate the level of effort needed, in approximate man-hours?

I'm not yet sufficiently competent in I2S signaling, but as I understand it, one would need a "hat" I2S buffer for proper distribution of the I2S clock pins? That should not be hard to put into practice. Or perhaps a TDM output format that this "hat" converts to multiple I2S streams. As you understand, I am asking dummy questions out into thin air, but I shall make a drawing and put it up here. ASAP...
 
The RPi HW is limited to 2ch TDM. There are attempts to "hack" multichannel using a higher samplerate and an external FPGA signaled by a GPIO pin for demultiplexing https://forums.raspberrypi.com/viewtopic.php?t=8496&start=1025#p1939522 https://forums.raspberrypi.com/viewtopic.php?t=193550 https://github.com/raspberrypi/linux/blob/rpi-4.9.y/sound/soc/bcm/audioinjector-octo-soundcard.c

Personally I would use a different SoC (e.g. Rockchip64) with proper multichannel I2S, instead of hacking RPi. But this topic has come up here many times, with the result that nobody wants to spend time porting any of the major media players from RPi.
 
@phofman, I read about the Rockchip64, but could not clearly see how it is a multi-I2S device on its output, or how it could be configured via peripherals to become that. Would you be so kind as to grant me a helpful comment?

But I have read a little about Ian Canada's products over at https://github.com/iancanada/DocumentDownload. As it seems to me, those are probably good examples of products that will gladly take on the jitter-free, high-quality audio transport towards the DACs at the end of the chain, as delivered from an SoC's multichannel I2S output (or the MiniDSP USB Streamer's). Interesting ambition, at least.

PS: An RPi4 is on its way to test the OTG solution described here.
 