
Multichannel experiments with Beaglebone Black. (Any device tree experts out there?)

arvidb

Member
Joined
Dec 1, 2022
Messages
96
Likes
114
Location
Sweden
This is a technical post aimed at people who like to tinker with electronics and software. Nothing really useful has come out of it (so far).

Like many others before me, I thought it would be neat to have multichannel audio working on the Beaglebone Black. It could be used as a base for a custom USB DAC (I'm envisioning a desktop DAC with 2 balanced main channels, 2 SE sub channels, and 2 headphone channels, all with different filters and with a volume knob controlling only main+sub outputs) or for DIY active speakers fed via AES67, for example.

So I tried it. Here's my experimental setup:
SAM_3564.jpg

(The larger board on the left, with the two gray cables going to it, is just being used as a USB<->3.3V serial converter.)

Connected to the Beaglebone Black are a Raspberry Pi DAC+ (TI PCM5122 codec) and a Raspberry Pi DAC Pro (TI PCM5242 codec, balanced output) - tested by Amir here. Both boards also have a headphone amp, which was useful during testing.

The Beaglebone Black's (BBB's) audio peripheral is what TI calls a "Davinci McASP" - short for Multichannel Audio Serial Port. Actually, the SoC has two such peripherals. Each one supports four I2S audio streams, and each of those can also run in TDM mode to carry up to 32 channels - on paper, at least.

The BBB also has a dedicated 24.576 MHz audio clock oscillator connected to mcasp0. This clock is evenly divisible by 48 kHz but not by 44.1 kHz, which means only multiples of 48 kHz are supported (actually, I think multiples of 16 kHz?). On the other hand, it should give a very clean and stable clock at the supported rates.
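The divisibility claim is easy to check numerically. Here's a small sketch, assuming a 64*fs bit clock (two 32-bit I2S slots, a common but not universal configuration), that shows which sample rates get an exact integer divider from the 24.576 MHz oscillator:

```python
# Check which sample rates the BBB's fixed 24.576 MHz audio oscillator
# can serve with an exact integer divider. Assumes BCLK = 64 * fs
# (two 32-bit I2S slots); other BCLK ratios would shift the results.
MCLK_HZ = 24_576_000

def integer_divider(fs_hz, bclk_ratio=64):
    """Return the integer MCLK->BCLK divider for a sample rate,
    or None if no exact integer divider exists."""
    bclk = fs_hz * bclk_ratio
    return MCLK_HZ // bclk if MCLK_HZ % bclk == 0 else None

for fs in (16_000, 44_100, 48_000, 96_000, 192_000):
    print(fs, integer_divider(fs))
```

Under that assumption, 16/48/96/192 kHz all divide evenly (dividers 24, 8, 4, 2), while 44.1 kHz does not - consistent with the "multiples of 16 kHz" guess above.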

In my experiments I used mcasp0 and connected the codecs to individual I2S ports: the DAC+ to axr2 and the DAC Pro to axr0. They share the bit clock and frame clock, which are supplied by the McASP and derived from the 24.576 MHz oscillator. Both codecs generate their own master clock internally from the bit clock via a PLL, which makes things easier.

Besides the I2S data streams, the codecs also need an I2C connection to set up the audio data format and to control things like digital volume and analog gain. The codecs have what TI calls a "basic digital volume control" (+24 dB to -103 dB in 0.5 dB steps) and a selectable -6 dB analog gain control (i.e. one can select 2 Vrms or 1 Vrms full scale), among other settings. By default both boards use I2C address 0x4c, but the Pro board is designed so that the codec address can be changed to 0x4e with a small solder bridge, which I did in order to hook both up to the same I2C bus. (Alternatively, the BBB has two I2C buses available on its pin headers.)
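For the curious, here's a rough sketch of what driving that digital volume control over I2C could look like. The register numbers (61 = left, 62 = right) and the encoding (0x00 = +24 dB, 0x30 = 0 dB, one step = -0.5 dB) follow my reading of the PCM5122 datasheet, and the `smbus2` usage is an assumption - verify against the datasheet before writing to real hardware:

```python
# Sketch: setting the PCM5xxx "basic digital volume control" over I2C.
# Register numbers and encoding are per my reading of the PCM5122
# datasheet (page 0, regs 61/62; 0x00 = +24 dB, 0x30 = 0 dB, each
# step = -0.5 dB) -- treat as an assumption, not gospel.

def volume_to_reg(db):
    """Map a gain in dB (+24.0 .. -103.0, 0.5 dB steps) to the
    digital volume register value."""
    if not -103.0 <= db <= 24.0:
        raise ValueError("gain out of range")
    return int(48 - 2 * db)

def set_volume(bus, addr, db):
    """Write the same volume to both channels. `bus` would be e.g.
    an smbus2.SMBus(2) instance; addr is 0x4c or 0x4e on these boards."""
    val = volume_to_reg(db)
    bus.write_byte_data(addr, 61, val)  # left channel
    bus.write_byte_data(addr, 62, val)  # right channel
```

In practice the kernel's pcm512x driver exposes this same control through ALSA mixer controls, so raw I2C pokes like this are mainly useful for bring-up and debugging.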

Both the McASP peripheral and the PCM5xxx codecs are pretty complicated beasts. Luckily they already have drivers in the Linux kernel. "All" that remained to be done was to make them talk to each other! This is generally accomplished using what's called a device tree that describes to the Linux kernel what's connected and how.

Getting either one of the codecs to talk to ALSA and output sound was actually not too difficult. My device tree source for using either codec can be found here.

However, try as I might, I cannot seem to get both codecs working at the same time. Modifying the device tree to use both codecs, by enabling both McASP serializers and un-commenting the pcm5142 port, just gives:

Code:
kernel: asoc-audio-graph-card sound: parse error -22
kernel: asoc-audio-graph-card: probe of sound failed with error -22

... as do all the other things I've tried (well, some have failed more spectacularly). I've really tried to make this work, but without a solid understanding of ALSA drivers and device trees I think I'm at the end of the road, at least for the moment. I have asked for help on LKML, but that's not really the right forum for "please tell me how to do this" kinds of questions; it's more of an "I found a kernel bug and here's a patch to fix it" kind of place. (Obviously, if there's a Linux kernel/ALSA developer here who wants to chime in, I'd be all ears!)
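For anyone decoding that log: the -22 is a negated errno value, which turns out to be EINVAL ("Invalid argument") - i.e. the audio-graph card driver rejected something while parsing the device-tree graph. A one-liner confirms the mapping:

```python
# Kernel drivers return negated errno values; decode -22 from the log.
import errno
import os

code = 22
print(errno.errorcode[code], "-", os.strerror(code))  # EINVAL - Invalid argument
```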

Well, I guess a negative result is also a result, so there you have it! Perhaps it can be of use to someone.
 

phofman

Addicted to Fun and Learning
Joined
Apr 13, 2021
Messages
502
Likes
326
arvidb (OP)
Thank you for the links! I'm a bit burnt out on this at the moment: there are plenty of similar threads in multiple places asking about the same thing (how to configure multichannel McASP audio using a device tree overlay), but I haven't found a single example of someone actually succeeding. The only "open" instance of apparently working multichannel McASP audio I can recall right now is the Botic project (source links at the very bottom of that page), and they are using a full ALSA card driver as well as patches to the mcasp driver.

The reason I used LKML is that it turns out that's where the people who actually wrote the code discuss these things. I CC'd them on my message. Unfortunately the mcasp author no longer seems to be working at TI, so that CC bounced. And anyway, they likely have better things to do than explain this to me! But rather than starting another similar thread on the alsa-devel mailing list and not getting any answers, I thought this was my best bet. *shrug*

Right now I'm thinking that any further effort would be best placed into actually writing an ALSA card driver to get a better understanding of how things are hooked together and understand what's missing in my device tree overlay.
 

phofman
The botic project is by miero, e.g. https://www.diyaudio.com/community/threads/support-for-botic-linux-driver.258254/ .

Starting with a coded driver makes perfect sense. DTS files are basically configurations for specific drivers - it's always much easier to hard-code the functionality into the source code than to make it configurable. A DTS config also adds an extra layer of complexity for both the developer and the user. I usually have to study the source code to learn what a DTS option actually does; a hard-coded feature is much easier to read and understand.

IMO if a custom hard-coded driver works, no reason to fuss with DTS afterwards.
 