
Integrating DSP Engine into the Linux Audio Stack

dc655321 · Major Contributor
I would very much appreciate some input from the wise members of this site on
various aspects of implementing and integrating a convolution engine into the
Linux audio stack. I'm thinking of members like @Cosmik, @mansr, @pos, @yue, or
others that may have some experience digging around in the depths of audio
software.

Apologies in advance for the lengthy post.


Some Context:
--------------
I am building a convolution engine to support DSP operations as part of a
project to create a speaker system.

Why build my own engine?
Because I am a sucker for punishment, er, I enjoy the challenge and the re-learning
aspects. My education is in physics, control systems, and DSP, but I have not
really exercised those neurons since grad school (longer ago than I'm comfortable
admitting). Most of my work in the intervening years has been software
development for particle accelerators (device drivers, data acquisition, etc).

The Good:
-------------
At this point, I have constructed a basic convolution library, and integrated
it into ALSA via the extplug API. I can push audio files through my software
with aplay, cmus, and Audacity, and it comes out the other side (into a sound
card or file) with the expected results. Yay for small victories.

The Bad:
-------------
However, sharp edges abound. If I direct the audio player to fast-forward or rewind, the
system typically does not respond well (crash, burn, freeze, etc). More on that
later. Also, if I try to get REW to use my ALSA plugin, I can see it open and
close the plugin roughly once a second. REW appears hung in this state, and
more importantly, I cannot use REW like this, so progress on making a set of
speakers stalls. Boo for crushing defeats.

My Suspicions:
-------------
The ALSA extplug API is very narrow, perhaps too narrow to be used directly
by audio clients. I will admit to choosing it based solely on it
being reasonably low hanging fruit. My plugin is only ~350 lines of code, most
of which is parsing configuration options to be passed to the lower level
convolution library (see example config below). My hunch is that what I'm trying
to do is far better suited to the ioplug API in ALSA (essentially a virtual
sound card). With the ioplug API, audio clients can get sufficiently duped
into thinking they're dealing with a real sound card, or at least a full
featured ALSA PCM driver. I am guessing that REW is doing something like:

- open "default" sound sink (myplugin)
- probe sink for its capabilities
- probe fails, close sink
- try opening sink again...

Here, I guess the probing fails because the extplug API is returning bad
info. This could be due to the narrow API or (just as likely) errors in my
simplistic implementation. Perhaps @JohnPM could venture a guess?
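
For anyone not familiar with the extplug API, here is a rough sketch of the overall shape such a plugin takes. It is illustrative only, not my actual code; the names are placeholders and the real work (config parsing beyond the slave lookup, the convolution call) is reduced to comments.

Code:
#include <alsa/asoundlib.h>
#include <alsa/pcm_external.h>   /* extplug SDK: snd_pcm_extplug_t and friends */
#include <errno.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
    snd_pcm_extplug_t ext;       /* must be the first member */
    /* handle to the convolution library would live here */
} myplug_t;

/* alsa-lib hands the plugin source and destination channel areas; the plugin
 * filters 'size' frames from src into dst and reports how many it processed */
static snd_pcm_sframes_t my_transfer(snd_pcm_extplug_t *ext,
        const snd_pcm_channel_area_t *dst_areas, snd_pcm_uframes_t dst_offset,
        const snd_pcm_channel_area_t *src_areas, snd_pcm_uframes_t src_offset,
        snd_pcm_uframes_t size)
{
    /* ... run the convolution over 'size' frames here ... */
    return size;
}

static const snd_pcm_extplug_callback_t my_callbacks = {
    .transfer = my_transfer,
};

/* entry point that alsa-lib looks up when it sees "type myplug" in the config */
SND_PCM_PLUGIN_DEFINE_FUNC(myplug)
{
    snd_config_iterator_t i, next;
    snd_config_t *slave_conf = NULL;
    myplug_t *my;
    int err;

    /* walk the plugin's config compound; "slave" must be found, and the
     * logpath/coeffs/channels sections would be picked up the same way */
    snd_config_for_each(i, next, conf) {
        snd_config_t *n = snd_config_iterator_entry(i);
        const char *id;
        if (snd_config_get_id(n, &id) < 0)
            continue;
        if (strcmp(id, "slave") == 0)
            slave_conf = n;
    }
    if (!slave_conf)
        return -EINVAL;

    my = calloc(1, sizeof(*my));
    if (!my)
        return -ENOMEM;

    my->ext.version = SND_PCM_EXTPLUG_VERSION;
    my->ext.name = "myplug convolution plugin";
    my->ext.callback = &my_callbacks;
    my->ext.private_data = my;

    err = snd_pcm_extplug_create(&my->ext, name, root, slave_conf, stream, mode);
    if (err < 0) {
        free(my);
        return err;
    }
    *pcmp = my->ext.pcm;
    return 0;
}
SND_PCM_PLUGIN_SYMBOL(myplug);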

A New Hope?
-------------
I'm not worried about throwing out my crappy little plugin for something
better, subject to the caveat that the "something better" is actually better.
So, here are some options for integration of the convolution library into the
Linux audio stack:

1) ALSA ioplug - much broader API, much more complexity. The model here is that
audio client software (REW, Audacity, etc) opens and communicates with this
plugin, and in turn, the plugin opens and communicates with a real soundcard on
the client's behalf, feeding it the DSP-molested data (a rough callback sketch
follows this list).

2) Pulseaudio (PA) filter - Going a layer above ALSA in the audio stack may
have some benefits, but I have only minimally explored this option. I
understand how PA works and how it is hooked into ALSA (via an ioplug!), but I
have no experience trying to integrate directly with PA itself. Ultimately,
integration with PA would be great, but at least at this point, that's only a
"nice-to-have".

3) ALSA PCM plugin plumbing - it may be possible to wire a set of PCM plugins
together (including mine) to better emulate the interface that sophisticated
audio clients require. The attraction here is that it would not require writing
any new code, only configuration file editing (a possible wiring example
follows the sample configuration below).

4) Write an ALSA driver - no libc, no userspace FFT libs,
userspace-to-kernelspace plumbing. Blech, no thanks.
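
To give a flavour of what option 1 would involve, here is a hedged sketch of the ioplug boilerplate. Everything in it is illustrative (the names, the hard-coded slave device, the skipped hw_params and pointer bookkeeping), and the extra complexity of this route lives precisely in the parts the comments only wave at.

Code:
#include <alsa/asoundlib.h>
#include <alsa/pcm_external.h>   /* pulls in the ioplug SDK as well */
#include <errno.h>
#include <stdlib.h>

typedef struct {
    snd_pcm_ioplug_t io;         /* must be the first member */
    snd_pcm_t *slave;            /* the real soundcard opened on the client's behalf */
    snd_pcm_uframes_t hw_ptr;    /* position reported back to the client */
} myio_t;

static int my_start(snd_pcm_ioplug_t *io) { return snd_pcm_start(((myio_t *)io)->slave); }
static int my_stop(snd_pcm_ioplug_t *io)  { return snd_pcm_drop(((myio_t *)io)->slave); }

/* mandatory: tell the client where the "hardware" pointer is; a real plugin
 * would derive this from the slave's progress */
static snd_pcm_sframes_t my_pointer(snd_pcm_ioplug_t *io)
{
    return ((myio_t *)io)->hw_ptr;
}

/* client samples arrive here: filter them, push them to the slave and advance
 * hw_ptr (all of that is elided in this sketch) */
static snd_pcm_sframes_t my_transfer(snd_pcm_ioplug_t *io,
        const snd_pcm_channel_area_t *areas,
        snd_pcm_uframes_t offset, snd_pcm_uframes_t size)
{
    return size;
}

static const snd_pcm_ioplug_callback_t my_callbacks = {
    .start    = my_start,
    .stop     = my_stop,
    .pointer  = my_pointer,
    .transfer = my_transfer,
};

SND_PCM_PLUGIN_DEFINE_FUNC(myioplug)
{
    myio_t *my = calloc(1, sizeof(*my));
    int err;

    if (!my)
        return -ENOMEM;
    my->io.version = SND_PCM_IOPLUG_VERSION;
    my->io.name = "myplug virtual soundcard";
    my->io.callback = &my_callbacks;

    /* open whatever real device the config names; hard-coded here */
    err = snd_pcm_open(&my->slave, "plughw:0,0", stream, mode);
    if (err < 0) {
        free(my);
        return err;
    }

    err = snd_pcm_ioplug_create(&my->io, name, stream, mode);
    if (err < 0) {
        snd_pcm_close(my->slave);
        free(my);
        return err;
    }

    /* this is where the plugin advertises formats/rates/channels so probing
     * clients like REW get sensible answers, e.g. via
     * snd_pcm_ioplug_set_param_minmax(&my->io, SND_PCM_IOPLUG_HW_RATE, ...) */

    *pcmp = my->io.pcm;
    return 0;
}
SND_PCM_PLUGIN_SYMBOL(myioplug);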


Thoughts on these options or anything else covered here?
Related experiences to share? Please do!

Sample of configuration for ALSA extplug "myplug", loading a 300-3000Hz bandpass FIR filter of 1024 taps:

Code:
pcm.!default myplug

pcm_type.myplug {
    lib "/path/to/lib/libasound_module_pcm_myplug.so"
}

pcm.myplug {
    type myplug
    logpath "/tmp"
    #slave.pcm "save"
    slave.pcm "plughw:0,0"
    coeffs {
            myfilter "/path/to/filter/fir_bp_300_3000_1024.txt"
    }
    channels {
       left {
            input 0
            output 0
            mute no     # or "yes"
            bypass no
            gain -6     # in dB
            delay 55    # in microsec
            filter myfilter
        }
        right {
            input 1
            output 1
            mute no    
            bypass no
            gain -6
            delay 55
            filter myfilter
        }
    }
}

pcm.save {
    type file
    slave.pcm "null"
    file "/tmp/output.wav"
    format "wav"
}
 
dc655321 (OP) · Major Contributor
Have you looked into LADSPA?

Thanks for that.

Yes, I did look at LADSPA a few months ago as I was starting to think about integration.
I looked at the Charlie Laub (ACDf) and the Richard Taylor (rt-plugins) libraries to see what LADSPA implementations were all about.
They do have an advantage in that Pulseaudio knows (or can be told) how to handle them natively.

It was chiefly the restrictions on the configuration interface that turned my attention elsewhere.
AFAICT, there is no way to instruct a LADSPA plugin to "load this file of filter coefficients", as it only accepts the scalar LADSPA_Data type (i.e. floats). The usual ALSA configuration interface is not exposed...

There was also a question in my mind about potential buffer management issues for the overlap-save algorithm I use, given the extremely narrow API of LADSPA plugins.
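
To make that concrete, this is roughly how the staging would have to look inside a LADSPA run() callback: the host may call it with any block size, while the FFT wants fixed-size partitions. All names here are hypothetical and the output is just passed through; it is a sketch of the bookkeeping, not working filter code.

Code:
#include <ladspa.h>
#include <string.h>

#define FFT_BLOCK 1024            /* fixed partition size of the convolution engine */

typedef struct {
    LADSPA_Data *in_port;         /* audio in, wired up by the host via connect_port() */
    LADSPA_Data *out_port;        /* audio out */
    LADSPA_Data *gain_port;       /* control ports can only carry single floats */
    LADSPA_Data stage[FFT_BLOCK]; /* staging buffer for one overlap-save partition */
    unsigned long fill;           /* how much of the staging buffer is valid */
} conv_t;

/* the host calls run() with whatever SampleCount it likes, so the plugin has
 * to re-block the input itself before a fixed-size FFT can be applied;
 * conv_run would be referenced from the LADSPA_Descriptor returned by
 * ladspa_descriptor() */
static void conv_run(LADSPA_Handle h, unsigned long sample_count)
{
    conv_t *c = h;
    unsigned long i = 0;

    while (i < sample_count) {
        unsigned long n = FFT_BLOCK - c->fill;
        if (n > sample_count - i)
            n = sample_count - i;

        memcpy(c->stage + c->fill, c->in_port + i, n * sizeof(LADSPA_Data));
        c->fill += n;
        i += n;

        if (c->fill == FFT_BLOCK) {
            /* ...run one overlap-save partition over c->stage here... */
            c->fill = 0;
        }
    }

    /* a real plugin must also emit sample_count output samples on every call,
     * which forces at least a block of latency and more state than shown;
     * here the input is just copied through */
    memcpy(c->out_port, c->in_port, sample_count * sizeof(LADSPA_Data));
}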
 
dc655321 (OP) · Major Contributor
Isn't BruteFIR the de facto? Or do you really, really want to punish yourself and design your own? :)

Yes and yes.

EDIT: pulled the trigger way early on my reply.

It's not really the DSP software that I'm too concerned about, but rather the integration of said software into Linux.
Kind of a shame, but that seems to be "the hard part" to me.
Hopefully I'm just missing something small, thus my query/thread.
 
dc655321 (OP) · Major Contributor
Have you looked at JACK or JACK2?

What you are trying to do may already be implemented?

Check this out https://sourceforge.net/projects/heaven/files/Audio Applications/Jack Related/jack_convolve/

I've only looked at JACK in a very cursory way.
It did not appear to offer any compelling reasons to pursue, so I have not.
That's not to say that there are none: I simply did not find any at the time.

I would like to push the convolution software as far down the audio stack as reasonable (closer to the hardware).
 

Krunok · Major Contributor
Yes and yes.

EDIT: pulled the trigger way early on my reply.

It's not really the DSP software that I'm too concerned about, but rather the integration of said software into Linux.
Kind of a shame, but that seems to be "the hard part" to me.
Hopefully I'm just missing something small, thus my query/thread.

I'm not sure I got this right - what exactly do you want to build that BruteFIR doesn't already have?

And why would you want REW and your convolution engine to interact directly when REW can't generate FIR filters?
 
dc655321 (OP) · Major Contributor
I'm not sure I got this right - what exactly do you want to build that BruteFIR doesn't already have?

As I said in the OP, the joy of discovery and creation would be far less if I used a pre-existing tool :)

And why would you want REW and your convolution engine to interact directly when REW can't generate FIR filters?

The ability of REW to generate FIR filters (or not) is irrelevant for my purposes. I have my own means of filter crafting.

I want to use REW in the usual manner: have it generate a test signal, route that through my computer's sound software (including my filtering software), out to a DAC and into speakers, use a mic to capture the DUT response and plot for analysis.

The test signal needs routing through DSP software, as the software IS the cross-over.

It is possible to use REW in an "offline" mode, but that's a much less convenient workflow.
 

Krunok · Major Contributor
As I said in the OP, the joy of discovery and creation would be far less if I used a pre-existing tool :)

Ahhh ok - all clear! :)


The ability of REW to generate FIR filters (or not) is irrelevant for my purposes. I have my own means of filter crafting.

I want to use REW in the usual manner: have it generate a test signal, route that through my computer's sound software (including my filtering software), out to a DAC and into speakers, use a mic to capture the DUT response and plot for analysis.

The test signal needs routing through DSP software, as the software IS the cross-over.

It is possible to use REW in an "offline" mode, but that's a much less convenient workflow.

LOL I didn't think that way, let me explain why.. :D

Since I play my music with Volumio, I don't use REW in that scenario, which is the reason for my question. My laptop doesn't see Volumio as an audio device, so when measuring a response with REW I play files generated by REW via Volumio from my NAS. With RTA and pink noise it's easy, but with log sweeps I have to generate sweep files with a timing reference signal, and then it works as it should.

P.S. I'm using rePhase for filter crafting - excellent tool!
 

pwjazz · Addicted to Fun and Learning, Forum Donor
there is no way to instruct a LADSPA plugin to "load this file of filter coefficients"

What about just keeping the file in a well-known location and having the plugin read it? That's what ladspa_dsp does.
 

Krunok · Major Contributor
So, you want to be able to run REW on the same machine (PC?) where you will be running your convolution engine and for that you need your "system" to be represented as an audio device which will then be recognised by REW, correct?

In my opinion you should also have the possibility to accept SPDIF as an input signal. In order to do that you would need some kind of device with SPDIF inputs and a USB output that would be recognised as a USB audio input device by your PC. This device may be able to do it, assuming it has Unix drivers.
Output could be handled via any USB DAC.
 
dc655321 (OP) · Major Contributor
Since I play my music with Volumio, I don't use REW in that scenario, which is the reason for my question. My laptop doesn't see Volumio as an audio device, so when measuring a response with REW I play files generated by REW via Volumio from my NAS. With RTA and pink noise it's easy, but with log sweeps I have to generate sweep files with a timing reference signal, and then it works as it should.

Yes, you're describing the "offline mode" of REW usage that I mentioned.

P.S. I'm using rePhase for filter crafting - excellent tool!

And it's Windows only, isn't it, or am I mistaken?
I'm just generating filters using scipy.signal, numpy, etc. in Python. Works a treat!

So, you want to be able to run REW on the same machine (PC?) where you will be running your convolution engine and for that you need your "system" to be represented as an audio device which will then be recognised by REW, correct?

Exactly.

In my opinion you should also have the possibility to accept SPDIF as an input signal. In order to do that you would need some kind of device with SPDIF inputs and a USB output that would be recognised as a USB audio input device by your PC. This device may be able to do it, assuming it has Unix drivers.
Output could be handled via any USB DAC.

I don't follow. Why would I care about data transport (spdif) here? That figures in well below my system in the software stack...
 

Krunok · Major Contributor
And it's Windows only, isn't it, or am I mistaken?

Yes, Windows only. But you can hope that @pos has some plans to port it to Unix as well. :)

I don't follow. Why would I care about data transport (spdif) here? That figures in well below my system in the software stack...

Yes, it's certainly below from the OSI point of view, but if I imagine your software running on some PC, how am I supposed to connect my SPDIF source to it?

Maybe I didn't correctly understand the concept of your solution. I thought you would be designing a PC box with a convolution engine that would sit before the DAC in the playout chain, representing itself as a USB audio device. That way your box would add room EQ and other DSP capabilities to any USB DAC.
 
dc655321 (OP) · Major Contributor
What about just keeping the file in a well-known location and having the plugin read it? That's what ladspa_dsp does.

Yes, that would work for this limited case.

But the larger point is that using LADSPA means agreeing to bypass ALSA's configuration API (because one is limited to LADSPA's view of what a configuration can contain). If one agrees to that, then one needs to implement one's own config grammar, tokenizer, parser, etc (a config compiler), assuming one's configuration requirements are more broadly scoped than simply specifying a few files to open and digest.
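
For comparison, this is roughly the kind of thing alsa-lib hands the plugin for free when it sees the coeffs compound in the config I posted earlier. The helper load_coeff_file() is a hypothetical hook into the convolution library, not an ALSA function, and parse_coeffs would be called from the plugin's open function.

Code:
#include <alsa/asoundlib.h>
#include <errno.h>

/* hypothetical hook into the convolution library */
int load_coeff_file(const char *name, const char *path);

/* walk the coeffs { myfilter "/path/to/coeffs.txt" } compound that alsa-lib
 * already parsed out of asound.conf and hand each named file over */
static int parse_coeffs(snd_config_t *coeffs)
{
    snd_config_iterator_t i, next;

    snd_config_for_each(i, next, coeffs) {
        snd_config_t *n = snd_config_iterator_entry(i);
        const char *id, *path;

        if (snd_config_get_id(n, &id) < 0)
            continue;
        if (snd_config_get_string(n, &path) < 0)
            return -EINVAL;

        /* e.g. id = "myfilter", path = ".../fir_bp_300_3000_1024.txt" */
        if (load_coeff_file(id, path) < 0)
            return -EINVAL;
    }
    return 0;
}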

You have made me think that I should revisit the features and limitations of LADSPA plugins. So, thanks for that!
 
dc655321 (OP) · Major Contributor
but if I imagine your software running on some PC, how am I supposed to connect my SPDIF source to it?

By transforming the electrical information produced by the spdif device into bits in the memory of the PC via a sound card of some type.

The flow goes as:
spdif input --> soundcard --> alsa pcm source --> alsa pcm sink --> filtering software --> soundcard (usb, etc) --> amp --> speaker.

EDIT: the flow I'm focused on ATM is audio player (REW, audacity, etc) --> *alsa pcm sink --> filtering software* --> usb dac --> bliss
The portion I'm constructing is marked with asterisks.
 

Krunok · Major Contributor
By transforming the electrical information produced by the spdif device into bits in the memory of the PC via a sound card of some type.

The flow goes as:
spdif input --> soundcard --> alsa pcm source --> alsa pcm sink --> filtering software --> soundcard (usb, etc) --> amp --> speaker.

EDIT: the flow I'm focused on ATM is audio player (REW, audacity, etc) --> *alsa pcm sink --> filtering software* --> usb dac --> bliss
The portion I'm constructing is marked with asterisks.

Aha, so you planned to use a PC soundcard with SPDIF input? That means your software couldn't be used on a small form factor fanless PC, as those have no room for such a sound card. That is why I sent you the link to that device, as it can be used with any hardware platform, for those who need such a thing of course.

One more thing: what about users who already have players running on some other box on the network, like the Roon and JRiver crowd? How would they make use of your box? Or are you primarily building this solution for yourself?
 