
Integrating DSP Engine into the Linux Audio Stack

OP
dc655321

Major Contributor
Joined
Mar 4, 2018
Messages
1,597
Likes
2,235
Aha, so you planned to use a PC soundcard with SPDIF input?

No. Like I said, data transports don't factor into this.

Or are you primarily building this solution for yourself?

ATM, yes, this solution is intended for personal use only.
But the architecture does not forbid other uses - its input and output are simply streams of PCM data.
 

Krunok

Major Contributor
Joined
Mar 25, 2018
Messages
4,600
Likes
3,067
Location
Zg, Cro
No. Like I said, data transports don't factor into this.

ATM, yes, this solution is intended for personal use only.
But the architecture does not forbid other uses - its input and output are simply streams of PCM data.

But if I install your software on a fanless PC in which I cannot put a PC soundcard with SPDIF input, how would I stream PCM data into your box?

Btw, what is ATM?

IMHO it would be a much better use of your effort if your solution fit other users as well.
 
OP
dc655321

Major Contributor
Joined
Mar 4, 2018
Messages
1,597
Likes
2,235
But if I install your software on a fanless PC in which I cannot put a PC soundcard with SPDIF input, how would I stream PCM data into your box?

If the software is on a PC, just use your favorite audio player to access audio files - locally, over the network, whatever.

what is ATM

ATM == at the moment

it would be a much better use of your effort if your solution fit other users as well.

I think there is some misunderstanding here - there are very few requirements for others to use such software.
All that would be needed is a typical Linux installation to install and run the software on, and some minimal configuration.
Getting the output of the software where it needs to go (a DAC, a file, over the network somewhere else, etc.) is a related but separate concern, and far removed from the topic of the OP.
 

Krunok

Major Contributor
Joined
Mar 25, 2018
Messages
4,600
Likes
3,067
Location
Zg, Cro
If the software is on a PC, just use your favorite audio player to access audio files - locally, over the network, whatever.



ATM == at the moment



I think there is some misunderstanding here - there are very few requirements for others to use such software.
All that would be needed is a typical Linux installation to install and run the software on, and some minimal configuration.
Getting the output of the software where it needs to go (a DAC, a file, over the network somewhere else, etc.) is a related but separate concern, and far removed from the topic of the OP.

Well, if you're building this for yourself there's nothing to discuss - you know best what you'd like to build to suit your needs. :)

IMO there's a reason why BruteFIR has not been widely used although it works perfectly well as a convolution engine: it is far beyond the competence of an average audiophile to integrate a software player (or some hardware player device with SPDIF output) with a PC running BruteFIR on Unix and a DAC connected to it. A pity..
 

Cosmik

Major Contributor
Joined
Apr 24, 2016
Messages
3,075
Likes
2,180
Location
UK
I have done exactly what you're doing, but without the wider research you have done. I just blundered into a way of doing it and stuck with that, even though it may not be very elegant.

My solution involves the ALSA-supplied loopback driver that acts as a virtual sound card and can sink and source audio streams. This can be seen by applications such as REW and streaming music player software, allowing me to intercept the stream and apply my own processing before I send it off to the multichannel DAC. However, I don't actually believe that the loopback driver can work properly as it is supplied, and in the end I had to modify the code ever so slightly and re-compile it, then invoke it in my system as a runtime module using 'modprobe' - and believe me, that is not something I would ever tackle lightly!
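
In outline, the user-space half looks something like the sketch below - heavily stripped down for the post, not my real code. The device names, the fixed 48 kHz rate and the empty process() stub are placeholders, and error checking is mostly omitted. Applications play into one half of the loopback card, and this program captures from the other half, filters, and writes to the DAC:

[code]
/* build: gcc intercept.c -lasound */
#include <stdint.h>
#include <alsa/asoundlib.h>

#define RATE     48000
#define CHANNELS 2
#define FRAMES   1024

static void process(int16_t *buf, snd_pcm_sframes_t frames)
{
    (void)buf; (void)frames;            /* EQ/FIR would happen here */
}

int main(void)
{
    snd_pcm_t *cap, *play;
    int16_t buf[FRAMES * CHANNELS];

    /* capture side of snd-aloop; playback side of the real DAC */
    snd_pcm_open(&cap,  "hw:Loopback,1,0", SND_PCM_STREAM_CAPTURE, 0);
    snd_pcm_open(&play, "hw:DAC,0",        SND_PCM_STREAM_PLAYBACK, 0);

    snd_pcm_set_params(cap,  SND_PCM_FORMAT_S16_LE, SND_PCM_ACCESS_RW_INTERLEAVED,
                       CHANNELS, RATE, 0, 500000);
    snd_pcm_set_params(play, SND_PCM_FORMAT_S16_LE, SND_PCM_ACCESS_RW_INTERLEAVED,
                       CHANNELS, RATE, 0, 500000);

    for (;;) {
        snd_pcm_sframes_t n = snd_pcm_readi(cap, buf, FRAMES);
        if (n < 0)
            n = snd_pcm_recover(cap, n, 0);        /* ride out xruns */
        if (n <= 0)
            continue;
        process(buf, n);
        snd_pcm_sframes_t w = snd_pcm_writei(play, buf, n);
        if (w < 0)
            snd_pcm_recover(play, w, 0);
    }
}
[/code]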

The loopback driver effectively has its own sample clock, and I control that dynamically with my application, keeping it synchronised on average with my DAC's consumption rate using ALSA system commands. That is, I regulate the rate at which the loopback driver consumes data from streaming apps etc., so that the software FIFO it feeds - which the DAC drains at its constant sample rate - neither over-fills nor empties.
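
The knob for this is snd-aloop's rate-shift mixer control ("PCM Rate Shift 100000", where 100000 means nominal rate). Roughly along these lines - an illustration with an arbitrary gain and clamp, not my actual controller:

[code]
#include <alsa/asoundlib.h>

/* Nudge the loopback's virtual sample clock; 100000 = nominal rate.
 * The handle comes from snd_ctl_open(&ctl, "hw:Loopback", 0). */
static void set_rate_shift(snd_ctl_t *ctl, long shift)
{
    snd_ctl_elem_value_t *v;
    snd_ctl_elem_value_alloca(&v);
    snd_ctl_elem_value_set_interface(v, SND_CTL_ELEM_IFACE_PCM);
    snd_ctl_elem_value_set_name(v, "PCM Rate Shift 100000");
    snd_ctl_elem_value_set_integer(v, 0, shift);
    snd_ctl_elem_write(ctl, v);
}

/* Crude proportional control: if the FIFO is filling, slow the
 * loopback clock; if it is draining, speed it up.  The gain (16)
 * and the +/-1% clamp are arbitrary for the illustration. */
void regulate(snd_ctl_t *ctl, long fifo_fill, long fifo_target)
{
    long shift = 100000 - (fifo_fill - fifo_target) / 16;
    if (shift <  99000) shift =  99000;
    if (shift > 101000) shift = 101000;
    set_rate_shift(ctl, shift);
}
[/code]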

The main issue, it seems to me, is sample rate conversion: avoiding it if possible, and knowing when, where and how in the system it is being applied at other times. For my purposes, no sample rate conversion is needed, because the sources I am using supply their streams as demanded by the loopback driver (whose sample rate I have control over). But I have written my own sample rate conversion code for the time when, inevitably, such digital streams are taken away from us and I will have to use an S/PDIF link that runs at a slightly different rate from my DAC and has no facility for closing the loop.

I agree with you that the 'infrastructure' and operating system is far more difficult to deal with and understand than the DSP stuff.
 
OP
dc655321

Major Contributor
Joined
Mar 4, 2018
Messages
1,597
Likes
2,235
Thanks for weighing in.

My solution involves the ALSA-supplied loopback driver that acts as a virtual sound card and can sink and source audio streams.

That approach had recently occurred to me as well. But the idea of passing data into kernel-space and back out to user-space, just to do it all over again (back through the DAC via a different driver), did not sit well with me. The pragmatist in me thinks it's a nice hack, though :)

I don't actually believe that the loopback driver can work properly as it is supplied, and in the end I had to modify the code ever so slightly and re-compile it

Interesting find. Did you report the problem "upstream" to the maintainers?
What was the problem?

The loopback driver effectively has its own sample clock, and I control that dynamically with my application, keeping it synchronised on average with my DAC's consumption rate using ALSA system commands. That is, I regulate the rate at which the loopback driver consumes data from streaming apps etc., so that the software FIFO it feeds - which the DAC drains at its constant sample rate - neither over-fills nor empties.

The ability to provide back-pressure/notifications to the audio application is a key reason I'm looking beyond the extplug driver I currently have implemented. That API provides control over only the sample format and channel count - nothing in the way of buffer, period, or sample rate restrictions.
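
For reference, a minimal extplug skeleton looks roughly like the following (trimmed and renamed for the post - "mydsp" and the pass-through transfer are placeholders, not my actual plugin). The two constraint calls near the end are the whole extent of the API's parameter control:

[code]
/* build roughly:
 *   gcc -shared -fPIC mydsp.c -lasound -o libasound_module_pcm_mydsp.so */
#include <string.h>
#include <alsa/asoundlib.h>
#include <alsa/pcm_external.h>

typedef struct {
    snd_pcm_extplug_t ext;
} mydsp_t;

/* All DSP happens here: copy src to dst, filtering along the way.
 * This placeholder is a straight pass-through. */
static snd_pcm_sframes_t mydsp_transfer(snd_pcm_extplug_t *ext,
        const snd_pcm_channel_area_t *dst_areas, snd_pcm_uframes_t dst_offset,
        const snd_pcm_channel_area_t *src_areas, snd_pcm_uframes_t src_offset,
        snd_pcm_uframes_t size)
{
    snd_pcm_areas_copy(dst_areas, dst_offset, src_areas, src_offset,
                       ext->channels, size, ext->format);
    return size;
}

static const snd_pcm_extplug_callback_t mydsp_callbacks = {
    .transfer = mydsp_transfer,
};

SND_PCM_PLUGIN_DEFINE_FUNC(mydsp)
{
    snd_config_iterator_t i, next;
    snd_config_t *slave = NULL;
    mydsp_t *plug;
    int err;

    /* locate the mandatory "slave" section of the .asoundrc definition */
    snd_config_for_each(i, next, conf) {
        snd_config_t *n = snd_config_iterator_entry(i);
        const char *id;
        if (snd_config_get_id(n, &id) >= 0 && strcmp(id, "slave") == 0)
            slave = n;
    }
    if (!slave)
        return -EINVAL;

    plug = calloc(1, sizeof(*plug));
    plug->ext.version = SND_PCM_EXTPLUG_VERSION;
    plug->ext.name = "mydsp";
    plug->ext.callback = &mydsp_callbacks;

    err = snd_pcm_extplug_create(&plug->ext, name, root, slave, stream, mode);
    if (err < 0) {
        free(plug);
        return err;
    }

    /* ...and this is the whole extent of extplug's constraint API:
     * sample format and channel count, nothing else. */
    static const unsigned int fmts[] = { SND_PCM_FORMAT_S32_LE };
    snd_pcm_extplug_set_param_list(&plug->ext, SND_PCM_EXTPLUG_HW_FORMAT, 1, fmts);
    snd_pcm_extplug_set_param_minmax(&plug->ext, SND_PCM_EXTPLUG_HW_CHANNELS, 2, 2);

    *pcmp = plug->ext.pcm;
    return 0;
}
SND_PCM_PLUGIN_SYMBOL(mydsp);
[/code]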

The ioplug API has a much broader scope, and is in fact how PulseAudio integrates with ALSA. The PA plugin declares itself as "default"; audio apps open and shovel data into this "default" ioplug, which then feeds the PA server for mixing, resampling, and distribution before the stream is finally sunk into an actual ALSA driver.
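
To make the contrast concrete, here is the sort of constraint setup an ioplug permits - just a fragment (the callback table with start/stop/pointer and the snd_pcm_ioplug_create() call are omitted), with the example values picked arbitrarily:

[code]
/* Fragment: the hw-parameter constraints an ioplug can impose. */
#include <alsa/asoundlib.h>
#include <alsa/pcm_external.h>

static int set_constraints(snd_pcm_ioplug_t *io)
{
    static const unsigned int accesses[] = { SND_PCM_ACCESS_RW_INTERLEAVED };
    static const unsigned int formats[]  = { SND_PCM_FORMAT_S32_LE };
    int err;

    if ((err = snd_pcm_ioplug_set_param_list(io, SND_PCM_IOPLUG_HW_ACCESS,
                                             1, accesses)) < 0)
        return err;
    if ((err = snd_pcm_ioplug_set_param_list(io, SND_PCM_IOPLUG_HW_FORMAT,
                                             1, formats)) < 0)
        return err;
    /* none of the below is possible with extplug */
    if ((err = snd_pcm_ioplug_set_param_minmax(io, SND_PCM_IOPLUG_HW_CHANNELS,
                                               2, 2)) < 0)
        return err;
    if ((err = snd_pcm_ioplug_set_param_minmax(io, SND_PCM_IOPLUG_HW_RATE,
                                               48000, 48000)) < 0)
        return err;
    if ((err = snd_pcm_ioplug_set_param_minmax(io, SND_PCM_IOPLUG_HW_PERIOD_BYTES,
                                               4096, 65536)) < 0)
        return err;
    return snd_pcm_ioplug_set_param_minmax(io, SND_PCM_IOPLUG_HW_PERIODS, 2, 16);
}
[/code]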

I'm fairly certain this is the path I would like to try.

I have written my own sample rate conversion code

Neat! Sinc interpolation?

for the time when, inevitably, such digital streams are taken away from us

Why do you suppose that will "inevitably" happen?
 

Cosmik

Major Contributor
Joined
Apr 24, 2016
Messages
3,075
Likes
2,180
Location
UK
Interesting find. Did you report the problem "upstream" to the maintainers?
What was the problem?
It has separate record and playback sample rates which, as far as I can tell, means it inevitably ends up with a full or empty internal FIFO - even if you modify both rates as nearly simultaneously as possible. I changed the code to force the number of bytes going out to equal the number of bytes coming in - I think. I did talk to someone about it, but the response was basically to use the plugin route - and I couldn't be sure whether they were oblivious to the idea of needing to avoid resampling. I suspect that, for most people, seeking to avoid resampling (or at least worrying about the nature of that resampling) would be audiophile stuff and nonsense :).

The ability to provide back-pressure/notifications to the audio application is a key reason I'm looking beyond the extplug driver I currently have implemented. That API provides control over only the sample format and channel count - nothing in the way of buffer, period, or sample rate restrictions.
If you could understand the code for the loopback driver (I didn't, really), I think you might be able to modify it to do exactly what you need. I think the ideal driver wouldn't have its own sample rate at all (except maybe for an optional 'idling' function) but would simply pass on requests for chunks of data from your app to the source app. Certainly this would work for sources like Spotify or, I think, a typical app that plays CDs, both of which can be blocked from sending more data for as long as you want. I think this is also the case for network-based streaming links? Your app would be slaved to your DAC's sample rate, so everything would be hunky-dory.
The ioplug API has a much broader scope, and is in fact how PulseAudio integrates with ALSA. The PA plugin declares itself as "default"; audio apps open and shovel data into this "default" ioplug, which then feeds the PA server for mixing, resampling, and distribution before the stream is finally sunk into an actual ALSA driver.
You have a much better grasp of this stuff than I do. My (possibly irrational) fear of resampling caused me to remove PulseAudio - not as straightforward as it sounds, because it is so heavily integrated into Ubuntu (the flavour of Linux I was using). This meant that I could assign my version of the loopback driver as the default audio device.
Neat! Sinc interpolation?
Yes. I did it just to make sure that I knew what was going on. It's surprising how inefficient such code can be and yet still be perfectly practical on a small, fanless PC.
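Conceptually it is just the brute-force evaluation below - an illustrative sketch in the same spirit, not my actual code (the tap count, the Hann window and the zero-padded edges are arbitrary choices):

[code]
/* Brute-force windowed-sinc resampler (mono, doubles); compile with -lm */
#include <math.h>
#include <stddef.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define TAPS 32                 /* kernel half-width, in input samples */

static double sinc(double x)
{
    return x == 0.0 ? 1.0 : sin(M_PI * x) / (M_PI * x);
}

/* Produce out_len samples at rate_out from in[0..in_len) at rate_in. */
void resample(const double *in, size_t in_len, double rate_in,
              double *out, size_t out_len, double rate_out)
{
    double step = rate_in / rate_out;              /* input samples per output */
    double cutoff = step > 1.0 ? 1.0 / step : 1.0; /* anti-alias when decimating */

    for (size_t n = 0; n < out_len; n++) {
        double t = n * step;                       /* fractional input position */
        long c = (long)t;
        double acc = 0.0;

        for (long k = c - TAPS; k <= c + TAPS; k++) {
            if (k < 0 || (size_t)k >= in_len)
                continue;                          /* zero-pad beyond the edges */
            double x = t - (double)k;
            /* Hann-windowed, cutoff-scaled sinc tap */
            double w = 0.5 + 0.5 * cos(M_PI * x / (TAPS + 1));
            acc += in[k] * cutoff * sinc(cutoff * x) * w;
        }
        out[n] = acc;
    }
}
[/code]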
Why do you suppose that will "inevitably" happen?
I cannot imagine that much content in the future will not be protected by DRM of some kind, and this may be at the operating system level, or even in hardware (e.g. MQA). For sure, we'll always be able to play with DSP to our hearts' content, but the content providers don't have to make their audio streams directly available to us. However, I'm reasonably confident that it will always be possible to access the stream via an S/PDIF or TOSLINK interface, so if we can guarantee that the resampling is sufficiently good, we should be OK.
 