
RPi + CamillaDSP Tutorial

Good question. I made some measurements of this in the past; it largely depends on chunk size and whether resampling is enabled. In general, with the default settings shown here (96 kHz sample rate, 2048 chunk size and balanced async resampling), the additional latency from processing is 60 ms. If you disable resampling this drops to 45 ms, and if you halve the chunk size the processing delay roughly halves.
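The arithmetic behind the chunk-size dependence can be sketched quickly. The exact number of buffered chunks in the pipeline is an assumption here (the post only gives measured totals), but roughly two chunk periods of buffering lands close to the measured 45 ms figure without resampling:

```python
# Back-of-envelope CamillaDSP latency estimate. Assumption: total processing
# delay is roughly a couple of chunk periods of buffering, plus whatever the
# async resampler adds on top.
def chunk_period_ms(chunk_size: int, sample_rate: int) -> float:
    """Duration of one chunk in milliseconds."""
    return 1000.0 * chunk_size / sample_rate

# One 2048-sample chunk at 96 kHz:
per_chunk = chunk_period_ms(2048, 96_000)

print(round(per_chunk, 1))       # 21.3
# Two chunks of buffering is close to the measured 45 ms without resampling;
# the extra ~15 ms with balanced async resampling would be the resampler.
print(round(2 * per_chunk, 1))   # 42.7
```

This also shows why halving the chunk size roughly halves the delay: every buffered chunk gets half as long.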

For A/V applications I use a HDMI extractor upstream of an Apple TV. The video processing delay between the Apple TV and my TV is around 60 ms so this matches my CamillaDSP processing delay almost exactly. The Apple TV also has the ability to adjust audio / video delay but I do not find the need to use that.

Unfortunately for live sound applications this probably will not work very well.

Michael
About what I expected, unfortunately. Not even worth trying to optimize it with this kind of latency for live sound. Even at 1/10th that latency it would be problematic.

Gonna set it up eventually regardless, as a handy "hardware" parametric EQ for non-time-critical applications, I guess.
 
Has anyone found a DAC with 3 or more channels that is supported by Moode? I would like to use Moode with my 2.1 system.
 
I have had a Xonar U7 DAC (which you can get very cheap and which has a bunch of channels) for a few days now, and I would be happy to try it with Moode if someone explains to me how :D. So far the Pi can recognise it.
 
Okto DAC8 pro is known to work with Moode.

In my experience the MOTUs need at least a 5.11 kernel to work well, so I don't recommend those for Moode.

I just started exploring this path but the miniDSP USBstreamer and MCHstreamer are class compliant and can be used with many pro audio DACs with ADAT inputs. Similarly the miniDSP U-DIO8 with multiple SPDIF input stereo DACs should work (but I have not personally tested it).

A somewhat convoluted option: you can use Moode with a digital output HAT to feed a MOTU Ultralite Mk5 attached to a separate RPi4 running CamillaDSP.

Or just get a miniDSP Flex and don't worry about CamillaDSP :).

Michael
 
I use the essence evolve-ii-4k-hdmi DAC (8 channel) or a focusrite scarlett 4i4 usb (4 channel).

Nice. I assume if the Essence works that means that multichannel HDMI output from the RPi works in Moode? Any chance you have measured the Essence? I am interested in testing it but really don't have a use case for it so have been holding off.

Michael
 
Hi Michael,
I have not measured the Essence; I don't have the equipment. I hope @amirm will get it on the workbench. I can only say it sounds like my other DACs and soundcards. ;)
 
Sweetwater does international shipping and they accept paypal. Unfortunately the Mk5 is out of stock everywhere right now so you may need to wait a few months.
Do you (or anyone) have any idea whether the Motu availability is just 'covid supply chain' related or whether there is something more specific? They've been out of stock everywhere for several months. It looks like you can place an order but not knowing whether it's going to be weeks/months/unknown makes that a bit of a gamble.
 
How do you get the multichannel audio into the RPI 4? Toslink? Which HDMI extractor do you use?

I don't do multichannel input, stereo only for me :). Also I had a typo in my original statement, should have said I have a HDMI extractor downstream of the Apple TV, not upstream. I use the Monoprice Blackbird 4K HDMI Extractor and it works well, the Apple TV handles the stereo downmix.

Getting multichannel audio into the RPi is possible but requires some work. Probably the simplest way is to use a receiver or an HDMI-input multichannel DAC (like the Essence Evolve mentioned a few posts up) and feed the analog inputs of an audio interface like the MOTU Ultralite Mk5. This adds extra AD/DA conversions but should not be too bad. Another more complicated / expensive option is to use a Meridian HD621 to split a multichannel HDMI stream into 4 stereo SPDIF / AES outputs and then feed those to a miniDSP U-DIO8 with multiple stereo DACs or to an Okto DAC8 pro. Again, you definitely need to be aware of latency in these setups, but in my experience pulling the audio upstream of the TV works well because of the inherent video delay in the TV itself.

Michael
 
Do you (or anyone) have any idea whether the Motu availability is just 'covid supply chain' related or whether there is something more specific? They've been out of stock everywhere for several months. It looks like you can place an order but not knowing whether it's going to be weeks/months/unknown makes that a bit of a gamble.

Unfortunately I have no idea; it doesn't seem like a great sign that there are no specific dates.

Michael
 
I do not use multichannel input on the RPi. I only use multichannel output via HDMI, because I use CamillaDSP as a digital crossover with 65k-tap FIR filters. There is only one HAT for analog multichannel input.
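Worth noting in the context of the latency discussion: a FIR crossover of that length contributes significant delay on its own. This sketch assumes the filter is linear phase (an assumption; a minimum-phase design avoids most of this delay) and assumes a 96 kHz sample rate:

```python
# Group delay of a linear-phase FIR filter is (N - 1) / 2 samples.
# Assumptions: the 65k-tap crossover is linear phase and runs at 96 kHz;
# a minimum-phase FIR of the same length would not incur this delay.
def fir_group_delay_ms(taps: int, sample_rate: int) -> float:
    """Latency contributed by a linear-phase FIR, in milliseconds."""
    return 1000.0 * (taps - 1) / 2 / sample_rate

print(round(fir_group_delay_ms(65_536, 96_000), 1))  # 341.3
```

So for this kind of crossover, the filter itself dwarfs CamillaDSP's buffering delay, which again is fine for music playback but not for live sound.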
 
Thank you so much mdsimon2 for sharing. Most excellent work.

I have an RPi4 and the Okto DAC8 Pro, but I have not been able to output multichannel yet. I am not a technical/Linux person. Do you think I can use your Okto config file and just change the input channels from 2 to 8 and the dest-sources pairs?
What worked so far (with just a vanilla setup):
- 2 channels rpi4 (lms or Moode) -> okto pure usb
- 6 channels input through hdmi -> Meridian hd621 -> okto pure aes
- 6 channels Windows 10 with Foobar2000 -> okto pure usb

What does not work: Moode playing a 6-channel file -> Okto pure USB results in the Okto playing all input channels simultaneously on all output channels. I have not activated the CamillaDSP that comes with Moode yet.
 

I really do not know much about Moode but I do not think it supports multichannel audio at this time. You could try using my configuration, specifying 8 channels on the loopback and adjusting the routing matrix (0->0, 1->1, 2->2, 3->3, etc.) accordingly. Then specify the loopback as your Moode output device and see if it works.
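A straight-through 8-channel routing matrix like the one described (0->0 through 7->7) can be generated rather than written by hand. This is a sketch only: the dict shape mirrors CamillaDSP's mixer config (channels in/out, then dest/sources pairs), but check the field names against the CamillaDSP docs for your version before pasting anything into a config:

```python
# Build a passthrough mixer mapping (0->0, 1->1, ..., N-1 -> N-1) in the
# shape of a CamillaDSP mixer section. Assumption: field names (dest,
# sources, channel, gain, inverted) follow the CamillaDSP mixer schema --
# verify against the docs for your CamillaDSP version.
def passthrough_mixer(channels: int) -> dict:
    return {
        "channels": {"in": channels, "out": channels},
        "mapping": [
            {
                "dest": ch,
                "sources": [{"channel": ch, "gain": 0, "inverted": False}],
            }
            for ch in range(channels)
        ],
    }

mixer = passthrough_mixer(8)
print(len(mixer["mapping"]))                          # 8
print(mixer["mapping"][3]["sources"][0]["channel"])   # 3
```

You could dump the resulting dict to YAML (e.g. with PyYAML) and splice it into the config under a mixer name of your choosing.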

Michael
 
Indeed, Moode Audio doesn't support multichannel through USB (only through HDMI). I read that it may be supported in Moode 8, whose newer kernel will support more DACs and multichannel.
 
Good question, I made some measurements of this in the past, it largely depends on chunk size and whether resampling is enabled. In general with the default settings shown here for a 96 kHz sample rate, 2048 chunk size and balanced async resampling the additional latency from processing is 60 ms. If you disable resampling this drops to 45 ms. If you halve the chunk size the processing delay roughly halves.

For A/V applications I use a HDMI extractor downstream of an Apple TV. The video processing delay between the Apple TV and my TV is around 60 ms so this matches my CamillaDSP processing delay almost exactly. The Apple TV also has the ability to adjust audio / video delay but I do not find the need to use that.

Unfortunately for live sound applications this probably will not work very well.

Michael

I was going to ask about various other alternatives to reduce the delay, but it looks like the baseline for inaudible time delays is 10 microseconds (0.01 ms), which is orders of magnitude lower, so even splitting the computational load among multiple cores or clustering Pis is going to be insufficient. It looks like the way forward on this is to leave the actual processing to an FPGA. So we'd need a container that can parse CamillaDSP-generated filters and program the FPGA?

How do commercial DSP solutions manage this?
 

Definitely out of my wheelhouse, so I don't have a real answer on how you could potentially use CamillaDSP with much lower latency, but non-Dirac miniDSPs have delays in the range of 2-3 ms, and I've measured similar delays in DACs. I don't think the microsecond delays you are discussing are feasible.

If latency is more of a concern than processing power a commercial DSP definitely seems like the way to go.

Michael
 
but it looks like the baseline for inaudible time delays is 10 microseconds (0.01 ms), which is orders of magnitude lower

That doesn’t look right. Where did you get that figure? Are you using inter-aural, just-noticeable times here?
 
I was using inter-aural from here as the lower boundary because it is what is used for the psycho-acoustic illusions encoded in multi-channel (stereo, etc) audio.

But uh, that's post-hoc reasoning, I just grabbed the first paper that roughly had what I was looking for with the search terms: lowest noticeable auditory delay

Noticeable visual-auditory desync is probably more forgiving, because most people listening to DSP-enhanced audio are probably also watching video rather than a live performance, and most video is 60-120 fps, so you just have to have all of the data there by the next frame.
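The frame-budget framing can be put in numbers: at typical video frame rates, the audio only needs to be ready within one frame period, which is vastly looser than inter-aural thresholds:

```python
# A/V sync budget: at common video frame rates, audio only has to arrive
# within one frame period -- milliseconds, not the microseconds relevant
# for inter-aural (localization) timing.
def frame_period_ms(fps: float) -> float:
    """Duration of one video frame in milliseconds."""
    return 1000.0 / fps

print(round(frame_period_ms(60), 1))    # 16.7
print(round(frame_period_ms(120), 1))   # 8.3
```

That said, even the ~8-17 ms per-frame budget is still well below the 45-60 ms processing delay discussed above, which is why the HDMI-extractor trick (exploiting the TV's own video delay) matters.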
 

What is your application, exactly?
Perhaps it is something that pro-audio already has a solution for?

There are techniques to get the latency down to zero or near-zero with a DSP convolution engine (see the Gardner paper, et al.): so-called non-uniform partitioning, etc. But other conditions must also be present in the processing chain to achieve minimal latency: min-phase filtering, short hardware transit times (buffer size), etc.
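The idea behind non-uniform partitioning can be sketched numerically. This is a simplified illustration, not Gardner's exact scheduling: the filter is split into partitions of doubling size (two of each size here, a common choice so each larger FFT block has time to compute), and the input/output latency is set by the first, smallest partition rather than by the full filter length:

```python
# Sketch of a non-uniform partition schedule for low-latency convolution.
# Assumption: two partitions per size before doubling (one common variant);
# real engines tune this schedule to the hardware. I/O latency is governed
# by the first (smallest) partition, not by the total filter length.
def partition_schedule(first: int, total_taps: int) -> list[int]:
    parts, size = [], first
    while sum(parts) < total_taps:
        parts += [size, size]   # two blocks of this size, then double
        size *= 2
    return parts

sched = partition_schedule(64, 65_536)
print(sched[:6])                       # [64, 64, 128, 128, 256, 256]

# Latency is roughly one first-partition period: 64 samples at 96 kHz.
print(round(1000 * 64 / 96_000, 2))    # 0.67
```

So even a 65k-tap filter can, in principle, run with sub-millisecond buffering latency, which is why this is FPGA/DSP territory rather than something a chunked pipeline like CamillaDSP's targets.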
 