
CamillaFIR v2.5.0 - Automated Mixed Phase FIR Generator (Python/WebUI)

This looks excellent and I'm looking forward to giving it a try... Can you elaborate on how the TDC works? Is it active cancellation of modes in the sense of using a delayed + inverted signal?
Great question.
No — TDC is not active cancellation in the sense of injecting a delayed and inverted signal to cancel room modes in the time domain. There is no extra delayed copy of the signal being added anywhere.
What TDC does instead is decision control, not signal injection.
It analyzes the group delay behaviour of the measured system and identifies frequency regions where the response is clearly non-minimum-phase (typically room modes, reflections, or late energy that cannot be corrected cleanly with EQ).
In those regions, TDC simply reduces how aggressively magnitude correction is allowed. In practice:
the measured signal is not modified
no delayed or inverted signals are added
phase is not “fixed” or forced
only the amount of EQ correction is limited
You can think of it as a trust filter:
when the time-domain behaviour looks reliable, full correction is allowed;
when the time-domain behaviour looks unreliable, the correction is softened or suppressed.
This avoids the classic problem where EQ tries to “fill in” deep modal dips that are caused by time-domain cancellations — something that usually sounds worse and adds ringing.
So TDC doesn’t cancel modes — it prevents the filter from fighting physics.
The audible effect is typically less bass overhang, cleaner transients, and more stable imaging, even if the frequency response looks slightly less flat on paper.
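The decision-control idea above can be sketched in a few lines. This is purely illustrative, not CamillaFIR's actual code: the function names, the 10 ms group-delay threshold, and the linear trust ramp are all invented for the example. The point is only the shape of the logic: measure group delay, turn it into a 0-to-1 trust weight, and use that weight to soften EQ boosts while leaving cuts alone.

```python
import numpy as np
from scipy.signal import group_delay

def tdc_trust_weights(b, fs=48000, n_freqs=512, gd_limit_ms=10.0):
    """Return (freqs_hz, trust) where trust falls from 1 toward 0 as the
    group delay of the measured FIR response grows past gd_limit_ms."""
    w, gd = group_delay((b, [1.0]), w=n_freqs, fs=fs)  # gd in samples
    gd_ms = np.abs(gd) / fs * 1000.0
    trust = np.clip(1.0 - gd_ms / gd_limit_ms, 0.0, 1.0)
    return w, trust

def limit_correction(correction_db, trust, max_boost_db=6.0):
    """Scale requested EQ boosts by the trust weight; cuts pass through.
    Boosting into an unreliable (high group delay) region is what causes
    audible ringing, so only the boost side is softened."""
    boost = np.clip(correction_db, 0.0, None) * trust
    cut = np.clip(correction_db, None, 0.0)
    return np.clip(boost + cut, -3 * max_boost_db, max_boost_db)
```

With a well-behaved (near-minimum-phase) measurement the trust stays near 1 and full correction is allowed; over a strong modal cancellation the group delay spikes, trust drops, and the requested boost is attenuated rather than applied blindly.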
 
All requirements:

python3 -m venv venv
source venv/bin/activate
pip install numpy scipy pywebio matplotlib

I can confirm that, at least on Linux Mint 22.3, plotly is also required. Once I installed it, I was able to start the application server.

Also, the GitHub page recommends installing "requests" along with the other packages, which I installed last night. So perhaps the requirements should be as follows:

python3 -m venv venv
source venv/bin/activate
pip install numpy scipy pywebio matplotlib requests plotly

Maybe take a look at it.

Thank you for this project!!!! It is really neat. I am looking forward to using it.
 
Sounds great. Seems like this utility uses a really smart piecewise approach.
 
Thanks so much for your work putting this together! I've been iterating through various settings, using ChatGPT to assess the before and after sweeps via REW along with your output plots. I'm using Roon (by the way, Roon does not like the .cfg files, but is happy if I only load the zipped .wav files).

This is the closest to being able to press a button and get decent filters (for me at least!!) - looking forward to seeing how this develops!
 
Just downloaded latest version - works so well with my setup.

Roon seems fine with the full .zip file now (before it did not like the .cfg files, so I was stripping them out).

Would it be possible to persist the file names for Left, Right and Target files, rather than having to reload each time? Not a big deal though.

Thanks again for this!
 
Just to clarify the current behavior: this is mainly limited by web / PyWebIO security constraints.
Files selected via the browser upload controls cannot be persisted across sessions for security reasons — the application only receives a temporary file object, not a reusable file path.
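To make the constraint concrete: PyWebIO's browser upload hands the server a dict of bytes (roughly `{'filename': ..., 'content': ...}`), never a path on the user's machine. A hypothetical server-side workaround (names and cache location invented for this sketch, not a CamillaFIR feature) can at least remember the last-used files between sessions by writing the bytes to its own cache:

```python
import json
from pathlib import Path

CACHE = Path("upload_cache")  # assumed location, not part of CamillaFIR

def cache_upload(upload: dict) -> Path:
    """Persist an uploaded file's bytes server-side and record its name,
    so a later session can offer "reuse last files" without a new upload."""
    CACHE.mkdir(exist_ok=True)
    dest = CACHE / upload["filename"]
    dest.write_bytes(upload["content"])
    (CACHE / "last_files.json").write_text(json.dumps({"last": upload["filename"]}))
    return dest
```

Even with this, the browser still cannot pre-fill its file picker from a stored path; the best a web UI can do is reuse the server-side copy.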
 
choreographer.browsers.chromium.ChromeNotFoundError

Presumably a Chrome/Chromium-based browser is a requirement? And where? I've built the package on a headless Ubuntu 24.04 machine, running the browser UI from MacOS.
 
Yes — a Chromium-based browser is required, but only on the machine where the Python process renders PNGs.

The error comes from Plotly’s Kaleido backend, which uses headless Chromium for static image export.

In this setup, the UI runs in a macOS browser, but all Plotly PNG rendering happens on the headless Ubuntu 24.04 build machine. That machine must have Chromium installed (e.g. apt install chromium-browser).

The browser used to view the UI does not matter.
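A small diagnostic sketch for this situation (the helper and the candidate-name list are my own assumptions; distros ship Chromium under different binary names): check whether a Chromium-family binary is on the server's PATH before attempting PNG export, so the failure is caught early with a clear message instead of a Kaleido traceback.

```python
import shutil

# Common Chromium-family binary names; varies by distro and install method.
CANDIDATES = ["chromium", "chromium-browser", "google-chrome", "chrome"]

def find_chromium():
    """Return the path of the first Chromium-family binary found on PATH,
    or None if none is installed (i.e. Kaleido PNG export will fail)."""
    for name in CANDIDATES:
        path = shutil.which(name)
        if path:
            return path
    return None
```

Run this on the machine hosting the Python process, not the one viewing the UI.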

Edit: Updated the README on GitHub.
 
Forgive me as I'm out of my element here, but I want to learn more about this stuff and head towards advanced DSP use beyond what Dirac gives me... always gotta be chasing something. Anyway, I have a few maybe dumb questions:

How is sub integration handled if the input is only L+R? Would I first do that with sub + mains calibration on MSO, and then measure L/R after those are applied? Up to what frequency would someone want to use your program for?
 
These are actually good questions.


Sub integration:
If the FIR tool only sees L/R inputs, then sub integration is assumed to be done upstream. The program does not perform bass management, sub summing, or delay optimization internally. Whatever sub contribution exists in the L/R measurements is what gets corrected.


The correct workflow is therefore:


  1. Integrate subs and mains first using MSO, miniDSP, AVR bass management, etc. (set delays, gains, crossovers, and reduce modal variance).
  2. Measure Left and Right after that integration, with subs active.
  3. Apply FIR correction to those L/R responses.

If sub delays or gains are changed after FIR correction, the filter is no longer valid.


Frequency range:
In practice:


  • 20–300 Hz: This is where FIR correction is most effective and most repeatable. Room modes and timing dominate here.
  • 300–1000 Hz: Can be beneficial, but requires restraint and good measurement quality.
  • ~1–2 kHz: Increasingly risky due to spatial variability.
  • Above ~2–3 kHz: Generally not advisable. At that point you are correcting speaker directivity and off-axis behavior rather than the room.

Most users limit correction to bass and lower midrange and leave higher frequencies to the loudspeaker’s native response.
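The band guidance above can be expressed as a correction weight over frequency. This is an illustrative sketch only (the log-frequency taper and its edges are my simplification of the ranges quoted above, not the tool's actual behavior): full correction through the modal region, fading to zero where room correction stops being reliable.

```python
import numpy as np

def correction_weight(freqs_hz):
    """1.0 up to 300 Hz (modal region), fading linearly in log-frequency
    to 0.0 by 2 kHz, where speaker directivity dominates over the room."""
    f = np.asarray(freqs_hz, dtype=float)
    lo, hi = 300.0, 2000.0
    fade = (np.log10(hi) - np.log10(np.clip(f, lo, hi))) / (np.log10(hi) - np.log10(lo))
    return np.where(f <= lo, 1.0, np.where(f >= hi, 0.0, fade))
```

Multiplying a requested correction curve by such a weight is one simple way to confine EQ to the bass and lower midrange.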


Summary:
Sub integration is a system-level problem and should be solved first. FIR correction is best used afterward to clean up residual response errors. This is not a limitation of the tool; it reflects the physics of rooms and loudspeakers.
 
Thank you very much!
 
The last step/screen in your process is XO... I could not find any description/explanation/documentation on how to use this feature... I have L & R mains that cross over at 500 Hz (electrostats), and the mains cross over with a sub at 60 Hz... Should I just set an XO for the mains, or an XO for the mains and another XO for the mains-to-sub transition?
Measurements are as suggested above, L+sub and R+sub... Sub is already corrected to the mains...
 
It's meant to straighten a speaker's internal crossovers, but it works with any crossover, like your sub's 60 Hz. Example: the KEF R3's 500 Hz and 2800 Hz. It works really well with passive crossovers.
I have to clarify that in the program.
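For anyone wondering what there is to "straighten" at a crossover: even a textbook crossover that sums perfectly flat in magnitude leaves phase rotation behind, and that residual excess phase is what a mixed-phase FIR can linearize. A small illustrative sketch (not CamillaFIR's code; the 500 Hz value just mirrors the post above) using a 4th-order Linkwitz-Riley pair, whose LP+HP sum is a pure allpass:

```python
import numpy as np
from scipy.signal import butter, freqz

fs = 48000
fc = 500.0  # example crossover frequency from the post above

# LR4 = two cascaded 2nd-order Butterworth sections per branch
bl, al = butter(2, fc, btype="low", fs=fs)
bh, ah = butter(2, fc, btype="high", fs=fs)

w, Hl = freqz(bl, al, worN=1024, fs=fs)
_, Hh = freqz(bh, ah, worN=1024, fs=fs)
Hsum = Hl**2 + Hh**2  # square each Butterworth section (= LR4), then sum LP + HP

mag_db = 20 * np.log10(np.abs(Hsum))
phase_deg = np.degrees(np.unwrap(np.angle(Hsum)))
# The sum is flat in magnitude (~0 dB everywhere) while the phase rotates
# through a full -360 degrees around fc: pure excess phase, invisible in a
# magnitude plot but exactly what the XO step can target.
```

The same reasoning applies to a passive crossover inside a commercial speaker, which is why the feature works there too.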
 