
AutoEQ for speakers and headphones

pierre

Hello everyone,

I wrote a piece of software that can equalise your headphones or speakers (anechoic and in-room). The software is open source and will remain so.
I am looking for feedback on what would be useful to you.

What can it do today?
  • the software is available as a CLI (a command-line tool that geeks will like) and an App that most people will prefer
  • data acquisition:
    • loading a text file with freq/response and a target file
    • loading headphone data with a target curve
    • loading data from spinorama.org (if a speaker is in the database, you have access to it from the app)
    • acquiring in-room data from your microphone
  • optimising: finding an EQ that either gets close to a target or improves a score (Olive scores for speakers / headphones)
  • configure the EQ how you want
  • Listening
    • you can swap between with / without EQ quickly to see if you like the EQ or not; you can also edit the EQ dynamically
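
To give a concrete flavour of the EQ part: a parametric EQ is typically a chain of peaking biquad filters. Here is a minimal Rust sketch of one such filter using the standard RBJ Audio EQ Cookbook formulas; the names `Biquad` and `peaking` are illustrative, not the project's actual API.

```rust
/// One peaking-EQ biquad (RBJ Audio EQ Cookbook), Direct Form I.
/// Illustrative sketch only; not the autoeq project's API.
#[derive(Debug, Clone, Copy)]
struct Biquad {
    b0: f64, b1: f64, b2: f64, // feed-forward coefficients (normalized by a0)
    a1: f64, a2: f64,          // feedback coefficients (normalized by a0)
    x1: f64, x2: f64,          // input history
    y1: f64, y2: f64,          // output history
}

impl Biquad {
    /// Boost/cut of `gain_db` at `f0` Hz with quality `q`, sample rate `fs`.
    fn peaking(fs: f64, f0: f64, gain_db: f64, q: f64) -> Self {
        let a = 10f64.powf(gain_db / 40.0);
        let w0 = 2.0 * std::f64::consts::PI * f0 / fs;
        let alpha = w0.sin() / (2.0 * q);
        let a0 = 1.0 + alpha / a; // used to normalize all coefficients
        Biquad {
            b0: (1.0 + alpha * a) / a0,
            b1: -2.0 * w0.cos() / a0,
            b2: (1.0 - alpha * a) / a0,
            a1: -2.0 * w0.cos() / a0,
            a2: (1.0 - alpha / a) / a0,
            x1: 0.0, x2: 0.0, y1: 0.0, y2: 0.0,
        }
    }

    /// Filter one sample.
    fn process(&mut self, x: f64) -> f64 {
        let y = self.b0 * x + self.b1 * self.x1 + self.b2 * self.x2
              - self.a1 * self.y1 - self.a2 * self.y2;
        self.x2 = self.x1; self.x1 = x;
        self.y2 = self.y1; self.y1 = y;
        y
    }
}
```

A handy sanity check: with `gain_db = 0.0` the numerator and denominator coincide and the filter passes the signal through unchanged.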

What could it do?
  • scan your head and compute your HRTF (kind of working)
  • extend the in-room part and do it for complex setup (easy)
  • show you how a measured speaker will sound in your room (almost working)
  • more complicated: tell you where to put your speakers in your room to get good sound
  • your idea?

How to use?
  • currently only tested on macOS (Windows and Linux are not far behind)
  • you can compile the CLI ... should be easy: [autoEQ](https://github.com/pierreaubert/autoeq)
  • you can compile the app too; not as easy, but doable: [autoEQ App](https://github.com/pierreaubert/autoeq-app)
  • why can't I download an app? It will come soon. I need access to Linux and Windows machines to build the other versions, and I need a dev account to build on macOS.

What does it look like today?

Screenshot 2025-10-05 at 16.59.28.png


Select a speaker, here the JBL M2

Screenshot 2025-10-05 at 16.59.50.png


Optimisation running

Screenshot 2025-10-05 at 16.59.57.png


Tons of graphs to understand what's happening; you can also see the player with a spectrum analyser
and a mini view of the current EQ. A/B is done with the on/off button. The wheel in between allows you to configure
the EQ.

Screenshot 2025-10-05 at 17.03.41.png


FAQ:
  • why not a web app? It could be a web app; it is currently very fast because it is a native app, but I may be wrong. I will build a WASM version and see if it is fast enough.
  • why build an app like this? There are many already. Yes, sure. I wanted to learn Rust and this is my project for it.
I will update when the apps are working and easy to test.
Feedback welcome.
 

UUh - I want to be a beta tester! Would you provide me with an executable?
They are coming. The first two are Linux x86 and Mac with an ARM processor.
You can download here: https://github.com/pierreaubert/autoeq/releases/tag/v0.4.44

Windows versions are coming real soon now.

- I did struggle with macOS security and it took forever to get it to work.
- If someone wants a version for an Intel Mac, let me know.
- For Windows, I am still fighting the system, but I am close now, both on x86 and ARM.

What to expect: the software can do a few things, but it is still in an experimental phase. What I would really like to understand is what would be useful to you all.
 
Hey, so I took an hour or so to test.

For an alpha, a really cool state! It probably does not make sense to comment on what is not working yet. Awesome name and logo. No crashes!

So for my use case... I would like to insert a few measurements (woofer, mids, tweeter) and be able to correct them as a "team" and auralize them together. Visual overlays would be great to evaluate the X-Over. I am looking forward to the auralization feature! I tried to import CSVs from REW, but that did not work.

Can you compile for Win 11, too?
 
Hey, so I took an hour or so to test.

For an alpha, a really cool state! It probably does not make sense to comment on what is not working yet. Awesome name and logo. No crashes!

So for my use case... I would like to insert a few measurements (woofer, mids, tweeter) and be able to correct them as a "team" and auralize them together. Visual overlays would be great to evaluate the X-Over. I am looking forward to the auralization feature! I tried to import CSVs from REW, but that did not work.
For N measurements: that's coming.
For auralization: not yet.

Can you compile for Win 11, too?
Yes, but that's painful and I need to find a better way. I am very close and yet still not there. It should be here next week if I have some time to work on it. I have set up a small lab with a machine for each OS instead of trying to understand what the hell is happening on GH or in Docker. I am also working on the iPad version.
 
What framework are you using? Qt?
I wish; that would be easy. I want to learn Rust, so Rust it is. The framework is Tauri (all Rust for the backend with a TypeScript frontend). I will get rid of the frontend and also do it in Rust with gpui when it is stable. I put a minimum of logic in the frontend to make it easy.
 
I made some progress. It is not yet visible in the app, but:
- you can optimise crossovers
- you can play a sweep and the software records each speaker in turn, so you have nothing else to do
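
A common choice for this kind of measurement sweep is the exponential (log) sine sweep. A hedged sketch of a generator, not necessarily what the software uses:

```rust
/// Exponential (log) sine sweep from f1 to f2 Hz over dur_s seconds.
/// Illustrative sketch only; the project's implementation may differ.
fn log_sweep(fs: f64, dur_s: f64, f1: f64, f2: f64) -> Vec<f64> {
    let n = (fs * dur_s) as usize;
    let k = (f2 / f1).ln(); // sweep rate constant
    (0..n)
        .map(|i| {
            let t = i as f64 / fs;
            // Instantaneous phase of an exponential sweep: the frequency
            // rises as f1 * exp(k * t / dur_s), and the phase is its integral.
            let phase =
                2.0 * std::f64::consts::PI * f1 * dur_s / k * ((k * t / dur_s).exp() - 1.0);
            phase.sin()
        })
        .collect()
}
```

The exponential shape spends more time at low frequencies, which improves the signal-to-noise ratio where rooms are hardest to measure.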

newplot (2).png

next steps:
- integrate the upmixer into the player
- wire all the features into the UI
- make a version per platform
 
I apologise if I missed something, but how does the upmixer work?
For now, it's a basic one:
- split the stream into 3 zones: low frequencies (60-120 Hz), room frequencies (120-350 Hz), higher frequencies
- use an FFT to decorrelate the signal
- send part of the diffuse field to the rear channels, keeping the energy more or less constant
- construct an LFE stream
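
The zone split in the first step can be sketched as a per-bin classifier over the FFT; the boundary values come from the list above, and the names are hypothetical, not the project's code:

```rust
/// Which processing zone an FFT bin belongs to (hypothetical sketch).
#[derive(Debug, PartialEq)]
enum Zone {
    Low,  // candidate for the LFE stream
    Room, // kept anchored in the front channels
    High, // candidate for diffuse-field extraction to the rears
}

/// Classify bin `bin` of an `fft_len`-point FFT at sample rate `fs`.
/// The post gives 60-120 Hz for the low zone; here anything below
/// 120 Hz is treated the same (an assumption of this sketch).
fn zone_for_bin(bin: usize, fft_len: usize, fs: f64) -> Zone {
    let freq = bin as f64 * fs / fft_len as f64; // bin center frequency
    if freq < 120.0 {
        Zone::Low
    } else if freq < 350.0 {
        Zone::Room
    } else {
        Zone::High
    }
}
```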

Two references:
- Relevant paper: 2D-to-3D Ambience Upmixing Based on Perceptual Band Allocation
- this PhD thesis explains it much better than I can :)

See also ITU-R BS.1116-3 for the official recommendation.

A lot more in the pipeline:
- currently does 5.1, but soon 7.1 and 5.1.x up to 9.1.x (done)
- will add randomness, both in the repartition across channels and in small delays
- use of a latent algorithm (less CPU intensive)

- use of AI to split the music into stems and reconstruct it. This will be offline; the AI split is GPU/CPU heavy and needs more tuning.
 
Wow, thank you, I'll take a look.

I noticed you use Claude Code when I was looking at your GitHub; I was reading your agents md files.

I may clone it and ask Claude Code to help me understand.
 
Hi Pierre, I have seen a lot of after work activity this week on the project.

I just wanted to share a use case that is important to me: I want to sum several frequency responses (sub, woofer, midrange, tweeter) and see the effect that EQ/delay/gain applied to one frequency response (e.g., the midrange) has on the "virtual sum". Has this been thought of?
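
For reference, such a virtual sum boils down to adding the driver responses as complex numbers: a delay of tau seconds contributes a phase of -2*pi*f*tau at frequency f, and a gain in dB scales the magnitude. A minimal sketch (hypothetical names, not the app's code):

```rust
/// Sum driver responses into a "virtual sum" in dB (hypothetical sketch).
/// Each driver is (magnitude in dB per frequency point, gain_db, delay_s);
/// magnitudes alone ignore each driver's own phase, an assumption here.
fn virtual_sum_db(freqs: &[f64], drivers: &[(Vec<f64>, f64, f64)]) -> Vec<f64> {
    freqs
        .iter()
        .enumerate()
        .map(|(i, &f)| {
            let (mut re, mut im) = (0.0f64, 0.0f64);
            for (mag_db, gain_db, delay_s) in drivers {
                // gain shifts the magnitude, delay rotates the phase
                let amp = 10f64.powf((mag_db[i] + gain_db) / 20.0);
                let phase = -2.0 * std::f64::consts::PI * f * delay_s;
                re += amp * phase.cos();
                im += amp * phase.sin();
            }
            20.0 * (re * re + im * im).sqrt().log10()
        })
        .collect()
}
```

Two identical in-phase drivers sum to +6 dB; delaying one by half a period at some frequency produces a deep null there, which is exactly the crossover behaviour one wants to visualise.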
 
Progress of the weekend:

1. the upmixer is now good: it works very well with music and not too badly with voices (opera, single singer)
2. roomeq is moving toward production quality: the IIR part is getting good
3. there is an audio player with a TUI (Terminal UI) which is good enough for me to use daily. It works with my 40k-track library and supports all the plugins.
4. the binaural plugin is almost working but not great yet.

Things which are progressing but not yet good enough:
- math-related things
- the convex hull is working in 3D: not very fast but looks correct
- BEM is starting to work (Boundary Element Method)
- head scanning is starting to work with a camera and a lidar sensor for depth understanding
- HRTF computations are not yet where they should be
- UI and integration of all the components are still not moving
- Testing, fuzzing and release integration are progressing but not yet good
- Making all the plugins real audio plugins (AU or VST compatible)
- MIDI integration (I can control the software + GLM + RME via the MIDI interface, but that's not stable yet)
- Proper UI is a work in progress
- Proper A/B testing for plugins is not implemented yet
- FIR filters are here but not visible yet
- A Sunflower-like app is almost working.

You can experiment yourself if you can compile a few things. macOS works well, Linux is just behind, and Windows is kind of experimental. iOS also kind of works, but I have not been able to sign the binaries properly.

Pierre

TUI in action, for people who remember mutt :) upmixing from stereo to 5.1.4, LUFS monitoring, level meters, mute/solo, etc.

Screenshot 2025-11-25 at 22.06.52.png


The plugin view allows you to build racks of plugins; you can see the list of available plugins

Screenshot 2025-11-25 at 22.08.36.png


and you can configure them live; here is an example with the upmixer, which you can configure to fit your needs
Screenshot 2025-11-25 at 22.10.24.png
 
While I am working on scanning a head and deriving an HRTF (more complicated than I thought), I used a C++ BEM library (Boundary Element Method). One thing you can do with it is simulate a room with some speakers and see the resulting frequency response/SPL at the listening position. The software is faster on a local machine with a lot of CPU cores, but thanks to Rust it can also run in the browser via WASM.

https://roomsim.spinorama.org/

More options are coming soon. I do not claim it is correct yet, even if it looks sensible. It does output very good results on problems where analytical solutions are known (3D example here). If you increase the frequency range, it will increase the size of the mesh and the computational requirements. It can use a lot of CPU, but below 2 kHz you get a quick answer.
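
The mesh-size remark can be made concrete with the usual BEM rule of thumb of roughly six boundary elements per wavelength, which makes the element count grow with the square of the highest frequency. A back-of-the-envelope sketch, not the app's code:

```rust
/// Rough BEM element count for a room with `surface_m2` of boundary
/// surface, resolved up to `f_max_hz` (back-of-the-envelope sketch).
fn bem_element_estimate(surface_m2: f64, f_max_hz: f64) -> f64 {
    let c = 343.0;             // speed of sound in air, m/s
    let lambda = c / f_max_hz; // shortest wavelength to resolve
    let h = lambda / 6.0;      // target element edge: ~6 elements/wavelength
    surface_m2 / (h * h)       // rough element count over the surface
}
```

Doubling the upper frequency quadruples the element count, and dense BEM solver cost grows even faster than that, which is why staying below 2 kHz keeps the computation quick.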

The CLI version can handle a lot more options, like building custom rooms, etc. You should see something like this: a horizontal plane slice at the LP (here with 2 symmetrical sources), the same view in the vertical plane, and a 3D view that shows both at the same time.

1764360506433.png



Next I will add absorption on the walls and some furniture to see the impact, and more importantly model the source to match any speaker for which we have the spin, thanks to Amir and others. Ideally we would also have the phase in the measurements, but one does what one can with what one has :)

P.S.: this thing was coded in 2 days after/before work, so it is likely to have bugs. If you see irrational behaviour, please let me know.

Acknowledgements: I have not done BEM/FEM in 30 years, so my knowledge is a bit fuzzy. I used Mesh2HRTF and translated it into Rust to be able to use it in the browser or run it in apps. The research papers are really good and interesting (you can find links to them at the bottom of the GH page linked above).
 