
Best way to apply a time correction (FIR) to a 2.1 system

Thanks @ernestcarl and @Keith_W for the detailed descriptions and possible paths to proceed.
Overall, I see the FIR 'exercise' as a potential boost in system quality and a good opportunity to enrich my knowledge and 'toolbox'; even if the former improvement turns out insignificant, I am happy with only the latter.

Overall, from the correspondence above, and though I did not try it in person, I feel that the approach Acourate takes - where group delay (GD) is treated directly - is more elegant, straightforward, and analogous to what is done in the frequency domain; unfortunately, I do not have the cash to spare at the moment, so I will continue using rePhase for that (which, btw, is a VERY good alternative).
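(Conceptually, what both tools build is a phase-only FIR: unity magnitude with the measured excess phase inverted. Below is a minimal, self-contained Python sketch of that idea, with a made-up excess-phase curve standing in for a real measurement:)

```python
# Minimal sketch: build a phase-only correction FIR from an excess-phase
# curve; conceptually what rePhase/Acourate do, greatly simplified.
import numpy as np

fs, n = 48000, 8192
freqs = np.fft.rfftfreq(n, d=1.0 / fs)

# Hypothetical excess phase: a lagging low end below ~200 Hz (made up).
excess_phase = -0.8 * np.pi * np.exp(-freqs / 120.0)

H = np.exp(-1j * excess_phase)       # unity magnitude, inverted phase
h = np.fft.irfft(H, n)
h = np.roll(h, n // 2)               # center the impulse -> linear-phase latency
h *= np.hanning(n)                   # window to tame truncation ripple
print(f"FIR length {n} taps, centering delay {n / 2 / fs * 1000:.0f} ms")
```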

My current 'schema' for FIR generation with rePhase follows Bear's REW & rePhase tutorial (Acoustic treatment & Calibration Rephase - The Tutorial) and threads available on ASR (https://www.audiosciencereview.com/forum/index.php?threads/rew-and-re-phase-for-dummies.18891). (@Keith_W this is the source for the FDW = 6-8 cycles setting; @mitchco seems to use psychoacoustic smoothing instead.)
I am also applying a conservative design approach, limiting the rePhase Q values to < 2.0 (which I believe is conceptually equivalent to Acourate's caution against fixing GD with too-high Qs).

I guess that when I have a better grasp of the theory and the impact of the various parameters, I will let myself loose and explore bolder configurations.

As I see it, my main acoustic 'challenge' (as can be seen in the snapshot provided by @ernestcarl) is the lag of the low frequencies (up to ~200 Hz), which per my understanding is not a sub-to-main integration issue but rather a room artifact, as it occurs even when no sub is engaged (i.e., Rt & Lt alone). Since heavy room treatment is not practical (the system sits in my living room), FIR seems like the main path to relieving some of the imperfections.
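(For anyone who wants to put a number on this kind of low-frequency lag, here is a minimal Python sketch that estimates excess group delay from the phase of an impulse response; it assumes a mono IR exported from REW under the hypothetical filename "ir.wav":)

```python
# Minimal sketch: estimate excess group delay of a measured impulse response.
# Assumes a mono IR exported from REW as "ir.wav" (hypothetical filename).
import numpy as np
from scipy.io import wavfile

fs, ir = wavfile.read("ir.wav")
ir = ir.astype(np.float64)
ir /= np.max(np.abs(ir))                      # normalize

peak = np.argmax(np.abs(ir))                  # main peak = bulk arrival time
spectrum = np.fft.rfft(ir)
freqs = np.fft.rfftfreq(len(ir), d=1.0 / fs)

phase = np.unwrap(np.angle(spectrum))
phase += 2 * np.pi * freqs * peak / fs        # remove bulk time-of-flight delay

# Group delay is the negative derivative of phase w.r.t. angular frequency.
gd_ms = -np.gradient(phase, 2 * np.pi * freqs) * 1000.0

lf = (freqs > 20) & (freqs < 200)
print(f"Mean excess group delay 20-200 Hz: {gd_ms[lf].mean():.1f} ms")
```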

I will be glad to share more after applying some of the good practices mentioned (and will likely need some supporting tips) - hence, I am keeping this channel open ;)

You should try to obtain as “clean” and complete a time-referenced “raw” set of measurements as possible — perhaps even capture some off-axis spatially spread measurements as well. Then re-measure the sub(s) with min-phase/IIR correction pre-equalization in place. From there you can build any number of filter iterations and crossover combination designs via REW and rePhase — even if it’s only based on limited electronic simulation/modelling.
 
You should try to obtain as “clean” and complete a time-referenced “raw” set of measurements as possible

I essentially agree with this fundamental and important suggestion. ;)

Let me again recommend, therefore, that you try (just as your first step) something very primitive, simple, and reproducible: recording the air sound at your listening position with a (measurement) microphone on a second, independent PC, using the "time-shifted multi-frequency rectangular tone burst signal sequences having a time-zero marker" and also the "precise air-sound wave-shape matching method"; I shared all of these in my post #5 above. These methods can be applied regardless of the FIR and/or IIR you would use.

You can try these without using any advanced software tools that have (unknown) latencies and black-box-type (to you, and to me) processing; you need only a measurement microphone and free audio recording/analysis software like Audacity, or an old version of Adobe Audition. As I wrote in my post #5, please simply PM me if you would like to use any (or all) of the test tone signals I prepared for these measurements and tunings.
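(For illustration only, since these are not @dualazmak's actual test files, which he offers by PM: a minimal Python sketch of the general idea, rectangular tone bursts at several hypothetical frequencies preceded by a one-sample click as the time-zero marker:)

```python
# Minimal sketch of a multi-frequency tone-burst test sequence with a
# time-zero marker click; illustrative only, not the author's actual files.
import numpy as np
from scipy.io import wavfile

fs = 48000
freqs_hz = [40, 80, 160, 315, 630, 1250]      # hypothetical burst frequencies
burst_cycles = 8                              # cycles per rectangular burst
gap_s = 0.5                                   # silence between bursts

parts = [np.zeros(int(0.1 * fs))]             # lead-in silence
parts.append(np.array([1.0, -1.0]))           # one-sample click = time-zero marker
parts.append(np.zeros(int(gap_s * fs)))

for f in freqs_hz:
    n = int(round(burst_cycles * fs / f))     # whole cycles -> clean on/off edges
    t = np.arange(n) / fs
    parts.append(0.5 * np.sin(2 * np.pi * f * t))   # rectangular-windowed burst
    parts.append(np.zeros(int(gap_s * fs)))

signal = np.concatenate(parts)
wavfile.write("tone_bursts.wav", fs, (signal * 32767).astype(np.int16))
```

(Recording the playback of such a file on a second PC lets you read each burst's arrival time directly against the click, with no software loopback involved.)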
 
Although quite belated... let me inquire about a minor point.
In the title of this thread, does OP @ErLan mean "way" by his wording "wat"?
 
...you need only a measurement microphone and free audio recording/analysis software like Audacity, or an old version of Adobe Audition...

Thank you @dualazmak for the example...

Yet, I really don't think measurements using Audacity or Adobe Audition will be better than what can already be achieved staying within just REW -- the only hurdle for me (or anyone else, for that matter) is the learning curve.

Magnitude, Phase, and "Filtered" Impulse Response views (sub+center speaker) with preliminary channel delays already applied:
[screenshots attached]

"Wavelets" which are roughly equivalent to your "tone burst" methodology after sub -1.6ms delay applied:
[screenshots attached]

Phase "Linearized" Center coaxial HF+MW drivers at ~0.75m distance (frequency dependent window 8 cycles):
[screenshots attached]

Sub+Center speaker ETC, GD, and Wavelet/Fourier Spectrogram (40dB scale) views:
[screenshots attached]

Results are pretty good, at least in this example in my own room, with minimal visible acausal bumps/dips/oscillation artifacts. Pre-ringing from the subwoofer bass phase filtering (8 kHz sample rate, 113 ms FIR centering delay, manual MTW setting) can really only be heard far off-axis, when standing up or moving well away from the main desk listening/sitting position.
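(For anyone wondering where that pre-ringing comes from: a linear-phase FIR is time-symmetric about its centering delay, so part of its energy arrives before the main peak. A minimal Python sketch using a generic scipy low-pass, not the actual rePhase/MTW filter, makes this visible:)

```python
# Minimal sketch: pre-ringing of a linear-phase FIR low-pass.
# Generic scipy design, not the actual rePhase/MTW subwoofer filter.
import numpy as np
from scipy.signal import firwin

fs = 8000                                  # low rate, as used for bass FIRs
taps = firwin(2047, 80, fs=fs)             # linear-phase low-pass at 80 Hz
center = np.argmax(np.abs(taps))           # centering delay in samples

print(f"Centering delay: {center / fs * 1000:.1f} ms")

# A linear-phase FIR is symmetric about its center, so energy exists
# before the main peak: that is the pre-ringing.
pre = np.sum(taps[:center] ** 2)
total = np.sum(taps ** 2)
print(f"Energy arriving before the peak: {100 * pre / total:.1f} %")
```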
 
....
Yet, I really don't think measurements using Audacity or Adobe Audition will be better than what can already be achieved staying within just REW -- the only hurdle for me (or anyone else, for that matter) is the learning curve.
....

I can well understand your "staying within just REW"; it is fully up to your preference!
Yes, as you would know, I myself also occasionally use REW (e.g., ref. here and thereafter, esp. #20, #21 and #22 in the very early stage of my project); REW is a really great software tool, I agree with you. :D

I know, however, that some beginners do not fully understand the nature/feasibility/reliability of loopback recording/hearing/monitoring/automated analysis by REW within a single PC; this is why I suggest first trying the primitive "time-shifted multiple-frequency rectangular tone burst sequence with time-zero marker" and "wave-shape matching" on air-sound data recorded by a second, independent PC (even a tiny notebook PC can do it well) using Audacity and/or Adobe Audition. Those people can compare (validate) the straightforward results with those given by REW, and then proceed to full utilization of REW, like what you (and I, partially) are doing/enjoying.
 
Great thread everyone. I'll be diving back into my subs-to-mains integration this winter when I have more time, so I love finding this sort of thread to glean new approaches and ideas from. Wish I had something more to contribute constructively to the discussion, but the processes I use with REW and rePhase have already been covered.

After swapping out woofers and tweeters last spring, I level-matched the drivers and cooked up a simple XO with minor corrections up to 300 Hz and called it good enough (after being sucked into the vortex for a week), so I'm currently "cheating" and using DLBC in a very limited way to deal with the time-domain stuff at the moment. Gotta say, Dirac sounds pretty good, perhaps better than I've been able to achieve in the past.


[attached: HLC graph.jpeg]
 
...so I'm currently "cheating" and using DLBC in a very limited way to deal with the time-domain stuff at the moment...

jRiver is great and hard to replace for me… but I wish they had an “easy” WDM-like feature for Linux. Often I send an optical S/PDIF signal from my Linux desktop to an old Win laptop with jRiver, since I depend on its upmixing capability.

I've also been very tempted to get a miniDSP HT/HTx — but I do not appreciate that there is no option for fully manual input PEQs and in/out FIRs. I don’t mind Dirac as an extra opt-in service, but I’d rather do the EQ manually myself. I know processing is quite limited in this unit; however, even capability similar to the ancient 2x4HD would have been sufficient for me.
 
jRiver is great and hard to replace for me… but I wish they had an “easy” WDM-like feature for Linux.

The image I showed is using @mitchco's Hang Loose Convolver app, which is available in Linux form. Maybe worth checking out if you're willing to explore a stable virtual cable in the Linux world and use plugins to manage things outside of jRiver.
 
Thanks @ernestcarl.
...perhaps even capture some off-axis spatially spread measurements as well...
While I like the idea of taking spatially spread measurements, I guess it also has to follow 'good practices' so as not to lose the targeted signal, and to best reflect the personal psychoacoustic perception of the sound.
In view of that, are there any rules of thumb regarding the weighting of the spatially spread measurements vs. those taken at the MLP (which, BTW, are also an average of 5/9/13 points, depending on the schema followed)?
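(No authoritative answer here, but to make the weighting question concrete, here is a minimal Python sketch of one possible scheme; the 70/30 split and the array shapes are purely hypothetical:)

```python
# Minimal sketch: weighted average of MLP vs. spatially spread measurements.
# The 70/30 split is a hypothetical example, not an established rule.
import numpy as np

def weighted_average_db(mlp_db, spatial_db, w_mlp=0.7):
    """mlp_db: (n_mlp, n_freq) array of dB magnitudes near the MLP.
    spatial_db: (n_spatial, n_freq) array of wider spatial measurements.
    Averaging is done on power, then converted back to dB."""
    mlp_pow = np.mean(10 ** (mlp_db / 10), axis=0)
    spatial_pow = np.mean(10 ** (spatial_db / 10), axis=0)
    combined = w_mlp * mlp_pow + (1 - w_mlp) * spatial_pow
    return 10 * np.log10(combined)

# Example with random placeholder data: 9 MLP points, 12 spatial points.
rng = np.random.default_rng(0)
mlp = rng.normal(80, 2, size=(9, 512))
spatial = rng.normal(78, 3, size=(12, 512))
avg_db = weighted_average_db(mlp, spatial)
print(avg_db[:5])
```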

...does OP @ErLan mean "way" by his wording "wat"?
Correct, "wat" == "way", but from some reason I did not manage to edit title :facepalm: sorry for that.

jRiver is great and hard to replace for me…
A newbie question: I am currently using the EQAPO convolver; nevertheless, I have been thinking of jRiver as an alternative (or just another tool to satisfy my strong desire to play :rolleyes:). Is jRiver capable of bass management and DSP for a 2.1 configuration (3 channels, where the sub channel is a sum of Rt+Lt, and Rt/Lt/Sub are crossed over at 80 Hz)?
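(Independent of jRiver's actual internals, the signal flow being asked about can be sketched generically. A minimal Python illustration of 2.1 bass management, with an LR4 crossover at 80 Hz and the sub fed from the L+R sum:)

```python
# Minimal sketch of 2.1 bass management: sub = LP(L+R), mains = HP(L), HP(R).
# Generic LR4 (two cascaded 2nd-order Butterworth) at 80 Hz; illustrative
# only, not how jRiver implements it internally.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 48000
sos_lp = butter(2, 80, btype="low", fs=fs, output="sos")
sos_hp = butter(2, 80, btype="high", fs=fs, output="sos")

def lr4(sos, x):
    # A 4th-order Linkwitz-Riley is a 2nd-order Butterworth applied twice.
    return sosfilt(sos, sosfilt(sos, x))

def bass_manage(left, right):
    sub = lr4(sos_lp, 0.5 * (left + right))   # mono sum, low-passed at 80 Hz
    return lr4(sos_hp, left), lr4(sos_hp, right), sub

# Quick test with noise:
rng = np.random.default_rng(0)
L = rng.standard_normal(fs)
R = rng.standard_normal(fs)
left_out, right_out, sub_out = bass_manage(L, R)
```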
 
As you may well know, I am a Windows guy, and I mainly use JRiver MC as my music (audio-visual) player together with the DSP "EKIO" (IIR) as a system-wide one-stop DSP center. EKIO receives all the digital audio signals through ASIO/VASIO/VAIO routings from JRiver MC and all the other audio-output software/hardware (various internet browsers, Adobe Audition, Audacity, live TT/LP vinyl through an ADC, etc.), and EKIO feeds the DSP-processed multichannel digital tracks into the 8-channel DAC OKTO DAC8PRO.

In order to enable the above digital routings within the Windows PC, I use VB-AUDIO MATRIX as a system-wide ASIO/VASIO/VAIO routing center.

If you are interested, you can find all the details of my latest system setup as of June 26, 2024 (hardware and software) in my post #931 on my project thread.
 
Thread title typo corrected. Thanks
Thank you!
...I mainly use JRiver MC as my music (audio-visual) player together with the DSP "EKIO" (IIR) as a system-wide one-stop DSP center...
Thanks for sharing, but it is still hard for me to understand where you do the upmixing (2.0 --> 2.X/5.X/7.X): in JRiver MC, EKIO, or VB-Audio? Can you please clarify?
 
Thank you!

Thanks for sharing, but it is still hard for me to understand where you do the upmixing (2.0 --> 2.X/5.X/7.X): in JRiver MC, EKIO, or VB-Audio? Can you please clarify?

To set JRiver to output in 2.1:
- Go to Tools > Options
- In the "Audio" tab, choose "DSP and Output Format"
- In the "Output Format" section, look at the "Channels" drop-down box. Choose 2.1 or anything you like.

If you are using JRiver to output to Hang Loose Convolver:
- In the "Output Format" section, choose 2.0

If you are using JRiver for convolution:
- In the "Output Format" choose the number of speakers/subs you want to control
- Tick the "Convolution" option and choose the .CFG file.
 
Thank you!

Thanks for sharing, but it is still hard for me to understand where you do the upmixing (2.0 --> 2.X/5.X/7.X): in JRiver MC, EKIO, or VB-Audio? Can you please clarify?

In my setup, the system-wide DSP center "EKIO" takes care of all the XO, EQ, phase, group delay, individual gains, etc. The entire signal path and the details of the EKIO DSP configuration can be found in my post #931 on my project thread.

Here, I paste some of the key diagrams clarifying your points; EKIO receives 2.0 stereo from JRiver MC for DSP processing.
[attached diagrams: Fig03, Fig11, Fig12, Fig14]
 
I paste some of the key diagrams clarifying your points
Not the OP, but the question was about upmixing; the diagrams are rather detailed, but I guess it's not a feature that exists there?
 
Not the OP, but the question was about upmixing; the diagrams are rather detailed, but I guess it's not a feature that exists there?

By "upmixing", do you mean real-time (on-the-fly == while-listening-to-music) "up-sampling" like 44.1 kHz 16 bit CD-format into 88.2 kHz (or 96 kHz) 24 bit, or real-time "format change/conversion" like DSD into PCM or FLAC into PCM?
 
Not the OP, but the question was about upmixing,
Look at diagram #11 - it's the same sort of channel mapping miniDSP uses. I know, clear as mud.....
 
By "upmixing", do you mean real-time (on-the-fly == while-listening-to-music) "up-sampling" like 44.1 kHz 16 bit CD-format into 88.2 kHz (or 96 kHz) 24 bit, or real-time "format change/conversion" like DSD into PCM or FLAC into PCM?
It was stated as "2.0 --> 2.X/5.X/7.X"; in a JRiver context that refers to JRSS, so it's talking about surround extraction from a stereo source.
 
It was stated as "2.0 --> 2.X/5.X/7.X"; in a JRiver context that refers to JRSS, so it's talking about surround extraction from a stereo source.
Correct, that was my question.
It seems that in @dualazmak's setup, the EKIO SW is responsible for that.
I also learned that JRiver offers such a module under JRSS (thanks @3ll3d00d).
 