
ASR Acourate users

I will split your question into two answers - recording and playback.

1. For recording sweeps: your Denon X3800 MUST have an ASIO driver and a microphone input with 48V phantom power to support a calibrated measurement microphone. You could use whatever mic came with your Denon, but it is unlikely to be calibrated or of high quality. I checked Denon's website, and there is no ASIO driver. So I don't think you can use your Denon to record sweeps. Instead, you will need a multichannel interface with as many DAC outputs as you need, e.g. a Motu Ultralite Mk.5 or RME Fireface UCX.

Other software (e.g. Audiolense) allows you to use a USB microphone, but there are many reports of inconsistent timing measured with USB microphones. This is because the microphone ADC is not clock synchronized to the DAC. I am starting to take the view that USB mics are good for frequency response sweeps only, and too inconsistent for time alignment. This is not based on first-hand experience; I formed this view from reading complaints about USB mics on other forums.
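To put a rough number on that clock-sync point (my own back-of-envelope arithmetic, not something measured in this thread): even a small mismatch between two free-running converter clocks accumulates over the length of a sweep.

```python
# Hypothetical figures for illustration: timing error accumulated between two
# unsynchronised clocks over the duration of one sweep.
def drift_samples(ppm_mismatch: float, sweep_seconds: float, fs: int) -> float:
    """Samples of drift accumulated over the sweep."""
    return ppm_mismatch * 1e-6 * sweep_seconds * fs

# e.g. an assumed 100 ppm mismatch over a 10 s sweep at 48 kHz:
print(drift_samples(100, 10, 48000))  # 48.0 samples, i.e. about 1 ms of timing error
```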

2. For playback: in this scenario, you measure the sweeps with Acourate and play back via the Denon. You will need either:

(2a) the Denon itself is capable of convolution. I checked Denon's website and it does not appear to support this. The online manual suggests that the Denon only supports Audyssey. I don't know how you would even get filters generated by third-party software (like Acourate) onto the device. In any case, Acourate only outputs .WAV files, so if you needed them in another format you would need additional software to do the conversion. Your Denon is also unlikely to be capable of 65536-tap FIR filters, given that AVRs usually ship with very low-powered DSP chips. Alternatively:

(2b) you use your Denon as a multichannel DAC, and use a convolver hosted on your PC (like Acourate Convolver, Hang Loose Convolver, JRiver, Roon, etc.) to do the processing and send the processed signals to the DAC. In this case, your Denon needs to be recognized by Windows as an audio device, preferably via ASIO or WASAPI. Sadly for you, that does not appear to be the case for either. There is a downside to doing this, however: FIR filters introduce latency, which will cause lip-sync problems if you use your AVR for video as well. You will need some way to adjust lip sync on your AVR.
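To put a rough number on the latency point (my own arithmetic, not a Denon or Acourate specification): a linear-phase FIR filter delays the signal by about half its length.

```python
# Approximate latency of a linear-phase FIR filter (about half its length).
def fir_latency_ms(taps: int, fs: int) -> float:
    return (taps / 2) / fs * 1000.0

print(fir_latency_ms(65536, 48000))  # ~683 ms - far more than a typical lip-sync adjustment covers
print(fir_latency_ms(1024, 48000))   # ~11 ms - barely noticeable
```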

Your Denon already has Audyssey built in. You could use it. The advantage of doing this is that your hardware already supports it, and it is likely to produce a "good enough" result. It won't produce ultimate quality - if that is what you want, you will need a major reconfiguration of your system and must be prepared to climb the DSP learning curve. You will also lose convenience and may even lose functionality - one example is getting any convolver (not just Acourate) to process HDMI audio. For this you need two additional pieces of equipment: an HDMI splitter and an interface card capable of internal routing. Audio goes into the HDMI splitter, which extracts the audio and sends it to the interface card. The card then routes the audio from its digital input into the convolver input, and the convolver outputs via the DACs on the interface card.
Hello Keith,

Many thanks for the detailed answer!
The Denon does indeed have Audyssey. It also has DLBC, which does a better job than Audyssey. It sounds really good, to be fair, but for a while now I have been curious whether Audiolense/Acourate can produce better sound than Dirac… I guess I would need to take a different route than the AVC, as you clearly explained.

All the best.-
 

Audiolense and Dirac are what I call "black box" DSPs. Measurement goes in, result comes out. You don't know what the black box has done to your signal. I was recently made aware that Audiolense does not let you manipulate curves - e.g. if you wanted to change the volume of a crossover, you can't do it. With all DSP software, if you feed it garbage measurements, you will get garbage results. With Acourate, everything is pretty manual and YOU have to make all the decisions ("black box" DSP makes the decisions for you). The advantage of "black box" DSP software is ease of use and speed. Most of the time the results are excellent. But if you are a tinkerer, and you think you can do a better job than the automation, then it's Acourate, or REW + RePhase.

The other difference is that Dirac and the DSP built into your AVR are pretty low-res. They typically have 1024 taps per channel, which limits the amount of correction you can do.
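A rough way to see why the tap count matters (my own arithmetic, not a Dirac specification): the frequency resolution of an N-tap FIR filter is roughly fs/N, so a short filter cannot resolve narrow problems in the bass.

```python
# Approximate frequency resolution of an N-tap FIR filter at sample rate fs.
def fir_resolution_hz(taps: int, fs: int) -> float:
    return fs / taps

print(fir_resolution_hz(1024, 48000))   # ~47 Hz - too coarse to target individual room modes
print(fir_resolution_hz(65536, 48000))  # ~0.7 Hz - fine enough for detailed bass correction
```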

So yes, Acourate and Audiolense will potentially give you better sound than Dirac which is built in to your AVR. How much of an improvement depends on how much correction needs to be done, and the skill of the user.
 
From the 4th Jan 2024 post by @Keith_W ... thank you very much for that! ... I now understand the ASIO reasons why one cannot use a DAC separate from the microphone interface when creating the active crossover filters.

For a Roon user, could it still make sense to use a separate DAC for playback? One can avoid Roon volume control… which is not always reliable and many folks question its quality. And one can avoid manual knob-twisting on the interface to change the volume.

The thinking is to get a multichannel DAC like an exaSound, an Okto, or a Topping DM7. After the filters are made with an RME Fireface 802 cabled to the speaker amps, the cables are switched over to the DAC, the filters are loaded into Roon, and the DAC is used for playback. Testing this with an exaSound S88 works. The exaSound S88 is nice because one can choose whether to send from Roon over USB or over the network. If one uses the network, the volume can be changed from a tablet. Though, yes, the exaSound is much more expensive than the Okto and Topping combined.

Any feedback from folks using a separate DAC for playback? Any advice for why else this might be a good idea, or why not, or what else to try? Thanks much!
 
I use a separate DAC for playback. Measurements are taken with an RME Fireface UC, and playback is via a Merging NADAC MC-8 (the 8 channel version). The reason I did this was that, back then, I believed DACs made an audible difference, and I specifically chose Merging because I believed in DSD. I don't believe this any more.

There are issues with using separate DACs for measurement and playback. I have a multichannel active system, with each driver having its own amplifier. All amps are XLR except for the tweeters, which are RCA. The Merging outputs a lower voltage through RCA than it does through XLR, whereas the RME outputs the same voltage on all channels. This means that a measurement which is flat when taken through the RME will show a drop in tweeter volume when played back through the Merging. I had to adjust the tweeter gain on the RME by taking a dozen sweeps.
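For a sense of scale (hypothetical voltages, not the Merging's published specs): a balanced output commonly carries roughly twice the voltage of the unbalanced output on the same device, which works out to about 6dB - easily enough to show up as a drooping tweeter in a verification measurement.

```python
import math

# Level difference between two output voltages, in dB (the voltages below are assumptions).
def level_difference_db(v_a: float, v_b: float) -> float:
    return 20 * math.log10(v_a / v_b)

print(level_difference_db(4.0, 2.0))  # ~6.0 dB drop if the RCA output carries half the XLR voltage
```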

I hate having to swap 8 cables over when I want to change from measurement to listening. It is a pain in the arse and having two sets of 8 cables creates a snake pit that you wouldn't believe. On top of that there is the potential for plugging cables into the wrong channel and blowing up your tweeters.

Since I already own all the equipment, I persist with it. But if I were to start over, I would get a Merging Hapi. It can have up to 32 DAC outputs and 16 mic inputs on a single device (via option cards), and is infinitely expandable via Ravenna. I could buy 10 Hapis and have 320 DAC outputs if I wanted to. Right now, the easiest way for me to get more DAC channels is to sell the RME and buy a Merging Anubis. It has 6 DAC channels and 2 mic inputs, and can be added to the NADAC via Ravenna. But to me, having too many boxes sitting around is rather inelegant, so I might just spring for the Hapi instead. Decisions, decisions.

And BTW, I don't use Roon. I use JRiver. Its volume control is absolutely reliable.

So: regardless of your reason or motivation for using a separate DAC for measurement and playback, of course it is possible. I just don't think it's a good idea any more.
 
That is very helpful information. Thank you. Was just playing with the equipment I have handy today… though I have started considering some Merging equipment… and have a question.

Has anyone successfully used Acourate Convolver to send to an exaSound DAC via the exaSound ASIO?

When I use Acourate Convolver to send 24-88 to an RME Fireface 802, the RME dials the buffer size up to 2048 samples. But with the exaSound ASIO, Acourate Convolver reports only 512 samples and the sound is all chopped up.
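A rough illustration of why a small, fixed buffer can cause dropouts (my own numbers, assuming 88.2kHz): the convolver has to fill each ASIO buffer before it is played out, so the buffer size sets the processing deadline per callback.

```python
# Time available to fill one ASIO buffer at a given sample rate (88.2 kHz assumed here).
def buffer_deadline_ms(buffer_samples: int, fs: int) -> float:
    return buffer_samples / fs * 1000.0

print(buffer_deadline_ms(512, 88200))   # ~5.8 ms per callback - tight for long multichannel FIR filters
print(buffer_deadline_ms(2048, 88200))  # ~23 ms per callback - four times the headroom
```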
 
He told me to ask exaSound whether there is a way around it; it seems their ASIO driver does not have a setting to increase the buffer size. I have asked and am waiting for a reply. I was wondering if anyone else here had already figured it out, or maybe I have to use a different convolver, etc. I do not want to post the question across forums while already waiting on a reply from the OEM, but figured it would not be too rude to ask in one logical place. I only have so many days of Easter holiday to play with this, and they are going... PS: He uses a Merging Hapi himself.
 
You can try a different convolver. JRiver and Roon have built-in convolvers. I believe Foobar2000, which is free, has a convolver plugin. There's also CamillaDSP, which is free, but with the caveat that it is rather difficult to install and get working. And you can download and try some other convolvers for free, e.g. Hang Loose Convolver.
 
Any thoughts or experience with driver linearization for an MTM style speaker?
 

I haven't tried it, so I don't have direct experience. But I can't imagine that it would be a problem. Do you have a more specific question?
 
Yes. Wondering about the best distance from the mic, and the thought process behind choosing it.

(BTW, we had a wonderful time in your country about 2 months ago.)
 
When in doubt, measure at various distances and repeat. I measured some speakers today which I had never seen before. They had a pair of open baffle woofers per side, and I have never measured open baffle woofers, or a pair of woofers for that matter. So:

[Image: the open baffle speakers being measured]


I had all kinds of doubts - e.g. would the pair of woofers cause driver lobing, what about the out-of-phase back wave (I had no choice but to do an in-room measurement), and so on. So I took measurements at 20cm, 50cm, and 100cm from the driver plane, on axis (red, green, brown respectively). I elected to use the green measurement for correction, but in hindsight I should not have done a nearfield correction of the woofers at all. When in doubt, leave them alone. That is not to say it isn't worth experimenting, if you don't mind spending the time.
 
Any thoughts or experience with driver linearization for an MTM style speaker?
I would measure with both drivers active, quasi-anechoically, with the mic facing the tweeter, to start with
 
I suspect I might be talking to myself here. But anyway, I had a discussion with Uli about a new driver linearization technique that also flattens phase by convolving a Reverse All Pass filter into the correction. I believe that this method has not been described anywhere online. As a teaser, this is what it does:

[Image: before/after verification measurement showing flat phase through the crossover]


Red = uncorrected, Green = corrected and convolved with crossover. These are actual before and after verification measurements, and not a sim. You can see how the phase angle remains absolutely flat, never deviating from 0 degrees, even through the crossover.
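For anyone curious about the underlying idea, here is a minimal numpy/scipy sketch (my own illustration with made-up f0 and Q values, not Acourate's code): an all-pass filter has flat magnitude but swings the phase, and convolving it with a time-reversed copy of itself cancels that phase swing while leaving the magnitude untouched. That cancellation is what the Reverse All Pass filter exploits.

```python
import numpy as np
from scipy.signal import lfilter, freqz

fs = 48000
f0, Q = 2000.0, 0.707   # made-up crossover-region values for illustration

# RBJ-cookbook second-order all-pass biquad: |H| = 1, phase varies with frequency
w0 = 2 * np.pi * f0 / fs
alpha = np.sin(w0) / (2 * Q)
b = np.array([1 - alpha, -2 * np.cos(w0), 1 + alpha])
a = np.array([1 + alpha, -2 * np.cos(w0), 1 - alpha])

n = 4096
h = lfilter(b, a, np.r_[1.0, np.zeros(n - 1)])   # impulse response of the all-pass
h_rev = h[::-1]                                  # "reverse all-pass": time-reversed copy

combined = np.convolve(h, h_rev)                 # applying both filters in series
f, H = freqz(combined, worN=2048, fs=fs)

# Remove the bulk delay of (n - 1) samples introduced by the reversal, then inspect:
H_zero_delay = H * np.exp(1j * 2 * np.pi * f * (n - 1) / fs)
print(np.max(np.abs(np.degrees(np.angle(H_zero_delay)))))  # ~0 degrees: the phase swing cancels
print(np.max(np.abs(20 * np.log10(np.abs(H)))))            # ~0 dB: the magnitude stays flat
```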

Uli sent me the instructions on how to do it. I have re-organized what he said, and inserted some screenshots, Mitch style, to make the method easier to follow. Any mistakes or errors are mine.

STEP 1: Create Crossovers

1. Create a working directory so that it looks like this:

[Screenshot: example working directory, including a "00 Naked XO" subfolder]


2. Create your crossovers (Generate-Crossover) and save the raw, unmanipulated crossovers into "00 Naked XO". Uli has persuaded me to move away from NT2 crossovers, and I am now using his new "UB jPol 11" crossover, 1st order. Separate discussion to follow.

STEP 2: Measure and Create Filters

3. Set one of the drivers as your project working directory.

4. Use the Logsweep Recorder to record your Pulse48L (work on one driver at a time). The focus is to measure only within the range of the intended crossover, usually 1-2 octaves before the corner frequency, depending on the crossover configuration. Load Pulse48L into Curve 1, and note its maximum and minimum gain (in this case, a maximum of 20.655dB @ 606.445Hz and a minimum of 14dB @ 3500Hz; see the top right panel). DO NOT MOVE YOUR MICROPHONE until you have completed your verification measurement and you are 100% happy with the result.

The principle of linearization is to avoid losing too much gain through the crossover by correcting too large a volume range. Uli suggests that a maximum of 6dB should be corrected. However, this method involves magnitude limitation followed by normalization, which compensates for the gain loss to an extent, so perhaps up to 10dB can be corrected. This is valid for bandpass filters; for low pass or high pass filters (with rising volume at the frequency extremes), we may need to limit the range of correction depending on the circumstance - e.g. there may be no point sacrificing tweeter gain to correct a rising response above 20kHz, since it is less audible.

- TD-Functions - Gain. We are going to add or subtract gain to avoid over-correcting. In this case, the maximum was 20.655dB, with a minimum of 14dB, so we accept a 10dB gain correction range: I subtracted 10.655dB to bring the maximum gain down to 10dB. This step might need several iterations (see step 6). As Uli said to me in his email: "IMPORTANT: the proper amount of correction is selected by 'feeling'."

5. Make a linearization filter. All these steps apply to Curve 2 ONLY.
- Copy Curve 1 into Curve 2. (Ctrl-C, 2).
- FD-Functions - Magnitude Inversion (linear phase) into Curve 2. This mirrors Curve 1 along the 0dB axis.
- FD-Functions - Magnitude Limiter 0 into Curve 2. The result is always minimum phase.
- TD-Functions - Frequency Dependent Window (F3): 15/15, result into Curve 2. You may prefer a softer correction with FDW 10/10.
- Save Curve 2; I called mine "LinearizationL.dbl". At this stage your screen should look like this (a conceptual sketch of these operations follows the screenshot):

[Screenshot: Acourate after step 5, with Curve 1 (raw measurement) and Curve 2 (linearization filter)]
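Here is the conceptual sketch mentioned above (my own stand-in curve and my reading of the Acourate functions, not its actual implementation), showing what the inversion and limiting do to the measured magnitude:

```python
import numpy as np

# Stand-in for the measured magnitude response from step 4, in dB (made-up ripple).
f = np.linspace(20, 20000, 4096)
measured_db = 3.0 * np.sin(4.0 * np.log10(f))

inverted_db = -measured_db                  # "Magnitude Inversion": mirror the curve around 0 dB
limited_db = np.minimum(inverted_db, 0.0)   # "Magnitude Limiter 0" (as I read it): cap any boost at 0 dB

# Acourate then turns the limited magnitude into a minimum-phase filter and applies a
# frequency-dependent window (FDW 15/15) to soften the correction - both omitted here.
print(round(limited_db.max(), 3), round(limited_db.min(), 3))  # max is 0.0: the filter never boosts
```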


6. Linearize your measurement and use it as a guide slope to help create the All Pass filter. Convolve Curve 1 (the raw measurement) with Curve 2 (the linearization filter); result into Curve 3. Hit the "PK" button (you will find it at the bottom of the phase window) to reveal the peak. This is what you should get:

[Screenshot: Curve 3 - the linearized measurement used as the guide slope, with its peak revealed]


Now study your guide slope: if there is significant phase lag (i.e. most of the phase angle is below zero), you will not be able to correctly create the All Pass filter. It is also important that the peak of Curve 3 is centered at sample 6000. Left/right click on the Time display and make sure Max is at 6000; if it isn't, adjust the center in the circled box. Sometimes unconventional thinking is required - e.g. my tweeters had a 180deg phase lag. The solution was to invert the tweeter at the amplifier. To avoid having to re-measure, the inversion was simulated in Acourate and the filter was applied.
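As a small aside on the inversion trick (my own illustration, not from Uli's instructions): inverting polarity is just a sign flip of the impulse response, which shifts the phase by 180 degrees at every frequency, so it is easy to simulate on an existing measurement or filter.

```python
import numpy as np

ir = np.r_[1.0, 0.5, 0.25, np.zeros(253)]         # stand-in impulse response
H = np.fft.rfft(ir)
H_flipped = np.fft.rfft(-ir)                      # simulated polarity inversion
print(np.degrees(np.angle(H_flipped[1] / H[1])))  # 180.0 (and the same at every other bin)
```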

[Screenshot: the guide slope with the IIR All Pass filter overlaid in blue]


7. Create your Reverse All Pass Filter. Set Active Curve 4 (it should currently be blank) and hit the IIR button (or Generate - IIR Filter).
- Play with the f(0) and Q until the curve reasonably matches the guide slope - see blue line in the above diagram.
- Once satisfied, use TD-Functions - Reversion (F12) to reverse the All Pass filter. This will correct the guide slope.
- Save this as "RevAP-L".

[Screenshot: the corrected response (blue) compared with the initial measurement (red)]


8. Create the correction filter. Convolve the linearized measurement (curve 3) with the reverse all pass filter (curve 4); result into Curve 5. It should show some improvement. In the above image you can see that the frequency response (in blue) has been flattened compared to the initial measurement (in red). The phase angle has also shown improvement.

9. Load the raw crossover (in this case XO3L48.dbl) into Curve 6. Now we are going to apply all of these operations to the raw crossover and turn it into a driver correction filter (a rough sketch of these operations follows this list).
- Convolve Curve 5 (correction filter) with the crossover, result into Curve 1.
- Set Curve 1 Conv4_6 active: FD-Functions - Magnitude Normalization
- TD-Functions - CutNWindow; Cut length 65536, position 65536, result into Curve 1.
- Save Curve 1 as XO3L48.dbl
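And here is the rough sketch referenced above (my own numpy approximation; Acourate's exact Magnitude Normalization and CutNWindow semantics may differ):

```python
import numpy as np
from scipy.signal import fftconvolve

taps = 65536
rng = np.random.default_rng(0)
correction = rng.normal(size=taps)   # stand-in for Curve 5 (the driver correction)
crossover = rng.normal(size=taps)    # stand-in for the raw crossover XO3L48.dbl

combined = fftconvolve(correction, crossover)   # "Convolve Curve 5 with the crossover"

# "Magnitude Normalization" (approximated): scale so the peak magnitude response is 0 dB
combined /= np.abs(np.fft.rfft(combined)).max()

# "CutNWindow, length 65536" (approximated): keep 65536 samples around the filter's peak
peak = int(np.argmax(np.abs(combined)))
start = min(max(0, peak - taps // 2), combined.size - taps)
driver_filter = combined[start:start + taps]
print(driver_filter.shape)                      # (65536,) - ready to save as the new XO3L48
```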

STEP 3: Verification

10. Now we are going to verify that the driver correction filter is working as intended.
- Create a "Verify" subfolder in your current working directory. Change workspace to this folder.
- Set active Curve 1. File-Create Mono .WAV. I called it XO3L48.wav
- Logsweep recorder, and make sure you load XO3L48.wav as a filter!
- Perform your measurement and compare the before and after.

STEP 4: Time Alignment

11. After you have completed linearization of all drivers, proceed to time alignment using your preferred method. NOTE that the reverse all pass filter will affect your time alignment. A normal All Pass filter causes propagation delay; a reverse all pass filter causes a negative delay - i.e. it moves the impulse forward in time. So do not be surprised if all your previously measured delays are thrown off. As an example, without the reverse AP filter my subwoofers were about 200 samples delayed; after the reverse AP filter was applied, they are now 37 samples delayed.

12. After all these steps are completed, you have a complete set of filters ready for application of room correction and target curves.

Listening Impressions

I generated a new set of filters using the same target curve that I normally use, to ensure that the only variable that has changed is the driver linearization technique. The old crossovers I was comparing against also had driver linearization, but performed using the method described in Mitch's book. Those of you who are familiar with the book will know that it does not include the reverse all pass filter in the driver linearization. These old filters already sound very good, and I did not think the sound could improve any further.

With the old filters, the crossover points were difficult to detect; in fact, I thought I could not hear them. Careful listening with the new filters makes me realize that I can actually hear the crossover points in my old filters, although the difference is fairly subtle - but according to the maxim "once seen, cannot be unseen", now that I know what to listen for, I can hear the crossover points in my old filters consistently. The smoothness of the filters generated by this method is pretty unbelievable. NOW I have a new standard for "cannot possibly be improved". The other noticeable difference was clarity - with the old filters there was a bit of smudging here and there (in hindsight, probably at the crossover points), but now everything sounds clean from top to bottom.

The other impressive improvement was the impact of transients, although I am more inclined to credit this to improved time alignment rather than to linearizing the phase angle. I use a recording of Japanese drums for this, and when there is a big whack on a taiko drum, you can hear the skin of the drum, the huge bass transient, and the rapid decay back to silence. I had to rub my eyes (and ears) in disbelief at how realistic it is.
@Keith_W

I had already had the opportunity to read about this method of linearization on the Acourate forum.

I don't understand what (if anything) is wrong with the automatic method of Audiolense.
Uli states in the linked thread that he doesn't believe in automating the task...
(curiously, he also says that Bernt of Audiolense refused to sell him a license :D).

However, if I measure the IR and phase obtained with Audiolense, it is all perfectly aligned...
What is called pre-linearization here should be a bit like the automatic TTD correction in Audiolense.

But in fact Uli recently wrote in that thread that he is developing a new macro for Acourate V3 to perform just this task of aligning the drivers.

I don't know... I'm not an expert. I'm just curious to understand which tools and methods are better to use.

PS. You should update it with the current price of Acourate :p
 

I have no idea how much Acourate costs these days :)

As for the new version of Acourate, from what Uli says on his forum it does look like there will be a new macro for time alignment and driver linearisation. I guess we'll have to wait and see.

Uli has given a number of reasons why he favours manual time alignment over automatic alignment. I can tell you that I have done this procedure dozens of times now. IMHO time alignment of the higher frequencies is easy, quick, and almost mindless in its simplicity, so perhaps automatic alignment would be more convenient there. But that is certainly not true for subwoofer time alignment - that is a real bear, and to be honest I haven't completely made my mind up. There are a number of people who give conflicting opinions, e.g. Geddes does not time align at all; he says that you need a couple of cycles of bass before you even hear it (if you do the maths, a "couple of cycles" of 20Hz is 100ms!). Bob McCarthy recommends phase alignment rather than alignment of the impulse peaks. And of course Uli lets you choose whether you want to align the impulse peak or the phase. I don't know how Audiolense does it. I was told it has something to do with "gammatone filtering", but I don't know what that is.

All I can say is that Acourate's approach makes you think about what you are doing. As my friend said: "when you figure it out, you feel smart and smug".
 
It's almost 500€.

Regarding the topic... I believe it is a relative matter.
Mathematically there is only one solution that provides a perfect sum at the crossover, that is, perfectly complementary IRs (so the phase, IR timing, and IR envelope need to match).
But for non-coaxial speakers this is valid at only one point in space.
The same argument could then be applied to stereo, as only on the central plane between the two speakers can you achieve the best phase coherence.
So in practice the theory is of little or no use...
However, if I have to spend money on software that does this, I want to understand how much it is worth.

I currently find Audiolense more cost-effective.

Here Bernt explains how it aligns drivers. Not sure about gammatone filtering...
 
I did not read through the thread, sorry, but in case it has not been mentioned Mitch Barnett @mitchco is an ASR member and wrote the book on using Acourate: Accurate Sound Reproduction Using DSP.

 