
Does Phase Distortion/Shift Matter in Audio? (no*)

If the crossover frequency is not 1 kHz, then the Phase vs. Frequency will be the same shape, except that it will shift left or right so that the inflection point is at the new crossover frequency.
Yes, I'm aware. Like I said before, each data point in this plot was calculated with both the all pass and ERB centered at the frequency plotted. In other words, at 300Hz, both the all pass and ERB are centered at 300Hz. At 1kHz, both are centered at 1kHz. At 3kHz, both are centered at 3kHz. (And so on...)

As you say, the bandwidth in octaves changes with frequency (lower frequencies being wider), which is why the phase rotation over the ERB decreases with increasing frequency.

Here's another plot which shows the (instantaneous) phase rotation per ERB for some individual crossover frequencies:
[attached plot: lr4_phase_per_erb_discrete.png]

You can see that the peaks approximately match the curve shown in the other plot.
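For anyone who wants to reproduce this, here is a rough numerical sketch of the calculation (my own reconstruction, assuming the Glasberg & Moore ERB formula and an analog LR4 model via scipy; the function names are mine):

```python
import numpy as np
from scipy.signal import butter, freqs

def erb(f):
    """Glasberg & Moore equivalent rectangular bandwidth in Hz."""
    return 24.7 * (4.37 * f / 1000.0 + 1.0)

def lr4_phase_rotation_per_erb(fc):
    """Phase rotation (degrees) of the LR4 crossover sum across one ERB centered at fc."""
    w0 = 2 * np.pi * fc
    b_lp, a_lp = butter(2, w0, btype='low', analog=True)
    b_hp, a_hp = butter(2, w0, btype='high', analog=True)
    f = np.linspace(fc - erb(fc) / 2, fc + erb(fc) / 2, 512)
    _, h_lp = freqs(b_lp, a_lp, 2 * np.pi * f)
    _, h_hp = freqs(b_hp, a_hp, 2 * np.pi * f)
    h = h_lp**2 + h_hp**2  # LR4 = squared 2nd-order Butterworth; LP + HP sums to an allpass
    phase = np.unwrap(np.angle(h))
    return np.degrees(abs(phase[-1] - phase[0]))

# Crossover and ERB centered on the same frequency, as in the plots above
for fc in (300, 1000, 3000):
    print(fc, round(lr4_phase_rotation_per_erb(fc), 1))
```

The rotation per ERB should fall with increasing center frequency, since the ERB narrows in octave terms as frequency rises.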
 
I thought what he was saying in response to the question is that phase shift is an issue if it happens within the human hearing range, and that is why devices have higher bandwidths. In devices with a wide bandwidth, the frequency roll-off and subsequent phase issues happen above human hearing, thus the device can produce a linear response within human hearing with no audible phase shifting.

Jumping off from that, marketing takes this to an unnatural extreme. My experience is that downsampling filters have gotten good enough at the consumer level that sample frequencies above 48 kHz aren't needed to produce incredible fidelity. Combine that with good performances, quality instruments, and quality microphones, and you have a recipe for an amazing end result when production is done (assuming no one makes any big production mistakes).

I used to have an M-Audio FireWire 18/14 that had terrible sound at 44.1 kHz, but when I switched to 88.2 kHz it sounded excellent. I would then downsample the recordings afterward in software, and there was no difference (that I could subjectively tell) between 44.1 and 88.2 kHz. I ultimately found 48 kHz to be what sounded great without needing to fiddle with downsampling as a separate process until the very end of production. I figured the device itself had a downsampling filter issue, since 44.1 kHz should sound just as good as higher sampling rates, as evidenced by Compact Discs, which are 44.1 kHz and still considered lossless audio today. I complained about this to M-Audio at the time, to no avail.

Incidentally, some software instruments like EZ Keys v1 have strange resonances and a grainy sound at 44.1 kHz which disappear at higher sampling rates. But I still think this is a software design problem. 44.1 kHz is really all that's needed to produce hi-fi audio. Above that is fairy dust, but unfortunately poor design forces us to use higher sampling rates to make sure we get good fidelity and compensate for design flaws in our gear. Marketing leverages that to convince people to use ridiculously high sampling rates, a useless endeavour that will only force you to buy more hard drive space.
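For what it's worth, the offline downsampling step described above is essentially a one-liner these days; a sketch assuming scipy's polyphase resampler:

```python
import numpy as np
from scipy.signal import resample_poly

fs_in, fs_out = 88_200, 44_100
t = np.arange(fs_in) / fs_in                 # one second of audio
x = np.sin(2 * np.pi * 1000 * t)             # 1 kHz test tone "recorded" at 88.2 kHz
y = resample_poly(x, up=1, down=2)           # 2:1 decimation with built-in anti-alias filter
print(len(y), fs_out)                        # sample count halves; the tone is preserved
```

Whether this beats the converter's own on-the-fly downsampler depends entirely on the quality of the filter in the hardware, which was the original complaint.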
 
and it's pretty much gone by 2kHz
By definition phase should be undetectable for wavelengths shorter than the distance between our ears. If we assume that to be about 15cm, biological cutoff frequency is 2286Hz.
 
By definition phase should be undetectable for wavelengths shorter than the distance between our ears. If we assume that to be about 15cm, biological cutoff frequency is 2286Hz.

Head size has nothing to do with this, and envelope shape takes over from phase above 2 kHz. The mechanism is the firing rate of neurons, which fire in phase at least up to 500 Hz, to some extent up to 1 kHz, and a little even up to about 4 kHz, although the effect is almost gone by 2 kHz. Above 2 kHz, the neurons fire on the leading edge of the signal envelope in a given ERB.

And the "by definition" is entirely wrong for many reasons as well; things like HRTFs also confound your idea.
 
The mechanism is firing rate of neurons
Thanks for the information. Is this what is meant by phase locking of auditory nerve fibers?
 
Thanks for the information. Is this what is meant by phase locking of auditory nerve fibers?

Mostly, as with anything, "it's more complicated than that". :) The firing of the outer hair cells to decrease the gain on the cochlea can't help but intervene too.
 
Head size has nothing to do with this, and envelope shape takes over from phase above 2 kHz. The mechanism is the firing rate of neurons, which fire in phase at least up to 500 Hz, to some extent up to 1 kHz, and a little even up to about 4 kHz, although the effect is almost gone by 2 kHz. Above 2 kHz, the neurons fire on the leading edge of the signal envelope in a given ERB.

And the "by definition" is entirely wrong for many reasons as well; things like HRTFs also confound your idea.

Thank you JJ. I don't mean to argue with you, but I am now a bit confused by Wikipedia's entry on ITD which states that the ITD is dependent on the width of the head.

[attached: screenshot of Wikipedia's ITD entry]


Perhaps it would be good to clarify if you are talking about INTRA-aural or INTER-aural phase differences?
 
Thank you JJ. I don't mean to argue with you, but I am now a bit confused by Wikipedia's entry on ITD which states that the ITD is dependent on the width of the head.

[attached: screenshot of Wikipedia's ITD entry]

Perhaps it would be good to clarify if you are talking about INTRA-aural or INTER-aural phase differences?


ITD does scale with the shape and size of the head, yes. But that is a different question from detecting the phase of a signal, whether in monaural or binaural signals.

Up to about 500Hz, phase of bass over 40Hz or so is a remarkably powerful "directional" cue.

Above 2K, envelope is as good a cue.

You may, if you've tried, notice that clicks and sudden signals at high frequencies can be localized, but a continuous sine wave at 10 kHz offers just about zero cues; only HRTFs help, and poorly at that. There are actually signals designed to be precisely localized.

Something to realize is that with that kind of signal, there is some sensation detectable down to between 5 and 10 microseconds, in terms of a subtle shift of the source. The ear cannot phase lock at all at that level, but it CAN lock on to the attack of a signal envelope.

I think I said "it's complicated" for a reason.

Now, setting localization aside, the coherence of the signal at the two ears is also detectable. Exactly what the threshold is is not yet perfectly clear, but I can tell you it's much, much better than MPEG Surround can provide. :)
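The head-size scaling of ITD mentioned here is commonly approximated with the Woodworth spherical-head model; a sketch (the head radius and speed of sound are nominal assumptions, not numbers from this thread):

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Interaural time difference in seconds for a spherical head:
    ITD = (a / c) * (theta + sin(theta)), with theta the azimuth off the median plane."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

# ITD grows with azimuth; a source fully to the side gives roughly 650 microseconds
for az in (0, 30, 60, 90):
    print(az, round(woodworth_itd(az) * 1e6, 1), "us")
```

Note this describes the interaural timing cue itself, not the neural phase-locking limits discussed above; the two are separate questions, which is the point being made.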
 
Do you mean 40 degrees?
I mean that between about 40 Hz and about 500 to 800 Hz (depending on how you define strong), interaural phase is a very important part of directional cues. Among other things, that shows that phase locking happens at those frequencies. Above the upper limit, the locking becomes less and less effective due to the roughly 1 ms best-case neural recharge rate. Below the lower limit, the edge of the waveform is too small to overcome the noise floor of the cochlea.
 
I recently heard a demo (at the Melbourne hi-fi show) of a linear phase loudspeaker that made me think about this subject of intrachannel phase distortion a little more. The speaker was set up very close to the listeners, I was about 1m (3ft) away from the speakers. There was exceptional clarity to the sound. Why?

It was stated by Amir in his video that phase distortion is only audible with headphones or in anechoic chambers with special test signals, and not with loudspeakers in normal listening rooms.

The statement has a few unspoken assumptions. First, loudspeakers are minimum-phase devices that produce phase distortion by themselves. Second, room reflections produce excess phase and add to that phase distortion. By the time the sound reaches your ears, there is so much phase distortion that it swamps any additional phase distortion that may be introduced by electronics.

I mentioned Griesinger earlier in this thread. Griesinger said that the sensation of "proximity" is due to the preservation of phase coherence which is gradually lost the further we travel from the sound source due to phase distortion created by reflections. Griesinger mentioned a sudden transition where the sensation of proximity is lost. He called this the "Limit of Localisation Distance", or LLD.

We also know from acoustics that there is such a thing as "critical distance" (CD), which is the distance from the source where the SPL of the direct sound equals the SPL of the reverberant sound field. I have never seen anyone link CD to the LLD, even though the two are conceptually similar. One could easily imagine that if the direct sound is phase coherent, and the reverberant sound is phase incoherent (it has to be, by definition), any point beyond the CD would result in loss of "proximity".
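For reference, the textbook Sabine-based approximation of critical distance can be sketched as follows (a rough formula from acoustics texts, not from this thread; Q is the source directivity factor, V the room volume, RT60 the reverberation time):

```python
import math

def critical_distance(Q, volume_m3, rt60_s):
    """Distance at which direct SPL equals reverberant SPL:
    d_c ~= 0.057 * sqrt(Q * V / RT60)."""
    return 0.057 * math.sqrt(Q * volume_m3 / rt60_s)

# A 100 m^3 living room with RT60 = 0.5 s and a typical speaker (Q ~ 4 in its forward lobe)
print(round(critical_distance(4, 100, 0.5), 2), "m")
```

At the 1 m listening distance of the demo described above, one would be well inside the critical distance of a typical domestic room, which fits the "proximity" framing.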

@j_j has stated elsewhere that intrachannel phase distortion of more than 15 degrees per ERB is audible, and that phase distortion between ERBs which are far apart is inaudible.

So here is my question: what if we use linear phase filters, and sit so close to the speakers that there is not much excess phase to speak of. Do we then get to hear what lack of phase distortion sounds like? People who use minimum phase filters say that phase distortion is inaudible. Are they correct?

All I can give is my anecdotal experience. To me, linear phase sounds exceptionally clear. Right now, what I want to know is whether this is my confirmation bias, or whether it's actually true.
 
I recently heard a demo (at the Melbourne hi-fi show) of a linear phase loudspeaker that made me think about this subject of intrachannel phase distortion a little more. The speaker was set up very close to the listeners, I was about 1m (3ft) away from the speakers. There was exceptional clarity to the sound. Why?

Was that clarity noted over a range of program material or just a few limited sample tracks?
 
I mentioned Griesinger earlier in this thread. Griesinger said that the sensation of "proximity" is due to the preservation of phase coherence which is gradually lost the further we travel from the sound source due to phase distortion created by reflections. Griesinger mentioned a sudden transition where the sensation of proximity is lost. He called this the "Limit of Localisation Distance", or LLD.

We also know from acoustics that there is such a thing as "critical distance" (CD), which is the distance from the source where the SPL of the direct sound equals the SPL of the reverberant sound field. I have never seen anybody linking CD to the LLD, even though the two are conceptually similar. One could easily imagine that if the direct sound was phase coherent, and the reverberant sound is phase incoherent (it has to be, by definition), any point beyond the CD would result in loss of "proximity".
Griesinger noted that LLD can significantly exceed CD, as in Boston Symphony Hall (https://www.audiosciencereview.com/...cert-hall-acoustics-links-and-excerpts.51487/)
 
IME, the effects of phase distortion are audible even in only moderately treated rooms. They can be heard as an overall change of the source signal; for example, the change in timbre of LF content with strong even-order harmonics (bass guitar) is there no matter how bad your room modes might be. It's just a bit harder to pinpoint, up to the point of hopelessness when you live in a hall of mirrors, of course.
My reading of posts here is the following -

DSP stuff corrects frequency response with IIR filters
Due to the vast difference in processing power between dedicated DSPs and PC CPUs, PC-based software is more capable, and it typically uses FIR filters
Software that uses FIR filters will correct frequency response and time response as well

So it looks like if you are concerned about this you need sophisticated PC software that uses FIR.

I have used Sonarworks and it improved the sound by smoothing the frequency response, but it, like all the hardware stuff, does not manage timing as well as a system with FIR. Maybe try Audiolense 2.0 or Acourate?

ChatGPT-4o:

You've outlined some important points regarding the differences between using IIR (Infinite Impulse Response) filters and FIR (Finite Impulse Response) filters in DSP (Digital Signal Processing) applications for audio correction. Let's expand on these distinctions and their practical implications:

IIR Filters​

  • Processing Efficiency: IIR filters are generally more computationally efficient than FIR filters. They require fewer calculations to achieve a desired effect, making them suitable for lower-powered devices like DSPs embedded in consumer electronics.
  • Phase Distortion: One limitation of IIR filters is that they inherently introduce phase distortion, which can affect the timbral quality and spatial characteristics of audio. This can be acceptable or even negligible in many applications but might not be ideal for high-fidelity or critical listening environments.
  • Application: IIR filters are commonly used in situations where computational resources are limited, such as in hardware DSP units in AV receivers or in some live sound environments.

FIR Filters​

  • Phase Linearity: FIR filters can be designed to be linear phase, meaning they do not introduce phase distortion across the frequency spectrum. This preserves the original time-domain characteristics of the audio signal, which is crucial for maintaining the spatial cues and clarity of the sound.
  • Computational Demand: FIR filters typically require more processing power due to their need to handle a larger number of calculations, especially as the filter length increases to correct for lower frequencies or for more precise control over the sound.
  • Application: FIR filters are preferred in high-end room correction software used in PC-based systems where there is ample processing power. These systems can handle the intensive computation required to run large FIR filters without compromising system performance.
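The phase-linearity contrast described above is easy to verify numerically; a sketch assuming scipy, comparing a 4th-order Butterworth IIR with a 255-tap linear-phase FIR of similar cutoff:

```python
import numpy as np
from scipy.signal import butter, firwin, group_delay

fs, fc = 48_000, 1_000
b_iir, a_iir = butter(4, fc, fs=fs)          # minimum-phase IIR lowpass
taps = firwin(255, fc, fs=fs)                # windowed-sinc FIR lowpass

w, gd_iir = group_delay((b_iir, a_iir), fs=fs)
_, gd_fir = group_delay((taps, [1.0]), fs=fs)

passband = w < fc
# FIR: symmetric taps -> exactly linear phase, a constant (N-1)/2 = 127 samples of delay
# IIR: group delay varies across the passband, i.e. frequency-dependent phase shift
print(np.ptp(np.asarray(gd_fir)[passband]), np.ptp(np.asarray(gd_iir)[passband]))
```

The FIR's constant delay is the flip side of its cost: the whole signal is delayed by half the filter length, which matters for live monitoring but not for playback.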

Practical Considerations​

  • Choice of Filters: The choice between IIR and FIR filters often comes down to the specific requirements of the application and the available processing resources. For home audio enthusiasts and professionals who require precise control over both frequency and time response, FIR filters are typically the better choice.
  • Hardware vs. Software: Dedicated hardware DSP units often use IIR filters due to their lower computational requirements. In contrast, PC-based DSP software can take advantage of the greater processing power available in modern computers to use FIR filters, providing more comprehensive correction.
  • Sound Quality: For audiophiles and sound engineers, the ability of FIR filters to correct both the amplitude and phase of audio signals is crucial for achieving the highest sound quality, as it ensures that the audio output is as close as possible to the original or intended sound.

Conclusion​

If you are concerned about achieving the best possible correction of both frequency response and time-domain response, using sophisticated PC-based software that employs FIR filters is advisable. This approach will typically yield better overall sound fidelity, particularly in environments where the accuracy of audio reproduction is critical, such as in mastering studios or high-end home theaters. However, the specific choice will also depend on practical considerations like budget, hardware capabilities, and personal preferences regarding the audio quality and type of correction needed.

When comparing Acourate, Sonarworks, and Audiolense, it's important to consider how each software addresses room and speaker calibration. Each has distinct features, goals, and user interfaces tailored to different segments of the audio community, from studio professionals to home audio enthusiasts. Here's a detailed look at each one:

Acourate​

Focus and Approach:
  • Acourate is renowned for its highly technical and precise approach to audio correction, offering extensive control over room acoustics and speaker anomalies. It's designed for users who are well-versed in audio analysis and want deep customization.
Key Features:
  • High-resolution FIR Filters: Corrects both amplitude and phase across the frequency spectrum, including time-domain correction.
  • Room Simulation: Useful for planning and optimizing room setups and speaker placements.
  • Advanced Customization: Offers tools for generating tailor-made convolution filters and implementing complex crossover designs.
Usability:
  • Acourate requires a significant learning curve and is best suited for audio professionals or serious audiophiles who have a good understanding of acoustic principles.

Sonarworks​

Focus and Approach:
  • Sonarworks aims to simplify the calibration process, focusing on achieving a flat frequency response for accurate audio production. It's targeted primarily at music producers and studio engineers who need consistent sound across various playback systems.
Key Features:
  • Easy Calibration Process: Streamlined setup for speaker and headphone calibration.
  • Extensive Headphone Profiles: A comprehensive library of headphone calibration profiles.
  • Systemwide or Plugin Operation: Can be used as a DAW plugin or to adjust system-wide audio.
Usability:
  • Sonarworks is known for its user-friendly interface and straightforward calibration process, making it accessible even to those with limited technical audio knowledge.

Audiolense​

Focus and Approach:
  • Audiolense blends high-end room correction capabilities with user-friendly elements, focusing on both frequency and time-domain corrections to ensure sound waves from all speakers are synchronized.
Key Features:
  • True Time Domain Correction: Corrects timing discrepancies alongside frequency response adjustments to ensure coherent sound reproduction.
  • Integrated Crossover Design: Facilitates the design and optimization of digital crossovers, enhancing multi-way speaker systems.
  • Intuitive User Interface: While offering complex features, it remains accessible to enthusiasts without deep technical backgrounds.
Usability:
  • Audiolense offers a balance between advanced correction capabilities and ease of use, providing guided processes that help users effectively manage room acoustics.

Key Differences​

  • Technical Depth: Acourate offers the most control and customization but requires the most technical knowledge. Sonarworks, while powerful, focuses more on simplicity and ease of use, making it less daunting for non-experts. Audiolense sits in the middle, offering advanced features with more user-friendly guidance.
  • Correction Focus: Acourate and Audiolense both address phase issues and time-domain corrections comprehensively, which are crucial for accurate soundstage reproduction in audiophile systems. Sonarworks focuses more on achieving a flat frequency response, particularly beneficial in studio environments for mixing and mastering.
  • Target Audience: Acourate is geared towards audiophiles and professionals who need detailed adjustment capabilities. Sonarworks targets studio professionals and music producers looking for reliable sound across devices. Audiolense is suitable for home audio enthusiasts and audio professionals who want detailed audio correction but with easier setup than Acourate.

Conclusion​

Choosing between Acourate, Sonarworks, and Audiolense depends on your specific needs:
  • Professional Studios: Sonarworks is likely the best choice for consistent and accurate monitoring.
  • Home Audio Enthusiasts and Audiophiles: Audiolense offers a good mix of depth and usability, with time-domain correction that enhances the listening experience.
  • Technical Users with Specific Needs: Acourate provides the tools for those who want to delve deeply into audio correction and customization.
Each software caters to different requirements and levels of expertise, so aligning your choice with your audio goals and technical comfort will yield the best results.

To clarify and compare Audiolense XO with Audiolense 2.0, we need to understand that these are different editions or versions of the same core software designed for audio correction and speaker optimization.

Audiolense XO​

Audiolense XO is a specialized version of Audiolense focused primarily on crossover design and optimization for advanced audio systems, particularly those involving multi-way speakers.
Key Features:
  • Advanced Crossover Capabilities: Allows users to design and optimize digital crossovers, critical for systems with separate drivers handling different parts of the audio spectrum.
  • Comprehensive Correction: Includes both time and frequency domain corrections to ensure that audio from all speakers is perfectly aligned and coherent.
  • High-Resolution Support: Capable of handling high-resolution audio up to 192 kHz, making it suitable for high-end audiophile systems.
  • Multi-Channel Configuration: Supports complex multi-channel setups, essential for surround sound environments and advanced stereo systems.

Audiolense 2.0​

Audiolense 2.0 refers to a version of the Audiolense software, which could imply a significant update or improvement over previous versions. This version would typically include enhancements across the board, improving both usability and technical capabilities.
Potential Updates Might Include:
  • Enhanced Algorithms: Improved accuracy in room measurement and correction calculations.
  • User Interface Improvements: More intuitive and accessible interface adjustments to simplify the process of setting up and managing room corrections.
  • Expanded Hardware Support: Increased compatibility with a broader range of audio interfaces and measurement microphones.
  • Refined Filter Designs: Enhancements to how FIR and other filters are applied for even more precise room correction.

Comparison and Use Cases​

  • Audiolense XO is more niche and targeted towards users who need specific control over multiple speakers and channels, with a particular focus on crossover design. This makes it ideal for audiophiles and professionals who are designing sophisticated audio setups involving multiple speakers and want to ensure that each speaker's output is meticulously calibrated.
  • Audiolense 2.0 (or any new version update of Audiolense) would generally appeal to a broader audience, improving upon the existing foundation with better usability, more precise correction algorithms, and broader hardware support. It would be suitable for both home audio enthusiasts and professional settings where accurate sound reproduction is critical but may not necessarily require the advanced crossover functionalities of the XO version.

Conclusion​

Choosing between Audiolense XO and Audiolense 2.0 would depend on your specific needs:
  • If your focus is on advanced speaker configurations with complex crossover needs, Audiolense XO is the better choice.
  • If you are looking for the latest improvements in usability and general room correction technology, a newer version like Audiolense 2.0 would be more appropriate.
Both versions highlight Audiolense's commitment to providing top-tier audio correction capabilities, tailored to different segments of the audio community.
Using Finite Impulse Response (FIR) filters for room correction does not inherently limit the dynamic range of an audio system in a way that would be detrimental under normal circumstances. However, there are a few aspects and considerations regarding how FIR filters interact with dynamic range:

1. Bit Depth and Quantization Noise

  • FIR filters require digital signal processing, which operates within the constraints of the system’s bit depth. Every time an audio signal is processed through a digital filter, it undergoes quantization, which can introduce noise if not properly managed. However, with adequate bit depth (24-bit or higher, which is common in modern audio equipment), the impact of quantization noise on dynamic range is minimal.
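The roughly 6 dB of dynamic range per bit implied here can be checked with a quick simulation (a sketch assuming a full-scale uniform test signal; real program material and dither change the details):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 1_000_000)        # full-scale test signal

def quantize(x, bits):
    step = 2.0 ** (1 - bits)                 # LSB size for a signed +-1.0 range
    return np.round(x / step) * step

def snr_db(x, bits):
    err = x - quantize(x, bits)
    return 10 * np.log10(np.mean(x**2) / np.mean(err**2))

print(round(snr_db(x, 16), 1), round(snr_db(x, 24), 1))  # ~6 dB per bit of word length
```

At 24-bit processing, the quantization noise floor sits far below anything a playback chain can resolve, which is why the dynamic-range impact of filtering is negligible in practice.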

2. Filter Implementation and Precision

  • The way FIR filters are implemented can affect the signal. High precision in filter coefficients and processing is essential to maintain the integrity of the audio signal. Poorly designed FIR filters or those implemented with insufficient computational accuracy might introduce errors or distortions that could, theoretically, impact dynamic range.

3. Headroom and Gain Structure

  • FIR filters often change the level of certain frequencies to correct the room's acoustic anomalies. Proper gain structure must be managed to ensure there is enough headroom to avoid clipping while maintaining the full dynamic range of the audio content. This involves careful calibration of the audio levels post-filtering.

4. Linear Phase Response

  • One of the advantages of using FIR filters for room correction is their ability to provide a linear phase response, meaning all frequency components of the audio signal are time-aligned. This preserves the waveforms of the original signal more faithfully compared to minimum phase filters, which can alter the phase and potentially affect the perceived clarity and impact of the dynamics.

5. Practical Usage

  • In practical applications, any reduction in dynamic range due to FIR filtering is usually imperceptible. The design and application of FIR filters typically focus on preserving or enhancing the audio quality, including maintaining dynamic range. In high-fidelity audio systems where room correction is applied, the benefits of correcting frequency and phase anomalies generally outweigh any minimal impact on dynamic range.

Conclusion​

In well-designed audio systems and room correction setups using FIR filters, the impact on dynamic range should be negligible, providing the system is properly calibrated and the filters are designed with high precision. The key is to use high-quality equipment and software that can handle the demands of high-resolution audio processing without introducing significant artifacts or noise.

 
Was that clarity noted over a range of program material or just a few limited sample tracks?

I suspect what you are asking is whether they curated the tracks to show off the speakers? I have no idea. I did not get to play any of my music. They were playing their own tracks. It was mostly standard audio show fare - jazz, female vocals.
 
  • Target Audience: Acourate is geared towards audiophiles and professionals who need detailed adjustment capabilities. Sonarworks targets studio professionals and music producers looking for reliable sound across devices. Audiolense is suitable for home audio enthusiasts and audio professionals who want detailed audio correction but with easier setup than Acourate.

Conclusion​

Choosing between Acourate, Sonarworks, and Audiolense depends on your specific needs:
  • Professional Studios: Sonarworks is likely the best choice for consistent and accurate monitoring.
  • Home Audio Enthusiasts and Audiophiles: Audiolense offers a good mix of depth and usability, with time-domain correction that enhances the listening experience.
  • Technical Users with Specific Needs: Acourate provides the tools for those who want to delve deeply into audio correction and customization.
Each software caters to different requirements and levels of expertise, so aligning your choice with your audio goals and technical comfort will yield the best results.

To compare Audiolense XO with Audiolense 2.0, it helps to understand that these are different editions of the same core software for audio correction and speaker optimization.

Audiolense XO​

Audiolense XO is a specialized version of Audiolense focused primarily on crossover design and optimization for advanced audio systems, particularly those involving multi-way speakers.
Key Features:
  • Advanced Crossover Capabilities: Allows users to design and optimize digital crossovers, critical for systems with separate drivers handling different parts of the audio spectrum.
  • Comprehensive Correction: Includes both time and frequency domain corrections to ensure that audio from all speakers is perfectly aligned and coherent.
  • High-Resolution Support: Capable of handling high-resolution audio up to 192 kHz, making it suitable for high-end audiophile systems.
  • Multi-Channel Configuration: Supports complex multi-channel setups, essential for surround sound environments and advanced stereo systems.

Audiolense 2.0​

Audiolense 2.0 refers to a major version of the Audiolense software, implying a significant update over previous releases. Such a version would typically include enhancements across the board, improving both usability and technical capabilities.
Potential Updates Might Include:
  • Enhanced Algorithms: Improved accuracy in room measurement and correction calculations.
  • User Interface Improvements: More intuitive and accessible interface adjustments to simplify the process of setting up and managing room corrections.
  • Expanded Hardware Support: Increased compatibility with a broader range of audio interfaces and measurement microphones.
  • Refined Filter Designs: Enhancements to how FIR and other filters are applied for even more precise room correction.

Comparison and Use Cases​

  • Audiolense XO is more niche and targeted towards users who need specific control over multiple speakers and channels, with a particular focus on crossover design. This makes it ideal for audiophiles and professionals who are designing sophisticated audio setups involving multiple speakers and want to ensure that each speaker's output is meticulously calibrated.
  • Audiolense 2.0 (or any new version update of Audiolense) would generally appeal to a broader audience, improving upon the existing foundation with better usability, more precise correction algorithms, and broader hardware support. It would be suitable for both home audio enthusiasts and professional settings where accurate sound reproduction is critical but may not necessarily require the advanced crossover functionalities of the XO version.

Conclusion​

Choosing between Audiolense XO and Audiolense 2.0 would depend on your specific needs:
  • If your focus is on advanced speaker configurations with complex crossover needs, Audiolense XO is the better choice.
  • If you are looking for the latest improvements in usability and general room correction technology, a newer version like Audiolense 2.0 would be more appropriate.
Both versions highlight Audiolense's commitment to providing top-tier audio correction capabilities, tailored to different segments of the audio community.

Using Finite Impulse Response (FIR) filters for room correction does not inherently limit an audio system's dynamic range under normal circumstances. However, there are a few aspects worth considering in how FIR filters interact with dynamic range:

1. Bit Depth and Quantization Noise

  • FIR filters require digital signal processing, which operates within the constraints of the system’s bit depth. Every time an audio signal is processed through a digital filter, it undergoes quantization, which can introduce noise if not properly managed. However, with adequate bit depth (24-bit or higher, which is common in modern audio equipment), the impact of quantization noise on dynamic range is minimal.
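As a rough check of that claim, here is a minimal numpy sketch (the 997 Hz test tone and the function are illustrative, not part of any product's pipeline): quantizing a full-scale sine to 16 and 24 bits reproduces the familiar ~6 dB-per-bit noise floor.

```python
import numpy as np

# Quantize a full-scale 997 Hz sine to a given bit depth and measure
# the resulting signal-to-noise ratio. Expect roughly 6.02*bits + 1.76 dB.
fs = 48000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 997 * t)

def quantize_snr_db(signal, bits):
    """Round to a signed grid with 2**bits levels; return SNR in dB."""
    scale = 2 ** (bits - 1)
    q = np.round(signal * scale) / scale
    noise = signal - q
    return 10 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))

snr16 = quantize_snr_db(x, 16)  # roughly 98 dB
snr24 = quantize_snr_db(x, 24)  # roughly 146 dB
```

With 24-bit processing the quantization floor sits some 48 dB below the 16-bit one, which is why a single filtering stage at adequate bit depth costs essentially no audible dynamic range.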

2. Filter Implementation and Precision

  • The way FIR filters are implemented can affect the signal. High precision in filter coefficients and processing is essential to maintain the integrity of the audio signal. Poorly designed FIR filters or those implemented with insufficient computational accuracy might introduce errors or distortions that could, theoretically, impact dynamic range.

3. Headroom and Gain Structure

  • FIR filters often change the level of certain frequencies to correct the room's acoustic anomalies. Proper gain structure must be managed to ensure there is enough headroom to avoid clipping while maintaining the full dynamic range of the audio content. This involves careful calibration of the audio levels post-filtering.
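A minimal sketch of that gain-structure step (the function name and the 3 dB target are my own, purely illustrative): measure the filter's peak magnitude response and scale the taps so the peak sits at a chosen headroom target.

```python
import numpy as np

def normalize_headroom(h, nfft=4096, target_db=0.0):
    """Scale FIR taps h so the peak magnitude response sits at target_db."""
    mag = np.abs(np.fft.rfft(h, nfft))
    peak_db = 20 * np.log10(mag.max())
    gain = 10 ** ((target_db - peak_db) / 20)
    return h * gain

# Example: a crude correction filter whose response peaks well above unity.
h = np.array([0.5, 1.2, 0.5])
h_safe = normalize_headroom(h, target_db=-3.0)  # leave 3 dB of headroom
```

Any boost the correction applies is then guaranteed not to push program material past full scale, at the cost of a fixed level drop that can be made up downstream.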

4. Linear Phase Response

  • One of the advantages of using FIR filters for room correction is their ability to provide a linear phase response, meaning all frequency components of the audio signal are time-aligned. This preserves the waveforms of the original signal more faithfully compared to minimum phase filters, which can alter the phase and potentially affect the perceived clarity and impact of the dynamics.
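That time alignment is straightforward to verify with scipy (filter length and cutoff here are arbitrary examples): a symmetric-tap FIR delays every frequency component by the same (N-1)/2 samples.

```python
import numpy as np
from scipy.signal import firwin, group_delay

N = 101
h = firwin(N, 0.25)             # windowed-sinc lowpass; taps are symmetric
assert np.allclose(h, h[::-1])  # tap symmetry is what guarantees linear phase

# Group delay is flat at (N-1)/2 = 50 samples across the passband,
# so all passband components arrive together and waveshapes survive.
w, gd = group_delay((h, [1.0]), w=512)
```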

5. Practical Usage

  • In practical applications, any reduction in dynamic range due to FIR filtering is usually imperceptible. The design and application of FIR filters typically focus on preserving or enhancing the audio quality, including maintaining dynamic range. In high-fidelity audio systems where room correction is applied, the benefits of correcting frequency and phase anomalies generally outweigh any minimal impact on dynamic range.

Conclusion​

In well-designed audio systems and room correction setups using FIR filters, the impact on dynamic range should be negligible, providing the system is properly calibrated and the filters are designed with high precision. The key is to use high-quality equipment and software that can handle the demands of high-resolution audio processing without introducing significant artifacts or noise.



^That^ looks like a cut-n-paste sales pitch.
FIR would be dandy, but the pre-ringing is something that seems like a bit of an Achilles heel.
 
So here is my question: what if we use linear phase filters, and sit so close to the speakers that there is not much excess phase to speak of. Do we then get to hear what lack of phase distortion sounds like? People who use minimum phase filters say that phase distortion is inaudible. Are they correct?

There are two issues. The first is filter matching in the crossover. That means in two (or more) speakers, the phase response is going to be different in the two channels due to component tolerances. I know some manufacturers stand on their heads to avoid this, but in an analog filter some mismatch is inevitable, and there will be both interference and interaural time-delay changes as a result. Furthermore, components drift. Always. Especially bipolar electrolytics and some of the newer technologies: much more capacitance in a smaller package, but never perfectly stable.

The second is, of course, the phase shift resulting from an IIR crossover. There are designs that are not power-complementary which can avoid this, but at the cost of creating some pretty "interesting" radiation patterns.

A constant delay crossover avoids all of that, but is solely a sampled (i.e. "digital") system.
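A minimal sketch of such a constant-delay crossover (sample rate, length, and crossover frequency are arbitrary examples): build a linear-phase FIR lowpass and take the highpass as its complement, so the two branches recombine to a pure delay.

```python
import numpy as np
from scipy.signal import firwin

fs = 48000
fc = 2000                  # example crossover frequency, Hz
N = 255                    # odd length -> integer group delay of 127 samples

lp = firwin(N, fc, fs=fs)  # linear-phase lowpass branch
hp = -lp
hp[(N - 1) // 2] += 1.0    # complement: delayed impulse minus the lowpass

# The branches sum to a unit impulse delayed by (N-1)/2 samples:
# flat magnitude, constant delay, no phase distortion at recombination.
recombined = lp + hp
```

Since both branches are computed from the same coefficient sets, two channels built this way behave identically: there are no component tolerances to mismatch or drift.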

Is it audible? I believe so, actually, on very "pitchy" sounds: perhaps a clarinet, maybe some very bright brass, or some individuals' voices. Is it very audible? Probably not, and we are all trained to hear right through it.

In addition to all of that, of course, listening very close means that you are completely in the direct sound range, and the ratio of direct to reverberant is very, very high. This is not true in most listening setups.

There are more issues, but I need to head out to do work on my constant-delay listening system at work.
 
FIR would be dandy, but the pre-ringing is something that seems like a bit of an Achilles heel.

Definitely not, using proper filter design. Been there, done that, works fantastically. Furthermore, it very, very much reduces the "width" of the crossover where both drivers matter, and provides a great deal of protection for the mid and HF drivers.
 
Definitely not, using proper filter design. Been there, done that, works fantastically. Furthermore, it very, very much reduces the "width" of the crossover where both drivers matter, and provides a great deal of protection for the mid and HF drivers.
Are you using symmetric or asymmetric FIRs?
 