Old_School_Brad
Dirac Live takes measurements from multiple positions, including varying heights. The Klippel automates the process and takes some of the variables out of the equation. The Klippel knows the distance between the speaker and the mic, and it knows what angles the mic is at relative to the reference axis.
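To picture what knowing the distance and angles buys you: those two pieces of information pin down exactly where each measurement point sits relative to the speaker. A toy sketch (not the NFS software itself; the function name and example numbers are made up for illustration):

```python
import numpy as np

def mic_position(distance_m: float, horizontal_deg: float, vertical_deg: float) -> np.ndarray:
    """Convert a known distance and angles (relative to the reference axis)
    into an x/y/z position for the mic in front of the speaker."""
    h, v = np.radians(horizontal_deg), np.radians(vertical_deg)
    return distance_m * np.array([
        np.cos(v) * np.cos(h),  # along the reference axis
        np.cos(v) * np.sin(h),  # horizontal offset
        np.sin(v),              # vertical offset
    ])

# Example: a point 1 m out, 10 degrees off-axis horizontally, 5 degrees up.
print(mic_position(1.0, 10.0, 5.0))
```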
Assuming you're using a decent sampling rate, and not doing anything ridiculous with regards to placement of the mic relative to the speaker, you can make some assumptions.
The first sound recorded by the mic will be the sound directly from the speaker and not a reflection. The shortest distance between two points is a straight line (note the speed of sound is constant). A reflection will arrive at the mic later, because it travels a greater distance, and it will arrive at a lower amplitude because it will have transferred some of its energy into the surface it reflected off of. Having the speaker up on a platform, and the mic closer to the speaker than any potential reflective surfaces, just assures the above happens more consistently. It's all pretty straightforward science, and how they can tell the difference between the speaker and the room.
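A minimal sketch of that idea, just simple time-gating of an impulse response (not the actual Klippel NFS algorithm; the function name, margin, and example numbers are invented for illustration):

```python
import numpy as np

def gate_direct_sound(impulse_response: np.ndarray, sample_rate: int,
                      mic_distance_m: float, margin_m: float = 0.5,
                      speed_of_sound: float = 343.0) -> np.ndarray:
    """Zero out everything after the direct sound should have arrived.

    The direct path is the shortest possible path, so anything arriving
    after it (plus a small margin) is assumed to be a room reflection."""
    gate_end = int(round((mic_distance_m + margin_m) / speed_of_sound * sample_rate))
    gated = np.zeros_like(impulse_response)
    gated[:gate_end] = impulse_response[:gate_end]
    return gated

# Synthetic example: a direct spike at ~1 m plus a weaker reflection off a ~3.5 m path.
fs = 48_000
ir = np.zeros(fs // 10)
ir[int(1.0 / 343.0 * fs)] = 1.0   # direct sound, shortest path, arrives first
ir[int(3.5 / 343.0 * fs)] = 0.3   # reflection: longer path, later, lower amplitude
direct_only = gate_direct_sound(ir, fs, mic_distance_m=1.0)
```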
Amirm normally has the Klippel collect somewhere between 500 and 1000 points of data (if memory serves). The data is then post-processed, fitted, smoothed, etc. before being used to generate the various informative plots.
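As a loose illustration of the smoothing part of that post-processing (a generic fractional-octave moving average, not whatever Klippel actually does; the data here is fake):

```python
import numpy as np

def octave_smooth(freqs: np.ndarray, magnitude_db: np.ndarray,
                  fraction: float = 6.0) -> np.ndarray:
    """Smooth a magnitude response with a 1/fraction-octave moving average."""
    smoothed = np.empty_like(magnitude_db)
    for i, f in enumerate(freqs):
        lo, hi = f * 2 ** (-0.5 / fraction), f * 2 ** (0.5 / fraction)
        band = (freqs >= lo) & (freqs <= hi)   # neighbouring points within the band
        smoothed[i] = magnitude_db[band].mean()
    return smoothed

# Example: smooth a noisy response measured at 500 log-spaced frequencies.
freqs = np.logspace(np.log10(20), np.log10(20_000), 500)
raw_db = 85 + np.random.randn(freqs.size)   # fake, noisy SPL readings
smooth_db = octave_smooth(freqs, raw_db, fraction=6)
```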
Note that all of the above is a simplified explanation; I'm sure they are using additional techniques to help differentiate between the speaker and the room.
Consumer-grade DSP with two speakers is trying to do something like the above, but most likely it has to estimate distances and angles, because you have a human holding a relatively cheap mic, moving it around in an imprecise manner, and only gathering a few dozen data points at most. If memory serves, they generally only take measurements in two dimensions, not three like the Klippel. To an extent the simplification is OK, because they are only trying to generate corrections for a specific listening position, not model the speaker. A rough sketch of that idea is below.
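Very roughly, that kind of single-position correction boils down to something like this (a toy sketch, not any particular product's algorithm; the function, band count, and boost limit are made up for illustration): average a handful of measurements around the seat and compute the per-band gain needed to hit a target curve.

```python
import numpy as np

def correction_gains_db(measurements_db: np.ndarray, target_db: np.ndarray,
                        max_boost_db: float = 6.0) -> np.ndarray:
    """Per-band correction from a few listening-position measurements.

    measurements_db: shape (n_positions, n_bands), a few sweeps near the seat.
    target_db:       shape (n_bands,), the desired in-room response."""
    average_db = measurements_db.mean(axis=0)           # spatial average over positions
    correction = target_db - average_db                  # boost/cut needed per band
    return np.clip(correction, -np.inf, max_boost_db)    # cap boosts to protect the drivers

# Example: 5 measurement positions, 10 frequency bands, flat 75 dB target.
measurements = 75 + np.random.randn(5, 10) * 3
target = np.full(10, 75.0)
eq_db = correction_gains_db(measurements, target)
```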
The main differences are the quality of the equipment, the types of data collected, the amount of data collected, and the quality of the data collected.
It might take some time before that's achievable. Maybe when all AVRs and processors incorporate generative AI, they will be able to make the sound exactly as we ask, without us having to do anything!
I asked ChatGPT to visualize a house curve for in-room use that mimics the sound characteristics of the Apple AirPods Pro 2.