Speaker testing is predominantly done with a single speaker, both for measurements and for listening tests. We don't have quantitative ways to compare the imaging and soundstage potential of different speakers. We understand (or theorize) that level and phase matching between the stereo pair affects imaging, and that the dispersion pattern (and its interaction with the room) affects soundstage (as well as imaging, to some degree).
Given the opportunity to do stereo A/B testing, we can make comparative assessments such as "speaker A images better than speaker B" or "speaker B has a wider soundstage than speaker A". However, it would be great to put these attributes on a fixed scale, to facilitate comparisons between speakers without having them in the same place at the same time.
I think that if we pool our collective knowledge and creativity, we may actually be able to crack this nut.
As a step toward this endeavor, I have created a prototype (and almost certainly flawed, or at least not optimized) scoreable imaging test. The mp3 I have linked modulates the breadth of a pink noise signal over time. At first, the breadth spans the entire range separating the speakers. By 5 seconds it collapses completely to mono. At 10 seconds it expands to 80% of the range between the speakers. At 15 seconds it collapses to mono again. At 20 seconds it spans 64% of the speaker separation. This modulation continues: at 5, 15, 25, 35, 45 seconds (any time ending in 5) the signal is completely mono, and at 0, 10, 20, 30 seconds (any time ending in 0) the breadth reaches a local maximum, with each maximum 20% smaller than the last (100%, 80%, 64%, 51%, 41%, 33%, etc.).
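For anyone who wants to experiment, here's a rough Python sketch of how such a signal could be synthesized. To be clear, I'm not claiming this is how the linked mp3 was actually generated: the mid/side widening scheme, the half-cosine envelope, and the FFT-shaped pink noise are all just my assumptions about one plausible implementation.

```python
import numpy as np
from scipy.io import wavfile

FS, DUR = 44100, 150                      # 15 cycles of 10 seconds each

def pink_noise(n, rng):
    """White noise shaped to a 1/f power spectrum via FFT filtering."""
    spec = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n, 1 / FS)
    f[0] = f[1]                           # dodge the divide-by-zero at DC
    pink = np.fft.irfft(spec / np.sqrt(f), n)
    return pink / np.max(np.abs(pink))

# Width envelope: zero at times ending in 5, a peak at times ending in 0,
# each peak 80% of the previous, half-cosine interpolation in between.
t = np.arange(FS * DUR) / FS
seg, tau = (t // 5).astype(int), (t % 5) / 5
peak = 0.8 ** np.where(seg % 2 == 0, seg // 2, seg // 2 + 1)
start = np.where(seg % 2 == 0, peak, 0.0)
end   = np.where(seg % 2 == 0, 0.0, peak)
width = start + (end - start) * 0.5 * (1 - np.cos(np.pi * tau))

# Mid/side widening: width = 0 is perfectly mono, width = 1 is full breadth.
rng = np.random.default_rng(0)
mid, side = pink_noise(len(t), rng), pink_noise(len(t), rng)
left, right = 0.5 * (mid + width * side), 0.5 * (mid - width * side)
wavfile.write("imaging_test.wav", FS, np.stack([left, right], 1).astype(np.float32))
```

One thing I like about the mid/side approach is that during the mono segments the two channels are sample-identical by construction, which seems important if interchannel matching is exactly what we're trying to probe.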
At some point, the listener will no longer be able to discern a difference between the mono noise (on the 5s) and the broadened noise (on the 0s). The longer it takes for this difference to become imperceptible, the better the speakers' imaging. Of course, the room (and probably the listener) will also impact the result, but this might allow a decent level of consistency if performed by the same listener (Amir?) in the same room.
I believe there are 15 modulations in the mp3 linked below, so there's a potential score of 15 points. Simply divide the time at which the changes in noise breadth became imperceptible by the 10-second period to get the score. There will probably be somewhat of a training effect if people actually try this, as they get better at listening for and identifying the breadth changes.
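To make the scoring arithmetic concrete, here's a trivial helper. The function itself and the extra "last audible width" readout are my additions, not part of the test as described:

```python
# Sketch of the scoring rule described above: score = time / 10 s.
def score(t_imperceptible_s, period_s=10.0, decay=0.8):
    points = t_imperceptible_s / period_s          # one point per 10-second cycle
    last_peak_width = decay ** (int(points) - 1)   # width of the last peak still heard
    return points, last_peak_width

# Example: differences vanish at t = 80 s -> 8 points; the last audible
# peak (at t = 70 s) spanned 0.8**7, i.e. about 21% of the speaker separation.
print(score(80))
```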
An improved version would use a computer program to challenge the listener with ABX trials: one perfectly mono (matched) noise source, and one with varying degrees of difference ("distortion") between the channels. The program would know the extent of the differences and could determine how much distortion was perceptible to the listener. A system with good imaging would allow the listener to distinguish even minor deviations from perfection.
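As a strawman for that improved version, below is a bare-bones console sketch of an adaptive ABX loop. Everything here is assumption: it uses the third-party sounddevice package for playback, plain white noise instead of pink for brevity, and a simple 2-down-1-up staircase (which converges near the 70.7%-correct point) in place of whatever adaptive rule would actually be best.

```python
import numpy as np
import sounddevice as sd          # assumption: pip install sounddevice

FS = 44100

def noise_burst(width, dur=2.0, rng=None):
    """Stereo noise burst; width=0 is perfectly mono, width=1 fully widened."""
    if rng is None:
        rng = np.random.default_rng()
    n = int(FS * dur)
    mid, side = rng.standard_normal(n), rng.standard_normal(n)  # white, for brevity
    stereo = 0.1 * np.stack([mid + width * side, mid - width * side], axis=1)
    return stereo.astype(np.float32)

def trial(width, rng):
    """One ABX trial: A is mono, B is widened, X is randomly one of the two."""
    x_is_b = rng.random() < 0.5
    for label, w in (("A", 0.0), ("B", width), ("X", width if x_is_b else 0.0)):
        print(f"playing {label}...")
        sd.play(noise_burst(w, rng=rng), FS, blocking=True)
    answer = input("X sounded like (a/b)? ").strip().lower()
    return (answer == "b") == x_is_b

# 2-down-1-up staircase: two correct in a row narrows the width, one miss widens it.
rng = np.random.default_rng()
width, step, streak = 0.5, 0.7, 0
for _ in range(20):
    if trial(width, rng):
        streak += 1
        if streak == 2:
            width, streak = width * step, 0      # harder: narrower image difference
    else:
        width, streak = min(width / step, 1.0), 0  # easier: wider image difference
print(f"estimated width threshold: {width:.3f}")
```

The final width is a rough threshold estimate: the smaller it is, the finer the interchannel differences the system let the listener resolve, which maps naturally onto a fixed imaging scale.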
Thoughts, comments, help?