
Research Project: Infinity IL10 Speaker Review & Measurements

GXAlan

Major Contributor
Forum Donor
Joined
Jan 15, 2020
Messages
3,930
Likes
6,071
2800 Hz; 24 dB/octave
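For reference, a 24 dB/octave slope implies a fourth-order crossover filter. A minimal sketch of its low-pass magnitude around the 2800 Hz point (assuming a Linkwitz-Riley alignment, which is a common choice but not confirmed for this speaker):

```python
import math

def lr4_lowpass_mag(f, fc):
    """Magnitude of a 4th-order Linkwitz-Riley low-pass (two cascaded
    2nd-order Butterworth sections) at frequency f, crossover fc."""
    ratio = (f / fc) ** 2
    butter2 = 1.0 / math.sqrt(1.0 + ratio ** 2)  # one 2nd-order Butterworth section
    return butter2 ** 2                          # cascading two sections squares it

fc = 2800.0  # Hz, per the post above
for f in (1400.0, 2800.0, 5600.0):
    print(f"{f:6.0f} Hz: {20 * math.log10(lr4_lowpass_mag(f, fc)):6.1f} dB")
```

At the crossover frequency each branch sits at -6 dB (the Linkwitz-Riley hallmark), and one octave above, the response is already roughly 24 dB down.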
 

JohnBooty

Addicted to Fun and Learning
Forum Donor
Joined
Jul 24, 2018
Messages
637
Likes
1,595
Location
Philadelphia area
Another issue is the unknown history of this 20+ year old sample of a speaker. It might be 'as new', or it might possess replaced (or even different) drive units, crossover mods, etc. Might have endured physical or electrical stress. Meaning that the logical lineage from this speaker to the Harman tests might be invalid.
In general, yes, it's a concern!

Not an issue in this case. Look at how closely Amir's measurements match those taken by Harman. We can be very certain that this sample hasn't been modified and hasn't degraded on its own.
 

test1223

Addicted to Fun and Learning
Joined
Jan 10, 2020
Messages
512
Likes
523
@amirm If you want to dive deeper into the importance of the non-linear distortions of speakers, some IMD tests at the listening position might be interesting. You would then also have a reference for the noise in your room, which will mask lower-level distortion at moderate listening levels.
And as I mentioned, the Smyth device is very useful for doing listening tests, since you can do blind tests without moving speakers or worrying about different head positions, and the contribution of non-linear distortion should be smaller.
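A two-tone stimulus is the usual way to run such an IMD test; here is a minimal sketch that generates one, loosely following the SMPTE convention of a 60 Hz and 7 kHz pair at roughly 4:1 (the output file name and duration are arbitrary choices):

```python
import math
import struct
import wave

def two_tone(f1=60.0, f2=7000.0, ratio_db=-12.0, fs=48000, seconds=2.0):
    """Generate a two-tone IMD stimulus: a loud low-frequency tone plus a
    quieter high-frequency tone. Intermodulation products then show up at
    f2 +/- n*f1 in the measured spectrum."""
    amp2 = 10 ** (ratio_db / 20)  # ~0.25, i.e. the classic 4:1 ratio
    n = int(fs * seconds)
    return [0.5 * (math.sin(2 * math.pi * f1 * t / fs)
                   + amp2 * math.sin(2 * math.pi * f2 * t / fs))
            for t in range(n)], fs

samples, fs = two_tone()
with wave.open("imd_test.wav", "wb") as w:  # hypothetical output file name
    w.setnchannels(1)
    w.setsampwidth(2)   # 16-bit PCM
    w.setframerate(fs)
    w.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))
```

Play the file through the speaker at the test level and capture the response at the listening position; anything at sums and differences of the two tones is intermodulation, not harmonic distortion.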
 

tuga

Major Contributor
Joined
Feb 5, 2020
Messages
3,984
Likes
4,285
Location
Oxford, England
Have you found evidence or is this at the level of hunch?

I could point you to a paper by Genelec about what they call slow listening but you'll dismiss it by saying that even though they are one of the top speaker manufacturers they have no idea what they are talking about when it comes to listening assessment...
 

tuga

Major Contributor
Joined
Feb 5, 2020
Messages
3,984
Likes
4,285
Location
Oxford, England
No, but the reason it's not ludicrous is that the M16's preference rating is higher than the IL10's, both produced solely from spinorama data. In fact, @edechamps calculates there to be a 61% probability that the average listener will prefer the M16, as shown in his very useful matrix comparison chart. @amirm is just part of that 61% majority predicted by the preference ratings.

According to the spin the IL10 rates pretty high though, which doesn't explain it sounding poor to Amir when compared with the M16.
I think people are reading too much into this Estimated Preference thing... I can understand the intellectual attraction of star-ratings and statistics in general but I wouldn't get too carried away.
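For context, the "Estimated Preference" under discussion comes from Olive's 2004 model, which maps four spinorama-derived metrics to a rating. A sketch of the formula as commonly cited (the inputs below are made up for illustration, not values for the IL10 or M16):

```python
def olive_preference(nbd_on, nbd_pir, sm_pir, lfx):
    """Olive (2004) preference-rating model as commonly cited:
    narrow-band deviation of the on-axis response (NBD_ON) and of the
    predicted in-room response (NBD_PIR), smoothness of the PIR (SM_PIR),
    and low-frequency extension (LFX, log10 of the -6 dB bass cutoff
    relative to 300 Hz)."""
    return 12.69 - 2.49 * nbd_on - 2.99 * nbd_pir + 2.32 * sm_pir - 4.31 * lfx

# Purely illustrative inputs, not measured values for any speaker:
print(round(olive_preference(0.4, 0.3, 0.8, 1.3), 2))
```

The point tuga makes stands either way: the score compresses a whole spin into one number, so two speakers a fraction of a point apart are not meaningfully ranked.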
 

Absolute

Major Contributor
Forum Donor
Joined
Feb 5, 2017
Messages
1,085
Likes
2,131
We must avoid resonances at all costs.
-
Floyd Toole

I'm thinking that tonality will only get you so far, after that resonances will become the most important thing. If that is taken care of, distortion in various forms, perhaps most crucially from the motor/driver system, will likely be next up in the line.
After that I think the remaining factors will likely be that of time-related stuff, like CSD, phase and step-response.

Before we look for answers anywhere else, my money is on the resonances being the primary factor for your objections, Amir.
That being said, I don't think this speaker measures that well dispersion wise, which could be the primary reason after all.
 

tuga

Major Contributor
Joined
Feb 5, 2020
Messages
3,984
Likes
4,285
Location
Oxford, England
This is really quite a stretch of a conclusion. And because it's worth repeating: as @bobbooo said, the Revel not only sounded better, it also scores better, so the formula is still doing quite well for itself in this particular example.

If Amir ends up liking the Bose more than the IL10, then I'd start to be a little worried. But even then: one listener, listening to a single speaker at a time (as in not comparative; not talking about stereo), in an uncontrolled setup. As Amir points out, he also has particular sensitivities to distortion.

Spin aside but still following the basic frequency and directivity logic, the Revel M16 also has slightly better directivity control, which could partially explain it sounding cleaner, although the IL10 is certainly still very good.

That said this isn't to say I don't believe there can be other causes for it. I'd be curious if the IL10 is still 'to spec' in other respects. But if I were a betting man, I'd still put my money towards the IL10 over newer speakers that scored significantly lower.

I think you and many others are missing a crucial point: Amir not only preferred the M16 (which is irrelevant information as far as I am concerned; taste is personal and non-transferable) but, far more importantly, heard notable audible shortcomings which are reflected neither in the spin nor in the preference rating.
 
Last edited:

tuga

Major Contributor
Joined
Feb 5, 2020
Messages
3,984
Likes
4,285
Location
Oxford, England
The fact of the matter is @amirm cannot "untrain" himself. Critical/trained listeners are realistically hyper-attuned to find faults, whereas audiophiles, even so-called golden eared ones, are pursuing sound that gives the most enjoyment or emotion. In short, they aren't listening to speakers to tear them to shreds, they are hoping that the next pair of speakers they audition bring them tears of joy.

The fact that many audiophiles aren't listening critically is probably the reason why they keep on going round in circles chasing their tails.
How can you upgrade a piece of equipment if you don't identify its shortcomings?
 

tuga

Major Contributor
Joined
Feb 5, 2020
Messages
3,984
Likes
4,285
Location
Oxford, England
What would you propose that would work better than how Harman does this? I've a few quibbles about how they do this myself, but don't think it grossly inaccurate as an approach to finding what people find to sound best to them.

I am not particularly interested in the preference testing, my criticism is more directed at the listening assessment of performance.
That approach in my view is over-simplified at some levels.
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,793
Likes
37,702
I am not particularly interested in the preference testing, my criticism is more directed at the listening assessment of performance.
That approach in my view is over-simplified at some levels.
You avoided answering my question nicely.

How would you judge speaker quality?

A few ideas of years listening to speakers prior to the modern times when measurements are available in situ.

Generally good speakers sound good in most locations. One of the easiest recommendations for me were Quad ESL63s. They might sound better in some places than others, but their good attributes always shone forth. Until a friend got some and they sounded dreadful in his room. So much so that we checked them out, wondering if they were broken. Finally we moved them into his dining room as a test. Sure enough, they sounded good. He said they sounded good like the ones I owned and those a friend of his owned. He sold them because they didn't work in his room, but this was rare.

Some of the most illuminating times have been when there were two speakers in a room and we could relatively easily swap between them. The relative character stood out so much more clearly. Harman's speaker-shuttle mechanism seems nearly ideal, with my main caveat being positioning for best low-end response. And of course they had better correlation with speakers lacking response below 100 Hz. Not a surprise, really.

Having speakers in your own location and swapping between them is obviously an excellent approach. If you can figure out what kind of speaker works well everywhere, so that you don't even need such comparisons, that is better still. I'd feel better if Harman made their own purist mono recordings for use in testing. The very heavy processing of 99%+ of all music is a possible confounding factor.

So to repeat, how would you judge speaker quality?
 
Last edited:

napilopez

Major Contributor
Forum Donor
Joined
Oct 17, 2018
Messages
2,146
Likes
8,719
Location
NYC
I think you and many others are missing a crucial point: Amir not only preferred the M16 (which is irrelevant information as far as I am concerned; taste is personal and non-transferable) but, far more importantly, heard notable audible shortcomings which are reflected neither in the spin nor in the preference rating.

That's where the disagreement is. You say it's not notable in the spin; I gave some possible explanations for how it *could* very well be in the spin and the on/off-axis data. I think neither of us is too concerned with the preference score or Amir's quick preferences over brief listening sessions. But I do think the huge majority of sound can be explained by frequency response and directivity.

As I suggested in my reply to Amir, I think it is very interesting that the most common complaint about the IL10 in the study was that it sounded 'dull.' That's a broad term, but Amir's impressions certainly seem to fall under that umbrella. In fact, in the paper it was the third most 'dull' speaker in the lineup; it just happened that the other speakers had seemingly more egregious negatives.

Perhaps even more noticeably, it was also described as having a 'mid depression' and sounding 'mellow' and 'smooth' (two sides of the same coin?) more than any of the 13 speakers tested for Part 1.

However much Amir may have liked the speakers is one thing, but his description of the sound is IMO quite reasonable to conclude from the ER curve in the study. In Amir's measurements, not so much, because the NFS calculates the ER/PIR differently from Harman, so you don't get quite the same scoop in the upper mids.

That said, I do think it'd be interesting to investigate other factors. I think @Absolute could be right with resonances, and I'd be curious to see IMD performance as well to see how much cabinet design and driver performance could be influencing things relative to more modern designs.

I could point you to a paper by Genelec about what they call slow listening but you'll dismiss it by saying that even though they are one of the top speaker manufacturers they have no idea what they are talking about when it comes to listening assessment...

Have you read the paper?
 

Tks

Major Contributor
Joined
Apr 1, 2019
Messages
3,221
Likes
5,497
I just have to say something a bit as an aside..

This was groundbreaking research by Sean Olive, published back in 2004, with the goal of predicting listener preference from anechoic chamber speaker measurements.

To be "polite" the research did not list actual speaker names/models.

Try as I might though, I could not like this speaker. Again, tonality was right but there is this grunginess and lack of clarity to everything it played. I tried to take the resonances out to fix it but in the end it was not conclusive, nor did it make much of a difference. I even pulled my wife over to listen and she said there was some small difference with EQ but not enough for her to care.

Just have to say a few things on science first..

If there's one thing science does to shoot itself in the foot, on top of already being esoteric for the majority of people, it is not publishing the precise models or SKUs of the devices being tested in research like this. Science already gets enough flak from a large number of lay people (especially in the US these days) for being unapproachable (literal paywalls to articles don't help either, on top of the massive paywall to get an education that lets you properly decipher some of the data).

I'm speaking off the cuff, but it seems to me, there needs to be a legislative push to have precise details of all scientific testing done that is published. As for the private sphere, there needs to be an eradication of CRO's (contract research organizations, instead of academic researchers like in universities and such), where contracts are drawn up that allow for the company to review results of research before it's published, and then withhold publishing if it's deemed not beneficial. This kind of nonsense should be kept out of any enterprise concerning scientific understanding.

---------------------------------------------------------------------------------------------------------------------------------------------------

As for the Harman target itself, for example (since we know it approximates the room properties of a typical recording/mixing studio setup), I just don't know. At first I thought it was great, but depending on many other things like genre or mood, it seems to fluctuate (or it's just my preference for that day or time). The OE/IE targets are sometimes very good depending on the headphones (IEMs are a complete wash even with the new IE targets, for some reason; then again, there is much to do with canal resonances, occlusion and the like that I'm not familiar enough with to properly comment on).

I can't think of a single activity whose appeal doesn't fluctuate for me over time. Take food or drink, for example. You find something you like; some days it's lifeless in terms of pleasure, while other days it tastes like the best thing ever. This is no bash against Sean's research (the only bash is the non-publishing of specific models; that was just a stupid omission based on some political correctness, obviously).

It's also not just me; I have four other folks in the same boat (one guy being exclusively headphones-only). Some days it's great, other days it's just okay (this is across a mix of headphones, speakers, and IEMs, with one fellow sporting Genelec 8340s, which I felt the EQ was great with; and now saying that, I realize it betrays much of the tone of my post). RME had it right with their DAC: dedicated bass/treble knobs for a quick change in your sound (along with the on-device EQ it's now famously known for, etc.). Companies need to start offering things of this nature. I don't need an endless wave of pure DACs with four decimal places of distortion-free listening while my speakers hover in the whole-number percentages, for example.

But yeah, I don't know, it just seems that for speakers, room setup is a fucking nightmare (not a problem for me, since I don't do much speaker listening to begin with, nor am I too picky about sound as long as distortion/noise and such can be minimized to inaudible levels across the volume range). So with that, I'm just not sure how any preference target could hold much value beyond generalities. Toole's AES paper shows there's much to disagree with when considering a notion of "typical room response".
 

TimVG

Major Contributor
Forum Donor
Joined
Sep 16, 2019
Messages
1,200
Likes
2,648
The on-axis sound features a slight upward tilt. I'd start with one shelf filter and take out the most prominent treble peak.

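Such a correction could be built from standard parametric-EQ biquads; a minimal sketch using the well-known RBJ audio-EQ-cookbook peaking filter (the frequency, gain and Q below are placeholders, not a tuned correction for the IL10; a shelf filter follows the same cookbook recipe):

```python
import math

def peaking_eq(fs, f0, gain_db, q):
    """Biquad coefficients (b, a) for a peaking EQ, per the RBJ
    audio-EQ-cookbook, normalized so that a[0] == 1."""
    a_lin = 10 ** (gain_db / 40)          # sqrt of the linear gain
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * a_lin, -2 * math.cos(w0), 1 - alpha * a_lin]
    a = [1 + alpha / a_lin, -2 * math.cos(w0), 1 - alpha / a_lin]
    return [x / a[0] for x in b], [x / a[0] for x in a]

# Placeholder values, not a measured correction for this speaker:
b, a = peaking_eq(fs=48000, f0=10000, gain_db=-3.0, q=4.0)
```

At 0 dB gain the filter degenerates to a pass-through, which makes A/B-ing the correction trivial.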
 

tuga

Major Contributor
Joined
Feb 5, 2020
Messages
3,984
Likes
4,285
Location
Oxford, England
You avoided answering my question nicely.

How would you judge speaker quality?

A few ideas of years listening to speakers prior to the modern times when measurements are available in situ.

Generally good speakers sound good in most locations. One of the easiest recommendations for me were Quad ESL63s. They might sound better in some places than others, but their good attributes always shone forth. Until a friend got some and they sounded dreadful in his room. So much so that we checked them out, wondering if they were broken. Finally we moved them into his dining room as a test. Sure enough, they sounded good. He said they sounded good like the ones I owned and those a friend of his owned. He sold them because they didn't work in his room, but this was rare.

Some of the most illuminating times have been when there were two speakers in a room and we could relatively easily swap between them. The relative character stood out so much more clearly. Harman's speaker-shuttle mechanism seems nearly ideal, with my main caveat being positioning for best low-end response. And of course they had better correlation with speakers lacking response below 100 Hz. Not a surprise, really.

Having speakers in your own location and swapping between them is obviously an excellent approach. If you can figure out what kind of speaker works well everywhere, so that you don't even need such comparisons, that is better still. I'd feel better if Harman made their own purist mono recordings for use in testing. The very heavy processing of 99%+ of all music is a possible confounding factor.

So to repeat, how would you judge speaker quality?

One listens (observation) for problems, using very familiar, fit-for-purpose music programme (and pink noise), over a long period of time.
Then one correlates those findings with measured performance, when available, and tries to identify probable causes.
 

tuga

Major Contributor
Joined
Feb 5, 2020
Messages
3,984
Likes
4,285
Location
Oxford, England
But I do think the huge majority of sound can be explained in frequency response and directivity.

I agree.
But we'd be in the dark about many of the issues that have been surfacing were it not for the THD measurements, as well as the CSD and the individual driver and port FR plots. And we are still "missing" step-response, IMD and in-room FR plots. There are many aspects to speaker performance, and a more comprehensive set of measurements will produce a more accurate "picture".

The Spinorama alone is insufficient to characterise performance.
 

restorer-john

Grand Contributor
Joined
Mar 1, 2018
Messages
12,743
Likes
38,992
Location
Gold Coast, Queensland, Australia
Not an issue in this case. Look at how closely Amir's measurements match those taken by Harman. We can be very certain that this sample hasn't been modified and hasn't degraded on its own.

Bass drivers' roll surrounds/suspensions will get more compliant until the surrounds/spiders fail, while treble units will stiffen up or their ferrofluid will dry out, going the other way. Midranges can be a disaster when their surrounds stiffen up.

The fact that the plots are so remarkably close after all these years shows neither has happened. Yet.
 

ctrl

Major Contributor
Forum Donor
Joined
Jan 24, 2020
Messages
1,633
Likes
6,241
Location
.de, DE, DEU
I think you and many others are missing a crucial point: Amir not only preferred the M16 (which is irrelevant information as far as I am concerned; taste is personal and non-transferable) but, far more importantly, heard notable audible shortcomings which are reflected neither in the spin nor in the preference rating.
Assuming @amirm's listening impression is correct, and that it stems from the high third-order harmonic distortion around 1.5kHz, then your statement is trivial. Of course the model only works as long as other factors do not become dominant; nothing else was ever claimed.

For @amirm it seems to be the case here. The 1.5% HD3@86dB is pretty bad, because HD3 is masked less well than HD2, and masking works less well at sound pressures around 80dB than, e.g., around 100dB.
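For scale, that 1.5% figure can be restated as a level relative to the fundamental; a quick worked conversion:

```python
import math

def distortion_pct_to_db(pct):
    """Level of a harmonic relative to the fundamental, in dB, given its
    amplitude as a percentage of the fundamental."""
    return 20 * math.log10(pct / 100)

rel = distortion_pct_to_db(1.5)           # about -36.5 dB
print(round(rel, 1), round(86 + rel, 1))  # the HD3 product sits near 49.5 dB SPL
```

A component only ~36 dB below an 86 dB fundamental is well within audibility range if masking doesn't cover it, which is exactly the concern with HD3.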

In addition, the decay behaviour in the CSD around 1.2kHz also shows abnormalities.
Unfortunately the CSD was not scaled with the usual 30dB (my usual comment: Klippel should improve the output and always use 30dB scaling). But you can see that after 4ms the resonance has only been attenuated by about -15dB. Unfortunately we cannot see how the decay behaves beyond 4ms.

That approach in my view is over-simplified at some levels.
This will always be the case: since so many sound-relevant factors show non-linear behavior (distortion, equal-loudness contours, masking, ...), a simple algorithm can never completely capture such a thing.
But the CEA2034 is a strong indicator of whether a speaker has the potential to sound very good, provided other factors that are actually well controllable nowadays (distortion, decay behavior, ...) are not completely out of control.
 
Last edited:

edechamps

Addicted to Fun and Learning
Forum Donor
Joined
Nov 21, 2018
Messages
910
Likes
3,621
Location
London, United Kingdom
If there's one thing science does to shoot itself in the foot, on top of already being esoteric for the majority of people, it is not publishing the precise models or SKUs of the devices being tested in research like this. Science already gets enough flak from a large number of lay people (especially in the US these days) for being unapproachable (literal paywalls to articles don't help either, on top of the massive paywall to get an education that lets you properly decipher some of the data).

(the only bash is the non-publishing of specific models; that was just a stupid omission based on some political correctness, obviously)

It's a catch-22 though: if you don't publish model names, people will complain that data is being withheld. If you publish model names, people will complain about conflicts of interest because the highest-rated speaker is from Harman and the study will read like an advertisement, greatly damaging its credibility. It's damned if you do, damned if you don't.
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,793
Likes
37,702
One listens (observation) for problems using very familiar fit-for-specific-purpose music programme (and pink noise) over a long period of time.
Then you correlate those findings with measured performance when available and try to identify probable causes.
I think that is backwards, both in the sense that the old method is no longer the best and in the sense that the approach itself is inverted. There is enough known and demonstrated to design toward good objectives first, then use your approach to find the missed gotchas in a design. As a first-line method of picking speakers, yours isn't a good one.
 

tuga

Major Contributor
Joined
Feb 5, 2020
Messages
3,984
Likes
4,285
Location
Oxford, England
Then your statement is trivial. Of course, the model only works as long as other factors do not become dominant - nothing else was ever claimed.

The problem is that in most cases there are other "factors". Loudspeakers are significantly flawed in comparison to electronics, which can be nearly "transparent" (accurate).
 