
Research Project: Infinity IL10 Speaker Review & Measurements

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,250
Likes
11,556
Location
Land O’ Lakes, FL
he was very incredulous and said point blank that they were only focused on headphones.
Then maybe they wouldn't mind releasing the data for the 57 other speakers used in the test (as we have the 13 originals). :p

It is worthwhile to look at how his formula changed between the 13 bookshelves and the 70 total.

13 bookshelves:
Screen Shot 2020-06-27 at 2.54.28 PM.png


70 total:
Screen Shot 2020-06-27 at 2.54.36 PM.png

LFX initially carried little weight for the bookshelves, which makes sense: since they are all bookshelves, their bass extension wouldn't be too dissimilar.

LFQ (deviations in the bass) goes from near 20% to not factored in at all, which seems odd to me.

I have no clue why NBD_ON replaced AAD_ON. Maybe resonances became more of an audible issue.

Let's look at the correlation for all these for the 13 bookshelves:
Screen Shot 2020-06-27 at 2.47.30 PM.png


We now know that Smoothness heavily favors tilt, such that two on-axis responses that are both neutral-ish but have different degrees of jaggedness won't score too differently. So a correlation of <0.2 makes sense, and equally so for the Listening Window, as that is usually only slightly tilted. What doesn't make sense is this: if it is so poorly correlated, why did it make up >25% of the original model, which was highly accurate?
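For anyone wanting to reproduce this kind of check, Pearson's r between a metric and the subjective ratings is easy to compute by hand; the numbers below are invented placeholders, not the actual 13-speaker data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented placeholder values standing in for a smoothness metric and the
# panel's preference ratings (NOT real data from the study):
metric = [0.80, 0.85, 0.70, 0.90, 0.75]
ratings = [5.5, 6.0, 4.8, 6.1, 5.2]
print(round(pearson_r(metric, ratings), 3))
```

A correlation below 0.2, as in the table, would mean the metric by itself explains almost none of the variance in the ratings.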

Smoothness of the Sound Power was swapped for Smoothness of the PIR, which the graph above supports, but its weight was dropped considerably. Knowing what we know, this is likely because many of the tower speakers are 3-way and thus have wider directivity, which in turn reduces the tilt and makes the metric less accurate; the paper states that ideal slopes could be tied to directivity.

Now that we have the 13 bookshelf Spins and their subjective rankings, at least some analysis can be done to see whether another formula can be made that does not rely on slope and yet has high predictive accuracy, which in theory could be used for tower speakers as well. However, as pointed out, this Infinity was rated the best of the group of 13, yet it was only given a 6.16, so predicting the scores of better-measuring speakers is less accurate.
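For reference, the final 70-speaker model (the commonly cited version of Olive's formula, with the weights shown in the second screenshot) is simple enough to evaluate directly; the metric values below are hypothetical placeholders, not measurements of the IL10:

```python
# Olive's published 70-speaker preference model:
#   Pref = 12.69 - 2.49*NBD_ON - 2.99*NBD_PIR - 4.31*LFX + 2.32*SM_PIR
# where NBD = narrow-band deviation, LFX = low-frequency extension (log10 Hz),
# and SM = smoothness (r^2 of the regression line through the PIR).

def olive_score(nbd_on, nbd_pir, lfx, sm_pir):
    """Predicted preference rating from four spin-derived metrics."""
    return 12.69 - 2.49 * nbd_on - 2.99 * nbd_pir - 4.31 * lfx + 2.32 * sm_pir

# Hypothetical placeholder inputs, chosen only to show the arithmetic:
print(round(olive_score(nbd_on=0.35, nbd_pir=0.30, lfx=1.6, sm_pir=0.85), 2))
```

Fitting an alternative formula, as suggested above, would amount to re-running this kind of regression against the 13 subjective rankings with a different set of metrics.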
 

Vladimir Filevski

Addicted to Fun and Learning
Joined
Mar 22, 2020
Messages
564
Likes
756
OK folks, please stop bashing and let Amir continue his reviews! You will not find anywhere else such state-of-the-art measurements and honest subjective impressions!
Edit: I am not referring to the last two posts.
 


pozz

Слава Україні
Forum Donor
Editor
Joined
May 21, 2019
Messages
4,036
Likes
6,827
This is not remotely the case. I said it in black and white even in the article:

View attachment 70940

I have recounted many catastrophic failures of my own judgement in my own field of expertise, for example judging two files that were identical as being different. So no, this is not about me. It is about you all not understanding how we make forward progress in the industry without formal studies but with trained listeners, and how their opinions are trusted to drive advancements in design and engineering. It is a grave mistake, then, to say no one should listen to them and that we should just sit there until the next bit of research is published a decade or more later, or spend months on an expensive double-blind study.

Now, if I were saying two cables sounded different, that would be another thing. Here, I am saying distortion in such high amounts cannot in any way, shape, or form be ignored as inaudible. Basic psychoacoustic research easily establishes that these are audible issues.

You want more proof? Here is a picture I took while at Harman:

View attachment 70941

This is a laser interferometer used to determine distortion in drivers. Why would Harman do this if distortion doesn't matter?

Here is a note from Kevin Voecks in the stereophile review of Revel M106:

View attachment 70942

You think they had a double blind test that showed importance of distortion? If so, then my case is made that distortion matters.

If you think they didn't have a double blind test to show the importance of distortion, how come they went through such expense and effort to reduce distortion?

When asked about the most important developments in the last 20 years of speaker design, Kevin pointed to the Klippel system and its distortion modelling as the top two reasons speakers have improved:


Again, he points to no double blind controlled study or paper for that. The Klippel system is super expensive. Why would they invest in that if distortion doesn't matter, nor does opinion of internal listeners and experts?

So no, it is not about my listening impressions as being 100% accurate. This argument is about some of you being dismissive of them altogether. The harm in that is large.

Please listen to someone's experience with respect to both research and development. Don't be dismissive with some basics we all know like "you are not immune to bias." Well duh!

What you haven't shown is why I, as a listener trained in detecting minor impairments, can't possibly be right. The evidence that I could be right is strong. Not definitive yet, but strong.
Hm. I always took distortion testing of drivers, including advanced analysis techniques, as being done for mechanical reasons: to understand and perfect driver behaviour, and to test materials and the effects of design changes. I also considered the motivation for sharpening those techniques to be making planning, assembly and troubleshooting easier. There's no way to have tight product tolerances if you don't have good testing equipment. It's not unusual for one of the machines on the assembly line to function slightly wrong, with no one having any idea until a repeat defect is discovered in QC. The better the analytical tools, the easier it is to find the source of the fault.

In other words I thought that the elimination of distortion is secondary, a sort of by-product of perfecting manufacturing itself, the drive to reach zero % being bundled into the latter. I thought that any intent to reduce distortion to below audible levels would be based on similar engineering-type motivations.

I know that manufacturers of cellphones and tablets, fully aware that distortion and clipping is unavoidable, conduct listening tests to understand what kind of distortion profiles are the least offensive. It would make sense that speaker manufacturers do the same thing and have internal standards on what is acceptable vs. what isn't, and I'd like to see those documents in Revel's case. It would help me form a clearer opinion.
 

pozz

Слава Україні
Forum Donor
Editor
Joined
May 21, 2019
Messages
4,036
Likes
6,827
Okay, so I am not a trained listener. I have two speakers designed to the Harman guidelines: the original version of the LSR 305 and the older Revel F12s. Both sound very good. It is likely both score alright by the scoring formula, but likely not near the top. Both are very good, especially for their price, and even without considering price they are far better than average. I prefer one subjectively by, say, 25% over the other. I can use and enjoy both, but one is consistently preferred by me. I could describe how the sound differs to my ears. I could do in-room measurements with REW.

Is this worth anything? Can we get the spins for both of those? Would it be worth anything then?

If I obtained and listened to the IL10 and the M16 would my opinion be worth anything? If we get a 2nd trained listener and he/she has the opposite opinion on these two what would you think at that point? That your training is better or that both your opinions cancel out and mean nothing?
Having the spin data, your impressions and those volunteered by others would be meaningful.
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,766
Likes
37,622
Isn't there a member on the other side of town doing measurements?

I forget his name.

Maybe it was @hardisj
A little further than the other side of town, but within a reasonable distance I suppose. Two hrs or so travel time.
 

patate91

Active Member
Joined
Apr 14, 2019
Messages
253
Likes
137
This is not remotely the case. I said it in black and white even in the article:

View attachment 70940

I have recounted many catastrophic failures of my own judgement in my own field of expertise, for example judging two files that were identical as being different. So no, this is not about me. It is about you all not understanding how we make forward progress in the industry without formal studies but with trained listeners, and how their opinions are trusted to drive advancements in design and engineering. It is a grave mistake, then, to say no one should listen to them and that we should just sit there until the next bit of research is published a decade or more later, or spend months on an expensive double-blind study.

Now, if I were saying two cables sounded different, that would be another thing. Here, I am saying distortion in such high amounts cannot in any way, shape, or form be ignored as inaudible. Basic psychoacoustic research easily establishes that these are audible issues.

You want more proof? Here is a picture I took while at Harman:

View attachment 70941

This is a laser interferometer used to determine distortion in drivers. Why would Harman do this if distortion doesn't matter?

Here is a note from Kevin Voecks in the stereophile review of Revel M106:

View attachment 70942

You think they had a double blind test that showed importance of distortion? If so, then my case is made that distortion matters.

If you think they didn't have a double blind test to show the importance of distortion, how come they went through such expense and effort to reduce distortion?

When asked about the most important developments in the last 20 years of speaker design, Kevin pointed to the Klippel system and its distortion modelling as the top two reasons speakers have improved:


Again, he points to no double blind controlled study or paper for that. The Klippel system is super expensive. Why would they invest in that if distortion doesn't matter, nor does opinion of internal listeners and experts?

So no, it is not about my listening impressions as being 100% accurate. This argument is about some of you being dismissive of them altogether. The harm in that is large.

Please listen to someone's experience with respect to both research and development. Don't be dismissive with some basics we all know like "you are not immune to bias." Well duh!

What you haven't shown is why I, as a listener trained in detecting minor impairments, can't possibly be right. The evidence that I could be right is strong. Not definitive yet, but strong.

I don't have the burden of proof; it is your task to point out how what you hear correlates with the data you released and the data Olive produced.

That's what we are asking. You just never answer this question.
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,663
Likes
240,943
Location
Seattle Area
I don't have the burden of proof,
Well you do actually. You want answers from me, you have to make a good case for it. "I don't hear you" is not it.

it is your task to point out how what you hear correlates with the data you released and the data Olive produced.

That's what we are asking. You just never answer this question.
It is not my job to make you understand the research and how it relates to my work. Have you even read the paper? If so, did you skip over this section?

1593305959275.png


1. My listening room is not the same as the research. The author talks about future work to address this:

1593306311008.png


2. My acuity with respect to hearing non-linear distortions most likely was not examined in the research. To wit, there is even commentary that some of the testers did not even know what the term "distortion" meant. On directivity, the author again accepts the need for more research:
1593306379542.png


3. A number of speakers I have tested and liked did not even exist at the time of the study.

4. Most important: getting a group of listeners to provide numeric scores for the overall sound of a speaker is tricky business. I know, I have taken the same test twice. You hear a speaker. OK, the vocals sound a bit different than what you expected. Do you vote 5, 6 or 7 on a scale of 1 to 10?

Here is the researcher himself:
1593306256786.png


All of this leaves room for some percentage of the speakers we test to not follow the formula proposed in the study. And certainly in my case as a listener.

In many other cases I have praised and recommended speakers that follow the conclusions of the study. Indeed, the percentage of the time I don't recommend a speaker is far smaller than the percentage I do (among those that have good objective results).

Bottom line: you have to stop being dismissive this way. You are not making any point I don't understand or know.
 

youngho

Senior Member
Joined
Apr 21, 2019
Messages
487
Likes
800
I don't have the burden of proof; it is your task to point out how what you hear correlates with the data you released and the data Olive produced.

That's what we are asking. You just never answer this question.

Dude, maybe you should just stop posting here. Start your own thread with your measurements and listening impressions, maybe your own forum. Contribute something positive and constructive. Please.
 

ace_xp2

Member
Joined
Apr 5, 2020
Messages
62
Likes
61
Unless I'm mistaken, this all comes from Amir's supposition that distortion may play a part in his dislike of the sound, no? This isn't an assertive claim that the two possibilities he posted are the only ones possible. And indeed, given that one of the two is placebo, that's a pretty open primary thought anyway.

Unless I've misread, my impression was that distortion was simply what he found most likely, not that it must be that and only that. Amir, you're open to the possibility that the other variations (vs. the M16) such as DI or in-room response might indeed be the actual factor, no? Not discounting that it may indeed be distortion, or even the combination of all three.
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,663
Likes
240,943
Location
Seattle Area
Unless I'm mistaken, this all comes from Amir's supposition that distortion may play a part in his dislike of the sound, no? This isn't an assertive claim that the two possibilities he posted are the only ones possible. And indeed, given that one of the two is placebo, that's a pretty open primary thought anyway.

Unless I've misread, my impression was that distortion was simply what he found most likely, not that it must be that and only that. Amir, you're open to the possibility that the other variations (vs. the M16) such as DI or in-room response might indeed be the actual factor, no? Not discounting that it may indeed be distortion, or even the combination of all three.
Very much so. As I keep saying, this is an evolving theory of mine. I need a lot more data points before asserting it strongly.
 

patate91

Active Member
Joined
Apr 14, 2019
Messages
253
Likes
137
Well you do actually. You want answers from me, you have to make a good case for it. "I don't hear you" is not it.


It is not my job to make you understand the research and how it relates to my work. Have you even read the paper? If so, did you skip over this section?

View attachment 70989

1. My listening room is not the same as the research. The author talks about future work to address this:

View attachment 70991

2. My acuity with respect to hearing non-linear distortions most likely was not examined in the research. To wit, there is even commentary that some of the testers did not even know what the term "distortion" meant. On directivity, the author again accepts the need for more research:
View attachment 70992

3. A number of speakers I have tested and liked did not even exist at the time of the study.

4. Most important: getting a group of listeners to provide numeric scores for the overall sound of a speaker is tricky business. I know, I have taken the same test twice. You hear a speaker. OK, the vocals sound a bit different than what you expected. Do you vote 5, 6 or 7 on a scale of 1 to 10?

Here is the researcher himself:
View attachment 70990

All of this leaves room for some percentage of the speakers we test to not follow the formula proposed in the study. And certainly in my case as a listener.

In many other cases I have praised and recommended speakers that follow the conclusions of the study. Indeed, the percentage of the time I don't recommend a speaker is far smaller than the percentage I do (among those that have good objective results).

Bottom line: you have to stop being dismissive this way. You are not making any point I don't understand or know.

OK, what can I prove, given that:

I have not listened to the speaker.
I didn't claim that there's "grunginess" or anything like that.
I didn't measure the speaker.
I didn't share any data.

On your side:

You have or had the speaker.
You claimed things that you've heard.
You measured the speaker.
You shared data.

Now, you are also a trained listener, you have a lot of knowledge of audio and how it works, you have all the equipment to run valid measurements, and you know how to draw conclusions based on them. You run a forum that has science in its name.

I'm not angry and have no bad intentions; I don't doubt your abilities as a trained listener. I and other members just want to know how we can correlate what you've heard and described with the data you shared.

This could help the community in future reviews. This is a legitimate question and in line with a more scientific approach. Since we don't have any answer to this question, it just raises unwanted suspicion and skepticism.
 

napilopez

Major Contributor
Forum Donor
Joined
Oct 17, 2018
Messages
2,146
Likes
8,717
Location
NYC
Obviously the two speakers measured by Harman and Amir are not the same unit, and the latter is now close to 20 years old, but seeing as the spins are otherwise very similar, this extra jaggedness could (at least partially) be responsible for the 0.5-point decrease in preference score relative to the one calculated from Harman's measurements.

I don't think that bit of jaggedness would affect the score that much; it's hard to imagine it'd do more than 0.1 points, but I'd have to mess around with the curve to be sure. That said, though I don't necessarily think it's the case here, I have wondered before whether a little jaggedness in the direct sound can affect our sense of 'refinement.' I know the Harman research showed an improvement in preference correlation going from 1/3-octave to 1/20-octave resolution.
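The resolution point is easy to experiment with: a 1/N-octave moving average over a jagged response shows how much ripple survives at 1/3 versus 1/20 octave. This is a generic sketch, not Harman's actual processing, and the response data is synthetic:

```python
import math

def octave_smooth(freqs, mags_db, fraction=3):
    """Smooth a magnitude response with a 1/fraction-octave moving average:
    each point becomes the mean of all points within +/- 1/(2*fraction)
    octave of its frequency."""
    half = 2 ** (1.0 / (2 * fraction))
    out = []
    for f in freqs:
        window = [m for g, m in zip(freqs, mags_db) if f / half <= g <= f * half]
        out.append(sum(window) / len(window))
    return out

# Synthetic jagged response, 1/48-octave spacing from 100 Hz to 1.6 kHz:
freqs = [100 * 2 ** (i / 48) for i in range(193)]
mags = [0.5 * math.sin(i) for i in range(len(freqs))]  # fake ripple, +/-0.5 dB
broad = octave_smooth(freqs, mags, fraction=3)    # 1/3 octave: ripple mostly averaged out
narrow = octave_smooth(freqs, mags, fraction=20)  # 1/20 octave: ripple largely survives
```

Comparing the two outputs makes it plausible that a score computed from the narrow smoothing "sees" jaggedness that the broad smoothing hides.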
I want to make sure this part of the research which was posted earlier is not forgotten:

View attachment 70889

In the controlled double blind test, testers most often said that the speaker was "dull" and/or had "mid-depression." If you were one of the testers who said those, would you come out of the listening tests praising the speaker as being great? Or would you likely rate it as not being a good experience, as I did?

This is the second ranked speaker:

View attachment 70887

We again see highly negative comments such as "Dull" and "Muffled." Does it not stand to reason that these speakers were better than the rest but were not, by themselves, models of perfection?

This is the third ranked speaker:
View attachment 70888

Now the same speaker in said controlled tests is both Bright and Warm? And has weak bass but also great bass?

This data tells us the verdicts were not consistent with each other, or with the final ranking. What this means is that you need a "second opinion" for the measurements and that is what I am providing for you all. And the fact that my outcome is inconsistent with the score does not mean at all that it must be due to bias. It could very well be speaking on behalf of the people who tested blind.

While as you know I think these subjective impressions are a very useful part of this paper, I do want to point out one important thing about these comments, especially for the people following along without access to the paper.

The 'most frequent comments' can be misleading if you look at them without comparing the quantity of comments relative to other speakers. We could've stuck a Salon2 in there and just a single person in a single listening session could have said "weak bass, tizzy, dull" and those would show up as most frequent comments.

For example, with the IL10, there were 16 comments of 'mid depression,' more than for any other speaker. But that could just as well be because it was the most audible flaw in an otherwise neutral-sounding speaker. By comparison, speaker 4 had 74 comments of sounding bright and speaker 10 had 54 comments of "image not focused."

It just seems that people are far more prone to comment on flaws than to comment on good sound. Or at least they were in this particular test. Here are the most frequent comments in the study as a whole:

1593316995377.png


Out of the top 20 most recurring comments, only 3 seem explicitly positive (Great Bass, Balanced, Neutral), maybe 4 if you count 'forward', which can go either way.
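The point about raw counts is easy to illustrate: a comment can top one speaker's list with a fraction of the mentions another speaker's top comment received. A toy tally with invented comment lists (the 16 and 74 figures echo the post above; everything else is made up):

```python
from collections import Counter

# Invented comment lists for two hypothetical speakers. "Most frequent
# comment" alone hides the huge difference in how often it was made.
comments = {
    "speaker_A": ["mid depression"] * 16 + ["dull"] * 9,
    "speaker_B": ["bright"] * 74 + ["image not focused"] * 20,
}

for name, clist in comments.items():
    top, count = Counter(clist).most_common(1)[0]
    share = count / len(clist)
    print(f"{name}: {top!r} x{count} ({share:.0%} of its comments)")
```

Both speakers report a "most frequent comment," but the absolute counts differ by almost 5x, which is exactly why the raw tallies matter.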

As Dr Toole says "the best sounding audio product is the one that exhibits the least audible flaws," not necessarily the one that people feel like praising.

But again, dull and mid-depression were nevertheless qualities people picked up on. If good speakers are the ones that exhibit few audible flaws, it also makes sense that if Amir has a significantly above-average acuity for detecting a flaw like distortion, then the speaker wouldn't sound all that good.

Side-note: Interesting how there's a fairly positive correlation between 'dull' and overall preference, but it doesn't surprise me.
 

tuga

Major Contributor
Joined
Feb 5, 2020
Messages
3,984
Likes
4,285
Location
Oxford, England
It just seems that people are far more prone to comment on flaws than to comment on good sound.

That is the whole point of objectively evaluating a loudspeaker through critical listening – pointing out the flaws.

Weaving a web of praise and pretty qualities is the realm of hi-fi magazine infomercials...
 

tuga

Major Contributor
Joined
Feb 5, 2020
Messages
3,984
Likes
4,285
Location
Oxford, England
I think that it would be productive for @amirm to perform more extended listening tests both in mono and in stereo with a wide assortment of material including pink noise to try and hypothesise possible causes of the "grunginess".
Further measurements including IMD as well as FR and CSD of the individual drivers might help too.

I would also like to better understand what he means by "grunginess". Is it "grainy" treble and/or something else?

Maybe try a different amplifier too.
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,663
Likes
240,943
Location
Seattle Area
I would also like to better understand what he means by "grunginess". Is it "grainy" treble and/or something else?
It is a lack of clarity in mid-to-high frequency notes. I am trying hard not to say they sound "distorted" until I know for sure.
 

tuga

Major Contributor
Joined
Feb 5, 2020
Messages
3,984
Likes
4,285
Location
Oxford, England
Why? Same $25K amplifier has been driving all other speakers I have been evaluating.

Just to make sure that the cause is not upstream. Not really essential, but it might be worth a try if not too complicated.
 