
Dynaudio Special Forty - Review & Measurements by Erin

pjug

Major Contributor
Forum Donor
Joined
Feb 2, 2019
Messages
1,776
Likes
1,562
What are you talking about? Seriously. (this will be funny to hear your answer)
You see his same posts on all the speaker threads now, that KEF solved everything so don't consider anything that does not measure the same as those. Even at any price level.
 
OP
VintageFlanker

Major Contributor
Forum Donor
Joined
Sep 20, 2018
Messages
4,995
Likes
20,095
Location
Paris
You see his same posts on all the speaker threads now, that KEF solved everything so don't consider anything that does not measure the same as those. Even at any price level.
I thought it was more like "buy absolutely nothing other than Genelec Ones". As if there were only one speaker to rule them all...
 

fineMen

Major Contributor
Joined
Oct 31, 2021
Messages
1,504
Likes
680
What are you talking about? Seriously. (this will be funny to hear your answer)
Do you know the standard that describes the spinorama? Do you know the corresponding recommendation on how to evaluate the results?
If there is a standard, what is the point of producing products that do not meet the standard? And at a price for which compliant products are available? Especially when the latter meet the standard to such an extent that further improvement is not worthwhile.

Of course, everyone is allowed to deviate from the standard as they see fit. But then they have to ask themselves what they don't like about the standard.

But I don't see anything like that from Dynaudio. This product is again a deviation from the standard, whose basis I see in a certain, quite simple, inability. So I do not suspect any intention behind it.

Speaking as customers who only consume, I suggest we not support such things any further. Nobody needs something like this.

Could have come from ChatGPT, but: translated with www.DeepL.com/Translator (free version).
You see his same posts on all the speaker threads now, that KEF solved everything so don't consider anything that does not measure the same as those. Even at any price level.
Nope, I praise Genelec and Neumann alike. And as you say, "xyz solved everything": approved. They did. Won't you stop seeking out "better" speakers? You cannot have it sound "better" than in the studio, period. This is an instantly deadly thought for an audiophile, I know. Hope it didn't hurt anybody!
 
Last edited:

GXAlan

Major Contributor
Forum Donor
Joined
Jan 15, 2020
Messages
3,924
Likes
6,058
If there is a standard, what is the point of producing products that do not meet the standard?

How many people were used to create the standard? Is this representative of the membership here?

What does the standard say? (What is the effect size? MCID?)

Why didn’t the highest numerical scoring product win?

 

fineMen

Major Contributor
Joined
Oct 31, 2021
Messages
1,504
Likes
680
How many people were used to create the standard? Is this representative of the membership here?

What does the standard say? (What is the effect size? MCID?)

Why didn’t the highest numerical scoring product win?

First of all, spinorama is not Harman's score. Harman's score is a recommendation for evaluating the objectively measured data in regard to human perception. I have often criticized the score, with its preference for mere preference -- and everyone always deliberately ignores this, presumably for good reason. The human factor is not fully understood.

Second, I fought back against the arrogant -- at least it felt that way to me -- dismissal of speakers which intentionally depart from what this board holds to be at least, well, 'good'.

Third, you got my point. You cannot accept it. More power to you! I mean it.
 

tmtomh

Major Contributor
Forum Donor
Joined
Aug 14, 2018
Messages
2,773
Likes
8,156
How many people were used to create the standard? Is this representative of the membership here?

What does the standard say? (What is the effect size? MCID?)

Why didn’t the highest numerical scoring product win?


I've tried to be respectful of the work you put into this, but this is not the first, or the second, time you've linked back to the set of 6 recordings you shared as if you ran an actual valid test. Just stop - it's inaccurate and actively misleading for you to keep linking to your test as if it shows (or rules out) anything at all.

For those reading this thread who might not be familiar with the thread Alan has linked to, he used a room-measurement mic to do in-room/in-space recordings of six different speakers. He did this in three different spaces/rooms, and apparently even with speakers in the same room he did the recordings from different locations in the room. He asked folks to listen for fun and many of us gave our feedback comparing the sonics of the different recordings - all of which are low quality recordings, by hi-fi standards. (I don't say the recordings were poor to criticize Alan - that's not the issue that I'm responding to here.)

I assume it requires no explanation for most members here why this comparison tells us exactly nothing of use about speaker performance or listener preference for different speaker response profiles.
 
Last edited:

GXAlan

Major Contributor
Forum Donor
Joined
Jan 15, 2020
Messages
3,924
Likes
6,058
this comparison tells us exactly nothing of use about speaker performance or listener preference for different speaker response profiles.

I agree wholeheartedly with this part of your comment, but you seem to be imparting a perspective of intent/meaning to my series of questions other than what I intended. My questions are there to make sure the Preference Score is understood comprehensively, with science and not hand-waving.

If the preference score were all that was needed to capture speaker performance, Dr. Toole wouldn't have had to write a 490-page, 2.58-pound book on sound reproduction.

When recommending a new computer or laptop to friends or family, if I don't want to be life-long tech support for that person, I'll make a conservative/safe recommendation even though it may not be the right choice for me personally. Along those same lines, when recommending a speaker to a friend, going with something that has a high preference score generally ensures that you'll have a happy friend. When designing a speaker for the market, going with something that has a high preference score generally ensures that you'll have favorable reviews.

Companies like Bowers & Wilkins, Dynaudio, Magnepan don't design to the preference score "standard." You could argue that B&W and Dynaudio have seen ownership changes over the years, but you cannot count this as a negative when Harman has also seen ownership changes. You have to be an attractive acquisition target to be acquired; no one is eager to buy the assets of OceanGate. Magnepan has been in continuous operation since 1969.

To address the question of: "what is the point of producing products that do not meet the standard?"

My answers to my own questions:
How many people were used to create the standard? Is this representative of the membership here? What does the standard say? (What is the effect size? MCID?)

The preference score was designed around a study with a sample of 42 listeners, of which only 28 were actually used for the development of the preference score.
Subsequent studies conducted by Harman have shown that it's generally still valid. But it doesn't account for outliers.

A fair question is why 14 of the original study group were ignored. These were listeners who had "moderate or high judgment variability." These are customers too. Was the variability between identical speakers playing identical music, where one day the listener liked it and the other day the listener did not? Or was the variability that they really liked that speaker with some content and really didn't like it with other content? It's not well defined.

What's so special about those 28 people? "These were the listeners whose 'fidelity ratings' showed the greatest consistency within individuals and the closest agreement across the group of individuals." That creates potential for selection bias if the audiophiles you are testing are those who a priori prefer, and listen for, smoothness in frequency response.

That is, if John Doe liked speaker A >> B > C > D, William Doe liked A >> B > C >> D, Jane Doe liked B >> C >> A > D, and Janet Doe liked B >> D > C >> A, they wouldn't have agreement across the group.
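To make "agreement across the group" concrete, here is a small illustrative sketch. It simply runs the hypothetical rankings above through a standard rank-correlation test; it is not Harman's actual selection procedure.
Code:
# Illustrative only: pairwise rank agreement for the four hypothetical
# listeners above. This is NOT Harman's actual selection statistic.
from itertools import combinations
from scipy.stats import kendalltau

# Rank position of each speaker, 1 = most preferred.
rankings = {
    "John":    {"A": 1, "B": 2, "C": 3, "D": 4},
    "William": {"A": 1, "B": 2, "C": 3, "D": 4},
    "Jane":    {"B": 1, "C": 2, "A": 3, "D": 4},
    "Janet":   {"B": 1, "D": 2, "C": 3, "A": 4},
}

speakers = ["A", "B", "C", "D"]
for (name1, r1), (name2, r2) in combinations(rankings.items(), 2):
    tau, _ = kendalltau([r1[s] for s in speakers],
                        [r2[s] for s in speakers])
    print(f"{name1} vs {name2}: tau = {tau:+.2f}")
# John vs William agree perfectly (tau = +1.00), while Janet vs John
# comes out negative (-0.33): averaging such a panel would blur any
# single "group" preference, which is why consistency mattered.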

"Not all listeners auditioned all loudspeakers and not all loudspeakers were included in each experiment."
Maybe the judgment variability only matters for certain speakers -- but then the whole group was thrown out.

Dr. Toole himself acknowledges that Bose, Briggs, and Harwood have done quality research showing that reverberant speakers camouflage peaks in response and disguise the audible effects of technical imperfections, and how seldom low-Q resonances are actually audible.

In the worst-case scenario, the 14 who were ignored should have been included. Even then, the majority of people would adhere to the Preference Score as the standard of choice, so it's the logical business decision as well. However, it also means as much as 1/3rd of the market may end up preferring something different.

The companies making products that "do not meet the standard" are going after this 1/3rd of the market.

We can actually go further with Dr. Toole's direct statements:
"There is a trade-off, it seems, between the loudspeaker directivity required to preserve the illusion of truly compact sound sources in specifically localizable stereo images and that required to give the listener the impression of being immersed in another acoustic space."

The preference score helps to guide the "ideal" balance in that trade-off for the majority of people who had agreement across the group, and for the majority of people Harman has sampled; but clearly, it is well within science to recognize that individuals may lean one way or the other in their preferences.

In one extreme direction of the trade-off, you have Bose. His research showed that the "spatial property of the sound incident upon a listener is a parameter ranking in importance with the frequency spectrum of the incident energy for the subjective appreciation of music." In the other direction, you have headphones which offer vanishingly low distortion and the ability to fine tune the frequency response but eliminate the impression of being immersed in another acoustic space.

Why didn’t the highest numerical scoring product win?
The preference score is a predictor of speaker likeability, based upon monaural listening tests done with a standardized music selection in a standardized room, using a sub-selection of customers who have a group-consistent ranking of preference between speakers and who also represent at least a 2/3rds majority of all customers.

My recordings didn't reflect this. My microphone offers "perfect" consistency within the "individual." It doesn't hear differently from one day to the next. It just doesn't hear what you hear, and once you change rooms and have a specific musical sample, the results can vary.

The preference score is a very powerful, industry-changing standard. But it's not an immutable law of the Universe.
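For concreteness, the regression behind the score can be written out directly. The coefficients and metric definitions below are the ones commonly quoted from Olive's 2004 model, not pulled from Harman's internal implementation, so treat this as an approximation; the four input metrics are assumed to have been computed from spin data already.
Code:
import math

def olive_preference_rating(nbd_on, nbd_pir, lfx_hz, sm_pir):
    """Commonly cited form of Olive's 2004 preference model.

    nbd_on  -- narrow-band deviation of the on-axis response (dB)
    nbd_pir -- narrow-band deviation of the predicted in-room response (dB)
    lfx_hz  -- low-frequency extension in Hz (the -6 dB point relative
               to the 300 Hz-10 kHz mean level)
    sm_pir  -- smoothness of the PIR (r^2 of a regression-line fit, 0..1)
    """
    lfx = math.log10(lfx_hz)
    return 12.69 - 2.49 * nbd_on - 2.99 * nbd_pir - 4.31 * lfx + 2.32 * sm_pir

# Hypothetical inputs, purely to show the mechanics:
print(round(olive_preference_rating(0.4, 0.3, 45.0, 0.8), 2))  # ~5.53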

None of what I'm saying is anti-science or anti-preference score.

First of all, spinorama is not Harman's score. Harman's score is a recommendation for evaluating the objectively measured data in regard to human perception.

I guess I don't understand your comment that "This product is again a deviation from the standard, whose basis I see in a certain, quite simple, inability."

The spinorama is a consistent method of measurement. I don't think Dynaudio or any other company will say that the spinorama data itself is in question. The only question is what your "goals" should be for the data collected by the spinorama measurement process. Dr. Toole's preference score is a majority-validated, science-driven answer to the goals of those measurements -- but the majority isn't all.

I'm not a historian, but if I'm not mistaken, no one since the founding fathers has won the US presidency with a 2/3rds majority in the popular vote. If we accept that reasonable people can vote for the losing candidate (again, talking about the last 100 years and not specific election years), it's reasonable to accept that reasonable audiophiles may prefer speakers that don't match the popular choice.
 

tmtomh

Major Contributor
Forum Donor
Joined
Aug 14, 2018
Messages
2,773
Likes
8,156
I agree wholeheartedly with this part of your comment, but you seem to be imparting a perspective of intent/meaning to my series of questions other than what I intended. [...]

I don't have any disagreement with any of the thoughtful, often well-taken comments you have written in response to me here. I would only reiterate the main point of my last comment: the comparison you linked to has no relevance to anything you've written in this longer comment, and the "why didn't the best score win" question you posted about your comparison also has no relevance, because the obvious answers to it have nothing to do with whether or not the preference score is a good guide to listener preference. I just think you should stop linking/referring to your comparison as if it meant anything beyond a fun activity.
 

fineMen

Major Contributor
Joined
Oct 31, 2021
Messages
1,504
Likes
680
To address the question of: "what is the point of producing products that do not meet the standard?"

My answers to my own questions:
How many people were used to create the standard? Is this representative of the membership here? What does the standard say? (What is the effect size? MCID?)

The preference score was designed around a study with a sample of ...
Thank you so much. You prove that there is intelligent life on planet Earth. With parts of your argumentation I would agree; other parts make me rethink my stance.

Regarding the particular speaker under discussion, I could point to weaknesses that, from my perspective as a former DIYer, are indicative not of willingly choosing a specific sound, but of just letting it go. As you mentioned history: agreed, Dynaudio was once innovative, a shooting star. Today it appears to me that the brand passed its peak a while ago and is on the descending stretch of its ballistic trajectory.

Fun question: if there were only Neumann or Genelec to choose from, what would you miss? Wood, as a masquerading veneer? Sorry: veneer masquerading as wood?
 

GXAlan

Major Contributor
Forum Donor
Joined
Jan 15, 2020
Messages
3,924
Likes
6,058
Fun question: if there were only Neumann or Genelec to choose from, what would you miss? Wood, as a masquerading veneer? Sorry: veneer masquerading as wood?

I have the JBL 708P, which isn't as good as a Neumann/Genelec but is still a reasonable representative of the "monitor sound". The Genelec 8050B has more bass extension and flatter dispersion above 9 kHz, but if you look at the two, they're very similar:

[Attached: spinorama measurement charts, JBL 708P vs Genelec 8050B]



I also have the JBL XPL90 in the same room with the same sources. The XPL90 rolls off the bass even further and is reasonable in frequency response, but it has much wider dispersion instead of the usual "steady upward slope" in the directivity index.



[Attached: spinorama measurement charts, JBL XPL90]
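For readers new to the "steady upward slope": in CTA-2034 terms the directivity index is simply the gap between the on-axis curve and the sound-power curve. A minimal sketch, with placeholder numbers rather than real measurement data:
Code:
import numpy as np

# Directivity index per CTA-2034: on-axis SPL minus sound-power SPL,
# in dB at the same frequencies. The numbers here are placeholders.
freq_hz        = np.array([100.0, 1000.0, 10000.0])
on_axis_db     = np.array([85.0, 86.0, 85.0])
sound_power_db = np.array([84.0, 81.0, 75.0])

di_db = on_axis_db - sound_power_db
for f, di in zip(freq_hz, di_db):
    print(f"{f:>8.0f} Hz  DI = {di:4.1f} dB")
# A conventional monitor shows DI rising smoothly with frequency;
# a wide-dispersion design like the XPL90 stays flatter and lower.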



In my room, the setup is:
Code:
Marantz SA-10 --> Marantz PM-10 ---speaker out--> JBL XPL90
                                \----REC Out----> Schiit Freya N (passive) --> JBL 708P

The frequency response is quite close between the two, except for the bass, as we expect.
[Attached: in-room frequency response overlay of the two speakers]


I volume match and then use the remote controls to mute and switch back and forth in an unblinded fashion, although the difference in dispersion is easily ABX-able.
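For anyone wanting to replicate the volume matching, the arithmetic is just a level offset computed from identical test-signal captures; a minimal sketch with hypothetical RMS readings:
Code:
import math

# Hypothetical RMS readings of the same pink-noise clip captured at the
# listening position through each chain, same mic, same source level.
rms_708p  = 0.120   # arbitrary linear units
rms_xpl90 = 0.095

# Gain to apply to the quieter chain so both play equally loud.
offset_db = 20.0 * math.log10(rms_708p / rms_xpl90)
print(f"raise the XPL90 chain by {offset_db:.2f} dB")  # ~2.03 dB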

The 708P is a superior speaker for gaining insight into a recording.

For something like Hiromi's Silver Lining Suite, the 708P is the clear winner. It sounds intimate, close, and authentic. It's a great recording.

However, for something like Aimi Kobayashi's take on Chopin's Preludes, the XPL90 more closely represents the sound of being seated at the piano bench. It sounds less focused and wider in soundstage -- a weakness for ultra-precision imaging but a benefit for musical enjoyment of that album.

There is a lot of art and science to the recording of pianos, and it's likely that the two recordings are mixed differently.

A grand piano's dispersion isn't like a speaker's or a microphone's.

[Attached: grand piano radiation pattern diagrams]


And it may be as simple as dumb luck that the combination of the recording method + my JBL XPL90 sounds better. But I am universally consistent in my preference of speaker for a given album, while my speaker preferences are very different with different albums.

It would be nice to be able to sell one of the speakers if one was consistently better than the other, but the reality is that having both options lets me do a quick comparison between the two with the first track I'm listening to and then I can enjoy the rest of the album on the better setup.

I'd be happy to make some recordings of the two speaker setups once I have a better grasp of binaural recording. The speakers sound very different in person despite the very similar frequency response plots. We have Amir-quality spin data on both of these speakers, and the biggest difference is directivity.

It's also important to note what Amir said about the XPL90.
"I think this is the first speaker I have liked that has some clear response errors. What it gets right must be what I pay attention to and in that regard, this was an enjoyable speaker."
 

fineMen

Major Contributor
Joined
Oct 31, 2021
Messages
1,504
Likes
680
The frequency response is quite close between the two, except for the bass, as we expect.
The reason seems to be that the room's acoustics overwhelm the small differences between the speakers. But as Toole has often said, it is not so much the in-room reverberant frequency response that matters as the direct sound. People prefer (that is not the Harman score) a flat and smooth direct sound, while the in-room part is something people seem to get used to rather quickly. The latter leads to the conclusion that directivity is not that big of a deal, but one cannot neglect it either. Wide versus narrow is not decided. (It depends on how first reflections are weighted in, especially with stereo; no data is available yet.)

The dispersion pattern of speakers is in no way related to the dispersion pattern of instruments.

I want to come back to the Dynaudio, because that is what sparked the conversation. Erin discusses the problems of this model in his video from minute 10 on. We see resonances all over the place, accompanied by higher-order (the nasty type) distortion. Especially the cone-edge, aka surround, resonance will introduce tons of intermodulation -- the measurements show that all too clearly.

Erin says it's okay anyway, because he doesn't hear it. Toole says that resonances are the root evil. I say that intermodulation is one of the most critical aspects of speaker performance. Erin versus the two of us ;-)

I asked: if there were only Genelec and Neumann to choose from, what would you miss? Of course, a 'passive' speaker. So again let's take the KEF R3. It is way cheaper than the Dynaudio, and alas, it is so far better that even a remote comparison would feel unfair.

Say you need to fix something in your house. You get the toolbox, and by chance you grab the tool that was a bit more expensive than the common el cheapo offers. Do you remember how the feeling of satisfaction creeps in while handling that tool? It works so well; it just fits the need. You might even get a bit sad when the job is done, because it went so easily and quickly, and that precious time with a good, well-thought-through product is over.

If you dare, compare a good speaker with a decidedly sloppy design from the second-to-last decade (at best). Sorry for that.
 
Last edited:
Joined
Jul 21, 2021
Messages
20
Likes
36
Companies like Bowers & Wilkins, Dynaudio, Magnepan don't design to the preference score "standard."

But is this statement actually true? I mean, the speaker discussed in this topic still has a preference score of 5.0, and with a sub it's 6.9. This is a fairly good score. And yes, it could have been better, but this is not like Zu Audio or a similar level of disregard for the preference score standard.
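As a rough sanity check of that 5.0-to-6.9 jump: community scoring tools typically compute the "with sub" score by substituting an ideal low-frequency extension (about 14.5 Hz) into the Olive model's LFX term. That is an assumption about how the quoted 6.9 was derived, not something stated in the review, but the bass term alone accounts for a gain of about that size:
Code:
import math

# Assumed convention (community tools): "with sub" replaces the
# speaker's own LFX with an ideal extension of roughly 14.5 Hz.
LFX_COEFF    = 4.31
LFX_IDEAL_HZ = 14.5

def with_sub_gain(lfx_hz):
    """Score increase from ideal bass extension, all else held equal."""
    return LFX_COEFF * (math.log10(lfx_hz) - math.log10(LFX_IDEAL_HZ))

# A bookshelf rolling off around 40 Hz gains roughly 1.9 points:
print(round(with_sub_gain(40.0), 2))  # ~1.90, i.e. 5.0 -> ~6.9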

I feel this is more of a "we like the good ol' design" kind of thing, but even that old design was still aimed at a flat-measuring speaker.
 

tmtomh

Major Contributor
Forum Donor
Joined
Aug 14, 2018
Messages
2,773
Likes
8,156
But is this statement actually true? I mean, the speaker discussed in this topic still has a preference score of 5.0, and with a sub it's 6.9. This is a fairly good score. And yes, it could have been better, but this is not like Zu Audio or a similar level of disregard for the preference score standard.

I feel this is more of a "we like the good ol' design" kind of thing, but even that old design was still aimed at a flat-measuring speaker.

Does B&W aim for flat response?
 

fineMen

Major Contributor
Joined
Oct 31, 2021
Messages
1,504
Likes
680
But is this statement actually true? I mean, the speaker discussed in this topic still has a preference score of 5.0, and with a sub it's 6.9. This is a fairly good score. And yes, it could have been better, but this is not like Zu Audio ...
Sure, the amount by which it departs from 'good', as measured by Dr. Olive's score, is not that much. I'm only concerned about the lucky bag of mysterious resonances, together with directivity and linearity errors, emphasized by distortion of many kinds. How do you put it in English: a mixed bag? Why not face it: is it worth the asking price? Not in the least ... for me personally, of course.
 
Joined
Jul 21, 2021
Messages
20
Likes
36
Does B&W aim for flat response?
I don't know about B&W; I'm talking about Dynaudio specifically. From the measurements that we have, it looks like they target the flat line; they just don't execute it so well.

Also, all those speakers are from the pre-Jupiter era; I'm really curious how their current models measure.

Sure, the amount by which it departs from 'good', as measured by Dr. Olive's score, is not that much. I'm only concerned about the lucky bag of mysterious resonances, together with directivity and linearity errors, emphasized by distortion of many kinds. How do you put it in English: a mixed bag? Why not face it: is it worth the asking price? Not in the least ... for me personally, of course.
Oh, it's not worth the price, that's for sure. There are many cheaper speakers with the same or even higher preference score.
 