
Popular Hi-Fi's subjective evaluation of 30 speakers

tuga

Major Contributor
Joined
Feb 5, 2020
Messages
3,984
Likes
4,363
Location
Oxford, England
An interesting report, including technical and subjective evaluations of 30 speakers, published in the '70s. (Now complete thanks to @Putter.)
The first pages describe the methodology used in the subjective evaluation:


I have never seen this before, but as someone who bought B&W DM5s with young ears (1980), I approve!
 
I have experience with several of these speakers, and my rankings would differ radically from their chart. But of course I didn't use their blinded method of comparison.
 
I have experience with several of these speakers, and my rankings would differ radically from their chart. But of course I didn't use their blinded method of comparison.

If they were blinded then the result of the test is void.

Joke aside, I agree with your opinion.
 
I was only half joking about the article. I did buy the DM5s, but would have preferred either the Spendor BC1 or the Quads. My first experience of real high fidelity was walking into a small shop on the near north side of Chicago (The Audiophile) in 1973. Early Audio Research tube amps were on display in the window, and a woman was singing in the rearmost of three small rooms. Only it wasn't a woman singing, it was the Spendors. I had never heard anything sound that real before and was hooked, still am.
 
I knew I had seen this before in my files. Attached is the full test.

Thanks, I've put the pages in the correct order and replaced the file in the original message.
 
I actually had a set of Studiocraft 330s in the mid '70s. They were given to me by someone who didn't want them. Possibly one of the worst speakers I ever owned. That it rated higher than the Quad ESL is surprising.

The test was not really about musical reproduction. They recorded some speech (test phrases), an acoustic guitar (not sure of the program), a hi-hat cymbal, a bass pedal drum, and a music box. In a hotel room, panelists were asked to rate the recordings against a 'live' facsimile.

Each of the separate non-musical sections was recorded in mono (no limiting etc.) on open-reel tape, and then in the tape gaps identical 'live' sounds were presented. The panelists were listening to one loudspeaker, in mono, and comparing that to the equivalent 'live' sound.

Whatever the results, and however the test was conducted, the LS3/5a has always been considered a respectable loudspeaker. That the Pioneer HPM60 equaled it is something few realized, back in the day. :)
 

I'd read that years ago. Hardly peer-review-paper quality :) but I do agree with the reasoning.


P.S.: I'm not sure that "By today's standards, the live-versus-recorded tests performed to date lack the necessary scientific controls and rigor to consider their results or conclusions accurate, repeatable and valid" applies to this case though, nor to the BBC research.
 
I'd read that years ago. Hardly peer-review-paper quality :) but I do agree with the reasoning.
Peer reviewed? Recognizing your smiley face, we should nevertheless keep it in perspective. This was a blog post. That said, Sean Olive is a respected audio scientist and his considered opinion is worth considering. [Edit: I think you are talking about the original paper, and not Olive's blog post, in which case I apologize for misinterpreting your intention.]

Anent the AR/Dyna tests (and others like it) one problem is the acoustic venue. You place musicians (or a loudspeaker) in a large hall and what do you get? The reproduced sound blends and merges with multiple reflections, losing much tonal specificity. Anyone who has ever gone to a live event (sonata, small ensemble, or orchestra) soon realizes that living room type loudspeaker phenomena just ain't there. Imaging, front to back depth, and all the rest.

I've mentioned this before, but one of the rag mags (I think it was Stereophile, but it could have been Harry Pearson's sometimes-quarterly) hired a symphony oboe player to review amps and CD players. The idea was that this guy, sitting two rows behind the string section, would definitely know what live music sounded like. But as some astute readers pointed out, "Yeah, he knows what his oboe sounds like surrounded by the rest of the orchestra, but does he know what the orchestra sounds like to the guy sitting in Row H, Seat 12 of the hall?"

Also, the LvR (live v recorded) event is necessarily limited to a single instrument, possibly a few. To reproduce the 'live' sound of a symphony orchestra would require tremendous wattage and tremendous speakerage. I guess you could use multiple loudspeakers and multiple amps. The Grateful Dead wall of sound kind of thing.
 
I've mentioned this before, but one of the rag mags (I think it was Stereophile but it could have been Harry Pearson's sometimes quarterly) hired a symphony oboe player to review amps and CD players. The idea was that this guy, sitting two rows behind the string section, would definitely know what live music sounded like.

He'd also be hearing-impaired from constant exposure to high SPLs. And in this particular instance, also rather corrupt.
 
I knew I had seen this before in my files. Attached is the full test.

Very interesting, thanks a lot. What strikes me here is how similar this is to how people review speakers today. The graphs are about frequency response and vertical/horizontal dispersion, which essentially is what the Klippel measures here at ASR so far. They include pair matching, which is also essential for stereo imaging. So how much has really changed since then? (I think this also calls into question the claim we hear now and then that "nobody knew anything back then, before systematic preference testing showed everyone the right way to do it"...). If only hifi reviewers had continued to review speakers blindly, we would probably be at a different place today...

If we assume that this blind test has some validity to it, it's interesting to think about what those tiny JR 149s may have done right. They are small speakers, with a smaller-than-average tweeter, so dispersion is probably fairly wide. Relatively point-sourcy, as the drivers are close together. Thin walls in the BBC tradition, which may have brought cabinet resonances down and away from the presence region. Round, not square, cabinet edges, which may have avoided some edge diffraction. I don't know. But again and again, when I read and see stuff from back in the day from KEF, Klipsch, etc., it strikes me that these guys knew what they were doing. Paul Klipsch advocated active crossovers way back in the '50s!
 
I actually had a set of Studiocraft 330s in the mid '70s. They were given to me by someone who didn't want them. Possibly one of the worst speakers I ever owned. That it rated higher than the Quad ESL is surprising.
Well, based on the measurements (the on-axis response reminds me of some Grado headphones) they would certainly go in my bottom 5 in this test, alongside the JBL L36 (brrr), KLH CB10 and Toshiba SS-470 and whatnot. (Sony SS5050 is one of the big what-ifs, its peculiar midrange hump - possibly a baffle step issue - looks like it could be EQ'd away quite easily. The Wharfedale Airedale SP seems to have a driver sensitivity mismatch issue compromising its midrange. The Goodmans Achromat needs considerable toe-in. The BIC Venturi only suffered in listening tests due to mounting; I am quite sure that this relatively decent-measuring model would have done better when stand-mounted.) Quad ESL off-axis response is scary stuff. :eek: Basically all the models with multiple tweeter units have issues.

The spread in test results sure is enormous. Some of these would hold up well even today, though well-damped surroundings would probably be advisable to keep the mush at bay.

BTW, the pages for 3 speakers appear to be missing, among them Pioneer HPM-60 and Ortofon P45.
 