
Have You Used Crutchfield's Speaker Comparison?

Jim Shaw

Addicted to Fun and Learning
Forum Donor
Joined
Mar 16, 2021
Messages
616
Likes
1,159
Location
North central USA
I'm interested to know others' experiences if/when they use Crutchfield's speaker comparison.

I spent a half-hour with it this morning, and I was surprised at how useful it was to me. Well, not so useful as how much it lined up with my experiences with three known-to-me speaker models. Of course, my data is entirely anecdotal (unapologetically so).

I compared my Klipsch RP-600M's (retired, but still hanging around), my Elac DBR62's (the current speakers in my listening room), the KEF Q150's (because a friend has them), and, just for another data point, the Revel M106's. The Revels I have not heard in person, but they are probably Revel's closest model in price, and I respect the brand.

Since I mostly listen to classical solo and symphonic works, and modern jazz, I picked those built-in tracks. I also casually listened to the other tracks C'field provides.

My way-too-quick experience made it clear (as day) why I prefer the Elacs. The test also showed very clearly why I nearly hated my previous RP-600M's. I tried to suffer through those for over a year before I retired them, face down, to the utility-room shelves. I'll make someone a deal on them. ;)

To me, the Revels sounded very much like the Elacs. The KEF Q150's were thinner and not what I'd pick as a keeper, although I could probably get along with them if I had to.

Anyway, the Crutchfield comparator worked well for me. I used my AKG headphones but also listened on my desktop nearfield monitors. The results were similar. Of course, it's just one man's view. Have you tried it?
https://www.crutchfield.com/S-14nMJI8PrFC/speakercompare/?omnews=15654156
 

Spkrdctr

Major Contributor
Joined
Apr 22, 2021
Messages
2,220
Likes
2,942
I have to wait for my headphones to come back from Amir. I think he is getting ready to test them. Then I will give Crutchfield a spin. :)
 

kyle_neuron

Active Member
Joined
Jun 18, 2021
Messages
149
Likes
254
That sure is a lot of marketing spin on ‘we take some impulse responses of the box and apply them via FIR convolution’.

However this method has been used in many pieces of comparative research. You should be aware of the anechoic measurement inaccuracies such as the LF cutoff of the room (I sometimes work in a much larger chamber, and even that has a 100 Hz limit) plus the issues that may be introduced with any high-order ambisonic or similar ‘room simulation’ with a generic head-related transfer function.

I’d actually be curious to see how a spherical impulse response derived from the Klippel NFS testing works for this purpose. If the balloon data is saved, then it should be relatively simple to translate that into a SOFA file which includes the IR data at multiple angles and distances.

The SOFA format is open source and an AES standard, so there are plenty of tools out there to render using them. The file can equally describe sources and HRTFs too.

The final step is simple headphone linearisation EQ, of which there are plenty of examples on this site already.
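For anyone who wants to see the bones of that process, here is a minimal offline sketch in Python. The file names, channel layout, and four-path IR arrangement are my own assumptions for illustration, not anything Crutchfield has published; a headphone linearisation filter would simply be one more convolution applied to the result.

```python
# Minimal offline auralization sketch (assumed file names and IR layout):
# convolve a stereo track with the binaural IRs of a virtual speaker pair.
import numpy as np
import soundfile as sf                      # pip install soundfile
from scipy.signal import fftconvolve

track, fs = sf.read("track.wav")            # assumed stereo file, shape (N, 2)
irs, fs_ir = sf.read("speaker_binaural_ir.wav")   # assumed 4 channels: LL, LR, RL, RR
assert fs == fs_ir, "resample first if the sample rates differ"

left_src, right_src = track[:, 0], track[:, 1]
ll, lr, rl, rr = (irs[:, i] for i in range(4))    # source-to-ear paths

# Each ear hears both virtual speakers: ear = conv(left, IR) + conv(right, IR)
out_left = fftconvolve(left_src, ll) + fftconvolve(right_src, rl)
out_right = fftconvolve(left_src, lr) + fftconvolve(right_src, rr)

out = np.stack([out_left, out_right], axis=1)
out /= np.max(np.abs(out)) + 1e-12          # crude normalisation to avoid clipping
sf.write("auralized.wav", out, fs)
```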
 
OP
Jim Shaw

Jim Shaw

Addicted to Fun and Learning
Forum Donor
Joined
Mar 16, 2021
Messages
616
Likes
1,159
Location
North central USA
That sure is a lot of marketing spin on ‘we take some impulse responses of the box and apply them via FIR convolution’.

However this method has been used in many pieces of comparative research. You should be aware of the anechoic measurement inaccuracies such as the LF cutoff of the room (I sometimes work in a much larger chamber, and even that has a 100 Hz limit) plus the issues that may be introduced with any high-order ambisonic or similar ‘room simulation’ with a generic head-related transfer function.

I’d actually be curious to see how a spherical impulse response derived from the Klippel NFS testing works for this purpose. If the balloon data is saved, then it should be relatively simple to translate that into a SOFA file which includes the IR data at multiple angles and distances.

The SOFA format is open source and an AES standard, so there are plenty of tools out there to render using them. The file can equally describe sources and HRTFs too.

The final step is simple headphone linearisation EQ, of which there are plenty of examples on this site already.
If Floyd Toole had written in this form, he'd still be carrying speakers as a grad student/intern. ;)
 

NTK

Major Contributor
Forum Donor
Joined
Aug 11, 2019
Messages
2,707
Likes
5,972
Location
US East
That sure is a lot of marketing spin on ‘we take some impulse responses of the box and apply them via FIR convolution’.

However this method has been used in many pieces of comparative research. You should be aware of the anechoic measurement inaccuracies such as the LF cutoff of the room (I sometimes work in a much larger chamber, and even that has a 100 Hz limit) plus the issues that may be introduced with any high-order ambisonic or similar ‘room simulation’ with a generic head-related transfer function.

I’d actually be curious to see how a spherical impulse response derived from the Klippel NFS testing works for this purpose. If the balloon data is saved, then it should be relatively simple to translate that into a SOFA file which includes the IR data at multiple angles and distances.

The SOFA format is open source and an AES standard, so there are plenty of tools out there to render using them. The file can equally describe sources and HRTFs too.

The final step is simple headphone linearisation EQ, of which there are plenty of examples on this site already.
In @hardisj's YouTube interview with Klippel's Christian Bellmann, Christian demonstrated this exact capability (auralization using the impulse responses measured by the Klippel NFS). I time-stamped that part of the video, which starts at ~1:02:30.

 

kyle_neuron

Active Member
Joined
Jun 18, 2021
Messages
149
Likes
254
In @hardisj's YouTube interview with Klippel's Christian Bellmann, Christian demonstrated this exact capability (auralization using the impulse responses measured by the Klippel NFS). I time-stamped that part of the video, which starts at ~1:02:30.


I had watched that video, but at two hours it was partly on in the background while I did other things, so I hadn't seen the method for the IR separation, which is really cool! I don't know how much effort it would be for @hardisj or @amirm to add to their already extensive process, but on-axis and off-axis/listening-window free-field IR data would be great for a 'roll your own' auditioning tool that could be used with any music you already own.

Perhaps it’s possible to add IR data from existing measured speakers of interest to their threads? I’m more interested in the weird than the wonderful, personally!

Since most people don’t listen on-axis, having the choice of 10 or 20 degrees with a slight height variation might allow for a more accurate representation of the real-world result :)
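If that IR data did come packaged as SOFA files, as suggested above, pulling out the measurement nearest a chosen angle is only a few lines. A hypothetical snippet using the sofar Python package (one of several SOFA readers; the file name and two-receiver layout are assumptions on my part):

```python
# Sketch: load a SOFA file of IRs and pick the measurement closest to a
# desired azimuth/elevation. Assumes the `sofar` package (pip install sofar).
import numpy as np
import sofar

sofa = sofar.read_sofa("speaker_irs.sofa")    # placeholder file name
irs = sofa.Data_IR                            # (measurements, receivers, samples)
fs = sofa.Data_SamplingRate
positions = sofa.SourcePosition               # typically [azimuth, elevation, distance]

target = np.array([20.0, 0.0])                # e.g. 20 degrees off-axis at ear height
# Nearest measurement by azimuth/elevation (ignores azimuth wrap-around for brevity)
idx = int(np.argmin(np.linalg.norm(positions[:, :2] - target, axis=1)))
ir_left, ir_right = irs[idx, 0], irs[idx, 1]  # assuming two receivers (ears)
print("nearest measurement:", positions[idx], "fs:", fs)
```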

Luckily it’s quite easy to measure a room impulse response with a cheap microphone, so you could even add your own listening location into the process. The in-ear Sennheiser Ambeo headset can be found for $50 now, and the mics on that give a passable binaural measurement for an even better result.
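For anyone who hasn't tried it, the usual exponential-sweep measurement really is simple. A rough sketch, with arbitrary sweep length and band edges (REW and friends do the same job with far more care, averaging and windowing included):

```python
# Rough sketch of exponential-sweep impulse-response measurement.
import numpy as np
from scipy.signal import chirp, fftconvolve

fs, T, f0, f1 = 48000, 10.0, 20.0, 20000.0
t = np.arange(int(T * fs)) / fs
sweep = chirp(t, f0=f0, f1=f1, t1=T, method="logarithmic")

# Inverse filter: time-reversed sweep with an exponentially decaying envelope,
# so that sweep convolved with it approximates a band-limited impulse.
k = np.log(f1 / f0)
inverse = sweep[::-1] * np.exp(-t * k / T)
inverse /= np.abs(fftconvolve(sweep, inverse)).max()

# 'recording' should be the microphone capture of the sweep played in the room;
# reusing the sweep itself here just sanity-checks that we get an impulse back.
recording = sweep
room_ir = fftconvolve(recording, inverse)
```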

Or you could use one of the many extensive open-access research libraries such as the OpenAir and MAIR ones from some UK universities:
https://pure.york.ac.uk/portal/en/d...ry(c2fec4e3-7e29-4ca9-a9b2-875494fc311e).html
https://github.com/APL-Huddersfield/MAIR-Library-and-Renderer

For rendering you have Foobar2000, of course, or they can be loaded into EqualizerAPO. Or you can go the ‘whole hog’ and use the example binaural renderer app that comes with the EU-funded (and amazing) 3D Tune-In Toolkit:
https://github.com/3DTune-In/3dti_AudioToolkit

With that you can animate or position virtual sources in 3D space using a great high-order ambisonic implementation, as well as simulating the effect of hearing loss or different size heads and torsos with the generic ‘snowman’ HRTF generator that’s built in.

Not to take away from the convenience of an easy-to-use web service that has a nice WebAudio implementation - time is money after all - but I think experimentation and understanding what’s at work in this process is more in the spirit of open data and research that’s behind much of the great resource of information that’s on this site already :)
 

AudioKnob

Member
Joined
Jan 1, 2021
Messages
26
Likes
10
Location
Omaha
I think it's compelling technology that will improve. BTW, the DBR62 excels in Jazz reproduction. How do you have them positioned?
 
OP
Jim Shaw

Jim Shaw

Addicted to Fun and Learning
Forum Donor
Joined
Mar 16, 2021
Messages
616
Likes
1,159
Location
North central USA
BTW, the DBR62 excels in Jazz reproduction. How do you have them positioned?
They do well with almost any acoustic, well-recorded music. I don't know much about rock, R&B, blues, metal, and such. That's because I don't listen to these seriously except with visitors. More importantly, such recordings give no clue as to what the music was like before it was hammered, salted, battered, browned, and broiled by some mixing engineer at the whim of some wall-of-sound producer. But jazz, chamber, classical, and acoustic instrumental solos do sound very honest on the DBR62s. I place much emphasis, in test listening, on piano solos. No, not Elton John's closed-lid Yamaha.

Positioning? My listening room is very unusual compared to what most must face. Picture a 24' wide, 32' long, 17' high room that is very asymmetrical. It is part of an open floor plan: one long wall is open to three other spaces, and the rear wall, 20' back, is open to a loft room through a broad opening. In the front corner of the big space is a freestanding fireplace facing at 45 degrees.

The speakers are on either side of the fireplace, about 9' apart, also facing at 45 degrees into the space. They sit about 3' from the nearest wall, almost on-axis with my chair. My favorite listening spot is about 12' from each speaker, forming a 9' x 12' x 12' triangle. The floor is carpeted, there are two sets of 16' wide windows of 17' high glass, and nothing resembles the rectangular box that you see all over Floyd Toole's (and everyone else's) diagrams. Further, there's a 6' grand piano about 18' back from the speakers, plus upholstered couches and chairs, most placed at 45 degrees to the room axis.

Calculating the eigentones would be challenging, and they aren't at all annoying to me. Using my 'one ear' listening technique, I find almost no annoying resonances. However, the sound stage is entirely between the speakers. I play the system flat.
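Just out of curiosity, the textbook rectangular-room eigentone formula applied to the nominal 24' x 32' x 17' shell looks like the snippet below. With the open walls, the loft, and the 45-degree placement it is at best a very rough guide.

```python
# Back-of-the-envelope rectangular-room modes for a nominal 24 x 32 x 17 ft
# shell; the real room is open and irregular, so treat these as rough guides.
import itertools

C = 1130.0                      # speed of sound in ft/s
LX, LY, LZ = 32.0, 24.0, 17.0   # nominal length, width, height in feet

def mode_freq(nx, ny, nz):
    """f = (c / 2) * sqrt((nx/Lx)^2 + (ny/Ly)^2 + (nz/Lz)^2)"""
    return (C / 2.0) * ((nx / LX) ** 2 + (ny / LY) ** 2 + (nz / LZ) ** 2) ** 0.5

modes = sorted(
    (mode_freq(nx, ny, nz), (nx, ny, nz))
    for nx, ny, nz in itertools.product(range(4), repeat=3)
    if (nx, ny, nz) != (0, 0, 0)
)
for f, order in modes[:10]:     # ten lowest modes
    print(f"{order}: {f:5.1f} Hz")
```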

There are subwoofers in the front right room corner and halfway back along the opposite sidewall. If you picture this, you see that it goes against everything taught by the research guys, yet sounds quite good. I have REW, but I don't have a good measurement microphone -- so it is all anecdotal.

One might suspect that some left-handed, symmetry-hating designer did this. Yes, I did.
[And you can blame the length of this answer on Windows' wonderful voice dictation system.]
 

AudioKnob

Member
Joined
Jan 1, 2021
Messages
26
Likes
10
Location
Omaha
Haha ... it works well. That's a large space for a small speaker. Andrew did a nice job on these. I continue to be impressed. Though, in some respects, it's hard to believe that I find rock (especially metal) less appealing. Perhaps it's the music per se, or how it's recorded. You know, that's probably it ... I forget how revealing the Elacs can be; a garbage-in sort of thing. Art Pepper, recorded 40 years ago, fools me into thinking I'm in the studio: amazing. I blithely think everything I throw on should sound the same. Nope. Glad ya like 'em too, Jim!
 
OP
Jim Shaw

Jim Shaw

Addicted to Fun and Learning
Forum Donor
Joined
Mar 16, 2021
Messages
616
Likes
1,159
Location
North central USA
Art Pepper, recorded 40 years ago, fools me into thinking I'm in the studio: amazing. I blithely think everything I throw on should sound the same. Nope. Glad ya like 'em too, Jim!

Art Pepper... now there's a name I haven't heard in decades. I gotta resurrect some of that stuff. Grazie.
 

MediumRare

Major Contributor
Forum Donor
Joined
Sep 17, 2019
Messages
1,955
Likes
2,283
Location
Chicago
At first I was impressed, but then I loaded my own track that was, IMO, a better full-range and dynamic test track. It seems to me it basically applies an EQ for each speaker but can't simulate soundstage, compression, or distortion. So in the end it didn't seem useful to me for actually comparing speakers.
 

kyle_neuron

Active Member
Joined
Jun 18, 2021
Messages
149
Likes
254
At first I was impressed, but then I loaded my own track that was, IMO, a better full-range and dynamic test track. It seems to me it basically applies an EQ for each speaker but can't simulate soundstage, compression, or distortion. So in the end it didn't seem useful to me for actually comparing speakers.

In theory, convolution via binaural impulse responses captured with a HATS (head and torso simulator) using super-linear microphones *should* capture everything about the speaker's behaviour, especially if recorded in an anechoic environment with a very good signal-to-noise ratio.

But with all of these things, your mileage will vary; especially when the distribution and reproduction chain is included in the system.

Just using binaural impulse responses for L&R alone doesn't seem to fix the "in head" feeling that comes with headphones, for example. There are various approaches to making headphone reproduction more like listening to speakers, but they don't all work equally well for all people or all content.

Plus it’s a linear time-invariant system, once measured. So it will only capture the non-linear distortions of the host speakers if it’s a direct recording of track playback, rather than a set of impulse responses.

You could record the impulses at many different drive levels, and interpolate between them based on the convolved track playback level, but I’ve not seen any system that offers this. Is it possible with Equalizer APO to do “if this then that” style processing?

Perhaps I should make some space to code something…
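If I did, the core of it might look like the hypothetical sketch below: crossfade between an IR measured at a low drive level and one measured at a high drive level, based on the short-term level of each block. The block size, thresholds, and linear blend are arbitrary illustration choices, and this is offline Python, not something I know Equalizer APO can do today.

```python
# Hypothetical sketch of level-dependent IR interpolation: blend a 'quiet'
# and a 'loud' IR per block, based on the block's short-term RMS level.
import numpy as np
from scipy.signal import fftconvolve

def level_dependent_convolve(x, ir_quiet, ir_loud, fs, block_ms=20,
                             low_db=-40.0, high_db=-10.0):
    """Block-wise convolution with a level-dependent blend of two IRs
    (overlap-add). Assumes both IRs have the same length."""
    block = int(fs * block_ms / 1000)
    y = np.zeros(len(x) + len(ir_quiet) - 1)
    for start in range(0, len(x), block):
        seg = x[start:start + block]
        rms_db = 20 * np.log10(np.sqrt(np.mean(seg ** 2)) + 1e-12)
        w = np.clip((rms_db - low_db) / (high_db - low_db), 0.0, 1.0)
        ir = (1.0 - w) * ir_quiet + w * ir_loud      # naive linear blend of IRs
        out = fftconvolve(seg, ir)
        y[start:start + len(out)] += out             # overlap-add back into place
    # A real implementation would also smooth the blend across block boundaries
    # to avoid audible discontinuities.
    return y
```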

The biggest thing for me is that you're listening with "someone else's ears". There's debate around how much difference that makes if one of the leading generic HRTFs is used - the ones from Google or Facebook, for example - as I know a few people who got custom HRTFs made via Genelec or others and say they can't hear a difference.

However for me personally, the loss of my own pinna cues really affects my elevation and front-back perception. I can see how that would really impact
 

Andysu

Major Contributor
Joined
Dec 7, 2019
Messages
2,941
Likes
1,539
I came across this on another site and had a listen, comparing four speakers. I used my five screen-wide JBL 4673A's, with the mono centre-channel setting rather than stereo.

I selected the Klipsch (the horn) and heard a big loudness difference. The bass was okay, not boomy, and could be adjusted with EQ filters. The other two speakers, the KEF and the Jamo, seemed to sound the same. The Polk had a bass increase, maybe due to the driver, box size, or passive crossover filters. If the Klipsch had binding posts for an active or DSP crossover, could that make it sound better? There is also the actual size of the speakers, the music selection, and my JBL pro cinema-sized speakers versus tiny speakers to consider. So the test was okay; it just can't mimic the size of the speaker unless I placed a smaller JBL Control 5 or 5 Plus next to the small speaker being compared and moved it around the room, on the floor or standing on something.

They need to include "Return of the Jedi" (the rancor scene) in these comparisons, as there was no movie content. These hi-fi dealers think speakers are just used for music.

Otherwise it's not a bad test; a lot of effort went into it, and I guess they will keep adding more speakers and headphones. I'm not sure about amplifiers, though that should be possible. But it needs a pink-noise tester with a frequency sweep, and possibly a variable sine-wave tone, to check whether the speakers produce any harmonics. And the original LaserDisc of "Star Wars: Return of the Jedi" (the rancor scene) and other movie soundtracks as well, for dialogue, effects, and music (DME) comparisons between matched LCR, or unmatched L-C-R, and surrounds. And I guess sub bass as well?
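For what it's worth, test signals like that are easy to make yourself and load into the tool like any other track. A quick hypothetical example with arbitrary durations and levels:

```python
# Quick-and-dirty test signals: pink noise, a log sweep, and a 1 kHz sine.
import numpy as np
import soundfile as sf                  # pip install soundfile
from scipy.signal import chirp

fs, dur = 48000, 5.0
t = np.arange(int(fs * dur)) / fs

# Pink noise: shape white noise by 1/sqrt(f) in the frequency domain
spectrum = np.fft.rfft(np.random.randn(len(t)))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
pink = np.fft.irfft(spectrum / np.sqrt(np.maximum(freqs, 1.0)), n=len(t))
pink /= np.max(np.abs(pink))

sweep = chirp(t, f0=20.0, f1=20000.0, t1=dur, method="logarithmic")  # log sweep
tone = np.sin(2 * np.pi * 1000.0 * t)                                # 1 kHz sine

for name, sig in [("pink.wav", pink), ("sweep.wav", sweep), ("tone.wav", tone)]:
    sf.write(name, 0.5 * sig, fs)       # -6 dB to leave some headroom
```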




[Attached image: spktest.jpg]
 