> One just has to slide into one ear and out the other.

I have a brain in the middle that interferes with this process, unfortunately.
> as in thesis-antithesis-synthesis?

Right! "Synthesize" is probably an overreach; "not to alienate oneself completely" is maybe a better term. We all agree audio is a mighty cool thing!
> Sorry, but you clearly have not read or understood the Harman research. Let me explain briefly, and you're welcome to verify it for yourself (and then apologize).

Yeah, but you left out the published reason in that paper (Part I, page 5, section 3.2). I've bolded it just because I can:
The 1994 paper "A Method for Training Listeners" that you are attempting to cite is the Harman paper that tested the ability of listeners to reliably identify spectral distortions that had been introduced into different musical genres. It can be considered preliminary work that informed the use of rock in the subsequent research that ultimately correlated loudspeaker measurements with listener preferences.
The 2 seminal Harman papers (AES 116th 2004, AES 117th 2004) that definitively linked loudspeaker measurements and listener preferences were:
A Multiple Regression Model for Predicting Loudspeaker Preference Using Objective Measurements, Parts I and II.
The tracks used in this research were reported on page 5, table 2:
In case you're not familiar with these tracks, here is the music genre for each:
Artist          | Track                         | Music Genre
James Taylor    | That's Why I'm Here           | Folk rock
Little Feat     | Hangin' On to the Good Times  | Blues rock
Tracy Chapman   | Fast Car                      | Folk/Blues rock
Jennifer Warnes | Bird on a Wire                | Country rock
I have bolded the word "rock" so you can find it easily.
The programs were selected on the basis of their ability to reveal spectral and preferential differences between different loudspeakers in over 100 different listening tests and various listener training exercises.
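For anyone curious what "a multiple regression model for predicting loudspeaker preference" looks like mechanically, here's a minimal sketch. Everything in it is invented for illustration (the predictors, weights, and ratings are made up); it is not Harman's actual dataset, feature set, or coefficients.

```python
# Sketch of the idea behind a preference model: fit a multiple linear
# regression from objective loudspeaker measurements to mean listener
# preference ratings. All data below are simulated, NOT Harman's.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical predictors for 13 speakers (e.g. response smoothness,
# narrow-band deviation, LF extension) -- invented values.
X = rng.normal(size=(13, 3))
true_w = np.array([1.2, -0.8, 0.5])                     # invented weights
y = X @ true_w + 5.0 + rng.normal(scale=0.2, size=13)   # "preference ratings"

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

pred = A @ coef
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"R^2 = {r2:.3f}")  # fraction of rating variance the model explains
```

The point of the exercise: the model outputs a predicted preference score per speaker, and the reported figure of merit is how much of the variance in the panel's ratings the measurements explain.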
> Using only rock music in a study protocol limits the generalizability of the study findings to rock music.

How does this follow? At most it invites the question "are the results applicable to other genres of music?" The mere absence of data from other genres does not prove the results are not applicable to those genres, and as noted by others, Harman appears to have put the time in to show that results from using rock music are generalizable, and that's exactly why they chose rock music for their listening tests.
> How does this follow? At most it invites the question "are the results applicable to other genres of music?"

Correct. The study results are not necessarily applicable to other, non-studied, genres of music, and you should be asking that exact question.
> Harman appears to have put the time in to show that results from using rock music are generalizable, and that's exactly why they chose rock music for their listening tests.

They have not done this, no.
> A speaker with hyped bass and treble and sucked out midrange will often sound perfectly fine with electronic music and terrible with rock or jazz,

EXACTLY!! BINGO!!! And THIS is why the demonstrated relationship between measurements and listener preferences may not be fully generalizable to different types of music. As you point out, the type of playback music influences listener preferences of the same speaker. Scroll up to see my rap/hip-hop illustration.
Well, the Harman numbers in this context are predicting how preferred a given speaker is going to be among a group of people, not what the speaker sounds like.
> If you show me a batman frequency response curve, or one missing bass, or one with elevated treble, I can tell you something about how the speaker will sound. That's a different exercise than predicting whether people would like the speaker.

I don't think anyone here (including me) disagrees that measurements can give you an idea of what a speaker sounds like. For instance, you can easily figure out "no bass" or "bright" from FR charts, particularly if they are obvious. But more precisely, what we ultimately want to do is to predict perceived sound quality from measurements. Harman chose to target sound quality by means of blinded listener ratings (or "preferences"). I agree with this. Perhaps you know better than Harman?
> How does this follow? At most it invites the question "are the results applicable to other genres of music?" The mere absence of data from other genres does not prove the results are not applicable to those genres, and as noted by others, Harman appears to have put the time in to show that results from using rock music are generalizable, and that's exactly why they chose rock music for their listening tests.

The paper of course has a section and table data on the effect of program. In single- and multi-way ANOVAs (analyses of variance) for Preference and Timbre ratings, 'program' was a statistically insignificant factor in every combination. It had a small but significant effect on Distortion rating (p = 0.0288), with the Chapman and Taylor samples eliciting the worst effect (posited to be due to their high amount of LF energy, which of course stresses a speaker the most), and a somewhat larger effect in combination with Loudspeaker (p = 0.0132), seen mainly in 2 of the 13 speakers when using 2 of the 4 programs.
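For anyone unsure how to read those ANOVA numbers, here's a toy one-way ANOVA in the same spirit. The ratings below are simulated with no real program effect built in; the track names are just labels, not Harman's data.

```python
# Toy one-way ANOVA: does "program" (the music track) have a significant
# effect on a rating? Data are simulated, with no true program effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated ratings under four programs, drawn from the same distribution.
programs = {name: 6.0 + rng.normal(scale=0.5, size=20)
            for name in ["Taylor", "Little Feat", "Chapman", "Warnes"]}

f_stat, p_value = stats.f_oneway(*programs.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A p-value at or above 0.05 is read as "program was not a statistically
# significant factor", which is how the paper's Preference/Timbre result
# is reported; p below 0.05 (as for Distortion) flags a significant effect.
```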
In my subjective experience, it's typically only poorly designed speakers or headphones with significantly coloured frequency response that have issues excelling with some genres and falling short with others. A speaker with hyped bass and treble and sucked out midrange will often sound perfectly fine with electronic music and terrible with rock or jazz, for example, but well-designed, neutral speakers sound good with everything.
> Correct. The study results are not necessarily applicable to other, non-studied, genres of music, and you should be asking that exact question.

This is not correct. There's no reason to believe other genres of music differ from the program material used in the studies in any way that would make the findings inapplicable. The program material was chosen for specific reasons that you are ignoring in favor of focusing on the fact that it fits a very loose, imprecise definition of a certain genre. Genre labels are rather subjective by definition, and it's difficult to imagine a proper study basing its program selection on genre in the first place. Who defines what genre the music belongs to, and on what basis is it segregated?
> Yeah, but you left out the published reason in that paper (Part I, page 5, section 3.2). I've bolded it just because I can:

You may want to re-read my conversation, I don't think you're following. Yes, there were other papers (and probably a lot of unpublished work) that supported the use of rock music for research purposes.
So it seems there was quite a bit of 'preliminary work' done on choice of program, even before this 'preliminary work'.
And, you know, Sean Olive is actually a member here at ASR, and he's been known to reply when his work is discussed. Why not go to the authority himself?
> Or maybe the other genres simply weren't as good in their ability to reveal spectral and preferential differences between different loudspeakers in over 100 different listening tests and various listener training exercises?

This is correct, and that's why rock music was chosen. But choosing rock music for a study protocol that allows your listeners to distinguish their preferences between loudspeakers is not the same thing as the research findings on rock music generalizing equally to non-rock music. Do you understand what I'm saying, @krabapple ?
Ask Dr. Olive.
> This is not correct. There's no reason to believe other genres of music are different in any way from the program material used in the studies in any way that wouldn't make the findings applicable. The program material was chosen for specific reasons that you are ignoring in favor of focusing on the fact that they fit a very loose, imprecise definition of a certain genre. Genre labels are rather subjective by definition and it's difficult to imagine a proper study basing its program selection based on genre in the first place. Who defines what genre the music belongs to and on what basis is it segregated?

You may wish to spend some time looking up "generalizability" and research studies. If you disagree with how scientists interpret research, and you want to say "I don't wish to interpret human subjects research in the same way that researchers and scientists do," that's fine, not much I can do about that. But I feel like there's not much more I can say to respond at this point.
Also, I have deep respect for the luminaries in this field that have advanced the general knowledge around measurements, their interpretation, and relationship with sound quality. At the same time, I have a problem with wasting other people's time. If you wish to bother Sean Olive, I can't stop you, but perhaps you can start by asking a specific question that you think is an appropriate use of his time.
> You may wish to spend some time looking up "generalizability" and research studies. If you disagree with how scientists interpret research, and you want to say "I don't wish to interpret human subjects research in the same way that researchers and scientists do," that's fine, not much I can do about that. But I feel like there's not much more I can say to respond at this point.

You can't proclaim that it's not "generalizable" because it focuses on a specific genre of music (according to you, the actual research didn't aim to focus on a particular genre). There is no such principle that you can apply in a blanket manner; this is just a whipping boy you've focused on because you apparently don't like the outcome of the research. What else do they need to do to ensure the results are "generalizable"? Why is genre the big stumbling block here? What is it about a loose, subjective genre label that makes the music so different that a different loudspeaker might be preferred to that shown to be preferred in all high-quality research that I'm aware of?
> You're the one who made a hobby horse about the supposedly unanswered questions left by the use of an all-'rock' program (and I smile to call "Fast Car" a 'rock' song) and you have the supercilious act down pat, for which I can hardly fault you, so just who is wasting whose time?

Wow man, I think the feeling is mutual. You believe that the type of music doesn't affect loudspeaker preferences. Roger that.
> EXACTLY!! BINGO!!! And THIS is why the demonstrated relationship between measurements and listener preferences may not be fully generalizable to different types of music. As you point out, the type of playback music influences listener preferences of the same speaker. Scroll up to see my rap/hip-hop illustration.

You selectively omitted the second half of the sentence, which said "but well-designed, neutral speakers sound good with everything".
> You can't proclaim that it's not "generalizable" because it focuses on a specific genre of music (according to you, the actual research didn't aim to focus on a particular genre).

I understand your perspective, thank you.
> Correct. The study results are not necessarily applicable to other, non-studied, genres of music, and you should be asking that exact question.

Technically I agree, in the strictest sense. In practice rock music is a bit of a "worst case scenario" for a loudspeaker with audible flaws. Also, they did check different genres against each other, as @NTK points out.
> Technically I agree, in the strictest sense. In practice rock music is a bit of a "worst case scenario" for a loudspeaker with audible flaws. Also, they did check different genres against each other, as @NTK points out.

Glad we agree at a basic level. I don't know that I can say that rock music is a worst case scenario without more evidence, and the study referenced earlier where different genres were studied was simply to determine which music allowed listeners the best ability to discriminate between speakers.
> If Harman's result was misleading, we would see it contradicted with some regularity in real life.

Harman's findings were not misleading at all. They were groundbreaking and revolutionary, and disproved several prevailing theories of the time. But my concern is that well-meaning enthusiasts seem to think the published research says something it actually doesn't. "Computerized analysis of spin measurements was able to predict 74% of the variation in listener preference ratings" is not the same thing as "we can eyeball FR charts and reliably determine how good a speaker will sound."
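To make the 74% point concrete, here's a quick simulation. All numbers are invented (this is not Harman's data); it just shows that a predictor explaining roughly three quarters of rating variance can still mis-order a noticeable fraction of speaker pairs.

```python
# Simulated illustration: a model with R^2 around 0.74 still mis-ranks
# individual pairs of "speakers". All values are invented.
import numpy as np

rng = np.random.default_rng(2)

true_rating = rng.uniform(3, 8, size=200)                 # "actual" ratings
predicted = true_rating + rng.normal(scale=0.74, size=200)  # noisy predictions

# Coefficient of determination of the predictions.
ss_res = np.sum((true_rating - predicted) ** 2)
ss_tot = np.sum((true_rating - true_rating.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

# How often does the prediction disagree with the true ordering of a
# randomly chosen pair?
i, j = rng.integers(0, 200, size=(2, 1000))
mis = np.mean((true_rating[i] > true_rating[j]) != (predicted[i] > predicted[j]))
print(f"R^2 = {r2:.2f}, pairwise mis-rankings = {mis:.0%}")
```

In other words, "explains most of the variance across a population of speakers" and "correctly ranks any two specific speakers" are different claims, which is exactly the distinction being drawn above.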
> In other words, rock fans and fans of other music would disagree about the quality of speakers that are considered good both objectively and subjectively.

That's just the thing. People out there don't necessarily agree 100% of the time whether Speaker A sounds better than Speaker B. Could it be a function of the music? Maybe, maybe not. But consider my rap/hip-hop example, where bass performance would probably have a larger effect on preference ratings than with, say, non-organ classical music.
> I haven't seen that myself... I haven't been looking for it, but I also can't recall anyone saying something like "I returned my Genelec 8361s, they're good for rock but suck for Jazz..." I'm sure it happens but if there were a big gap in these results it would become obvious over time.

I see what you mean, and I'm also not sure how reliable that is as an indicator.
> I'm also not sure how reliable that is as an indicator.

Not super reliable; it would be just enough to suggest whether Harman was way off base.
> People out there don't necessarily agree 100% of the time whether Speaker A sounds better than Speaker B

Oh, certainly. I have KEF and Genelec in my place like a good little ASR soldier, but not everyone cares for them.
> I don't know that I can say that rock music is a worst case scenario without more evidence, and the study referenced earlier where different genres were studied was simply to determine which music allowed listeners the best ability to discriminate between speakers.

I think accurate discrimination and judging quality are almost the same thing. Most speaker flaws correspond to fixed frequencies, so to reveal the most flaws, you can simply use the most frequencies. This is why pink noise beats rock music, and it's also why rock music is close to pink noise in that chart.
> "Computerized analysis of spin measurements was able to predict 74% of the variation in listener preference ratings" is not the same thing as "we can look at FR charts and reliably determine how good a speaker will sound."

I will still say, I think these are two very different kinds of judgment.