> as long as you are somewhat close to the average for diffuse field to ear canal blocked entrance
How would you know if the earphones are somewhat close or not?
Measurement??
It goes in circles
> as long as you are somewhat close to the average for diffuse field to ear canal blocked entrance

> How would you know if the earphones are somewhat close or not? Measurement?? It goes in circles

I'm talking about your ears, not the earphones
> I'm talking about your ears, not the earphones

Ears have even more variable frequency response
> Stupid question… but can you run these zero straight out of a phone/tablet or would it require an apple dongle?

To get the best dynamics out of them you need a desktop amp. Of course you will get them loud enough out of a phone, but how much will you enjoy it compared to dongles?
> Stupid question… but can you run these zero straight out of a phone/tablet or would it require an apple dongle?

It should be fine. Apple dongle works great though. It's been reviewed here.
> To get the best dynamics out of them you need a desktop amp. Of course you will get them loud enough out of a phone, but how much will you enjoy it compared to dongles?

What? No, you don't need a desktop amp. These IEMs are easy to run; a desktop amp would be overkill.
> This comparison of a $55 IEM and a $1000 IEM does not look very favorable for the expensive one (which happens not even to have smoother treble for the extra $945)

I mean, not all the information can be interpreted for the Monarch. Crin puts the Monarch as a highly technical IEM, but he also said the Red wasn't the most detailed or technical-performing IEM. There are certain things that still can't be measured or interpreted with the measurement data we have currently. The only really useful IEM metric we have is the FR, and arguably the distortion measurements aren't that useful yet. Detail isn't really something that can be measured currently and is considered a subjective concept.
> Crin puts the Monarch as a highly technical IEM

Uh huh.
> There are certain things that still can't be measured or interpreted with the measurement data we have currently.

Yeah, it's called pricing bias. But even that's measurable.
Well, for example in this work: Sean E. Olive, Todd Welti, and Omid Khonsaripour: A Statistical Model that Predicts Listeners’ Preference Ratings of Around-Ear and On-Ear Headphones (2018), the retail prices varied between $80 and $3000.
Maybe you want to have a look at fig. 6?
No offense either, but did YOU read the paper?
"This current study also found little correlation between headphone price and sound quality based on listener preferences. Fig. 6 plots the headphone preference rating versus its retail price. The price in USD is plotted on a logarithmic y-scale for better clarity since most of the headphones fall in the $100 to $500 category. A regression line shows a poor fit between headphone price and preference rating (r = 0.17). This poor correlation is similar to what Breebaart reported in his study of 283 headphones, and similar to what we reported in our in-ear headphone study [2]. Together these three studies provide further confirmation that the headphone industry is not following best engineering practices when designing headphones for best sound, even in cases where there are fewer cost constraints. Based on the headphone samples tested in this study, the sound quality of headphones doesn’t improve much beyond a $300 price point."
Well, in the section I have cited it is not 100% clear whether they are referring to the virtual headphones or the headphones themselves.
And they provided sufficient research to back up that claim.
They described the methodology and then showed the results. I think it's clear.
And sorry, but there is a big difference between testing headphones and "virtual headphones". Especially if you use this test to prove statements like "sound quality and price correlate only extremely weakly".
By the way, they didn't test "sound quality"; these papers are about listeners' preferences.
Can you show me where they proved it?
Your assertion that 'there is a big difference between testing headphones and "virtual headphones" ' is not borne out by the research:
"had a correlation of 0.86 to 0.99". Yeah, that sounds like really big differences. On the other hand, the correlation between price and preference is 0.17. What exactly is your point? That the virtual headphone method is not making a 100% copy of the original phone? Granted. But it is good enough for all the subsequent research, that is what matters. Otherwise you should say what your criticism is, or, remain silent.
> Well, I assume you are aware of the fact that the big assumption of the Harman research is that listener preference (especially if these are trained listeners) is basically the same as (or at least strongly correlated with) sound quality? You don't agree with that?

I don't get it: you chose to use a different term, but expect it to convey the same meaning after the fact?
Yes, I've read those papers. Even Harman doesn't deny that there are audible differences between headphones and "virtual headphones".
I've done this type of testing with several headphones and IEMs, including the K701 and HD800S. If you equalize the K701 to the curve of the HD800S they will sound similar (90% the same or 0.86 or whatever you want to call it) but not the same. And the HD800 can give the illusion of a bigger "soundstage", "separation of instruments", "detail" that the K701 cannot. I don't want to judge whether these differences are worth the price, but they exist, people hear them and pay for them.
If you use "virtual headphones" you lose the differences that (some) more expensive headphones offer.
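For what it's worth, the EQ-matching step described above boils down to applying the per-frequency dB difference between two measured responses. A minimal sketch with invented numbers; the curves below are placeholders, not real K701 or HD800S measurements:

```python
# Hypothetical measured frequency responses, relative level in dB.
# Matching A to B means boosting/cutting A by (B - A) at each band.
freqs = [100, 1000, 5000, 10000]      # Hz, coarse bands for illustration
fr_k701 = [-4.0, 0.0, 2.0, -1.0]      # dB, made-up "K701-like" curve
fr_hd800s = [-1.0, 0.0, 3.5, 1.0]     # dB, made-up "HD800S-like" target

# Gain to apply to the first headphone so its response matches the target:
eq_gain_db = [target - measured for measured, target in zip(fr_k701, fr_hd800s)]
print(eq_gain_db)  # → [3.0, 0.0, 1.5, 2.0]
```

Note that this only matches the magnitude response at the measurement point; the residual audible differences discussed in this thread ("soundstage", "detail") are exactly what such a match does not capture.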
> Sure, but I'm assuming you haven't used industry standard equipment to measure your own individual copies of those headphones?

This goes both ways, then. Those who used EQ-matching functions with measurements from clone couplers are wrong to refer to Harman instead of doing their own validation.