
TRUTHEAR x Crinacle Zero Red

The replicator headphone was a large, open-back over-ear design, yes, but the real headphones were not all open-back. One of them was the Beats by Dre Studio Limited Edition, a pair of closed-back on-ear headphones. Another was the AKG K550, a closed-back around-ear headphone. So there would still be leakage differences on each listener's ears. To quote Olive: "Headphone leakage effects for (HP4) were a factor in the standard test (see Fig. 9 of [1]) but not the virtual tests. This would explain its higher ratings in the virtual test."

Yes indeed, for the K550 in particular, we already know that the virtualised headphones weren't similar to what the actual listeners experienced. That said, of all the HPs evaluated, it's actually the one that scored the most similarly between the virtual and real tests, but that's before transforming the ratings to occupy a similar portion of the scale, something we can't do because of the lack of anchors, so there's that. I have a lot of issues with the methods and conclusions of the 2013 OE virtualisation article, but that would be too long for this thread.
 
The best part of science is that we know exactly how approximate it is. 98%.
If you were replicating the exact devices and protocols in that research, that could be the number you land on. What you don't want to do is grab your own IEM and apply some random EQ to it and think you are listening to the other IEM. There is zero statistical backing for that working, which is what we are discussing in the case of this YouTuber.
 
[Attachment: Screenshot_2023-04-20_at_18.03.08.png]
I believe the clone KB501x pinna is supposed to look like this, based on the official photo from the manufacturer, which is a little different from yours.
[Attachment: clone.jpg]
 
If you were replicating the exact devices and protocols in that research, that could be the number you land on. What you don't want to do is grab your own IEM and apply some random EQ to it and think you are listening to the other IEM. There is zero statistical backing for that working, which is what we are discussing in the case of this YouTuber.
Has any reviewer done this before? Ever? And if it's such a nifty idea, why not? And has Sharur applied this method to any of his other reviews? If yes, which ones? If not, why not, if it's a superior method of evaluating IEMs? And how would the manufacturer of the DUST (Device Under Simulated Test) feel about conclusions drawn about his product under those conditions? Crinacle might be more than a little torqued about this, and who could blame him?
 
Has any reviewer done this before? Ever? And if it's such a nifty idea, why not?
??? Why would a reviewer do this instead of listening to the very item being reviewed? The research had a blinding requirement, so it had to resort to this technique. A reviewer doing sighted, uncontrolled listening tests has no need for this facility. They should just listen and opine as they always do.

On my side, my listening-test foundation is the item under review combined with controlled testing using equalization. I can do it blind, or not. I am able to bridge subjective and objective testing this way.
 
A few of you still think this is a black and white thing. Until you learn that it is not, you will continue to be lost in this discussion.

The research used a technique to solve the blinding problem. That is an approximation. An approximation can still be quite useful. You simply need to understand where its accuracy ends and not go past that. If I give you a ruler with just inch markings, you can sort of guess where 1/4 inch might be. But it won't be as accurate as the inch markers on the ruler. Go to 1/16 of an inch and the error can become quite large; attempting to make such measurements with an inch-marked ruler would be very problematic.

The target curve is also heavily smoothed and, at higher frequencies, has little bearing on what we measure. Again, this attempts to give a high-level answer, not a detailed one.

My use of the target is an approximate one, one where you could easily deviate a few dB here and there and still be good. And extreme care is needed in interpreting anything above 5 or 6 kHz.

Learn to deal with shades of gray in headphone measurements or don't bother.
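
To make the "few dB here and there" idea concrete, here is a minimal sketch of such a tolerance check, assuming the measurement and target are available as frequency/dB arrays. The ±3 dB tolerance and the 5 kHz cutoff are illustrative assumptions, not values stated in the post above.

```python
import numpy as np

def within_tolerance(freq_hz, measured_db, target_db,
                     tol_db=3.0, upper_limit_hz=5000.0):
    """True if |measured - target| <= tol_db everywhere below upper_limit_hz."""
    mask = freq_hz <= upper_limit_hz          # only judge the trustworthy band
    deviation = np.abs(measured_db[mask] - target_db[mask])
    return bool(np.all(deviation <= tol_db))

# Toy data: a response that wanders a couple of dB around a flat stand-in target.
freq = np.logspace(np.log10(20), np.log10(20000), 200)
target = np.zeros_like(freq)
measured = 2.0 * np.sin(np.log10(freq))       # +/-2 dB wiggle
print(within_tolerance(freq, measured, target))   # True: inside +/-3 dB
```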
Just a like was not enough, so I will give another example and try to explain. In engineering, there are methodologies for calculating an approximate value for a problem that would take much longer to solve exactly. You choose a faster, simpler path for the calculation; it doesn't give you the exact number, but you get close enough and stay within the error margin. That doesn't mean you can apply the same shortcut to every problem you need to calculate. For the Harman target, their approximation for the problem "achieve a generally preferred frequency response curve for headphones" is good enough and backed up with statistical calculations. Does that mean the approximation works for the problem "evaluate headphones"? They are different goals...
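
As a concrete instance of this point, here is a quick sketch (with illustrative angle values) of a classic engineering shortcut, the small-angle approximation sin(x) ≈ x: it stays within a tight error margin for the problem it was derived for, and falls apart when applied outside it.

```python
import math

# Small-angle shortcut: sin(x) ~= x (x in radians).
for x in (0.05, 0.2, 0.5, 1.5):
    exact = math.sin(x)
    approx = x
    err_pct = abs(approx - exact) / exact * 100
    print(f"x={x:4}: sin(x)={exact:.4f}, approx={approx:.4f}, error={err_pct:6.2f}%")

# x=0.05 -> ~0.04% error: the shortcut is safely inside its margin.
# x=1.5  -> ~50% error: the same shortcut applied to the wrong problem.
```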
 
[Attachment 287262] The red line is one of my Zero:Red samples measured on a 45CA-10; the blue line is the averaged measurement of a batch of Zero:Red samples on a BK4195-Q. Just for reference.
Sorry, my mistake. My GRAS is a 45CA-9, not a 45CA-10. I misremembered the version of my device because I initially wanted to order the 45CA-10 but mistakenly ordered the 45CA-9, which is why I remembered it wrong.
[Attachment: 1684832457753.jpeg]
 
??? Why would a reviewer do this instead of listening to the very item being reviewed? The research had a blinding requirement, so it had to resort to this technique. A reviewer doing sighted, uncontrolled listening tests has no need for this facility. They should just listen and opine as they always do.

On my side, my listening-test foundation is the item under review combined with controlled testing using equalization. I can do it blind, or not. I am able to bridge subjective and objective testing this way.
That is an interesting test, and material for an article, post, or video review.

Grab two physical IEMs, EQ one to match the other, and measure whether it lands on top of the other's curve. Then listen to both to confirm.

This could be done between the two Truthear Zeros, Blue and Red, but maybe also against a completely different IEM, to exercise the method.

This would turn hypothesis into tests, and provide popcorn entertainment for us readers. :)
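
For what it's worth, here is a minimal sketch of the overlay part of that test, assuming both IEMs are measured on the same coupler and exported as (frequency, dB) arrays. The interpolation grid, the ±1 dB pass criterion, and the synthetic stand-in curves are all assumptions.

```python
import numpy as np

def differential_eq(freq_a, db_a, freq_b, db_b, n_points=256):
    """Gain curve (in dB) that should map IEM A's response onto IEM B's."""
    grid = np.logspace(np.log10(20), np.log10(20000), n_points)
    a = np.interp(grid, freq_a, db_a)   # resample both onto a common grid
    b = np.interp(grid, freq_b, db_b)
    return grid, b - a                  # apply this gain to A to mimic B

def lands_on_top(db_remeasured, db_target, tol_db=1.0):
    """Did the EQ'd re-measurement of A land on B's curve within tol_db?"""
    return bool(np.max(np.abs(db_remeasured - db_target)) <= tol_db)

# Toy demo with synthetic curves standing in for real coupler measurements:
f = np.logspace(np.log10(20), np.log10(20000), 100)
blue = 5 * np.sin(np.log10(f))          # pretend Zero Blue measurement
red = blue + 2.0                        # pretend Zero Red measurement
grid, eq_db = differential_eq(f, blue, f, red)
# In the real experiment: apply eq_db as EQ, re-measure IEM A, then call
# lands_on_top() on the re-measurement versus B's curve.
```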
 
This methodology of EQ'ing one set of IEMs to match another raises a new question.

Let's assume the new Genelec 8381A can play at 100 dB with almost no distortion. Could you EQ it to sound like any other speaker, even though the materials used for the woofer/tweeter are different and the design is different?

I get that there are things we can't replicate: the type of waveguide (or the lack of one) will affect high frequencies, as will diffraction issues. But the point I'm interested in is this: if two speakers are EQ'd the same, even though one uses a paper cone and the other Kevlar (as one example), will they sound the same? Do the fancy materials only allow the speakers to play louder at lower distortion, or do they, like two violins playing the same note (one a cheap school instrument, the other a Stradivarius), sound different?
 
This methodology of EQ'ing one set of IEMs to match another raises a new question.

Let's assume the new Genelec 8381A can play at 100 dB with almost no distortion. Could you EQ it to sound like any other speaker, even though the materials used for the woofer/tweeter are different and the design is different?

I get that there are things we can't replicate: the type of waveguide (or the lack of one) will affect high frequencies, as will diffraction issues. But the point I'm interested in is this: if two speakers are EQ'd the same, even though one uses a paper cone and the other Kevlar (as one example), will they sound the same? Do the fancy materials only allow the speakers to play louder at lower distortion, or do they, like two violins playing the same note (one a cheap school instrument, the other a Stradivarius), sound different?
In the near field and on axis, they should sound the same, but as soon as you put them in a reflective room, the tonality will change depending on the dispersion and early reflections.
 
If you were replicating the exact devices and protocols in that research, that could be the number you land on. What you don't want to do is grab your own IEM and apply some random EQ to it and think you are listening to the other IEM. There is zero statistical backing for that working, which is what we are discussing in the case of this YouTuber.
So do you dislike the method itself, or just that he doesn't use the exact same rig as in the research? He measured his IEM on a clone coupler and used Project Red measurements made on a clone coupler, though admittedly not his own. I've been doing transformations like this for years, and the real IEM never sounded better than its virtual version. Sometimes it sounded worse, so I treat it more like a "sour milk" test: if it's bad, you immediately know it's bad. But if it's good, I'd test the real thing before recommending it; that's the part Sharur got burned on many times.
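
For the curious, here is one hedged sketch of how such a transformation could be auditioned: convert the measured dB difference between the two IEMs into a linear-phase FIR filter and run audio through it. The tap count, sample rate, and difference curve below are assumptions for illustration, not values from any real measurement.

```python
import numpy as np
from scipy.signal import firwin2, lfilter

fs = 48000
# Assumed dB difference curve (target IEM minus owned IEM) at a few points:
freq = np.array([0, 100, 1000, 3000, 8000, fs / 2])
diff_db = np.array([0.0, 1.0, -2.0, 3.0, -4.0, 0.0])

gain_linear = 10 ** (diff_db / 20)      # dB -> linear amplitude
fir = firwin2(numtaps=1023, freq=freq, gain=gain_linear, fs=fs)

music = np.random.randn(fs)             # 1 s of noise standing in for music
virtualised = lfilter(fir, 1.0, music)  # owned IEM "pretending" to be target
```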
 
I've been doing transformations like this for years, and the real IEM never sounded better than its virtual version

…Which implies that there's a noticeable difference between the real IEM and its «virtual version».

Do you really not understand that the whole argument revolves around how close the virtual IEM sounds to its real version?

But if it's good, I'd test the real thing before recommending it

Sharur is doing EQ settings reviews, not IEM reviews.
 