Are you attempting to correlate the functions of a Lovense Lush 4 to how audio from headphones is measured?

So the vibrations felt on the skin with an over-ear headphone are not real because they don't show up in DRP measurements?
You should try to correlate your experience with a Shokz bone conduction headphone and a "frequency response" measurement.
They would be visible as part of the frequency response if measured with an artificial mastoid or skull simulator, and every aspect of how the device's vibration shapes what is heard would be present in that measurement.
The measurements you are idolizing only tell you what is happening at the eardrum. That is it. Trying to extrapolate that to account for everything demonstrates a lack of critical thinking and/or experience.
I don't think we actually disagree on anything, but at the end of the day, your brain is decoding the sensory information of sound produced by the earphones and internal sounds amplified by a sealed ear canal.
Audio measurements are not limited in scope to what’s happening at the eardrum; that is a patently false statement.
This is an audio science forum. Measurements and evidence-based evaluations of audio are the primary focus in an environment that exists to promote and discuss audio science - hence the name, Audio Science Review. Subjective interpretations, abstract concepts, inaccurate information, misinformed opinions presented as fact, and emotional responses to a device’s user experience that have no basis in audio science probably aren’t going to be heavily showcased or supported in an audio science venue.
An example I often come back to is that of the Apple AirPods Pro 2/3 in ANC vs. Transparency modes. The frequency responses they deliver in each mode are identical, but the perceived sound of these two frequency responses is vastly different—transparency mode sounding brighter and, perhaps obviously, more open—because the acoustic condition and psychoacoustic backdrop of said playback frequency response is markedly different between the two modes.
I know we all agree that frequency response is far and away the most important factor, but we must remember that it's not at all scientific to minimize the importance of other acoustic factors in how we perceive the sound of these playback devices. Our ears are pressure sensors, and they can detect changes in acoustic pressure even when there is no music playing (and by that same token, we can measure these changes!). It seems like a fairly uncontroversial take that minimizing occlusion, and the different acoustic environment it creates, may be worth exploring as a contributor to differences in sonic presentation.
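To make the "we can measure these changes" point concrete, here is a minimal sketch of comparing two playback frequency responses point by point. The function name and the impulse responses are my own illustrative stand-ins (a flat level offset between two synthetic modes), not real ANC/Transparency measurements, which would come from a sweep on a measurement fixture:

```python
import numpy as np

def magnitude_response_db(impulse_response, fs, n_fft=4096):
    """Magnitude of the frequency response of an impulse response, in dB."""
    spectrum = np.fft.rfft(impulse_response, n=n_fft)
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
    mag_db = 20 * np.log10(np.abs(spectrum) + 1e-12)  # small floor avoids log(0)
    return freqs, mag_db

fs = 48_000
t = np.arange(256) / fs
# Hypothetical stand-ins for the same device measured in two modes:
# a damped 1 kHz sinusoid, and the same response at half the level.
ir_mode_a = np.exp(-t * 3000) * np.sin(2 * np.pi * 1000 * t)
ir_mode_b = ir_mode_a * 0.5  # a flat ~6 dB level offset

freqs, mag_a = magnitude_response_db(ir_mode_a, fs)
_, mag_b = magnitude_response_db(ir_mode_b, fs)
delta = mag_a - mag_b  # per-frequency difference between the two modes, in dB
```

With real captures from a fixture, `delta` would show exactly where (and by how much) two modes diverge; here it is a constant ~6.02 dB by construction.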
This is interesting, if true.

I think frequency response is extremely important, but there are other factors not discussed enough. You can have a "perfect" measuring multi-driver IEM with high passive noise isolation, and it will sound completely closed off and claustrophobic with poor soundstage. The same applies with over-ear headphones. The perceived soundstage of the Bose QuietComfort and Apple AirPods Max in ANC modes makes the Dan Clark Stealth and E3 sound muffled and closed off.
This is why a lot of Chinese in-ear manufacturers are making "semi-open" IEMs, but these still often have higher passive isolation than basic earbuds. If the only way for passive IEMs to compete with ANC and Transparency-mode IEMs in soundstage, once the frequency responses match, is by being purely open-back, this raises the question: what is the point of an IEM?
Do these have any new information? I'm up to date with the current state of Harman's published work, so unless these have some new information I'm not sure what you're trying to get at. Sean would likely agree that openness/acoustic impedance isn't something that was studied or accounted for all that rigorously in their prior work. It's an area of current investigation for him now, though!
EDIT: I've watched the timestamped part, thanks for highlighting the relevant section. Seems he's saying headphone sound is mostly frequency response, which is true! But again, I'm sure a man of science like Sean wouldn't diminish the possibility of something he never had the opportunity to test himself mattering more than he previously assumed.
Sean's currently building an acoustic impedance tube because he wants to verify the frequency-specific acoustic impedance of headphones, and I'm interested to see what kind of testing he decides to do once he has it built. I assume it will still have frequency response at the center of its hypothesis (as it should, because as we all agree, it's the most important factor), but a test about what effects different levels/types of openness have with the same playback frequency response would be very interesting to me. Especially because I—and I think others, like Axel Grell—have a sneaking suspicion that openness may have a part to play in how we perceive the sound of headworn transducers like headphones and IEMs.
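For reference, the standard lab version of that measurement is the two-microphone transfer-function method used in impedance tubes (ISO 10534-2). A minimal sketch of the math, using a synthetic plane-wave field with a known reflection coefficient in place of real microphone data; the function name and tube geometry are my own illustrative choices, not anything from Sean's actual setup:

```python
import numpy as np

def normalized_impedance(h12, freq, x1, spacing, c=343.0):
    """Two-microphone transfer-function method (in the style of ISO 10534-2).

    h12     : complex transfer function p2/p1 (mic 2 closer to the sample)
    freq    : frequency in Hz
    x1      : distance from the sample face to the farther mic, in metres
    spacing : distance between the two microphones, in metres
    Returns the sample's surface impedance normalized by rho * c.
    """
    k = 2 * np.pi * freq / c                    # wavenumber
    num = h12 - np.exp(-1j * k * spacing)       # h12 minus incident-wave transfer
    den = np.exp(1j * k * spacing) - h12        # reflected-wave transfer minus h12
    r = (num / den) * np.exp(2j * k * x1)       # reflection coefficient at the sample
    return (1 + r) / (1 - r)

# Synthetic check: build the plane-wave pressure field for a known
# reflection coefficient and confirm the method recovers the matching
# normalized impedance (1 + r) / (1 - r).
c, freq = 343.0, 1000.0
k = 2 * np.pi * freq / c
x1, spacing = 0.10, 0.05                        # hypothetical tube geometry
r_true = 0.5 * np.exp(0.3j)

def pressure(x):
    # Incident wave plus reflected wave, with the sample face at x = 0.
    return np.exp(1j * k * x) + r_true * np.exp(-1j * k * x)

h12 = pressure(x1 - spacing) / pressure(x1)
z = normalized_impedance(h12, freq, x1, spacing)
```

The same transfer-function idea extends to frequency sweeps; the spacing just has to be chosen so k·spacing stays away from multiples of pi, where the method is singular.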
I don't have a list of semi-open IEMs that a quick Google search won't achieve.
At London CanJam this year I tried the Kiwiears Quintet, knowing nothing about it beforehand and not looking too closely at it. When asked what I thought of it, I said it wasn't my favourite, but that it has quite a nice wide soundstage for an IEM. I was then pointed to the fact that it's semi-open, and looking again, I could see that it indeed is.

That was the first semi-open IEM I'd come across, and I didn't see any more there (not saying there weren't any, btw). Can you point me to a list of semi-open IEMs at all?
In all honesty, from the perspective of someone with a reference-grade speaker system, all of this would come across as meaningless, sounding about the same once the frequency responses match.
Earphones Archive Squiglink - IEM frequency response database by Earphones Archive: earphonesarchive.squig.link (compare hundreds of frequency response graphs between IEMs and earphones from manufacturers like Moondrop, Sony, 64 Audio, Fiio, and more)
New and exciting audio theory applied in practice; with bated breath wait I for the fruits of such ambitious labors.
I agree. To me, *technicalities* (a vague word, yes), when used to mean things like *detail retrieval* as an inherent quality of a transducer outside of FR, do not describe something I believe exists.