
Apple's first high-end headphones

KeithPhantom

Addicted to Fun and Learning
Forum Donor
Joined
May 8, 2020
Messages
642
Likes
658
I liked them until I tried the HD 600. After that, they weren't as good as I thought they were, for me at least, but others may find them perfect.
 

bobbooo

Major Contributor
Joined
Aug 30, 2019
Messages
1,479
Likes
2,080
It sounds like this is alleging some ill-intent. It seems to me they're trying to solve a legitimate issue... how are we universal fans of room correction @ ASR, yet something like this, which is analogous in the headphone space, is seen as somehow duplicitous? Because it makes measurement tricky? Because it's from Apple?

I don't get it. Okay, I do get it in a way... it takes away the ability to be objective, and it's something that can't be turned off.

Read the Reddit link again. It's more than the equivalent of room correction - the frequency response changes depending on the input (on the AirPods Pro at least). That would be equivalent to something like an AVR's dynamic EQ feature, except you can't turn it off, and you have no idea exactly what it's doing to your music. In the home cinema realm, an equivalent would be the dynamic contrast setting you see on some TVs, which messes with the filmmaker's intent. To claim that an adaptive EQ like Apple's could be used not only on the reproduction side but also on the production side of the audio industry is absurd - it would be like movie studios using monitors with a dynamic contrast setting turned on (and unable to be switched off) for their final masters, which would result in an utter mess. What's needed in the audio industry are standards on both the production and reproduction sides (like the movie industry has), not more proprietary features and non-standard, non-constant audio reproduction, which have fueled audio's circle of confusion for years.
 

MayaTlab

Addicted to Fun and Learning
Joined
Aug 15, 2020
Messages
957
Likes
1,604
I'd be a little careful not to read too much into Oratory1990's post on the difficulties of measuring the APP.
The fact that he noted variations across different test signals doesn't necessarily mean that the APPs behave inconsistently with musical material. There's a lot we don't quite know yet.
 

Tks

Major Contributor
Joined
Apr 1, 2019
Messages
3,221
Likes
5,500
I'd be a little careful not to read too much into Oratory1990's post on the difficulties of measuring the APP.
The fact that he noted variations across different test signals doesn't necessarily mean that the APPs behave inconsistently with musical material. There's a lot we don't quite know yet.

I think he would do well to contact Apple's audio department for a few more details, as he is, by profession, someone qualified to speak with them about this, and to ask whether he is misrepresenting them in some way. It doesn't have to be on the record with anyone specifically, but it would be interesting to know whether there are any fatal flaws in his assumptions.

Also, even without Apple's help, all he would actually need to do for more clarity on the issue is measure multiple types of tones, verify with music, and compare against classical IEMs that don't have this supposed issue he believes makes the APPs impossible to evaluate properly.
 

16/44

Member
Joined
Aug 13, 2020
Messages
26
Likes
14
For me, I see Bluetooth as the future, but the FR of the available Bluetooth headphones is way more messed up than that of wired ones. I would love to have a Bluetooth HD 600.
Should be trivial to do. Just graft an MMCX connector onto the HD 600 connector and then get one of the many MMCX Bluetooth adapters!
 

hyperplanar

Senior Member
Joined
Jan 31, 2020
Messages
301
Likes
582
Location
Los Angeles
Read the Reddit link again. It's more than the equivalent of room correction - the frequency response changes depending on the input (on the AirPods Pro at least). That would be equivalent to something like an AVR's dynamic EQ feature, except you can't turn it off, and you have no idea exactly what it's doing to your music. In the home cinema realm, an equivalent would be the dynamic contrast setting you see on some TVs, which messes with the filmmaker's intent. To claim that an adaptive EQ like Apple's could be used not only on the reproduction side but also on the production side of the audio industry is absurd - it would be like movie studios using monitors with a dynamic contrast setting turned on (and unable to be switched off) for their final masters, which would result in an utter mess. What's needed in the audio industry are standards on both the production and reproduction sides (like the movie industry has), not more proprietary features and non-standard, non-constant audio reproduction, which have fueled audio's circle of confusion for years.

I don't think the adaptive EQ is program-dependent like that based on my experience with it. I have a decent amount of experience mixing/mastering and never noticed the AirPods Pro changing the spectral balance of songs I know well relative to each other. I don't notice any sort of loudness EQ effect, dynamic EQ/compression or multiband compression going on in normal use. Maybe if you blast it at full volume then a multiband limiter would kick in...

However, I have noticed it constantly adapting to its seal, such as when one of them gets slightly loose while exercising. After about half a second, the slightly loosened one smoothly adapts its bass response back up to its target.

I think the reason the measurements aren't stable is that the adaptive EQ, which constantly works in the background comparing the original signal to what the in-ear microphone hears, only has information on the frequency range that played recently. So sticking it into a test rig and running sine sweeps is unrealistic and doesn't allow the adaptive EQ to "settle" the way it would with music.
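To illustrate with a toy sketch (entirely my own, nothing to do with Apple's actual algorithm): an adaptive EQ that can only learn from whatever is currently playing gets far less spectral information per frame from a sine sweep than from broadband material. Here `coverage` is a made-up helper counting how many FFT bins a signal meaningfully excites:

```python
import numpy as np

fs = 48_000
n = 4096
t = np.arange(n) / fs

sine = np.sin(2 * np.pi * 1000 * t)                   # one instant of a sine sweep
noise = np.random.default_rng(0).standard_normal(n)   # stand-in for broadband music

def coverage(x, thresh_db=-40):
    """Fraction of FFT bins within thresh_db of the strongest bin."""
    mag = np.abs(np.fft.rfft(x))
    mag_db = 20 * np.log10(mag / mag.max() + 1e-12)
    return float(np.mean(mag_db > thresh_db))

print(f"sine coverage:  {coverage(sine):.3f}")
print(f"noise coverage: {coverage(noise):.3f}")
```

The sine excites only a sliver of the spectrum at any moment, which would leave an adaptive system blind to the rest of the range at that instant.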
 

hyperplanar

Senior Member
Joined
Jan 31, 2020
Messages
301
Likes
582
Location
Los Angeles
Funny to see Apple threw so much into Spatial Audio when so few out there care about it.
I think the real reason why Apple is investing this much effort into spatial audio right now is to have it ready for their future augmented reality efforts.
 

hyperplanar

Senior Member
Joined
Jan 31, 2020
Messages
301
Likes
582
Location
Los Angeles
I liked them until I tried the HD 600. After that, they weren't as good as I thought they were, for me at least, but others may find them perfect.
I love my HD600s too, and they're the most similar-sounding headphones to the AirPods Pro I've heard so far. IMO the shortcomings of the HD600 are the sub-bass response, slightly harsh upper mids/lower treble, a non-extended, non-linear treble response above 10 kHz lacking air, and a closed-in soundstage. I've tried oratory's EQ, and it does improve the bass/upper mids, but the bass distortion/warmth becomes noticeable, and it didn't solve any of the other problems.

Tonality-wise the HD600s are very close to neutral for me, but somehow the AirPods Pro solves all of the above complaints, to my ears at least... If I had to describe it in one word, the AirPods Pro essentially sounds like an "effortless" version of the HD600 to me. Unfortunately, I hate wearing IEMs for long periods of time, so the HD600 gets to stay for that reason, and for low-latency monitoring :)
 

MayaTlab

Addicted to Fun and Learning
Joined
Aug 15, 2020
Messages
957
Likes
1,604
I think the reason the measurements aren't stable is that the adaptive EQ, which constantly works in the background comparing the original signal to what the in-ear microphone hears, only has information on the frequency range that played recently. So sticking it into a test rig and running sine sweeps is unrealistic and doesn't allow the adaptive EQ to "settle" the way it would with music.

That would also be my uneducated guess. As I originally mentioned a few pages back, BTW, in regard to how Apple tests for distortion (https://www.apple.com/newsroom/2020...gic-of-airpods-in-a-stunning-over-ear-design/):

[Screenshot of Apple's newsroom description of its distortion testing]

So my wild guess is that pink noise should be played before the test program material?
 

andymok

Addicted to Fun and Learning
Joined
Sep 14, 2018
Messages
562
Likes
553
Location
Hong Kong
I think the real reason why Apple is investing this much effort into spatial audio right now is to have it ready for their future augmented reality efforts.

Yes, I agree. What I see is Apple trying to bring in a total solution for an immersive experience, online and offline, virtual and real (that's why they chose AR).

As far as Apple is concerned, I assume, sound quality is a problem of the past. It's the experience of listening to stereo, however, that hasn't changed at all in the past few decades.
 

FridayPatrick

New Member
Joined
Jan 24, 2020
Messages
3
Likes
0
Funny to see Apple threw so much into Spatial Audio when so few out there care about it.
That feature is actually what interests me the most about these headphones, given I'm currently in an apartment. I have decent speakers (JBL LSR28P), but I want to be considerate to my neighbors. I found Apple's Spatial Audio on my wife's AirPods Pro more impressive than my past experience with Dolby Headphone or DTS Headphone:X. Granted, it would be nice if Spatial Audio were supported on more than iPhones and iPads.
 

voodooless

Grand Contributor
Forum Donor
Joined
Jun 16, 2020
Messages
10,450
Likes
18,492
Location
Netherlands
So my wild guess is that pink noise should be played before the test program material?

I don't think so, but it would probably make the process as fast as possible. In essence, you don't need pink noise to optimise the frequency response. You can just compare the input to the output of any signal and deduce the response from that. The more frequency components the signal contains, the faster you get the complete picture. So with normal music it might take a bit longer, but eventually you get there.
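A rough sketch of this (toy numbers and names of my own, not anything from Apple): estimate the response bin by bin from arbitrary program material, updating only the bins the signal actually excites. Narrow-band material leaves gaps, and they fill in as more content plays:

```python
import numpy as np

rng = np.random.default_rng(1)
n_bins = 513  # rfft bins for a 1024-sample frame

# A fictional driver response: gentle roll-off in the lowest bins.
true_H = np.clip(np.arange(n_bins) / 50, 0.1, 1.0)

est = np.zeros(n_bins)
count = np.zeros(n_bins)

# Each frame of "program material" only excites a random quarter of the bins,
# like music that covers a limited bandwidth at any given moment.
for _ in range(20):
    X = np.zeros(n_bins, dtype=complex)
    band = rng.choice(n_bins, size=n_bins // 4, replace=False)
    X[band] = rng.standard_normal(len(band)) + 1j * rng.standard_normal(len(band))
    Y = X * true_H                      # what the in-ear mic would pick up
    active = np.abs(X) > 0              # only update bins the signal excites
    est[active] += np.abs(Y[active]) / np.abs(X[active])
    count[active] += 1

known = count > 0
H_hat = np.where(known, est / np.maximum(count, 1), np.nan)
err = np.max(np.abs(H_hat[known] - true_H[known]))
print(f"bins covered after 20 frames: {known.mean():.1%}")
print(f"max error on covered bins: {err:.2e}")
```

After enough frames, nearly every bin has been excited at least once, which matches the intuition that music gets you there eventually while a single tone never would.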
 

MayaTlab

Addicted to Fun and Learning
Joined
Aug 15, 2020
Messages
957
Likes
1,604
I don't think so, but it would probably make the process as fast as possible. In essence, you don't need pink noise to optimise the frequency response. You can just compare the input to the output of any signal and deduce the response from that. The more frequency components the signal contains, the faster you get the complete picture. So with normal music it might take a bit longer, but eventually you get there.

Yes, of course, but I'm wondering if it could be related to how adaptive EQ works. Apple says it works 200 times a second, which obviously isn't the rate at which the mics and DSP can record and analyse what's going on, as that would make for rubbish ANC performance. So my guess is that the mics and DSP are capable of much faster recording and computing, which serves as the basis for ANC, while adaptive EQ adjusts the FR curve, for reasons other than noise reduction, at that slower rate? Hence why some test programs may be inaccurate with it?
There is clearly something going on with adaptive EQ that isn't quite the same as the usual feed-forward/feedback ANC we're used to.
 

Frank Dernie

Master Contributor
Forum Donor
Joined
Mar 24, 2016
Messages
6,461
Likes
15,844
Location
Oxfordshire
That would also be my uneducated guess. As I originally mentioned a few pages back, BTW, in regard to how Apple tests for distortion (https://www.apple.com/newsroom/2020...gic-of-airpods-in-a-stunning-over-ear-design/):

[Screenshot of Apple's newsroom description of its distortion testing]
So my wild guess is that pink noise should be played before the test program material?
The AKG system in the N90Q plays a fast tone burst and calculates from that.
It's probably all over in a second or so.
 

voodooless

Grand Contributor
Forum Donor
Joined
Jun 16, 2020
Messages
10,450
Likes
18,492
Location
Netherlands
There is clearly something going on with adaptive EQ that isn't quite the same as the usual feed-forward/feedback ANC we're used to.

Something like an advanced version of an LMS filter should do the trick, at least for the frequency response, though not so much for ANC.

Oops, was wrong on the ANC part... at least some types of it.
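For what it's worth, here's the textbook LMS idea in toy form (my own sketch, not Apple's code): the filter taps adapt until the filter's output matches a reference, which amounts to identifying the unknown system in between:

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown system the filter should learn (think: the ear-seal acoustics).
true_taps = np.array([0.6, 0.3, -0.2, 0.1])

def lms_identify(x, d, n_taps, mu=0.05):
    """Adapt n_taps FIR coefficients so that filtering x tracks d."""
    w = np.zeros(n_taps)
    buf = np.zeros(n_taps)
    for xn, dn in zip(x, d):
        buf = np.roll(buf, 1)
        buf[0] = xn
        y = w @ buf            # filter output
        e = dn - y             # error vs. reference
        w += 2 * mu * e * buf  # steepest-descent update
    return w

x = rng.standard_normal(20_000)              # broadband excitation
d = np.convolve(x, true_taps)[: len(x)]      # what the in-ear mic would "hear"
w = lms_identify(x, d, n_taps=4)
print("learned taps:", np.round(w, 3))
```

With broadband input the taps converge to the unknown system; with a narrow-band input, like a lone sine, only part of the response is identifiable, which ties back to the measurement difficulties discussed above.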
 

Feelas

Senior Member
Joined
Nov 20, 2020
Messages
391
Likes
317
Not sure I understand.
AFAIK the AKG N90Q is the only other 'phone with built-in correction using a microphone in the earcup.
Oops, I just skipped some steps. I meant the signal - pretty much anything with a standard, wideband form (noise especially) is fine to calibrate against. Music has the problem of an unknown correct spectral balance, since it depends on the recording. Hope that's clearer!
 

M00ndancer

Addicted to Fun and Learning
Forum Donor
Joined
Feb 4, 2019
Messages
719
Likes
728
Location
Sweden
AFAIK the AKG N90Q is the only other 'phone with built-in correction using a microphone in the earcup.

The Sony MDR-1000X does something similar. From the owner's manual:

This function analyzes the wearing condition such as the face shape, hair style, and presence or absence of eyeglasses to optimize the noise canceling performance. It is recommended that you perform this function when using the headset for the first time.

Hint:
When the wearing condition has been changed, such as you have changed your hair style or taken off eyeglasses, it is recommended that you perform the Personal NC Optimizer again.
 

voodooless

Grand Contributor
Forum Donor
Joined
Jun 16, 2020
Messages
10,450
Likes
18,492
Location
Netherlands
Music has the problem of an unknown correct spectral balance, since it depends on the recording

But you have the recording. You can "just" use it as a reference. And if you do this often enough, you'll build a pretty decent dataset that can be used to make a correction filter.
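A toy sketch of that idea (my own fictional numbers, not any real device's method): average the per-session response estimates, then invert the average to get a correction filter, with a clamp so sparse or noisy data can't produce wild boosts:

```python
import numpy as np

rng = np.random.default_rng(2)
n_bins = 64
true_H = np.linspace(0.5, 1.2, n_bins)   # fictional deviation from the target

# Each "listening session" yields a noisy per-bin estimate of the response.
estimates = [true_H * (1 + 0.1 * rng.standard_normal(n_bins)) for _ in range(200)]
H_avg = np.mean(estimates, axis=0)

# Correction = inverse of the averaged response, with a +/-12 dB safety clamp.
correction = np.clip(1 / H_avg, 10 ** (-12 / 20), 10 ** (12 / 20))

flatness = H_avg * correction            # ideally ~1.0 everywhere
print(f"corrected response spread: {flatness.min():.3f} .. {flatness.max():.3f}")
```

The point is that individual noisy estimates average out over many sessions, so the more often the comparison runs, the better the correction filter gets.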
 