
Apple AirPods Max Review (Noise Cancelling Headphone)

I'm sure I briefly tried it to see if the oratory1990 PEQ settings sounded the same as with my miniDSP 2x4 HD through a Schiit Heretic amp, and I don't remember anything notable about the results, so I guess it was fine. I use the latter combo every day, and it works great.

Mind sharing how you got your AirPods Max to play? My DAC recognizes them, but I can't hear anything! The source is an iPhone/iPad with USB-C.
 
Wait, I have the white version of that exact same cable. Is there a difference between the colors?

And so once you plug everything in, your iPhone is able to play through the 5k wired?
Color doesn't matter. It just has to be the expensive Apple-branded bidirectional cable. Plug the Lightning end into your APM and the 3.5 mm end into an analog source, and you get audio in your APM.
 
Color doesn't matter. It just has to be the expensive Apple-branded bidirectional cable. Plug the Lightning end into your APM and the 3.5 mm end into an analog source, and you get audio in your APM.

Yes, it's not like the regular Lightning-to-headphone adaptor; there are additional electronics to make it work.
 
Yes, it's not like the regular Lightning-to-headphone adaptor; there are additional electronics to make it work.
And it’s super thin. Perfect for my cat to chew on. I can’t wait till someone makes an armored version of that cord.
 
I just did. It works fine. I used oratory filters. The resulting timbre is better balanced but lacks some slam.
He designs his EQs such that bass and treble bands can be adjusted to preference. For APM, 17 Hz (subbass) and 105 Hz (bass) are available. The 105 Hz shelf seems pretty consistently present across all his EQs.
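For anyone who wants to experiment with that kind of adjustable band outside of a PEQ app, here is a minimal sketch of a 105 Hz low-shelf biquad whose gain can be dialed to preference. This assumes the standard RBJ audio-EQ-cookbook formulas; the +4 dB gain is an arbitrary example value, not an oratory1990 preset.

```python
import cmath
import math

def low_shelf(fs, f0, gain_db, slope=1.0):
    """RBJ audio-EQ-cookbook low-shelf biquad, normalized so a0 == 1."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / 2 * math.sqrt((A + 1 / A) * (1 / slope - 1) + 2)
    cosw = math.cos(w0)
    b0 = A * ((A + 1) - (A - 1) * cosw + 2 * math.sqrt(A) * alpha)
    b1 = 2 * A * ((A - 1) - (A + 1) * cosw)
    b2 = A * ((A + 1) - (A - 1) * cosw - 2 * math.sqrt(A) * alpha)
    a0 = (A + 1) + (A - 1) * cosw + 2 * math.sqrt(A) * alpha
    a1 = -2 * ((A - 1) + (A + 1) * cosw)
    a2 = (A + 1) + (A - 1) * cosw - 2 * math.sqrt(A) * alpha
    return [b0 / a0, b1 / a0, b2 / a0], [1.0, a1 / a0, a2 / a0]

def gain_at(b, a, f, fs):
    """Magnitude response of the biquad in dB at frequency f."""
    z = cmath.exp(-2j * math.pi * f / fs)
    num = b[0] + b[1] * z + b[2] * z * z
    den = a[0] + a[1] * z + a[2] * z * z
    return 20 * math.log10(abs(num / den))

# Example: a +4 dB bass shelf at 105 Hz, as one might dial in to taste.
b, a = low_shelf(fs=48000, f0=105, gain_db=4.0)
print(gain_at(b, a, 20, 48000))    # deep bass sits near the full +4 dB
print(gain_at(b, a, 1000, 48000))  # midrange is left essentially untouched
```

By convention the shelf reaches half its gain (here +2 dB) at the corner frequency itself, which is why a 105 Hz shelf still audibly lifts the 40-80 Hz region without touching the mids.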
 
It's going to sound like I'm being pedantic here, but I'm genuinely still trying to learn.

But if you say we naturally amplify this region, then is that a reason to make it flat?

I'm aware that research says we are most sensitive in this region and I'm aware that the Harman preference curve has a boost in this region.

It looks closer to the shape of equal-loudness contour curves than to the Harman target.

And also, is it possible Apple has internally done a lot of research on preference curves, with a bigger sample size than Harman's?

Not trying to 'stir the pot' here, these are genuine questions. Since this has hardcore engineering (see the distortion results) and DSP, the curve we see is exactly what they wanted, so the question is why.

cc: @Sean Olive
I was told by someone who left Apple that they tamed the so-called "ear gain" centered at 3 kHz to accommodate different ear canal geometries that produce different amounts of gain. The AirPods Pro 2 also have reduced gain through there. The SoundGuys target curve does as well, although their target is based on an average of many measured headphones, not on theory or listening tests. And their target does seem to produce good results in listening tests.

We've looked at the effects of different ear canal geometries on the response at the DRP, and I don't think they warrant this much reduced gain. The B&K Type 5128/4260 simulator is based on an average of 40 human ear canals, and its DF response doesn't warrant a reduction in gain as large as this...

But maybe they know something we don't.

I'm doing some headphone experiments where people can adjust the gain of a filter centered at 3 kHz to see if they adjust it, and by how much.
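For anyone curious what such an adjustable band looks like in practice, here is a minimal sketch of a peaking filter centered at 3 kHz whose gain a listener could dial up or down. It assumes the standard RBJ audio-EQ-cookbook peaking formulas; the Q and the -2 dB cut are arbitrary illustrative values, not the settings used in the experiment.

```python
import cmath
import math

def peaking_eq(fs, f0, gain_db, q=1.0):
    """RBJ audio-EQ-cookbook peaking biquad, normalized so a0 == 1."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    cosw = math.cos(w0)
    a0 = 1 + alpha / A
    b = [(1 + alpha * A) / a0, -2 * cosw / a0, (1 - alpha * A) / a0]
    a = [1.0, -2 * cosw / a0, (1 - alpha / A) / a0]
    return b, a

def gain_db_at(b, a, f, fs):
    """Magnitude response of the biquad in dB at frequency f."""
    z = cmath.exp(-2j * math.pi * f / fs)
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20 * math.log10(abs(h))

# A listener dials the 3 kHz "ear gain" band down by 2 dB (example value):
b, a = peaking_eq(fs=48000, f0=3000, gain_db=-2.0, q=1.0)
print(gain_db_at(b, a, 3000, 48000))  # the full -2 dB cut at the center
print(gain_db_at(b, a, 100, 48000))   # far below the band: essentially 0 dB
```

The filter reaches exactly its nominal gain at the center frequency and returns to 0 dB away from it, so sweeping `gain_db` in small steps is a straightforward way to let subjects bracket their preferred ear-gain level.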
 
If the vocal preset is selected, it should conform well enough to Harman 2018, per measurements shared earlier. It's reasonable to assume the default preset's 3 kHz recession has to do with maximizing output levels for the out-of-box experience.

This is all assuming the measurements are reliable, not a given with ANC sets.
 
I was told by someone who left Apple that they tamed the so-called "ear gain" centered at 3 kHz to accommodate different ear canal geometries that produce different amounts of gain.
How do they do that? Does the AirPods Max have a mic to get this information from reflections? Or a camera? How do they even know the ear canal geometry of the person wearing it?
 
How do they do that? Does the AirPods Max have a mic to get this information from reflections? Or a camera? How do they even know the ear canal geometry of the person wearing it?
They don't.

Since they don't know the wearer's precise HRTF, they slightly undercooked the ear gain, to ensure that no matter the wearer's HRTF, the ear gain region would not be emphasized.

BTW, Apple do have a method for approximating the user's HRTF, which uses the iPhone's FaceTime LiDAR sensor to create a 3D model of your head and ears; however, this is only used to tailor the surround sound algorithm to you.

Afaik, the stereo frequency response is not affected by this.
 
I was told by someone who left Apple that they tamed the so-called "ear gain" centered at 3 kHz to accommodate different ear canal geometries that produce different amounts of gain.
As the AirPods Pro 2 and future AirPods become certified hearing aids, I expect them to offer a hearing test to tailor the sound to each user's preference.

Including a tone centred at 3 kHz.

Ideally, in addition to a hearing test, they would allow EQ in iOS/iPadOS/macOS. Even something like the JBL TWS earbuds app would be a 'chef's kiss'.
 
How do they do that? Airpod max has mic to get this information from reflection? Or a Camera? How do they even know ear canal geometries of the person wearing it?

You use the iPhone's LiDAR and camera to scan your ear/head and create a custom profile. Somewhat similar to setting up FaceID.

Edit: I didn't refresh, ninja'd by @staticV3 and yes it is explicitly offered to improve head tracking and Spatial Audio. This is distinct from using a custom audio profile tailored to your hearing (which can be generated via a third-party app or importing an audiogram).
 
They don't.

Since they don't know the wearer's precise HRTF, they slightly undercooked the ear gain, to ensure that no matter the wearer's HRTF, the ear gain region would not be emphasized.

BTW, Apple do have a method for approximating the user's HRTF, which uses the iPhone's FaceTime LiDAR sensor to create a 3D model of your head and ears; however, this is only used to tailor the surround sound algorithm to you.

Afaik, the stereo frequency response is not affected by this.
So the same thing as Creative's gaming surround sound solution, but not for music.
 
Since they don't know the wearer's precise HRTF, they slightly undercooked the ear gain, to ensure that no matter the wearer's HRTF, the ear gain region would not be emphasized.

It's also, for the APM, largely related to HPTF, not HRTF.

In both the APM and APP2 cases they're facing HPTF / eardrum HRTF mismatch issues (and possibly undershooting the response a bit, on average, in the 1-5 kHz range to avoid any excess), but the underlying mechanisms for that mismatch are entirely different.

How do they do that? Does the AirPods Max have a mic to get this information from reflections? Or a camera? How do they even know the ear canal geometry of the person wearing it?

I think that Sean Olive was rather talking about the issue of inter-individual, in situ FR variation, rather than anything related to active systems.

The APM's active systems, relying on the internal mic, seem to only operate up to around 1kHz, in all modes. In that band there shouldn't be a lot of reasons to have the FR vary from one individual to the other based on anatomical features, the goal rather being to maintain the response as consistent as possible regardless of coupling / leakage. Above 1kHz I haven't seen much evidence that they're doing anything with active systems.

The APP2's active systems operate up to around 4-5 kHz in all modes and seem to try to ensure a consistent response at the eardrum for all individuals. That could very well be undesirable in the 1-5 kHz region, but it's still better than letting the response vary in a way similar to passive IEMs, or even worse, active IEMs with a "classic" feedback system operating up to 500-800 Hz (which is great for controlling leakage but amplifies HPTF / eardrum HRTF mismatch in the mids in some cases).

Bose's CustomTune feature operates up to around 5-6kHz or so, and this time does seem to try to individualise the response at the eardrum in the 1-6kHz band, at least based on ear canal length, but it's a one-off measurement at startup, while Apple's systems are continuously updated using playback content as measurement signal.

Whether these active systems operating past 1 kHz are doing a good job or not across a large sample of individuals remains to be tested outside of these companies' R&D labs :D.

In all three cases ear geometry is not used with their onboard active systems, but as @staticV3 mentioned, Apple uses the iPhone's depth sensor to at least partly individualise the HRTF map / ITD / ILD (?) when using their binaural renderer.

Apple has a patent to reduce HPTF issues based on scans of one's face, but I don't think that it's been applied in their product. They also seem to have floated the idea of using structured light sensors inside the ear cup of their over-ears to have a rough image of the user's ear, but again it seems like a far-fetched idea rather than a directly applicable patent, and mostly aimed at identifying left and right ears.

If the vocal preset is selected, it should conform well enough to Harman 2018, per measurements shared earlier.

If Harman 2018 also means adding compression, maybe :D. And "maybe", as you're facing HPTF issues anyway, so it's difficult to know for certain.

This is all assuming the measurements are reliable, not a given with ANC sets.

It's not a given in general, depending on the exact model and the part of the spectrum under consideration. But for the APM, just like with most ANC over-ears, it's very reliable in the range where their active systems operate (here up to around 800 Hz), not so much above.
 
They don't.

Since they don't know the wearer's precise HRTF, they slightly undercooked the ear gain, to ensure that no matter the wearer's HRTF, the ear gain region would not be emphasized.

BTW, Apple do have a method for approximating the user's HRTF, which uses the iPhone's FaceTime LiDAR sensor to create a 3D model of your head and ears; however, this is only used to tailor the surround sound algorithm to you.

Afaik, the stereo frequency response is not affected by this.
It isn't a LiDAR but a dot-pattern projector plus a camera.
Anyway, there is customization of ear gain in the settings, as mentioned by many others in posts above, with pros and cons.
 
It isn't a LiDAR but a dot-pattern projector plus a camera.
Anyway, there is customization of ear gain in the settings, as mentioned by many others in posts above, with pros and cons.
Every DSP-enabled headphone and IEM should include selective 3 kHz personalization controls like this. Since most products omit it, this puts the APM ahead of the competition. I'm not an Apple fanboy by any means, but credit where it's due; I hope the competition catches on and leaves arbitrary presets and frequency bands to the mists of time.
 