Aural ID should be a step towards more meaningful use of headphones for monitoring (though the word "meaningful" is perhaps a little overused these days). We're helping to build a standardised railroad, and we hope others will contribute better locomotives.
"Dumb" headphones exclude the influential external ear from our hearing, so they break the link to the natural listening we have each acquired over a lifetime. The outer ear, together with head movement, is what gives us the wonderful ability of spherical hearing, which we use constantly, regardless of whether we work in stereo or immersive.
Taking stereo as an example, each of us receives direct sound within the 60-degree stereo angle in a distinct way, just as we each receive cross-talk and room reflections distinctly. By describing how sound arriving from every angle is modified for you personally, Aural ID enables a rendering engine to offer natural (or supernatural) hearing, even on headphones. Important sources such as the human voice are difficult to level, pan and EQ on ordinary headphones, because the midrange translates unpredictably from person to person. What you hear can be quite different from what another person hears, even when you are both using the same headphones.
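The rendering step described above boils down to convolving each source signal with the head-related impulse response (HRIR) pair measured for its arrival angle. The sketch below illustrates the principle only; the filter values are invented placeholders, not Aural ID data, and a real engine would interpolate between measured directions:

```python
import numpy as np

fs = 48000
rng = np.random.default_rng(0)
source = rng.standard_normal(fs)  # one second of test signal

# Toy HRIR pair for a source at +30 degrees: the far (right) ear
# receives the sound slightly later and quieter -- the kind of cue
# the head and outer ear provide, and which a personal profile
# such as Aural ID would describe per direction.
hrir_left = np.zeros(64)
hrir_left[0] = 1.0
hrir_right = np.zeros(64)
hrir_right[3] = 0.7  # delayed by 3 samples, attenuated

# Direction-dependent filtering of the source for each ear.
left = np.convolve(source, hrir_left)
right = np.convolve(source, hrir_right)
binaural = np.stack([left, right])  # 2-channel signal for headphones
```

With several sources, the renderer repeats this per source (each with its own angle-specific HRIR pair) and sums the results into one two-channel output.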
Immersive audio is an obvious beneficiary of personal, spherical listening, with any number of discrete direct sound sources for the renderer to handle, ideally along with movement and room reflections.
Aural ID provides the foundation for headphone rendering in stereo as well as immersive formats. More documentation about the precision of the technique is needed, and it will be made available.