
Impulcifer: Copy speaker sounds to headphones!

Exactly, and not just two loudspeakers, but a high-end multi-channel setup in an acoustically optimised room, which normally cannot be achieved for $4k but rather for ten times that.

That is not fully true.
As the Realiser does blocked ear canal measurements, you will not get a 1:1 reproduction of the loudspeakers. You may come close to the real speakers but never match them 100%.
 
That is not fully true.
As the Realiser does blocked ear canal measurements, you will not get a 1:1 reproduction of the loudspeakers. You may come close to the real speakers but never match them 100%.
The FR difference between blocked and open ear canal is one that can be manually corrected, though I see some limitations in other respects.
 
That is not fully true.
As the Realiser does blocked ear canal measurements, you will not get a 1:1 reproduction of the loudspeakers. You may come close to the real speakers but never match them 100%.

I don't have an opinion on the subject when it comes to measuring speakers with in-ear mics, but when it comes to headphones I think that I can form one.

I've compared various forms of in-situ measurements for headphones (DIY probe mics - 2 types, blocked ear canal mics - 3 different types, in-concha mics with either open or blocked canals). One thing I systematically do after running these measurements is take a pair of HPs with very low seating-to-seating variation (usually my pair of HD650, either with the default or Dekoni Elite Velour pads) and trace the difference between it and the other HPs. That tells me whether or not the different mics recorded the same difference between the HPs, and if they didn't, in which part of the spectrum they disagreed and to what extent.
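To illustrate the idea (a rough sketch only, not the exact scripts I use; the measurements are assumed to be dB magnitude responses on a shared frequency grid and all the names are placeholders):

import numpy as np

def average_seatings(seatings_db):
    """Average several seating measurements (each a dB magnitude array) into one trace."""
    return np.mean(np.stack(seatings_db), axis=0)

def relative_trace(reference_db, other_db):
    """Difference between another headphone and the low-variation reference (e.g. HD650)."""
    return other_db - reference_db

# probe_hd650, probe_he400se, blocked_hd650, blocked_he400se would be lists of seating
# measurements loaded from your own files; they are placeholders here.
# probe_diff   = relative_trace(average_seatings(probe_hd650),   average_seatings(probe_he400se))
# blocked_diff = relative_trace(average_seatings(blocked_hd650), average_seatings(blocked_he400se))
# mic_disagreement = probe_diff - blocked_diff   # where the two mic types disagree, in dB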

Below is an example of that. What you get to see here is the difference between a pair of HD650 with Dekoni Elite Velour pads (not comparable to stock pads) and a pair of HE400SE (fuchsia traces) and HD560S (blue traces), recorded either by one of the DIY probe mics (solid traces) or by blocked ear canal entrance mics (dotted traces). Each trace is an average of five seatings in a "normal" position (the way I'd wear them), and you get three traces for each mic type as I repeated that procedure three times over a few weeks.

[Attached graph: HD650 (Dekoni pads) vs HE400SE / HD560S differences, probe mics vs blocked ear canal mics]


As a general rule, at least for large, passive, open-type headphones, I tend to see some disagreement between blocked and open ear canal microphones in the 2-4kHz region (up to 1-1.5dB? I'm not sure, as we're also battling seating-to-seating and session-to-session variation, which adds quite a bit of "noise" to the data), and quite a lot of disagreement past 7kHz or so. I haven't repeated such measurements enough times with other headphone types (notably smaller closed-back, around-ear models) to be quite certain whether or not that carries over to them.

A little more concerning to me is that with ear canal entrance measurements, the exact placement of the mic may have quite a bit of influence on the results past 7kHz. Not only do I tend to see disagreement between DIY probes and ear canal entrance measurements, I also tend to see disagreement between the three types of blocked ear canal entrance mics I've tried so far (all of which sit at least visibly flush with the ear canal entrance, and none of them protrudes into the concha to any significant degree; when they do, the disagreement gets quite a lot more significant for me), and the session-to-session variability past 7kHz, in terms of relative results between HPs, is also quite a bit higher than with the probes. I can't explain why so far. It's quite possible that I'm doing something wrong, but I've yet to find what exactly :D.

So what I'd do when using blocked ear canal entrance measurements for headphones is leave a couple of filters free to fine-tune the 2-4kHz band by ear, make sure for a start that the mics are at least flush with the ear canal entrance, and not be too confident about the headphone measurements past 7kHz or so.
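If it helps, such a by-ear tweak can be as simple as one peaking filter you adjust while listening; a rough sketch (RBJ cookbook biquad, and the example values are arbitrary starting points, not a recommendation):

import numpy as np
from scipy.signal import lfilter

def peaking_biquad(fs, f0, gain_db, q):
    """RBJ cookbook peaking EQ; returns normalized (b, a) coefficients."""
    a_gain = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1.0 + alpha * a_gain, -2.0 * np.cos(w0), 1.0 - alpha * a_gain])
    a = np.array([1.0 + alpha / a_gain, -2.0 * np.cos(w0), 1.0 - alpha / a_gain])
    return b / a[0], a / a[0]

# e.g. a gentle cut around 3 kHz to adjust while listening:
# b, a = peaking_biquad(fs=48000, f0=3000, gain_db=-1.0, q=1.5)
# tuned = lfilter(b, a, headphone_feed)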
 
I agree that the Realiser is too expensive. But on a PC you can use Impulcifer for measurements and EQ-APO for playback and get similar results for only a fraction of the cost of the Realiser.
But there's no head tracking, which is crucial IMHO. Yes, if you measure a super premium set of speakers in a studio, use a tactile subwoofer and head tracking, the differences between that particular system and what you will hear on the Realizer over headphones are vanishingly small. I don't know of any other way you can get a 24-channel Atmos system in your listening space without spending six figures and surrendering 300-400 sq ft of your house (if you live in a house, hopefully with no nearby neighbors) for a dedicated theater room, and who can afford that?

The Realizer is not snake oil like a cryogenically treated cable. I've had a high end speaker system in my listening space ever since 1977 and what the Realizer outputs over a good set of headphones (like the HD 800s) is indistinguishable from not just a stereo set of high end speakers, but up to 24 of them.
 
The FR difference between blocked and open ear canal is one that can be manually corrected, though I see some limitations in other respects.
I manually EQ everything, including the speaker systems I've measured myself, for that very reason, and it does make a difference.
 
So what I'd do when using blocked ear canal entrance measurements for headphones is leave a couple of filters free to fine-tune the 2-4kHz band by ear, make sure for a start that the mics are at least flush with the ear canal entrance, and not be too confident about the headphone measurements past 7kHz or so.
Kind of confirms what I felt necessary to tweak with my manual adjustments. Generally in the 2-4kHz range, the A16's manual loudness adjustment was no more than 1-2 dB. Past 7kHz, as an older gentleman, it didn't matter all that much to me. From about 7 to 9 kHz I had to start boosting everything to get something close to equal loudness. I did some, but not as much as would have been required.
 
But there's no head tracking, which is crucial IMHO.
I have no problem using my Impulcifer measurements without head tracking.
So I wonder why head tracking should be crucial?
For movies and gaming you're always looking at a screen and don't move your head much, so head tracking isn't really used.
 
So I wonder why head tracking should be crucial?
For movies and gaming you're always looking at a screen and don't move your head much, so head tracking isn't really used.
Because some of us listen to music.
 
Also, head tracking helps keep the 3D sound space illusion active, as it matches what our brain is used to: with even small head movements the localisation cues change, and if that doesn't happen the sound is perceived as fake.
 
Don't get me wrong, I'm not saying that head tracking doesn't help, but I doubt that it is crucial.
 
Don't get me wrong, I'm not saying that head tracking doesn't help, but I doubt that it is crucial.
I don't think people realize what a different league the Smyth Realiser is in. I have three PRIRs made with me in the room. On all three of them, I not only hear things outside my head but practically outside the room. That includes in front of me, directly behind me, at 90 degrees to the side, at 120 degrees to the side, and overhead. And sounds track continuously in a smooth, uninterrupted arc. When I play Formula 1 Drive to Survive, and they do a shot of a car going over the camera, it sounds like I am directly under a Formula 1 car as it passes over me at 215 MPH, and I mean that literally. It's very convincing.

And, yes, head tracking is crucial. Without it, the distinct positioning of objects collapses.

Personally, I don't believe there is any reason why a PC-based program could not incorporate head tracking. Some write that a special dedicated stand-alone processor is required to avoid latency, but I remain unconvinced. We do a lot of things on our PC servers (including home theater apps), and I, for one, know that Bacch from Theoretica Physics has a software-based DSP program similar to the A16, for both speakers and headphones, and it runs on a Mac and incorporates head tracking, so latency is apparently no problem for that system. Why something similar cannot also be done in a Windows 10 or 11 environment is beyond me. It would probably make the Impulcifer system equal to an A8 Realiser. Perhaps the Smyths should investigate releasing something like a 7.1 software-based Realiser to run on Macs and PCs. It would only require a set of binaural USB mics and a USB-based head tracking system to go with the software. If it were any good, they could achieve significant market penetration, and probably make more money than they currently do with a strictly hardware-based approach.
 
And, yes, head tracking is crucial. Without it, the distinct positioning of objects collapses.
How would you then explain that I get a perfect illusion of speakers with my Impulcifer measurement, without head tracking?

Perhaps the Smyths should investigate releasing something like a 7.1 software-based Realiser to run on Macs and PCs. It would only require a set of binaural USB mics and a USB-based head tracking system to go with the software. If it were any good, they could achieve significant market penetration, and probably make more money than they currently do with a strictly hardware-based approach.
You mean the Smyths who completely missed the pricing goal for their A16 and who were not able to deliver the product to all backers in a reasonable time?
 
"Think of headphones that mimic the sounds of the best studio in the world." - My experience exactly. I often feel like I'm sitting in the recording session, or in the 1st row of the concert. If others could hear what I'm hearing Impulcifer would be getting far more recognition.

I've yet to try Impulcifer (I need to find a way to import a binaural mic into Brazil), but I'm most often wowed by the effect of the "00yh0.wav" HRIR preset in HeSuVi, which is based on the "Out Of Your Head Genelec Studio Recording". It's the only DSP thing that has ever given me the illusion of a real room with good speakers.
I use it with the HD560S EQ'd to oratory1990's Harman target suggestion, with a tiny bit of bass reduction.
 
I've yet to try Impulcifer (I need to find a way to import a binaural mic into Brazil), but I'm most often wowed by the effect of the "00yh0.wav" HRIR preset in HeSuVi, which is based on the "Out Of Your Head Genelec Studio Recording". It's the only DSP thing that has ever given me the illusion of a real room with good speakers.
I use it with the HD560S EQ'd to oratory1990's Harman target suggestion, with a tiny bit of bass reduction.
Yeah, I get a bit of a spatial effect from that HRIR. Of course, using preset HRIRs is very hit or miss depending on your specific ears and hearing; you're just lucky if one of them works well. It's a totally different ballgame with Impulcifer, or a good personalized HRTF. It can take some time and effort to do the measurements and tweak settings with Impulcifer, but if you can get all the pieces put together, the effect is like a dream come true.
 
Perhaps the Smyths should investigate releasing something like a 7.1 software-based Realiser to run on Macs and PCs.
I have created a modified version of Impulcifer & EqualizerAPO which supports head tracking (1DoF, 3DoF or 6DoF, limited by the sensor and the HRIR files) and height-layer sound playback (which is necessary for supporting head pitch and roll).
The sound capture needs many more points: 12 per horizontal layer (every 30 degrees) and 5 layers (+60, +30, 0, -30, -60), so it is even harder to capture than with the A16.
But you can get a 1DoF (yaw-only) version with just 1 layer.
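Roughly, the full multilayer grid is 12 azimuths x 5 elevation layers = 60 capture points; a quick sketch just to enumerate it (the angle convention here is only for illustration, the code defines its own):

# 12 azimuths every 30 degrees, 5 elevation layers (+60, +30, 0, -30, -60)
azimuths = list(range(0, 360, 30))                     # 12 directions per layer
elevations = [-60, -30, 0, 30, 60]                     # layer 0 (below) .. layer 4 (above)
capture_points = [(el, az) for el in elevations for az in azimuths]
print(len(capture_points))                             # 60 captures for the full 5-layer set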
The head tracking data is simply 6 float numbers: yaw, pitch, roll, x, y, z.
I am using two types of head tracking to check my results:
1. A Valve Index VR headset and a modified version of CableGuardian, which sends 6DoF data to APO.
2. A little sensor which sends 3DoF data over a WiFi connection. (Can't find the AliExpress version, sorry.)


In theory my code could support 16-channel audio, but WDM only supports up to 7.1. Windows has newer spatial audio support (such as Dolby Atmos) but doesn't have an open standard for homebrew versions.

I can't compare my results with the Realiser A16/A8 as I don't have one.
If anyone is interested in this, contact me and I will let you know how to use it.
 
I've had a high end speaker system in my listening space ever since 1977 and what the Realizer outputs over a good set of headphones (like the HD 800s) is indistinguishable from not just a stereo set of high end speakers, but up to 24 of them.
If the Realiser could output exactly the same as the speakers do, that would be better than Impulcifer. I have DIYed 3 pairs of mics and tried to create HRIRs multiple times; sometimes the result is, I would say, very good, but it can still be distinguished when quickly switching between speakers and headphones.
 
I have created a modified version of Impulcifer & EqualizerAPO which supports head tracking (1DoF, 3DoF or 6DoF, limited by the sensor and the HRIR files) and height-layer sound playback (which is necessary for supporting head pitch and roll).
This is amazing! Please do share details of how it works and how to use it.
 
I'm also interested, but I'm Linux-based, so some porting would probably be needed before I could try it.
 
This is amazing! Please do share details of how it works and how to use it.
I have pushed my code to GitHub: EqualizerAPO, Impulcifer and CableGuardian.

To use it, you first need to capture data. I assume the distance should be the same for all points, so I will calculate for the distance when the position is below and above.

Impulcifer is modified to support 12 input directions via [0-12].wav (0 is the back, then clockwise); the height difference is not calculated at the moment.
If you use 1 layer, the files need to be put into ...\EqualizerAPO\config\brir\{name}\[0-12].wav.
If you use 5 layers, the files need to be put into ...\EqualizerAPO\config\brir\[0-4]\[0-12].wav (0 is 60 degrees below, 2 is the horizon and 4 is 60 degrees up).

Two new filters were added to EqualizerAPO: BRIRFilter and BRIRMultiFilter, one for the single-layer case and the other for 5 layers.
The filter uses a lowpass filter to pass raw, unprocessed bass through, as my test speaker doesn't go down to 100 Hz.
The example config file:
BRIR: {"name":"index","directions":[-30, 30, 0, 0, -140, 150, -90, 90],"bassVolume":0.1,"receiveType":0,"port":3053}
BRIRMulti: {"bassVolume":0.2,"port":3053}
directions sets the virtual speaker directions, bassVolume controls the raw bass input volume, and port is the UDP port the position data is received on.
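Roughly, the bass passthrough works like this (just a sketch with an assumed 2nd-order Butterworth lowpass and a placeholder 100 Hz cutoff, not the actual filter code; per-batch signals are assumed to have equal length):

import numpy as np
from scipy.signal import butter, lfilter

def add_raw_bass(processed, raw, fs, bass_volume, cutoff_hz=100.0):
    """Mix lowpassed, unprocessed input back into the BRIR-processed signal."""
    b, a = butter(2, cutoff_hz, btype="low", fs=fs)   # placeholder lowpass design
    low = lfilter(b, a, raw)
    return processed + bass_volume * low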

CableGuardian or the sensor captures the head position, then sends it via UDP to audiodg.exe (APO) or VoiceMeter.exe (if you use the VoiceMeter client, which is more flexible for debugging).
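A minimal test sender might look like this (the packing of the six floats as little-endian binary is only an assumption here, check the filter code for the actual wire format; port 3053 comes from the example config above):

import socket
import struct
import time

def send_pose(sock, addr, yaw, pitch, roll, x=0.0, y=0.0, z=0.0):
    """Send one head pose as six floats over UDP (packing is an assumption)."""
    sock.sendto(struct.pack("<6f", yaw, pitch, roll, x, y, z), addr)

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    addr = ("127.0.0.1", 3053)          # port from the example config
    for i in range(1000):
        send_pose(sock, addr, yaw=float(i % 360), pitch=0.0, roll=0.0)
        time.sleep(0.01)                # ~100 Hz updates, just for testing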

The core algorithm:
The APO processes audio in batches (for example, 441 frames).
When a batch starts, the filter gets the position data and recalculates the positions; for example, if you turn 30 degrees left, the left channel needs to be placed where the centre originally was, etc.
1 layer only supports yaw, which is quite easy to calculate (just add/subtract/divide), but multilayer needs vector rotation to support pitch and roll (that's why the captures need to be at the same distance: the capture points need to lie on a sphere).
Then it puts the sound data into the capture points it needs, as sometimes the position won't fall exactly on one speaker but between 2 or 4 (if multilayer) speakers.
After that, it just batch-convolves all capture points which have audio data, then adds the results into the output channels.
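In rough Python pseudo-code, the single-layer (yaw-only) case works like this (just a sketch of the idea, not the actual BRIRFilter code; it assumes all BRIRs have the same length and are stored by capture azimuth):

import numpy as np
from scipy.signal import fftconvolve

def render_channel(signal, speaker_az, head_yaw, brirs):
    """brirs: dict mapping capture azimuth (0, 30, ..., 330) -> (left_ir, right_ir)."""
    az = (speaker_az - head_yaw) % 360      # where the virtual speaker sits after head rotation
    lower = int(az // 30) * 30              # nearest capture direction at or below az
    upper = (lower + 30) % 360              # next capture direction around
    frac = (az - lower) / 30.0              # how far between the two capture points
    ir_len = len(brirs[lower][0])
    out_l = np.zeros(len(signal) + ir_len - 1)
    out_r = np.zeros_like(out_l)
    for direction, weight in ((lower, 1.0 - frac), (upper, frac)):
        if weight > 0.0:                    # split the signal between the two nearest BRIRs
            ir_l, ir_r = brirs[direction]
            out_l += weight * fftconvolve(signal, ir_l)
            out_r += weight * fftconvolve(signal, ir_r)
    return out_l, out_r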

The tricky part: when the audio changes position, the change needs to be smooth, or it will create popping sounds when you rotate your head; this is audible when playing pure tones (for example, Windows system sounds).
What I do is keep the channel allocation from the last batch, and if a BRIR is no longer used this time, fade it from its original volume down to zero over this batch, and vice versa.
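A sketch of that fade (illustrative only, not the actual code):

import numpy as np

def crossfade_batches(old_batch, new_batch):
    """Linearly fade out the old channel allocation and fade in the new one over one batch,
    so a BRIR that stops (or starts) being used doesn't produce a pop."""
    n = len(old_batch)
    fade_out = np.linspace(1.0, 0.0, n)
    return old_batch * fade_out + new_batch * (1.0 - fade_out)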

The core processing code is in the EqualizerAPO code (BRIRFilter.cpp and BRIRMultiLayerCopyFilter.cpp); if I haven't explained myself very clearly, just have a look at the code.

At the moment the code is not very stable and has only passed a few tests. It works with the Index, and I think it's not bad, but it needs good input data to check whether there are problems in the audio algorithm, and it's hard to get a good capture (it needs 60 captures, the positions need to be precise, the room needs to be big, you need to stand still during capture, the speaker should be small but with good sound quality, and you definitely need a person to help move the speaker).
 
At the moment the code is not very stable and has only passed a few tests.
Also, it needs a lot of horsepower to run (up to 4-6 times more convolution than the original Impulcifer); a weak processor will cause audio lag and glitches. I tried it on a Ryzen 3700X and a 5600U and both work fine, but an 11600H (I don't remember exactly) sometimes has problems.
 