
MMM approach and a new calibration app (magic beans)

Regardless of what the Magic Beans app actually does, I'm still unconvinced about the moving mic method, even though @joentell assured me in the YouTube comments that there shouldn't be any issues with it. Couldn't the fast movement/shaking of the mic introduce measurement noise and errors due to air friction, vibrations, or a Doppler effect? MMM might be a good idea for subwoofer calibration, but high frequencies are very directional.

Why not take a few static near-field measurements and have the app average them? Could you use MMM with conventional MLP-based calibrations such as Audyssey or Dirac? I'm tempted to hold the mic and move it in circles during Audyssey measurements to see if I get better results.
There are at least one or two threads on MMM. Despite what you might think, it actually helps noise fall out of the measurements at the LP. I've not used it up close. It is a reputable method used in theaters and other live venues for balancing sound systems. Somewhere on one of those other threads I compared multi-point averaged sweeps in REW with a single MMM over the same area, and got near-identical results for FR.

High frequencies may vary a bit between single-point measurements, which is part of the reason multi-point averaged sweeps are used. MMM solves most of that issue with one measurement.
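For anyone curious what "falls out" in an average: the usual way to combine several frequency responses, whether static multi-point sweeps or the running average an RTA builds during MMM, is to average in the power domain rather than in plain dB. A minimal Python sketch, assuming equal weighting per position (REW's actual averaging options may differ):

```python
import numpy as np

def spatial_average_db(measurements_db):
    """Average several frequency responses (in dB) in the power domain:
    convert to linear power, take the mean across positions, convert back.
    A plain arithmetic mean of dB values would under-weight peaks."""
    m = np.asarray(measurements_db, dtype=float)
    power = 10.0 ** (m / 10.0)                  # dB -> linear power
    return 10.0 * np.log10(power.mean(axis=0))  # mean across positions -> dB

# Three hypothetical mic positions, three frequency bins each (levels in dB SPL):
positions = [[80.0, 83.0, 79.0],
             [82.0, 81.0, 80.0],
             [81.0, 82.0, 78.0]]
avg = spatial_average_db(positions)  # one averaged curve for the whole area
```

Position-to-position variation (mostly room interference at higher frequencies) partially cancels in the mean, which is why a single MMM pass over the same area lands so close to a multi-point sweep average.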

I'll be quite interested in how Magic Beans can match an up close MMM to Klippel or other up close measurements.

I've even done single-tone distortion measurements with MMM instead of a single point and, to my surprise, the results matched the single-point measurement.
 
Really cool!! Is it a method of applying a kind of weighting (or correction value?) to the MMM measurement?


Given the calibration and the characteristics of MMM, there may not be much difference, but I'm also curious how microphones such as Earthworks' M23/M30 would turn out!
My UMIK-1 didn't match my measurements from my 4 other calibration mics. Is the Earthworks impervious to dropping? :cool:
 
Another post comparing MMM to regular sweeps.
 
Really cool!! Is it a method of applying a kind of weighting (or correction value?) to the MMM measurement?
This is another measurement straight out of the REW RTA using MMM. I've only scaled it to fit the Klippel NFS graph.
Again, forgive my out of calibration UMIK-1.

If you're curious about my DI measurement, I was experimenting with putting the speaker on a motorized turntable while I attempted to take a sound power measurement using MMM. It's experimental, but it might lead to something in the future. This is all done in-room.

CEA2034 -- Kali LP-6v2 vs MMM.jpg
 
Is the Earthworks impervious to dropping? :cool:
lol. Do you have to drop it? :facepalm::facepalm:

In Korea, some experienced users use a calibrated Earthworks as a reference, and I also had my UMIK microphone calibrated against one.

1.png

2.png


I don't know how much this affects the high-frequency trend (especially with the MMM method), but I also had other microphones checked because there are some differences.
 
lol. Do you have to drop it? :facepalm::facepalm:

In Korea, some experienced users use a calibrated Earthworks as a reference, and I also had my UMIK microphone calibrated against one.

View attachment 337521
View attachment 337522

I don't know how much this affects the high-frequency trend (especially with the MMM method), but I also had other microphones checked because there are some differences.
Are you sure this isn't a case of using a different sample rate? It seems shifted over. Don't ask me how I know.
 
Are you sure this isn't a case of using a different sample rate? It seems shifted over. Don't ask me how I know.
Nope, that's the cal file. (Zero reference is the Earthworks.) What I am curious about is whether you also need to verify the cal file provided by miniDSP, in terms of how precisely you can match NFS data such as the listening window.
Of course, as I said earlier, I'm rooting for you and am interested in what you're trying to do. But that's why I suggested double-checking the calibration file of the microphone itself; wouldn't that give a better match rate? I hope you don't get me wrong.

Btw, in the picture I attached, what I marked as miniDSP's cal means the factory calibration file that can be downloaded from each personal account on the miniDSP website.
 
Nope, that's the cal file. (Zero reference is the Earthworks.) What I am curious about is whether you also need to verify the cal file provided by miniDSP, in terms of how precisely you can match NFS data such as the listening window.
The corrections we can make are only as good as the mic calibration. Unfortunately, if you use the same mic for Dirac for example, the initial measurements it takes will also be off. With Dirac, we can export target curves which are determined using the difference between the NF and MLP measurement, so a bad mic calibration would affect both measurements equally. I've even tried it using the built-in phone mic and the target curves are very close.
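The cancellation argument is easy to demonstrate: any frequency-dependent mic calibration error adds to the NF and MLP measurements equally, so it drops out of their difference. A small Python sketch with made-up responses and a made-up cal error:

```python
import numpy as np

rng = np.random.default_rng(0)
freqs = np.linspace(100.0, 10000.0, 8)

# Hypothetical "true" responses in dB (values are made up for illustration):
true_nf  = rng.normal(80.0, 1.0, freqs.size)                 # near-field
true_mlp = true_nf - 6.0 + rng.normal(0.0, 2.0, freqs.size)  # at the MLP

# A made-up rising calibration error, as if the mic's cal file were wrong:
mic_error = 3.0 * np.log10(freqs / freqs[0])

measured_nf  = true_nf  + mic_error   # both measurements pick up
measured_mlp = true_mlp + mic_error   # the exact same error

# The room contribution (MLP minus NF) is unaffected by the mic error:
assert np.allclose(measured_mlp - measured_nf, true_mlp - true_nf)
```

This only holds for the *difference*; any correction that targets an absolute response (flat at the MLP, say) still inherits the cal error in full.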

Here's an example from May 2023, using the Magic Beans True Target method using a UMIK-1 with the cal profile, UMIK-1 without the cal profile, a cheap IMM-6 mic, and a phone mic. Ignore that the phone mic says MLP and not near. Just know it measured equally bad NF.

different mics.jpeg


Since we're looking at differences, the corresponding TT curves the app came up with look similar. I was happy to see that. The problem is that we still need a reliable MLP measurement to correct from. (This is with a very old build, and we've since improved upon the consistency of the measurements. They would align much closer now. This one is probably from about 100 builds ago.) I think you get the idea, though. I wasn't trying to be overly scientific here. I just did these quickly to show @Reverend Slim the difference in our Discord chat.

room curve.jpeg
 
Nope, that's the cal file. (Zero reference is the Earthworks.) What I am curious about is whether you also need to verify the cal file provided by miniDSP, in terms of how precisely you can match NFS data such as the listening window.
Of course, as I said earlier, I'm rooting for you and am interested in what you're trying to do. But that's why I suggested double-checking the calibration file of the microphone itself; wouldn't that give a better match rate? I hope you don't get me wrong.

Btw, in the picture I attached, what I marked as miniDSP's cal means the factory calibration file that can be downloaded from each personal account on the miniDSP website.
I only ask because you can change the sample rate for the UMIK-1 on Mac in the Midi settings.
 
Here's an example from May 2023, using the Magic Beans True Target method using a UMIK-1 with the cal profile, UMIK-1 without the cal profile, a cheap IMM-6 mic, and a phone mic. Ignore that the phone mic says MLP and not near. Just know it measured equally bad NF.
I wanted to see this! Really nice and meticulous measurement!!
I think you get the idea, though. I wasn't trying to be overly scientific here.
I agree. That's why I just asked lightly. Don't get me wrong. :D

I only ask because you can change the sample rate for the UMIK-1 on Mac in the Midi settings.
I use Windows, and my sample rate was always 48000.
 
Joe, are you trying to make both speaker corrections and room correction at the same time?
I guess you can say that. We try to correct the speaker response above the transition region, and the speaker and room interaction below that. I don't really like the term "room correction" because I think that needs to be done using physical means, using treatment.
 
I know ASR is not your preferred discussion format, but I would make this conjecture that goes from hand waving to makes sense.

The weakness of Dirac/Audyssey is that they measure in mono. Distance from speaker is known but angle is not.

Take the same LCR speakers in an anechoic room. They are perfect 360 degree radiators where the response is identical on and off axis.

If you measure at the MLP, you get perfect on axis measurement so you don’t need Magic Beans. Irregularities in FR are real and you can EQ them to your preferred tilt.

Now imagine LCR speakers that are not 360 degree radiators. You have anechoic on-axis data. In one scenario, those LCR speakers are parallel and pointing straight with no toe in. In another scenario the LCR speakers are setup so that the L/R are toed in toward the MLP. How do they measure differently?

In the first scenario, a speaker whose dispersion characteristics were such that you lost x dB every y degrees from 20-20kHz, would mean that you could EQ full range since the difference in volume from speaker rotation represents the entirety of the difference.

But if we look at even the JBL 708P you are still shifting 3-4 dB in that +/- 20 degree range.
View attachment 337515

Option 1 is to just correct to a few hundred Hz and ignore the rest. Problem is that you lose the ability to EQ above the transition frequency.

Option 2 is to measure the axis to the MLP in anechoic, apply the PEQ at that point. In an anechoic room, that works.

Now we go in room. What you want to do is identify the room effects and identify the sound from the axis at the listening point.

With a standard Dirac/Audyssey, they take averages at multiple spots, hoping that the weighting is such that each measurement is equal, and you can take more measurements near a seat you want to favor.

Since you say MMM at different distances correlates with the NFS at different distances, are you saying that Magic Beans takes the nearfield at a few positions and the MLP, and then picks a curve to send to Dirac such that, instead of pulling everything down or up to the target curve, it does a "50%" correction (or some amount other than maximum) to account for the imperfect in-room measurements?

A lot of the details you have added in these posts, such as MMM at different distances compared to NFS data, "weighting" the correction differently in the bass versus the treble, and picking how much of a correction to apply and what to ignore along the way, make Magic Beans seem a lot more scientific. But I am still trying to wrap my head around what exactly you are doing that makes it better than just using a bass-only room target up to the transition frequency (if you have an awesome speaker like a Genelec or Meyer Sound), or biting the bullet and doing a full 20-20k correction at the MLP (if you have something like a Bose 901, which is inherently dependent on full-range EQ even in anechoic conditions). The Bose 901 is a unique case, since it gets closer to a consistent FR over larger areas anyway: you are getting a mix of all those reflections smeared in time. It's almost like a moving speaker, as opposed to a moving mic, when you measure at a few different positions.
There is a lot here. Maybe we can address one thing at a time. What is the thing you would like me to answer first? We can get to them all eventually.
 
I agree. That's why I just asked lightly. Don't get me wrong. :D
It's important that I state that the goal for True Target is to provide a target curve that is more correct than the default target curves provided by the various auto-calibration software. Many people like to use the Harman curve, based on the JBL Professional M2 measured in various rooms, but they fail to realize that that research was descriptive, not prescriptive, and wasn't intended for use as a target. Unless someone is using JBL M2s and their room's response is exactly the average of all the rooms measured, the Harman curve is probably not correct for them.

People spend lots of money on their equipment and on these "room correction" software that they expect to automatically provide them with correct results. At the end of the calibration process for most of the popular software, it asks the user to enter their target curve. Most people don't know what to use and end up using the default.

If you were to measure your speaker near-field after applying some of those corrections, you may be surprised to see that they make the response of the speaker worse than not applying EQ at all. This is especially true above the transition region. Some do a worse job than others. I would be interested to see some people's NF response after auto-calibration using the default settings. I think we'll start to see the issues with other methods.
 
I guess you can say that. We try to correct the speaker response above the transition region, and the speaker and room interaction below that. I don't really like the term "room correction" because I think that needs to be done using physical means, using treatment.
Treatments won't be effective in bass frequencies -- certainly not in situations where someone would use your app. :)

The approach we like to advocate is buying a proper speaker without deficiencies in its anechoic measurements (as Dr. Toole stated). Once there, the job becomes simple, as you only need to optimize the bass frequencies, which are predominantly a function of the room, not the speaker. If a speaker does have flaws and you have no choice but to use it, the best way is to make that correction precisely, based on measurements. But I can see using your method (assuming its accuracy is shown across a good number of speakers) to determine that.
 
Treatments won't be effective in bass frequencies -- certainly not in situations where someone would use your app. :)

The approach we like to advocate is buying a proper speaker without deficiencies in its anechoic measurements (as Dr. Toole stated). Once there, the job becomes simple, as you only need to optimize the bass frequencies, which are predominantly a function of the room, not the speaker. If a speaker does have flaws and you have no choice but to use it, the best way is to make that correction precisely, based on measurements. But I can see using your method (assuming its accuracy is shown across a good number of speakers) to determine that.
I agree with pretty much everything said here. If you buy great speakers, just EQ the bass.

The only issue I've seen when it comes to EQ based solely on the anechoic measurements is that the nearby surfaces and corresponding reflections become part of the response. How many systems have you seen where a nearby wall, console, or TV become part of the speaker's waveguide essentially? I'm guilty of this as well. The realities of a living room system come with many compromises, which I believe requires an in-situ NF measurement. I'm open to a discussion about that.

I would assume that the placement of my speakers isn't doing wonders for my directivity. :facepalm:
 

Attachments

  • PXL_20231228_030819064.jpeg
  • PXL_20231228_030840126.jpeg
  • PXL_20231228_030912576.jpeg
  • PXL_20231228_030949247.jpeg
  • PXL_20231228_030959067.MP.jpeg
There is a lot here. Maybe we can address one thing at a time. What is the thing you would like me to answer first? We can get to them all eventually.

For the record, I am also rooting for you! The more I learn the less snake oil it sounds. But I still want to wrap my head around the nuts and bolts and not just analogies.

Here’s how I look at it.
1) You have Ratbuddysey to let you apply your preferred PEQ by giving Audyssey iOS app a target curve that does the EQ you actually want. You need Ratbuddysey because there may be huge variability in how you measure and we just care about the PEQ that is chosen as opposed to the actual target. You care about the relative target to the measurement.

2) Magic Beans is about giving Audyssey and Dirac better curves.

Using Dirac to control up to the transition frequency is good, but there is no equivalent of Ratbuddysey to tell Dirac to just apply +3 dB, Q 2.4, at 10 kHz if that's what the anechoic EQ predicts.

So unless you had a StormAudio or HTP-1 where you can run Dirac to 500Hz and then apply post-Dirac PEQ above the transition frequency you either need to limit your correction or hope that your target curve chosen and your measurements actually end up giving what you need.

So what I gather is that Magic Beans helps to better determine what the anechoic-equivalent EQ should be above the transition frequency with decent correlation/prediction by measuring the difference near field and MLP and developing a predictive model based upon using some known speakers to start where NFS data is available.
 
Here's an example from May 2023, using the Magic Beans True Target method using a UMIK-1 with the cal profile, UMIK-1 without the cal profile, a cheap IMM-6 mic, and a phone mic. Ignore that the phone mic says MLP and not near. Just know it measured equally bad NF.
The plots presented in the accompanying graphic are somewhat confusing. Was there a data offset applied to these plots, or is that the native response obtained from the sweep? If you measured the Umik-1 without a calibration file, what sensitivity factor was applied to the measurement? Without a calibration file there is no "Sens Factor" for REW to adjust the SPL level to. To see the native response of the mic, you would have to create a calibration file with the "Sens Factor" on the first line and at least the same number of frequency points as the factory .cal file, with all the values set to 0 dB.

Factory 0 degree .cal file (615 pts):
"Sens Factor =-6.868dB, SERNO: 700****"
10.054 -6.6321
10.179 -6.4576
10.306 -6.2864
10.434 -6.1183
10.564 -5.9534
10.696 -5.7916
10.829 -5.6329
10.964 -5.4773
. . .

Native response 0 degree .cal file (615 pts):
"Sens Factor =-6.868dB, SERNO: 700****"
10.054 0.0000
10.179 0.0000
10.306 0.0000
10.434 0.0000
10.564 0.0000
10.696 0.0000
10.829 0.0000
10.964 0.0000
. . .
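If you'd rather not hand-edit 615 points, zeroing out a factory cal file can be scripted. A sketch assuming the usual .cal layout (header line, then one frequency/value pair per line); `native_response_cal` is a hypothetical helper, not a miniDSP or REW tool:

```python
def native_response_cal(factory_cal_text):
    """Given the text of a factory .cal file, return a 'native response'
    version: same Sens Factor header and frequency points, but all
    correction values set to 0 dB (so REW applies no FR correction)."""
    lines = factory_cal_text.strip().splitlines()
    out = [lines[0]]                 # keep the "Sens Factor" header line
    for line in lines[1:]:
        freq, _value = line.split()  # drop the factory correction value
        out.append(f"{freq} 0.0000")
    return "\n".join(out)

# Tiny made-up excerpt of a factory file:
factory = ('"Sens Factor =-6.868dB, SERNO: 700****"\n'
           '10.054 -6.6321\n'
           '10.179 -6.4576')
print(native_response_cal(factory))
# prints the header followed by "10.054 0.0000" and "10.179 0.0000"
```

Loading the zeroed file keeps the SPL level calibrated via the Sens Factor while exposing the mic's uncorrected frequency response.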


I only ask because you can change the sample rate for the UMIK-1 on Mac in the Midi settings.
The Umik-1 is a 48 kHz sampling device. Changing the sample rate in Audio MIDI Setup to something other than the native sampling rate will affect all response calculations, badly. The Umik-2 has several sample rates to choose from, and that rate must match the output device's sample rate (or vice versa), otherwise errors will result.
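The "shifted over" look has a simple explanation: interpreting a capture at the wrong sample rate rescales the whole frequency axis by the ratio of the assumed rate to the device's true rate. A quick sketch (function name is illustrative):

```python
def apparent_frequency(true_freq_hz, device_rate=48000, assumed_rate=44100):
    """If audio captured at device_rate is interpreted at assumed_rate,
    every frequency is scaled by assumed_rate / device_rate, which shifts
    the entire measured response along the frequency axis."""
    return true_freq_hz * assumed_rate / device_rate

# A 1 kHz tone captured at 48 kHz but read as 44.1 kHz shows up at 918.75 Hz:
shifted = apparent_frequency(1000.0)
```

On a log-frequency plot that constant ratio appears as a uniform sideways shift of the curve, which matches the symptom described above.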
 
The plots presented in the accompanying graphic are somewhat confusing. Was there a data offset applied to these plots, or is that the native response obtained from the sweep? If you measured the Umik-1 without a calibration file, what sensitivity factor was applied to the measurement? Without a calibration file there is no "Sens Factor" for REW to adjust the SPL level to. To see the native response of the mic, you would have to create a calibration file with the "Sens Factor" on the first line and at least the same number of frequency points as the factory .cal file, with all the values set to 0 dB.

Factory 0 degree .cal file (615 pts):
"Sens Factor =-6.868dB, SERNO: 700****"
10.054 -6.6321
10.179 -6.4576
10.306 -6.2864
10.434 -6.1183
10.564 -5.9534
10.696 -5.7916
10.829 -5.6329
10.964 -5.4773
. . .

Native response 0 degree .cal file (615 pts):
"Sens Factor =-6.868dB, SERNO: 700****"
10.054 0.0000
10.179 0.0000
10.306 0.0000
10.434 0.0000
10.564 0.0000
10.696 0.0000
10.829 0.0000
10.964 0.0000
. . .



The Umik-1 is a 48 kHz sampling device. Changing the sample rate in Audio MIDI Setup to something other than the native sampling rate will affect all response calculations, badly. The Umik-2 has several sample rates to choose from, and that rate must match the output device's sample rate (or vice versa), otherwise errors will result.
The sample rate change was accidental, but I caught it. I decided to see what the difference would be and what was shown was a shift in the response.

The graphs were separated on purpose using the function in REW to do so.
 
For the record, I am also rooting for you! The more I learn the less snake oil it sounds. But I still want to wrap my head around the nuts and bolts and not just analogies.

Here’s how I look at it.
1) You have Ratbuddysey to let you apply your preferred PEQ by giving Audyssey iOS app a target curve that does the EQ you actually want. You need Ratbuddysey because there may be huge variability in how you measure and we just care about the PEQ that is chosen as opposed to the actual target. You care about the relative target to the measurement.

2) Magic Beans is about giving Audyssey and Dirac better curves.

Using Dirac to control up to the transition frequency is good, but there is no equivalent of Ratbuddysey to tell Dirac to just apply +3 dB, Q 2.4, at 10 kHz if that's what the anechoic EQ predicts.

So unless you had a StormAudio or HTP-1 where you can run Dirac to 500Hz and then apply post-Dirac PEQ above the transition frequency you either need to limit your correction or hope that your target curve chosen and your measurements actually end up giving what you need.

So what I gather is that Magic Beans helps to better determine what the anechoic-equivalent EQ should be above the transition frequency with decent correlation/prediction by measuring the difference near field and MLP and developing a predictive model based upon using some known speakers to start where NFS data is available.
Thank you for your support. Going from snake oil to not snake oil is quite an upgrade. ;-)

1) I've never used Ratbuddysey, but I know what it is. I think MultEQ-X is needed to get the most out of a Denon or Marantz product because it allows you to exclude all measurements taken by Audyssey. Therefore, it allows you to use the device as a filter bank with as many channels of processing as the device supports. I think of it as a giant MiniDSP. Without being able to exclude Audyssey's measurements and the corresponding correction to flat, I can't trust that any corrections made will give good results.

I would be interested in hearing from others, but I would like to see a NF response of your speaker before and after auto-calibration with X software.

Here's an example of a measurement of a Monolith Encore T5 tower that I took during my event where I presented MB to a group of people. In Green, you see the natural NF response (bad UMIK-1 cal. causing the artificial rise in HF response.) In Blue, you see the NF response after Audyssey corrected for flat at MLP. So without any target curves, this is the correction it made initially. That doesn't seem like a good starting point for correction, IMO. I think that happens because they apply an overly precise correction curve based on the MLP measurement. I asked them about this and the response was something to the effect of, why would you want to measure NF? :)

With_and_Without_Audyssey.jpeg

Here's the NF response of that speaker after using my method of correction. (This was from an August 2023 build, so changes have been made since.) All the Audyssey measurements were excluded in MultEQ-X, and the filters from Magic Beans (MB) were imported.

MB_in_orange.jpeg


2) Yes, it's about using target curves that make sense for your speakers in your room at your MLP.
I think you might be overcomplicating what it's doing. We don't have to predict anything, because we derive the room response by looking at the difference between NF and MLP measurements. Our target is for the NF response to be flat, but we arrive at that not simply by inverting the NF response. We use the MLP measurement to target the room response curve and that flattens the NF response at higher frequencies. It also takes into account where the response is no longer in the direct-dominant field, and we start making corrections based on the MLP response, where the room and the direct sound of the speaker are inseparable. In the post MB correction, you can see where the speaker starts to transition from an ideal flat response, to something less controlled as the room begins to take over.
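For readers trying to picture that hand-off: one way to sketch the idea is a crossfade between an MLP-based (room-dominated) correction below the transition region and an NF-based (direct-sound) correction above it. This is purely illustrative, not the actual Magic Beans algorithm, whose details aren't public; the function name, transition frequency, and blend width are all made up:

```python
import numpy as np

def blended_correction(freqs, nf_db, mlp_db, target_db,
                       transition_hz=300.0, width_octaves=1.0):
    """Illustrative crossfade: below the transition the correction chases
    the MLP (room-dominated) response; above it, the near-field (direct
    sound). Not the actual Magic Beans algorithm."""
    octaves = np.log2(freqs / transition_hz)
    w = np.clip(octaves / width_octaves + 0.5, 0.0, 1.0)  # 0 = room, 1 = direct
    corr_room   = target_db - mlp_db   # correction implied by the MLP curve
    corr_direct = target_db - nf_db    # correction implied by the NF curve
    return (1.0 - w) * corr_room + w * corr_direct

freqs = np.array([50.0, 150.0, 300.0, 600.0, 2000.0, 8000.0])
nf  = np.zeros(6)                                  # hypothetical flat near-field
mlp = np.array([6.0, 4.0, 2.0, 0.0, -1.0, -3.0])   # hypothetical room gain, dB
corr = blended_correction(freqs, nf, mlp, target_db=0.0)
```

With a flat near-field, the blended correction fully inverts the room gain at 50 Hz and applies no correction at 8 kHz, mirroring the "room takes over below, direct sound rules above" behavior described in the post.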
 