
Understanding Audio Measurements


Staff Member
CFO (Chief Fun Officer)
Feb 13, 2016
Seattle Area
Here is a tutorial and documentation on measurements I perform. I will update it from time to time to keep it current.

The weapon of choice for me is the Audio Precision Sys-2522. Here is a picture of it in my rack (top box):
Audio Precision Analyzer.jpg

This is a very capable device able to work both in digital and analog domains. I can for example use the analog output in both balanced and unbalanced configuration to drive an amplifier, and measure its output again using either balanced or unbalanced inputs. In addition, it has digital output capable of generating both AES/EBU balanced and S/PDIF plus Toslink optical. All of these ports can be programmed to create jitter on demand to test robustness and fidelity of digital input on both consumer and professional equipment.

Retail price was originally around $25,000, which in today's dollars is probably $40K. The box is rather old and has been discontinued. Fear not: it is still exceptionally capable, since the user interface and some of the processing run on a PC connected over USB. As such, it still operates like a workhorse. An ex-salesman for Audio Precision (AP) said in an online review that their biggest competition was eBay (i.e. used Audio Precision units)!

A bit of background on AP: Tektronix and HP used to own audio instrumentation (among other domains). Their boxes were expensive but quite limited. A group of Tektronix engineers left and started Audio Precision, where, as mentioned above, the UI functionality ran on the PC. The device was also fully capable, including features such as stereo measurements which the Tek/HP gear lacked. I heard about them when I was running engineering at Sony in the early 1990s. I was about to buy an HP unit until I heard about AP and could not believe how much better it was. So I purchased the unit and found out that no one in Sony Japan knew about it either.

The unit is no longer in calibration and I am in no mood to spend big bucks getting it calibrated. Such gear drifts very little and at any rate, you should be looking at measurements in relative terms and orders of magnitude. As an example, if a distortion spike sits at -130 dB on a graph, it could be a dB up or down. That doesn't matter for the type of analysis we are doing.

By the way, John Atkinson at Stereophile performs all of his measurements using a unit almost as old as mine (the AP 2700; see https://www.stereophile.com/asweseeit/108awsi). I am pretty sure his has not been calibrated recently either. So I am in good company. :)

AP has upgraded their gear, but all but one of their new units underperform my analyzer! The only one that does better is the APx555 series, which retails for $28,000. One day I might upgrade to that, but the difference in performance is not worth it right now. My analyzer is able to go below the threshold of hearing on many measurements.

Audio Precision and Rohde & Schwarz dominate the high end of the audio measurement field and command high prices. You can buy cheaper units such as the Prism Sound dScope, which I have also used, but you lose some performance in the front-end.

Theory of Operation
At a high level, the Audio Precision is nothing more than an analog-to-digital converter. Input signals are digitized and then analyzed and reported as graphs. So in theory you could use a sound card to do the same thing. The big difference is that the AP is a known quantity, so results can be replicated by others, whereas sound cards come in so many variations that comparing their results gets hard.

Another huge difference is the analog scaling front-end in the AP. The controller in the analyzer constantly monitors input levels and scales them to the most suitable range for its ADC. This allows the internal ADC to work at its most linear and optimized level. In addition, the same logic allows measurement of high-voltage inputs up to something like 150 volts! This is necessary for measuring amplifiers that can output such high voltages. If you connected an amp to a sound card input, it would blow up the sound card in an instant.
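The autoranging idea can be sketched in a few lines of Python. Everything here is a hypothetical illustration: the range values and function names are mine, not the AP's actual ranges or firmware logic.

```python
# Hypothetical sketch of an autoranging front-end: pick the smallest
# input range that still contains the signal peak, so the ADC sees a
# signal close to its full scale. Range values are illustrative only.

RANGES_V = [0.1, 1.0, 10.0, 100.0, 160.0]  # selectable full-scale peak voltages

def pick_range(peak_volts):
    """Return the smallest range that accommodates the peak, or raise."""
    for fs in RANGES_V:
        if peak_volts <= fs:
            return fs
    raise ValueError("input exceeds maximum safe range")

def normalized_input(peak_volts):
    """Fraction of ADC full scale actually used after scaling."""
    return peak_volts / pick_range(peak_volts)
```

For example, a 70-volt amplifier output would land on the 100 V range and use 70% of the ADC's span, instead of clipping (or frying) a fixed low-voltage input.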

Signal Processing
Some of you may be wondering about a catch-22: how can an older analog-to-digital converter keep up with the latest in digital-to-analog conversion? Wouldn't the noise from the old ADC dominate? The answer is no. Using signal processing we can achieve an incredible amount of noise reduction. This occurs when we use the Fast Fourier Transform (FFT) to convert time-domain signals to the frequency domain. By using many audio samples, we gain on the order of 32 dB or so of noise reduction (called "FFT gain"). This allows us to dig as deep as -150 dB looking for distortion products. This floor is well beyond the dynamic range of hearing (around 116 dB).

Unfortunately this FFT gain is a source of a lot of confusion. By increasing or decreasing the FFT size, we can make the noise floor of a DAC look better or worse on demand. Without compensation, graphs using different FFT sizes cannot be compared. I use a 32K FFT, which is a good balance between noise reduction and not having so much detail that the graph becomes hard to interpret. I have seen measurements using 2-million-point FFTs which produce extremely low measured noise floors that can be quite misleading.

In addition, averaging can be used to reduce variations in the noise floor. Combined, these two techniques perform miracles in allowing us to measure the analog output of equipment at incredible resolution.

Effect of FFT size on audio measurements.png

The bottom line in cyan is the 32K FFT I use in all of my measurements unless noted otherwise. Notice how it has lowered the noise floor much more than the 256- and 1024-point FFTs. More importantly, the noise floor is now so low that we can see spikes that are 130 dB below our main tone at 12 kHz! The noise floor itself is at -140 dB, which is far better than either the DAC or ADC can do on its own (approaching 24 bits), let alone the combination of the two. Such is the power of software and signal processing. Once in a while we get a free lunch. :)
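The FFT-gain effect is easy to reproduce in a short simulation. This is a sketch, not the AP's processing: it buries a 12 kHz tone in synthetic white noise at an assumed -100 dB RMS and shows the median per-bin noise level falling roughly 3 dB for every doubling of the FFT size.

```python
import numpy as np

fs = 48_000
rng = np.random.default_rng(0)

def noise_floor_db(n_fft):
    """Median per-bin level of synthetic white noise, in dB relative
    to the peak of a full-scale 12 kHz tone."""
    t = np.arange(n_fft) / fs
    tone = np.sin(2 * np.pi * 12_000 * t)        # the "signal"
    noise = rng.normal(scale=1e-5, size=n_fft)   # roughly -100 dB RMS noise
    spectrum = np.abs(np.fft.rfft((tone + noise) * np.hanning(n_fft)))
    spectrum /= spectrum.max()                   # 0 dB = tone peak
    return 20 * np.log10(np.median(spectrum))

for n in (256, 1024, 32768):
    print(n, round(noise_floor_db(n), 1))
```

With these assumed levels, the 32K floor lands tens of dB below the 256-point floor even though the actual noise in the signal is identical, which is exactly why the FFT size must be stated before noise-floor plots can be compared.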

Test Configuration
For the above test, I ran the digital output of the AP through a BNC-to-RCA adapter into a 3-foot coax cable, which then connected to the S/PDIF input of the Pro-Ject. This is the same as you using a transport or USB-to-S/PDIF converter, except that my analyzer is the source. Inside the AP I can set the sampling rate and bit depth. The default for the J-Test above is 24 bits at 48 kHz; I sometimes run 44.1 kHz instead. I default to 48 kHz because DACs are sometimes optimized only for 44.1 kHz (and multiples thereof) and not 48 kHz.

The output of the DAC is unbalanced RCA, which I connect back using a 6-foot or so Monster Cable interconnect I have had for a couple of decades. It is a beefy cable with pretty tight RCA terminations.

Sometimes I test the balanced output of equipment. In that case I usually use a set of balanced cables I purchased from Audio Precision. They are kind of thin but very short so fine for testing.

For other tests like headphones I use either adapters from 3.5 mm/TRS to RCA or cables with those terminations on them. So nothing fancy here.

PC Testing
Today many of us use external DACs connected to our computers over USB. For this reason I have converted some of my tests so that the source is generated by the computer but then captured and analyzed by the AP. For that, I use my normal everyday music player, Roon. In almost all cases I allow Windows 10 to detect the sound card and "gen up" the WASAPI interface, which I use in exclusive, "bit-exact" mode. I usually capture the output of Roon's format detection in my reviews. Here is the output, for example, for the Pro-Ject Pre Box S2 Digital:


Roon has a nice indicator showing whether it is playing the file bit-exactly or converting it, which is useful.

I used to use Foobar2000, which you may see in my older reviews, to the same effect.

Note that if Windows detects the device as is the case here, that is what I test. I only install drivers if I have to which thankfully these days is rare.

The computer I use for testing is my everyday laptop running Windows 10. It is an HP Z series "workstation." I am usually running other things on it while testing. No good DAC cares about that although some rare ones do (e.g. Schiit Modi 2). I have a Mac but have not done any work on it.

I will focus on DAC testing here which is most of what I do. Over time I will add to it for testing of other products.

j-Test for Jitter and Noise
My starting test is always the so-called "J-Test." This is a special test developed by the late Julian Dunn, who was the most recognized authority on standardization and issues around serial digital audio transmission (i.e. AES/EBU and S/PDIF). He developed the J-Test signal as a way to maximize the amount of jitter that may be induced by the cable, while at the same time keeping the signal spectrally pure in its own right.

The J-Test comprises a square wave at 1/4 the sampling rate. In my default testing the sampling rate is 48 kHz, so the square wave lands at 12 kHz. Oddly, what comes out of the DAC is not a square wave but a sine wave! Why? Because a square wave is a sine wave plus an infinite series of odd harmonics. The third harmonic is the first addition, at three times 12 kHz, or 36 kHz. Because our sampling rate is 48 kHz, the DAC filters out everything above half that rate, i.e. 24 kHz. Therefore none of the harmonics of the square wave get through. The only thing remaining is the fundamental: a 12 kHz sine wave!

square wave.png
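You can verify this numerically: a sampled square wave at exactly fs/4 has only a single spectral component below Nyquist. A minimal numpy sketch:

```python
import numpy as np

fs = 48_000
n = 4_800                                   # 100 ms of samples
# A square wave at fs/4 = 12 kHz is just the repeating pattern +1 +1 -1 -1
square = np.tile([1.0, 1.0, -1.0, -1.0], n // 4)

spectrum = np.abs(np.fft.rfft(square)) / (n / 2)
freqs = np.fft.rfftfreq(n, d=1 / fs)

peak_bin = int(np.argmax(spectrum))
print(freqs[peak_bin])                      # 12000.0: the only non-zero bin
```

In fact the samples +1 +1 -1 -1 are just a 12 kHz sine of amplitude √2 sampled at its ±45° points, so the DAC's reconstruction filter has nothing left to remove below 24 kHz.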

Why do we use a square wave as opposed to a 12 kHz sine wave? To create a sine wave we need to use fractional numbers, but digital audio samples are all integers. We can convert those fractions to integers, but then we must add some amount of noise as "dither," otherwise we create distortion. That raises the measured noise floor, which is not good. A square wave, on the other hand, is just alternating high and low numbers at fixed PCM values, so there is no rounding, dither, or noise to add or worry about.

In addition to the above, the J-Test signal toggles by one bit at a fixed, low frequency. That toggling bit is designed to force all the bits to flip. Even though the level hardly changes, the effect on the cable and the receiver is significant. Normally this flipping bit is visible in spectrum analysis when performing the test at 24 bits as I do. However, you never see it in my tests since my FFT is not large enough to make it visible (you do see it in JA's Stereophile tests). So from a practical point of view, you can ignore everything I said in this paragraph. :)
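For the curious, a J-Test-like stimulus can be sketched as follows. This is a paraphrase of Dunn's construction with an illustrative amplitude, not the exact sample values from the standard; consult the original specification for those.

```python
import numpy as np

fs = 48_000
bits = 24
n = fs                                        # one second of samples

# fs/4 square wave; the DAC turns this into a 12 kHz sine (see above).
# The amplitude here (1/4 of full scale) is illustrative only.
tone = np.tile([1, 1, -1, -1], n // 4) * 2 ** (bits - 3)

# fs/192 square wave (250 Hz at 48 kHz) confined to the lowest bit:
# its job is to periodically flip long runs of bits in the
# two's-complement samples traveling over the serial interface.
lsb = np.repeat(np.tile([0, 1], n // 192), 96)

jtest = (tone + lsb).astype(np.int32)
```

The level change from the low-frequency component is a single code step, yet it changes nearly every bit in the serial data stream, which is what stresses the interface and receiver.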

The measurement you see posted above is the J-Test. We have our single peak in the middle representing the 12 kHz tone. Everything to the sides of it would, in an ideal DAC, be an infinitely low noise floor. Real DACs have a higher noise floor plus spikes here and there. If spikes are symmetrical around our main tone, they usually indicate jitter. Other spikes can exist by themselves, indicating idle tones created by interference or other problems in the DAC. In other words, the quieter the space around our 12 kHz peak, the better.

I usually show two devices on the same graph so that the contrast is easy to see. Here is an example from my review of the Schiit Modi 2:


As we see, the iFi iDAC2 is much, much cleaner than the Schiit Modi 2. It has a flat, smooth noise floor, whereas the Schiit Modi 2 has a bunch of distortion spikes (deterministic jitter) in addition to a raised noise floor around our main 12 kHz tone (low-frequency random jitter). The iFi iDAC2 is clearly superior.

Note that due to a psychoacoustic principle (masking), jitter components hugging our main 12 kHz tone are much less audible than ones farther away from it. That distance is the jitter frequency. We can put this in the form of a graph, as done by Julian Dunn:

Jitter Spec.PNG

Note: there is no jitter in the test signal itself. It is the nature of a higher-frequency tone like 12 kHz to accentuate jitter, because the error jitter causes grows with how fast our signal changes. Small clock variations don't matter to a slow, low-frequency wave like 60 Hz. But make it 12,000 Hz, and small variations in the timing of the next sample become a much bigger deal.
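The frequency dependence follows directly from the slope of a sine: a timing error Δt produces an amplitude error of roughly 2·π·f·A·Δt. A quick sketch with an assumed 1 ns of jitter (the helper name and jitter value are mine, for illustration):

```python
import math

def jitter_error_db(freq_hz, jitter_s):
    """Worst-case error of a full-scale sine from a timing error dt,
    relative to full scale: error ~= 2*pi*f*dt (small-angle slope)."""
    return 20 * math.log10(2 * math.pi * freq_hz * jitter_s)

for f in (60, 12_000):
    print(f, round(jitter_error_db(f, 1e-9), 1))
```

With 1 ns of jitter the error sits around -128 dB for a 60 Hz tone but around -82 dB at 12 kHz: the 200× frequency ratio buys the J-Test's high-frequency tone about 46 dB of extra sensitivity to the same clock imperfection.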

It is said that the J-Test is useless for non-S/PDIF interfaces. This is not so. Yes, the bit-toggling part was originally designed for AES/EBU and S/PDIF, but the same toggling causes activity inside the DAC which can produce distortion/jitter/noise. And per the above, its 12 kHz tone is just as revealing of jitter.

Here is Julian Dunn in his excellent write-up for Audio Precision on this topic:



While the above talks about AES/EBU balanced digital interface, the same is true of S/PDIF.

Linearity Test
I run this test using the AP as the digital generator because it needs to be in control of changing the level. It produces a tone which it makes smaller and smaller, comparing the analog output to the one expected from the digital samples:

SPDIF Linearity Exasound E32.png

I usually create a marker where the error reaches about 0.1 dB. I then look up its amplitude (-112 dB above) and divide that by ~6 to get what is called ENOB: effective number of bits. This gives you some idea of how accurate the DAC is, in units of bits. Above, we are getting about 18 bits.
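The arithmetic can be captured in one small helper (the exact step is 20·log10(2) ≈ 6.02 dB per bit, which is where the "divide by 6" rule of thumb comes from; the function is my illustration, not an AP feature):

```python
def enob(level_db):
    """Effective number of bits from the amplitude (dB below full scale)
    at which linearity error first reaches ~0.1 dB; ~6.02 dB per bit."""
    return abs(level_db) / 6.02

print(round(enob(-112), 1))   # about 18.6 bits
```

So a DAC that stays accurate down to -112 dB resolves about 18.6 effective bits, comfortably above the 16 bits needed for clean CD playback.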

Here is an example of a bad performing DAC, the Schiit BiFrost Multibit:

Schiit BiFrost Multibit DAC Linearity Test.png

In an ideal situation we would have linearity going to 20+ bits. The reason is that in mid-frequencies, absolute audible transparency requires about 120 dB, which is 20 bits × 6. As a lower threshold I like to see at least 16 bits of clean reproduction. There is no reason we can't play CDs/CD rips at 16 bits without error after so many years since the introduction of that format.

Linearity Test Take 2
Another way to look at linearity is to see how well a very small sine wave at just -90 dB can be reproduced using 24-bit samples and no dither. Put more simply, we want to see whether the lowest bits of a 16-bit audio sample can be reproduced cleanly. If so, we should see a perfect sine wave. Here are two examples, first the Teac NT-503 and then the Exasound E32:



The beauty of this measurement is that we can visually confirm accuracy. Any noise or incorrect conversion of digital to analog causes the waveform to deviate from ideal.
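A quick simulation shows why this test separates DACs: at -90 dBFS the sine's peak is only about one 16-bit step tall, but hundreds of 24-bit steps tall. This sketch is my own illustration, not the AP's test generator:

```python
import numpy as np

fs = 48_000
f = 1_000
t = np.arange(fs // f * 4) / fs                       # four cycles
sine = 10 ** (-90 / 20) * np.sin(2 * np.pi * f * t)   # -90 dBFS sine

def quantize(x, bits):
    """Round to the nearest code of a signed `bits`-bit converter (no dither)."""
    scale = 2 ** (bits - 1)
    return np.round(x * scale) / scale

# With 24-bit steps the -90 dB sine is still finely resolved; with
# 16-bit steps it collapses to roughly three levels (-1, 0, +1 LSB).
print(len(np.unique(quantize(sine, 24))), len(np.unique(quantize(sine, 16))))
```

A DAC that cannot render those bottom 24-bit codes accurately will show a visibly deformed waveform, which is exactly what the scope captures above reveal.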

Distortions can be quite extreme as in this example on the left of the Schiit BiFrost Multibit:


There is just no ability to reproduce 16 bit audio samples correctly with the Schiit DAC on the left.

Harmonic Distortion
This is a classic test with a twist. We play a 1 kHz tone, take the output of the device, and subtract that same 1 kHz tone. We then convert everything to the frequency domain and plot its spectrum. What we see is both the noise floor and distortion products represented as spikes:


Remember again the theory of masking: earlier spikes are less audible than later ones. In the above comparison we see many more distortion products from the Schiit BiFrost Multibit DAC relative to the Topping DX7, which not only has a lower noise floor, but whose distortion peaks die down before those of the Schiit BiFrost Multibit even start!
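The whole procedure can be sketched end to end: synthesize a 1 kHz tone, push it through a stand-in nonlinearity (my invention, purely for illustration), notch out the fundamental in the frequency domain, and report the residual. The bin arithmetic assumes a one-second capture so bin spacing is exactly 1 Hz.

```python
import numpy as np

fs = 48_000
n = fs                                        # one second -> 1 Hz per FFT bin
t = np.arange(n) / fs
tone = np.sin(2 * np.pi * 1_000 * t)

# Stand-in DUT: a touch of 3rd-order nonlinearity plus a little noise.
rng = np.random.default_rng(1)
output = tone + 1e-4 * tone ** 3 + rng.normal(scale=1e-6, size=n)

spectrum = np.fft.rfft(output) / (n / 2)      # amplitude-normalized spectrum
residual = spectrum.copy()
residual[1_000 - 2 : 1_000 + 3] = 0           # notch out the 1 kHz fundamental

thd_n = np.sqrt(np.sum(np.abs(residual) ** 2) / np.sum(np.abs(spectrum) ** 2))
print(20 * np.log10(thd_n))                   # ~ -92 dB with these made-up numbers
```

The AP does this with far more care (analog notch, windowing, calibrated levels), but the principle, fundamental removed and everything left counted as distortion plus noise, is the same.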

THD+N vs Frequency
This test shows the total harmonic distortion+noise at each frequency band:


Needless to say, the lower the better.

Perceptually, THD+N is a rather poor metric since it assigns the same demerit to all harmonic distortion products. As I explained, from a masking point of view, later spikes matter more than earlier ones. So don't get fixated on small differences between devices under test. Large differences like the one shown above, though, are significant, and show the difference between good design/engineering and not-so-good.

Intermodulation Distortion
When an ideal linear system is fed two tones, it produces two tones. But when they are fed to a system with linearity errors, we get modulation products above and below our two tones. This is called intermodulation distortion. There are many dual-tone tests; for this I have picked the SMPTE test, which combines a low frequency (60 Hz) with a high frequency (7 kHz) in a 4:1 ratio. Here is the explanation from Audio Precision:

The stimulus is a strong low-frequency interfering signal (f1) combined with a weaker high frequency signal of interest (f2). f1 is usually 60 Hz and f2 is usually 7 kHz, at a ratio of f1:f2 = 4:1. The stimulus signal is the sum of the two sine waves. In a distorting DUT, this stimulus results in an AM (amplitude modulated) waveform, with f2 as the “carrier” and f1 as the modulation.

In analysis, f1 is removed, and the residual is bandpass filtered and then demodulated to reveal the AM modulation products. The rms level of the modulation products is measured and expressed as a ratio to the rms level of f2. The SMPTE IMD measurement includes noise within the passband, and is insensitive to FM (frequency modulation) distortion.​

The plotted output looks like this:


Again the lower the graph, the better.
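Here is the analysis side in miniature, using a synthetic DUT with a small second-order term (my stand-in, not the AP's analyzer chain). Note the 4:1 amplitude ratio of the 60 Hz and 7 kHz tones and the sidebands that appear at 7 kHz ± 60 Hz:

```python
import numpy as np

fs = 48_000
n = fs                                        # one second -> 1 Hz per FFT bin
t = np.arange(n) / fs
f1, f2 = 60, 7_000
stimulus = 0.8 * np.sin(2 * np.pi * f1 * t) + 0.2 * np.sin(2 * np.pi * f2 * t)

# Stand-in DUT: a small 2nd-order term creates sidebands at f2 +/- f1.
output = stimulus + 1e-3 * stimulus ** 2

spectrum = np.abs(np.fft.rfft(output)) / (n / 2)
f2_level = spectrum[f2]                       # bin index == frequency in Hz
sideband_rms = np.sqrt(spectrum[f2 - f1] ** 2 + spectrum[f2 + f1] ** 2)
imd = sideband_rms / f2_level
print(20 * np.log10(imd))                     # ~ -59 dB for this synthetic DUT
```

The real SMPTE analyzer demodulates the 7 kHz "carrier" in the time domain and includes in-band noise, but the sideband-to-carrier ratio above captures the essence of the number being reported.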

As with THD+N, this measurement is also psychoacoustically blind. So don't get fixated on small differences.
Frequently Asked Questions (FAQ).

I seem to have to answer the same questions over and over again, here and elsewhere, and deal with some misconceptions about me/AudioScienceReview. So I thought I would create and update this FAQ as I see repeated points.

1. Why don't you listen to audio equipment you review?
I do listen and compare headphone DAC/amplifiers. There, output impedance, distortion or uneven frequency response can result in audible differences.

You must be referring to DACs, where I do not perform listening tests. Unlike headphone amps, I cannot match volumes on DACs without external means; changing their volume controls is hit and miss. Without level matching, listening test results are unreliable. Indeed, through my headphone testing, I go through this time and time again. Without level matching, there is a "night and day" difference in "air," "soundstage," "fidelity," "smoothness," etc. But once I level match, as the line goes in the great movie The Shawshank Redemption, the differences disappear like a fart in the wind. :)

Audible differences between DACs are likely even smaller than between headphone amps. As such, it is even more important to match levels, and to perform the test blind in a controlled manner. So many extraneous factors affect our perception of audio that have nothing to do with what enters our ears.

You may disagree, but remember: you are disagreeing with the entire audio science/research community, and in this forum we follow what the science/research tells us, not made-up notions by audiophiles.

2. We don't listen to graphs, we listen to sound. Why look at graphs?
Agreed. We value listening tests even more than others do. But per #1, listening tests must be controlled and devoid of bias before they are accepted. Once there, I would be the first to put them ahead of graphs and measurements.

Sorry, subjectivists, but uncontrolled testing is of no value, so please don't keep saying your "ears" say different. It is your ears+brain that are saying something different. You have to exclude all the other factors your brain takes into account besides sound before we look at your feedback.

3. Who cares about these measurements that go to -120 dB or whatever? Does it even matter in these extreme cases whether two products are different?
Yes and no. You are correct that the vast majority of audiophiles are insensitive to these types of non-linear distortions. Heck, most would flunk tests of MP3 against CD at high bit rates, so heaven knows they are able to ignore an incredible amount of distortion that is provably there.

The issue at hand is that we want our conclusions to be durable across all people, all content, and all situations. No listening test is going to encompass that. And we know, based on solid data, that there are people who are trained, for example, to hear non-linear distortions better than others. And some listeners are simply more sensitive than others.

In addition, content makes a big difference in how it reveals differences. For example, if some music lacks energy below 30 Hz, then clearly a system that reproduces down to 10 Hz won't be differentiated from one that doesn't. Finding such content for lossy audio compression like MP3 is easy because that work has been done (so-called "codec killers"). Unfortunately we don't have such content for the non-linear distortions we see in DACs, etc. Throwing audiophile content at them is not an answer because there is no evidence it is more revealing than anything else. To wit, none of the codec killers are audiophile tracks, yet they are very revealing.

So the technique used in research is to analyze human hearing and find its extremes. Such analysis has been done and shows that in mid-frequencies we have a dynamic range of about 116 dB. In that regard, if we show that a system's distortions are in all cases below -116 dB, then we are golden. We can stand back and say with high confidence that the system, and the channel, are transparent.

Move that line to, say, -90 dB and the logic becomes hard to prove: some people, on some content, at some listening level, may be able to hear distortions.

Ultimately though, I provide a full set of graphs and not just single numbers. You are welcome to apply your own standards below mine.

3. Why are you guys so obsessed with measurements?
We are not obsessed with measurements; you are focused too much on our reviews. Search the forum and you will see tons more content that focuses on audio science, music, video, movies, etc. Measurements get all the attention, but they are a subset of what we are about.

4. Is Amir NWAVGUY?
No. Just because there are two engineers in the Northwest doesn't mean we are the same person. He is a low-level electrical designer; I started as a designer but moved into management. We have different skills, writing and communication styles, etc. He also had a different audio analyzer than I have.

Having read NWAVGUY’s blog, I say I would not mind being him. The issue is whether he minds very much being me.

5. Why do you emphasize the audio measurement gear you have and how expensive it is?
First and foremost, I paid a ton of money for the bloody thing so cut me some slack as I try to get some respect for it.

Seriously, there is a gold standard and it is Audio Precision. They know it, so they put a ton of goodies in their gear. Combined with the very low volume of product they sell, it is priced way up there.

I have used much lower-cost products like Prism Sound (1/5 the cost of AP). I also know the people there and respect them very much. As such, I wish I could use the Prism Sound, but I opted not to. The simple reason is that other companies have Audio Precision gear and I want them to be able to replicate my results. If I used a Prism Sound, there would be no way to do that given how little that product is used in the field.

The AP is also more capable, with higher precision, in the top-of-the-line unit I have (the APx555). When we are splitting hairs over differences in USB cables and such, every bit of precision comes in handy.

6. What about soundcard measurements?
You can get useful data out of those as seen by measurements performed by some of our members. Per above, I am opting for Audio Precision because it is a standard in audio R&D and measurement.

7. You must be in the pocket of Topping as you give them good marks in your reviews.
Well, I am not. I have been fortunate enough in life to have earned a couple of bucks, so I don't need to do this as a job. I bought my original Topping products (DX7 and D30) myself.

It just happens that when I reached out to Topping about some of their newer products (DX7s and D50), they offered to send them for free. So I accepted them since the cost of purchasing equipment to test is far higher than the donations I receive. My preference would be to not ever get free gear this way but as a practical matter of getting more gear evaluated, I accept them.

However, I do not let a loan or discount impact my reviews. Heaven knows I have made enemies of companies that gave me discounted gear only to see my less-than-stellar review of their product.

8. You must have an agenda against Schiit. You seem to hate all of their products.
Actually, I gave a thumbs up to their Sys audio switcher.

That aside, I can’t afford to have an agenda here. I love you all but I love my reputation in the industry even more. No way, no how do I want to soil that for some personal reason.

I have bought half a dozen Schiit products with my own money. I hate to see them not do well. Alas, that is exactly what they have done.

Ultimately, one thing you can take to the bank is that I am on your side rather than the manufacturer's. I don't care if a company gets upset over my review; I go by the data their product produces on my bench. If there is an error, they can reach out to me and help me correct it. Complaining elsewhere does no good, and neither do immature accusations of bias.

You want biased? Look at all the people who write subjective reviews. Or those who do measurements that show clear issues with Schiit products yet write flowery positive words around them.

Personally, I would love to find more Schiit products that do well so that I don't have to answer this question so much. Time is better spent evaluating more products than dealing with internet conspiracy theories.

9. You seem to be in love with Chinese products and hate American ones.
Not really. I am in love with well-performing products. Some Chinese companies take this business of producing hi-fi seriously: you know, like measuring their products to make sure they perform well. Others, like some Western companies, either don't measure, don't have the equipment to measure, or don't understand the measurements. So is it a surprise that when I put them on the bench they do worse than the Chinese companies that do their homework?

As an American, I would love to wave the flag of excellence, but I can't compromise my ethics and do that when a product doesn't do well on my bench. Again, seek other places if you want the fact that something is American-made to influence what the reviewer says about the waveforms coming out of the device.

10. You seem to make mistakes in your measurements at times. Why should I believe someone so sloppy?
Look, I didn’t take this job voluntarily. There are folks who dot the i's better than I do. The problem is, no one else is doing this work. Thousands and thousands of audio products are out there with no review. Owning proper measuring equipment and knowing how to use it, I decided to do something about that by measuring and publishing. If you can find a better version of me elsewhere, by all means put me out of business. I have a lot of other fun things I like to do, like photography, travel, woodworking, gardening, etc.

11. Microsoft sucks. Why should I care that you worked there?
I am not here to tell you to work for Microsoft or use their products. It just happens that my job at Microsoft was to manage the development of the entire suite of audio, video, and imaging technologies in Windows. I had over 600 engineers and a bunch of expert PhDs and other luminaries working on my team, which taught me a lot. It also enabled me to become a trained listener. It is that body of experience that allows me to have a more informed opinion than most in doing this work. So it gets mentioned.

I also worked for other companies like Sony, where I bought our first Audio Precision back in 1990 or so. So you can go by that experience if you like.

12. Why do you brag about your experience as you just did above?
Well, it becomes necessary when I am challenged on not knowing what I am doing. What else would you like me to say?

Try challenging your doctor on whether he knows what he is doing. I am confident he will seem arrogant too after he points to the license on his wall and his years of experience versus you just reading stuff online.

But sure, it is a failing of my personality that I try to defend myself too much at times. For that, you have my apology.

13. Thirteen is an unlucky number so there is no entry here.

14. You seem very insecure and want attention by creating this forum.
You are in dire need of a mirror, because we all crave that kind of attention. Why else do we spend time online dispensing advice and opinion for free? So don't ask me to apologize for being human. :)

I have been on forums dating back to the early 1980s. I love interacting with others this way, and I put in the work to help and enjoy the praise here and there.

15. Did you post under this or that alias on other forums?
No. I have never signed up under any alias on any forum. I use my alias "amirm" which is what my email alias was at Microsoft. Prior to that I used my full name (which you can find in Usenet archives).

As a member of the industry, I think it is mandatory to bring full transparency by using my real name.

16. Aren't you a troublemaker online and banned from many forums?
Oh, I do have strong opinions about audio (and video). After many decades of this, you can't be human and not have strong opinions.

As to being banned from other forums, I have not been on many. I did co-found another forum, WBF. I had a falling out with my partner and sold him my half of the shares; a year later he decided to ban me because I would question his endorsement of $20,000 USB cables (not a typo). Pretty uncool, but hey, the ASR forum was born out of that disagreement.

I also got banned from Computer Audiophile for defending my review of a forum sponsor there. The stated reason for the ban was some kind of commercial interest in competing products. I think it took just a couple of days to get banned, so I don't think that one counts. :D

For a similar reason, while not banned, and after contributing a ton on the Head-Fi forum, my posts there are now "moderated" (a euphemism for never seeing the light of day).

And after some 15,000 posts on AVS Forum over more than a decade, I got banned there too because they considered my published audio research articles to be "spam."

OK, so maybe I have been banned here and there. Shoot me. :D


I think that is it for now. I will update the list as time goes on.
This is some very dense material. :oops:
But it is explained pretty well in my opinion.
For the linearity test, do you use a test tone? At what frequency? Or a sweep?
There are two linearity tests. The one where I annotate the resolution is run at 200 Hz (default value selected by Audio Precision).

The one with a sine wave output uses 1 kHz.

Both are fixed frequencies. In the former, the amplitude is swept. In the latter, it is kept constant at -90 dB or so.
I see the Pro-Ject Pre S2 digital in the test. Have you measured it?
Just the bit that you see here. I plan to finish the measurements soon, as it is loaned equipment and I put priority on those so that I can return them (boy, that was a run-on sentence if there ever was one!).
Hi Amir,

Very nice.

One small thing needs correction. I have noticed that you are in the annoying habit of typing Khz, when it's obvious that you really mean kHz. Please clean up your act on using correct SI notation.

Uppercase K denotes kelvin. Lowercase k denotes kilo.

Uppercase H denotes hertz. Lower case h denotes hour.

This is the science-based audio forum. Stop being so careless. Thank you!
Correction, I should have written that Hz (uppercase H, lowercase z) is hertz.

Uppercase H is henry (the unit of inductance).
The FR plots you do are @48kHz. Is that for a 16 or 24 bit source?

Also, the AP's graph (screen grabs) resolution show approximately 30 steps per division on the 0.25dB graticule. So approx 720 vertical per plot display, giving a 0.008dB minimum step resolution on the plot?

The A/D of the AP is 16 or 24 bit isn't it? And where does all the other intermediate data it is capturing go? Is it dumped into a file for external processing?
The A/D of the AP is 16 or 24 bit isn't it? And where does all the other intermediate data it is capturing go? Is it dumped into a file for external processing?
It is 24 bits. Everything is kept in memory. The only thing saved is the project/setting file. There are ways to capture and export them but I have not used them. The unit does NOT act as a capture device though if that is what you are asking.
Also, the AP's graph (screen grabs) resolution show approximately 30 steps per division on the 0.25dB graticule. So approx 720 vertical per plot display, giving a 0.008dB minimum step resolution on the plot?
I have not done the math so I don't know. :) The step size is different in different tests. I usually let the AP decide the steps between the min and max that I specify.
Oh most definitely! I can't believe someone tried to solve this problem and posted a blog about it given the small audience. I have to see if I can make it work with my RME ADI-2 Pro.