
Review and Measurements of Empirical Audio Synchro-Mesh

Empirical Audio

Active Member
Audio Company
Joined
Mar 10, 2019
Messages
224
Likes
63
Location
Great Northwest, USA
Nice, thank you. Is this a sample-by-sample check?

Couldn't it just detect a compressed stream and enable pass-through?

How would that reduce jitter? Just the PLL action?

I have had customers request a bypass like this. I would have to put some selectors/muxes on the board to accomplish this. That way one could pass a 24/96 file with and without ASRC and see which sounds better just by flipping a switch. Maybe on the next fab.
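For what it's worth, compressed DD/DTS streams travel over S/PDIF as IEC 61937 data bursts, and each burst starts with the sync words Pa = 0xF872 and Pb = 0x4E1F, so auto-detecting them is straightforward. A minimal Python sketch of the idea (illustrative only, not our actual firmware):

```python
# Illustrative only: detect an IEC 61937 burst preamble in the audio words
# recovered from S/PDIF, so a device could bypass ASRC for compressed streams.
PA, PB = 0xF872, 0x4E1F  # IEC 61937 sync words Pa and Pb

def looks_compressed(words):
    """words: sequence of 16-bit audio words from the S/PDIF subframes."""
    prev = None
    for w in words:
        if prev == PA and w == PB:
            return True   # burst preamble found -> DD/DTS/etc., pass through
        prev = w
    return False          # no preamble seen -> treat as plain PCM, ASRC is safe

print(looks_compressed([0x0000, 0x1234, 0x5678]))          # False (PCM-like)
print(looks_compressed([0x0000, 0xF872, 0x4E1F, 0x001F]))  # True  (data burst)
```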
 

Empirical Audio

Active Member
Audio Company
Joined
Mar 10, 2019
Messages
224
Likes
63
Location
Great Northwest, USA
External jitter is of no concern for compressed streams as they need to be decompressed and re-clocked on the receiver regardless.

Sure it is, whether it's .wav or FLAC or ALAC. Jitter is still a concern. Decoding compressed data and clocking are two separate things.
 

gvl

Major Contributor
Joined
Mar 16, 2018
Messages
3,425
Likes
3,979
Location
SoCal
Sure it is, whether it's .wav or FLAC or ALAC. Jitter is still a concern. Decoding compressed data and clocking are two separate things.

Are you saying DD or DTS capable units use the clock extracted from SPDIF for the DA stage? I'm no expert but kind of doubt it.
 

gvl

Major Contributor
Joined
Mar 16, 2018
Messages
3,425
Likes
3,979
Location
SoCal
Yes, captured digital passed through iPurifier and was saved and then compared bit for bit with the original 24/96 source file.

I wonder if the supplied iPower PSU could be causing the jitter. Also, there seems to be a single XO in the iPurifier, which probably means it performs better at some sampling rates than others.

[Attached images: the iFi SPDIF iPurifier and its PCB]
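The bit-for-bit check quoted above is easy to reproduce. A minimal sketch, assuming both the source and the capture are saved as WAV files (the filenames here are placeholders):

```python
# Naive bit-exact comparison of a captured stream against the source file.
# Real captures usually need trimming/alignment first; here we just truncate
# to the shorter length. Requires the soundfile and numpy packages.
import numpy as np
import soundfile as sf

a, fs_a = sf.read("original_24_96.wav", dtype="int32")           # placeholder name
b, fs_b = sf.read("captured_after_ipurifier.wav", dtype="int32") # placeholder name

assert fs_a == fs_b, "sample rates differ"
n = min(len(a), len(b))
print("bit-identical" if np.array_equal(a[:n], b[:n]) else "samples differ")
```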
 

pkane

Master Contributor
Forum Donor
Joined
Aug 18, 2017
Messages
5,626
Likes
10,202
Location
North-East
I wonder if the supplied iPower PSU could be causing the jitter. Also, there seems to be a single XO in the iPurifier, which probably means it performs better at some sampling rates than others.

Tested it with the iPower supply and with another SMPS that measures a bit cleaner (included with the Focusrite Forte). No difference. Don't know if there is another sample rate that it handles better.
 

gvl

Major Contributor
Joined
Mar 16, 2018
Messages
3,425
Likes
3,979
Location
SoCal
Tested it with the iPower supply and with another SMPS that measures a bit cleaner (included with the Focusrite Forte). No difference. Don't know if there is another sample rate that it handles better.

I keep one around. It doesn't get much use, but it is handy for adapting optical to, say, a Khadas TB. Subjectively it brought noticeable differences when I used it with a CCA and a vintage DAC. I use a different DAC with the CCA now, hear no differences with it, and actually think it sounds a bit worse.
 

BYRTT

Addicted to Fun and Learning
Forum Donor
Joined
Nov 2, 2018
Messages
956
Likes
2,452
Location
Denmark (Jutland)
Ah, then it is a mystery, unless somehow the USB interface improves jitter a little each time. It cannot improve other types of distortion, can it?

If I understand you correctly, you are trying to draw conclusions about why at least 2 of the tracks from the @Blumlein 88 test sounded more focused, or actually better, than the original after 8x DAC/ADC conversion, and all the talk in that conclusion is about digital-domain things such as re-clockers / PLL / jitter improvement. But how can one know that the digital chain is what makes the actual improvement, when something as simple as the tracks' AC bandwidth (stopbands) shrinks with each and every DAC/ADC conversion? My own research shows that whenever a track's bandwidth is shrunk a bit, its dynamic range actually improves a bit, which should make a sensed difference. I have also run tests on those converted tracks myself and can agree that for a few of them one might come to prefer the converted ones, and that could be system- or person-dependent. On the other hand, when I load the test tracks into a real ABX test (Foobar) it can get really hard, at least it did for me: my best score was on the J. Warnes track, where I hit 6 right out of 8, but for the other tracks it was 3 to 5 out of 8. In the end you and your wife could have better or more trained ears and a better system than mine, but the slightly shrunk track bandwidth should still be taken into account in what you call a mystery.
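To show the bandwidth shrinkage I mean: each DAC/ADC pass applies its own lowpass (anti-alias / reconstruction) filter, and a cascade of N identical filters always has a narrower -3 dB point than a single one. A rough Python sketch, with a hypothetical Butterworth filter standing in for a real converter's filter:

```python
# Cascading N identical lowpass filters (standing in for the filters of N
# DAC/ADC passes) narrows the overall -3 dB bandwidth. Filter is hypothetical.
import numpy as np
from scipy import signal

fs = 96000
b, a = signal.butter(4, 40000, fs=fs)          # one pass: -3 dB at 40 kHz
w, h = signal.freqz(b, a, worN=8192, fs=fs)

for n_passes in (1, 8):
    # cascade of N identical filters multiplies the magnitude responses
    mag_db = 20 * np.log10(np.maximum(np.abs(h), 1e-12) ** n_passes)
    f3 = w[np.argmax(mag_db < -3)]             # first frequency below -3 dB
    print(f"{n_passes} pass(es): -3 dB at {f3 / 1000:.1f} kHz")
```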
 

PierreV

Major Contributor
Forum Donor
Joined
Nov 6, 2018
Messages
1,437
Likes
4,686
(Foobar) it can get really hard, at least it did for me: my best score was on the J. Warnes track, where I hit 6 right out of 8, but for the other tracks it was 3 to 5 out of 8. In the end you and your wife could have better or more trained ears and a better system than mine, but the slightly shrunk track bandwidth should still be taken into account in what you call a mystery.

Statistically speaking, those things quickly get tricky. These are very small sample sizes. Plus, what you describe is a perfectly plausible random distribution for multiple tests. People get carried away with ABX tests, and yes, the principle is valid. However, most ABX tests are done on sample sizes that are way too small. It is a better, more grounded attempt than going fully subjective, yes. But it is still problematic.

This isn't only for audio btw, it is a recurrent problem.

The paper "Why Most Published Research Findings Are False" (https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124) made a big splash when it was published and has been cited a great many times.

Note 1: this isn't meant as a criticism of your method or results. I would do the same because I don't have the patience or the focus to collect enough data.

Note 2: if the effect is strong - in audio, say, a clear distortion heard and recognized with high accuracy - small-n results may be valid. Otherwise, you really need a relatively large number of trials for every specific point you want to test (around n > 30, a ballpark figure I've kept in mind from my stats teachers) unless you want to go with fancy distributions.

Note 3: multiple distinct tests (again, unless you have identified a very specific characteristic and you test for that characteristic in these tests) increase the chance of finding a significant result just by chance, but it will be _any_ significant result... Drug companies love those :)
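To put a number on the small-n problem: under pure guessing (p = 0.5 per trial), a 6/8 score is not unusual at all. A quick check in Python:

```python
# Chance of scoring k or more correct out of n ABX trials by pure guessing.
from math import comb

def p_value(k, n):
    return sum(comb(n, i) for i in range(k, n + 1)) / 2**n

print(p_value(6, 8))    # ~0.145: 6/8 happens about 1 time in 7 by luck alone
print(p_value(12, 16))  # ~0.038: the same 75% hit rate over 16 trials
```

So the same 75% hit rate only becomes significant (at the usual 5% level) once you run enough trials.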
 

Empirical Audio

Active Member
Audio Company
Joined
Mar 10, 2019
Messages
224
Likes
63
Location
Great Northwest, USA
I wonder if the supplied iPower PSU could be causing the jitter. Also, there seems to be a single XO in the iPurifier, which probably means it performs better at some sampling rates than others.

[Attached images: the iFi SPDIF iPurifier and its PCB]

I tried it with my fast LPS and it definitely sounds better. If Amir measured it with my LPS, it would probably measure better. I never used it for PCM, though, only DD and DTS.
 

Empirical Audio

Active Member
Audio Company
Joined
Mar 10, 2019
Messages
224
Likes
63
Location
Great Northwest, USA
If I understand you correctly, you are trying to draw conclusions about why at least 2 of the tracks from the @Blumlein 88 test sounded more focused, or actually better, than the original after 8x DAC/ADC conversion, and all the talk in that conclusion is about digital-domain things such as re-clockers / PLL / jitter improvement. But how can one know that the digital chain is what makes the actual improvement, when something as simple as the tracks' AC bandwidth (stopbands) shrinks with each and every DAC/ADC conversion? My own research shows that whenever a track's bandwidth is shrunk a bit, its dynamic range actually improves a bit, which should make a sensed difference. I have also run tests on those converted tracks myself and can agree that for a few of them one might come to prefer the converted ones, and that could be system- or person-dependent. On the other hand, when I load the test tracks into a real ABX test (Foobar) it can get really hard, at least it did for me: my best score was on the J. Warnes track, where I hit 6 right out of 8, but for the other tracks it was 3 to 5 out of 8. In the end you and your wife could have better or more trained ears and a better system than mine, but the slightly shrunk track bandwidth should still be taken into account in what you call a mystery.

I get this. The challenge is understanding the relative importance to human hearing of this bandwidth decrease and harmonic-distortion increase, compared to whatever is making the tracks more focused. I assume jitter reduction, because that is usually the case IME.

I doubt my ears are better than yours. I'm old and have tinnitus in one ear. It's more likely my acoustics and system that make it easier for me and my wife to hear these differences.
 

Empirical Audio

Active Member
Audio Company
Joined
Mar 10, 2019
Messages
224
Likes
63
Location
Great Northwest, USA
Statistically speaking, those things quickly get tricky. These are very small sample sizes. Plus, what you describe is a perfectly plausible random distribution for multiple tests. People get carried away with ABX tests, and yes, the principle is valid. However, most ABX tests are done on sample sizes that are way too small. It is a better, more grounded attempt than going fully subjective, yes. But it is still problematic.

This isn't only for audio btw, it is a recurrent problem.

The paper "Why Most Published Research Findings Are False" (https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124) made a big splash when it was published and has been cited a great many times.

Note 1: this isn't meant as a criticism of your method or results. I would do the same because I don't have the patience or the focus to collect enough data.

Note 2: if the effect is strong - in audio, say, a clear distortion heard and recognized with high accuracy - small-n results may be valid. Otherwise, you really need a relatively large number of trials for every specific point you want to test (around n > 30, a ballpark figure I've kept in mind from my stats teachers) unless you want to go with fancy distributions.

Note 3: multiple distinct tests (again, unless you have identified a very specific characteristic and you test for that characteristic in these tests) increase the chance of finding a significant result just by chance, but it will be _any_ significant result... Drug companies love those :)

I would much rather see a DBT comparing just 2 tracks, picking the one that sounds more live or more resolving/intelligible. Comparing two tracks to determine whether they are identical is much harder. The 8X tracks were IMO not the best choices, and the effects of A/D and D/A conversion involve too many changing variables whose relative importance we don't yet understand.

If the tracks were more carefully selected and the change were only a single variable, such as jitter, offset, or file format, I think the results would be more uniform. The DBT participants MUST be blindfolded too. What test have you read about that insisted on this? None that I have ever read about.
 
amirm (OP)

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,368
Likes
234,381
Location
Seattle Area
Are you saying DD or DTS capable units use the clock extracted from SPDIF for the DA stage?
Yes. Otherwise, in an audio/video scenario, the audio track would drift out of sync with the video. It wouldn't matter if it were audio only, but the receiver has no idea whether that is the case. So it needs to stay in sync with the input rate.
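As a worked example, assume a hypothetical receiver whose free-running local clock is off by a typical 50 ppm from the source clock. The lip-sync error then grows linearly with play time:

```python
# A/V drift if audio runs on a free-running local clock while video stays
# locked to the source. The 50 ppm clock offset is a hypothetical example.
ppm_offset = 50e-6
for minutes in (10, 60, 120):
    drift_ms = minutes * 60 * ppm_offset * 1000
    print(f"{minutes:4d} min -> {drift_ms:6.1f} ms of A/V drift")
# 120 min -> 360 ms, far past the few tens of ms where lip-sync errors show
```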
 

pkane

Master Contributor
Forum Donor
Joined
Aug 18, 2017
Messages
5,626
Likes
10,202
Location
North-East
I would much rather see a DBT comparing just 2 tracks, picking the one that sounds more live or more resolving/intelligible. Comparing two tracks to determine whether they are identical is much harder. The 8X tracks were IMO not the best choices, and the effects of A/D and D/A conversion involve too many changing variables whose relative importance we don't yet understand.

If the tracks were more carefully selected and the change were only a single variable, such as jitter, offset, or file format, I think the results would be more uniform. The DBT participants MUST be blindfolded too. What test have you read about that insisted on this? None that I have ever read about.

Picking the one of two tracks that sounds better to you is a preference test. It's a subjective test, even if done blind, because it indicates your preference. Assuming you get a statistically valid result in a blind preference test, with both tracks well matched, you can be fairly certain that you prefer one track over the other. But that result applies to no one but you, and it says nothing about transparency, accuracy of reproduction, level of jitter, or anything else about the component.

Oh, and by the way, I've been building this kind of testing into the DeltaWave software. It's still an early work in progress, but for now there are three kinds of tests: a blind subjective preference test, an ABX test, and a stereo discrimination test.
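The core of an ABX trial is simple enough to sketch. This is just the idea, not DeltaWave's actual code; play() and ask() are hypothetical callbacks for the playback and answer UI:

```python
# Bare-bones ABX loop: X is randomly A or B on each trial and the listener
# must identify it. Illustration of the concept, not DeltaWave's code.
import random

def run_abx(track_a, track_b, play, ask, n_trials=16):
    correct = 0
    for _ in range(n_trials):
        label, x = random.choice([("A", track_a), ("B", track_b)])  # hidden X
        play(track_a); play(track_b); play(x)   # audition A, B, then unknown X
        correct += (ask() == label)             # ask() returns 'A' or 'B'
    return correct  # judge against chance: e.g. 12+/16 correct gives p < 0.05
```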
 

Empirical Audio

Active Member
Audio Company
Joined
Mar 10, 2019
Messages
224
Likes
63
Location
Great Northwest, USA
Picking the one of two tracks that sounds better to you is a preference test. It's a subjective test, even if done blind, because it indicates your preference. Assuming you get a statistically valid result in a blind preference test, with both tracks well matched, you can be fairly certain that you prefer one track over the other. But that result applies to no one but you, and it says nothing about transparency, accuracy of reproduction, level of jitter, or anything else about the component.

It's true that many listeners will have a preference for a warm sound or a detailed sound; I've seen this myself. However, if they are first coached on what to listen for, such as lyric intelligibility or live-sounding decay in percussion, I think this can help level the playing field. Also, if they are musicians, they have a much better sense of what sounds live, particularly with violins or piano. One test that I sometimes do for friends and at trade shows is to play tracks of Steinway, Bosendorfer, Yamaha, and Baldwin pianos and see if they can identify each.
 

pkane

Master Contributor
Forum Donor
Joined
Aug 18, 2017
Messages
5,626
Likes
10,202
Location
North-East
It's true that many listeners will have a preference for a warm sound or a detailed sound; I've seen this myself. However, if they are first coached on what to listen for, such as lyric intelligibility or live-sounding decay in percussion, I think this can help level the playing field. Also, if they are musicians, they have a much better sense of what sounds live, particularly with violins or piano. One test that I sometimes do for friends and at trade shows is to play tracks of Steinway, Bosendorfer, Yamaha, and Baldwin pianos and see if they can identify each.

As someone who has played piano for most of my adult life, I prefer the sound and action of different pianos for different types of music. For classical, it's Steinway all the way; for jazz, I like Yamaha. My preference for a certain sound doesn't indicate technical superiority of the playback equipment; I just happen to like that particular sound better.
 

garbulky

Major Contributor
Joined
Feb 14, 2018
Messages
1,510
Likes
827
Picking the one of two tracks that sounds better to you is a preference test. It's a subjective test, even if done blind, because it indicates your preference. Assuming you get a statistically valid result in a blind preference test, with both tracks well matched, you can be fairly certain that you prefer one track over the other. But that result applies to no one but you, and it says nothing about transparency, accuracy of reproduction, level of jitter, or anything else about the component.

Oh, and by the way, I've been building this kind of testing into the DeltaWave software. It's still an early work in progress, but for now there are three kinds of tests: a blind subjective preference test, an ABX test, and a stereo discrimination test.
Well, it does show that they can differentiate. But you are right: it is hard to define one person's "quality" when applying it to other people. Still, if something that measures worse is nonetheless preferred, it warrants follow-up to try to tease out what that quality of sound is, preferably with other people trying as well.
 