
Multi-Channel, Multi-Amplifier Audio System Using Software Crossover and Multichannel-DAC

Hello friends,

Let me share hyperlinks to some other threads where really interesting, worthwhile-to-read discussions are going on as of today (August 23, 2023), several of which I have recently been participating in:

- What is your favorite house curve

- The BEST rooms - looking for real world measurements

- Room equalization through inverse delayed and attenuated bass signals

- Pink noise for evaluating speakers

EDIT: Also these threads...
- Blew out the woofers on my brand new speakers, need help to find the cause

- Manually time-aligning subwoofer(s) to mains - how to

I hope you will enjoy reading these threads, especially their recent discussions.
 
Last edited:
Reproduction of, and listening/hearing/feeling sensations to, 16 Hz (organ) sound with my DSP-based multichannel multi-SP-driver multi-amplifier fully active stereo audio system with big, heavy active L&R subwoofers

Hello ASR friends,

After participating in the interesting threads listed in my post #781 above, I feel it is worthwhile writing this post for your reference.

First of all, let me remind you that I have written post #641;
Excellent Recording Quality Music Albums/Tracks for Subjective (and Possibly Objective) Test/Check/Tuning of Multichannel Multi-Driver Multi-Amplifier Time-Aligned Active Stereo Audio System and Room Acoustics; at least a Portion and/or One Track being Analyzed by Color Spectrum of Adobe Audition in Common Parameters: [Part-09] Organ Music: #641

where I shared one YouTube video clip of an amazing organ performance which has significant 16 Hz organ sound, clearly analyzed and shown in the 3D color sound spectrum of Adobe Audition 3.0.1 as follows;

WS00004464.JPG


I also have several other excellently recorded CDs and download-purchased tracks of organ performances with distinct 16 Hz pipe sound.

Of course, as a classical and early music enthusiast, I have attended impressive organ concerts several times at huge cathedrals and large concert halls in Europe, the USA and Japan where the lowest 16 Hz pipe is available. This means that I know well what my real sensation is when listening to a 16 Hz pipe, not only with my ears but also with my whole body. I assume, however, that very few people in our ASR Forum have ever experienced such 16 Hz organ sound in the real world.

Reproducing 16 Hz organ sound as excellently as possible with my home audio system has always been one of the big challenges throughout my long years of audio exploration, and I believe that my latest stereo setup can do it quite nicely with the large, heavy L&R subwoofers YAMAHA YST-SW1000 in my properly acoustically treated home listening room.

On the other hand, I (we) have several difficulties in properly, precisely and objectively recording and analyzing sound below 25 Hz, even though the measurement microphone, a BEHRINGER ECM8000 in my case, has a reasonably flat response from 15 Hz to 20 kHz.
EDIT: Please refer to my post #831 for the present (as of October 5, 2023) frequency response of my "specially selected" BEHRINGER ECM8000.
WS00005939.JPG

Usually the ADC (or audio interface) in use has some limitations below 20 Hz, and the FFT frequency-analysis software and its mathematical algorithms also have inevitable limitations (or statistical fluctuations) in the very low frequency zone below 20 Hz. I assume that if I had an extraordinarily expensive professional microphone and a much more advanced recording/analysis system, better objective measurement of 16 Hz sound (sound below 20 Hz) might be possible.
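
Just as a rough illustration of one of these limitations (a minimal sketch, not my actual measurement chain), the FFT window length itself already sets a hard limit on how finely we can look below 20 Hz; the little Python snippet below, assuming a 44.1 kHz recording, shows how long a window is needed for roughly 1 Hz bin spacing.

```python
# Minimal sketch: FFT bin spacing vs. window length (assumed 44.1 kHz recording).
# Bin spacing = fs / N, so resolving a 16 Hz organ tone from nearby very-low-Fq
# content requires a window of a second or longer, regardless of microphone quality.
fs = 44100  # sampling rate in Hz (assumption for this illustration)

for n_fft in (4096, 16384, 65536, 131072):
    bin_hz = fs / n_fft              # frequency spacing between FFT bins
    window_sec = n_fft / fs          # time span covered by one FFT window
    print(f"N = {n_fft:6d}: bin spacing = {bin_hz:5.2f} Hz, window = {window_sec:5.2f} s")
```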

Another critical aspect of 16 Hz sound is that we "hear" and "feel" such low frequency sound not only with our ears but also (more importantly/significantly) with our whole body receiving the sound pressure (just as if receiving the sharp pulses of a vigorous transient storm wind shear).

Consequently, the description/presentation/sharing of my (our) listening sensations/experiences of 16 Hz sound inevitably becomes mainly rather subjective, with only a little objective data and/or specification representation.

Nevertheless, I assume it would be worthwhile, just for your reference, to share some description and the specification of my treasured L&R subwoofer YAMAHA YST-SW1000, which was launched in 1990 and is still highly regarded today (mainly in Japan) as one of the best subwoofers that can go down to 15 Hz with excellent sound quality. Attached herewith, you can find a PDF of this AudioHeritage web page on the YST-SW1000 (English translation by Google Chrome).
https://audio-heritage.jp/YAMAHA/speaker/yst-sw1000.html
There we can read;
The YST (Yamaha Active Servo Technology) method combines an amplifier with a negative-resistance drive circuit, a woofer unit driven in a state where its impedance is close to zero, and the woofer diaphragm itself as part of a Helmholtz resonator (resonance box) whose air resonance inside the port handles the ultra-low frequencies.

In conventional speakers, low-frequency reproduction is limited by the rise in impedance, but by canceling this impedance with the negative-resistance circuit, the woofer unit becomes a dynamic speaker with an effectively infinite damping factor, enabling linear operation down to ultra-low frequencies. While functioning as a normal woofer that takes charge of several Hz and above, it also plays the role of supplying vibration energy to the resonator through linear, minute vibrations down to ultra-low frequencies.

The resonator then becomes an air woofer in which the air in the port directly vibrates due to this energy, and reproduces ultra-low frequencies according to the resonance characteristics freely set by the volume of the cabinet and the shape and volume of the port.

The air woofer does not have its own diaphragm, and the air in the port itself acts as a diaphragm, so there is no difference in tone due to the diaphragm material, and there is no distortion such as stroke distortion or magnetic distortion.

For further technical description, including its unique 30 cm driver and voice coil, please read the PDF attached below carefully.

Let me share the specifications of the large, heavy YAMAHA YST-SW1000;
WS00005940.JPG


As I have already repeatedly shared, in my multichannel audio system 0.1 msec precision time alignment has been established across all the SP drivers, i.e. the L&R subwoofers YST-SW1000, woofers, midrange Be-dome squawkers, Be-dome tweeters, and metal-horn super-tweeters. Please refer to my post #774 for the details of the latest system setup, including actual photos of the gear, SP system, and listening room environment.

Whenever I seriously listen to/feel organ (and other) tracks containing significant 16 Hz (and/or below 35-40 Hz) "music sound", I always take care over our furniture alignment, the position of potteries and glassware in our cabinet, the glass window/door to the corridor, etc., to avoid possible rattles/resonances caused by the extremely low frequency sound pressures.


In any case, using the L&R subwoofers YAMAHA YST-SW1000, I (we) can very much enjoy quasi-perfect reproduction of 16 Hz (and other below-35-Hz) sounds, hearing and feeling them with our ears and whole body, even with my home audio setup in a properly acoustically treated home listening room.

I do hope many of you who have excellent (modern) subwoofers and a nice listening room will agree with and understand what I would like to share in this post.

EDIT:
Of course, I can check/confirm/tune 16 Hz (and/or below 40 Hz) sound reproduction by using several well-QC-ed test tone tracks of the "Sony Super Audio Check CD" (ref. here #651).
 

Attachments

  • AudioHeritage_YAMAHA YST-SW1000_YST-SW1000L_Eng_by_Google Chrome.pdf
    709.5 KB · Views: 135
Last edited:
Can I (we) temporarily synchronize outputs of multiple DAC units (each of them has own independent ASIO driver) in 10 micro second (0.01 msec) precision in DSP-based multichannel audio setup?

Edit:

After reading this post, please also visit my Part-2 post here #804;
Can I (we) temporarily synchronize outputs of multiple DAC units (each of them has own independent ASIO driver) in 10 micro second (0.01 msec) precision in DSP-based multichannel audio setup? Part-2: Simplified experiments without using audio mixer


Important top message:
This post is not intended to suggest/recommend that you apply or utilize the same setup and procedures in your DSP-based audio system; I just would like to share my curiosity and experiments relating to the above-titled subject for your possible reference and interest.

Hello ASR friends,

Introduction

It is well known that we should not use multiple DAC units (through ASIO routing) simultaneously in our audio setup, since we essentially have no way to fully synchronize them; each dedicated ASIO driver has its own settings such as "Safety Mode", "ASIO Buffer Size", "Latency (ms)", etc., and hence they can never be configured completely identically. Furthermore, we usually know nothing about the physical buffer(s) inside the DAC units, and we cannot control Windows' (or Mac's) ASIO packet-sending priorities and/or timings.

Nevertheless, I have temporarily used a second DAC unit in my audio setups, for example in the case of my post #264, just for my subwoofer channels in that "tentative" configuration; I had no audible out-of-sync issue at that time, even though I fully understood that the two DAC units were not fully in sync with each other.

During the past five years, I have been quite curious about what, and how large, the possible out-of-sync characteristics between multiple DAC units would be if I used them simultaneously in my DSP-based multichannel multi-SP-driver multi-amplifier fully active audio setup, and I have also been wondering whether I could find any acceptably compromising method to temporarily synchronize the outputs of multiple DAC units in my audio rig.

In order to clarify and resolve this personal curiosity, I therefore carefully designed my experiments as follows.


Preparation of a test audio track with sharp-peak timing markers, and other experimental setup and conditions

First of all, using Adobe Audition 3.0.1, I prepared one stereo music track (44.1 kHz, 16 bit) consisting of two portions of full orchestral music (I used the first track, the 9 min 59 sec "Overture", of my favorite Schubert "Rosamunde" orchestral album which I shared here #588), and then I inserted sharp-peak 10 kHz timing markers (width 1 msec) at the beginning, in the middle, and at the end of the track; the total length of this test track is 21 min 35 sec.
WS00006184.JPG


Hereinafter, uSec means "microsecond", i.e. 0.001 msec.

As shown in Fig.01 above, the time resolution of this test track on the readable X-axis (time scale) is 22.7 uSec (0.0227 msec), corresponding to the 44.1 kHz sampling rate of the test track. Furthermore, we can easily identify/read the exact time point of the timing markers with 1 uSec precision by using the vertical-line timing cursor on the fully enlarged/expanded X-axis (time scale).
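
Just to illustrate the structure of such a test track (this is a hypothetical sketch, not the exact Adobe Audition workflow I actually used; the file name, marker positions and levels are assumptions for illustration only), one could generate a 44.1 kHz / 16 bit stereo file with 1 msec 10 kHz marker bursts at the beginning, middle and end with a few lines of Python;

```python
# Hypothetical sketch: build a 44.1 kHz / 16 bit stereo test track with three
# 1 msec 10 kHz timing-marker bursts (beginning, middle, end of the track).
# Requires: numpy, soundfile. The silent bed of ~21.5 min takes ~450 MB in memory.
import numpy as np
import soundfile as sf

fs = 44100                      # sampling rate in Hz
length_sec = 21 * 60 + 35       # total track length: 21 min 35 sec
track = np.zeros((length_sec * fs, 2), dtype=np.float32)  # silent stereo bed

def add_marker(track, start_sec, fs, freq=10_000.0, width_ms=1.0, level=0.9):
    """Insert a short sine burst (the 'sharp-peak' timing marker) on both channels."""
    n = int(fs * width_ms / 1000.0)
    t = np.arange(n) / fs
    burst = (level * np.sin(2 * np.pi * freq * t)).astype(np.float32)
    i0 = int(start_sec * fs)
    track[i0:i0 + n, 0] += burst
    track[i0:i0 + n, 1] += burst

for pos in (1.0, length_sec / 2, length_sec - 2.0):   # assumed marker positions
    add_marker(track, pos, fs)

sf.write("test_track_with_markers.wav", track, fs, subtype="PCM_16")
```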

In the present experiments, I used four (4) DAC units, each of which has its own dedicated ASIO driver, as shown in this Fig.02;
WS00006183.JPG


Again, let me emphasize that each dedicated ASIO driver has its own settings such as "Safety Mode", "ASIO Buffer Size", "Latency (ms)", etc., and hence they can never be configured completely identically. Furthermore, we usually know nothing about the physical buffer(s) inside the DAC units, and we cannot control Windows' (or Mac's) ASIO packet-sending priorities and/or timings.

I intentionally connected these USB 2.0 compatible DAC units to the USB 3.0 ports of my PC for much better stability, especially in such many-channel DSP operation; no other USB device was connected to a USB 3.0 port. I connected my keyboard and mouse to the USB 2.0 ports instead. Just for your possible reference, I usually use the latest version of "USB Device Tree Viewer, UsbTreeView (x64)" (now v.3.8.8.0) to see and confirm the USB tree path/routing/status, as well as the maximum USB speed of each port and device, on my PCs.

In the present experiments, I used my default and favorite digital music player JRiver MC v.31.0.52 (64-bit) together with ASIO4ALL v.2.14 as the virtual audio device, as shown in this Fig.03;
WS00006182.JPG


Please note that, as shown in Fig.03 above, I usually convert all music tracks into 96 kHz PCM on the fly to be fed into the DSP "EKIO" through ASIO4ALL + VB Audio ASIO routing; this remained unchanged throughout the present experiments (ref. here #532 for "my" rationale for conversion into 88.2 kHz or 96 kHz PCM).

The details of the ASIO4ALL configurations for JRiver MC and the DSP software "EKIO" are shown in this Fig.04;
WS00006181.JPG


Please understand that, with properly configured ASIO4ALL, EKIO can select any of the available USB-ASIO DAC units as the output destination for each of its output channels, as shown in Fig.05 below;
WS00006180.JPG


And EKIO can have an unlimited number of output channels as long as your PC is capable of handling all of them properly. (Later in this post, I will share the simultaneous 24-channel case in the Day-3 experiment.)

Although I have already shared the details of my two (2) audio-dedicated Windows 11 Pro (64 bit) PCs (ref. here #225 and #774), I believe it is worthwhile sharing them here again for your reference.
WS00006179.JPG



Day-1 Experiment: Non-XO-ed through stereo 2-channel fed into four (4) DAC units (total 8-channel simultaneous output)

To start my present experiments, I prepared a very simple stereo "through path" EKIO configuration which has no XO nor group delay settings. I copy-pasted these L&R "through" stereo channels three times to configure four sets of stereo pairs, i.e. 8 channels in total, as shown in Fig.07 below.
WS00006178.JPG


Here I tentatively used the OKTO DAC8PRO as a 2-channel stereo DAC in this experimental setup by using its CH7 (L) and CH8 (R), as shown above.

All four (4) analog stereo outputs were fed into an EDIROL M-10E active analog mixer, where I intentionally "mixed" all the output signals into one analog output, and this mixed sound was then fed into a TASCAM US-1x2HR audio interface for digital recording (192 kHz, 24 bit) by Adobe Audition 3.0.1 on my second PC.

For simplicity of writing, please let me use the following abbreviations throughout this post;

OKTO as OKTO RESEARCH DAC8PRO,
KORG as KORG DS-DAC-10,
OPPO as OPPO SONICA DAC,
ONKYO as ONKYO DAC-1000(S).

For each recording of the test track, I always followed this step-by-step procedure;

1. Start recording with Adobe Audition 3.0.1 on the second PC,
2. About 5 sec later, push/start "PLAYING" in the DSP EKIO,
3. About 3 sec after 2., start "PLAY" of the test music track in JRiver MC.

The mixing levels in the EDIROL M-10E were always set to almost the same gain for all the input analog channels, and adjusted so that the loudest mixed full-orchestra tutti sound would peak at about 2 dB below the clipping level.

The recorded "such mixed sound" was then analyzed by Adobe Audition 3.0.1 as shown in the below Fig.08;
WS00006177.JPG


The enlarged/expanded time scale at the timing-marker portions, i.e. the beginning, middle and end parts of the track, showed the same relative "time-domain discrepancies" between the four (4) DAC sounds.

Looking at Fig.08, I understand that you may have a simple question for me: "How can you assign the four timing markers to the four DACs?"

Yes, I can/could do it quite easily by tentatively "staggering" the mixing levels in the audio mixer EDIROL M-10E, as shown in Fig.09 below;
WS00006176.JPG


Let's go back to Fig.08, where we can clearly observe;

- The relative time-domain discrepancies did not change/drift throughout the whole 21 min 35 sec track,
- ONKYO was the slowest-starting DAC in this setup,
- OKTO started 7.08 msec prior to ONKYO,
- KORG started 4.81 msec prior to ONKYO,
- OPPO started 1.81 msec prior to ONKYO.
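
Just for reference, the same relative startup offsets could also be read out numerically instead of with the vertical-line timing cursor; the following is a minimal, hypothetical sketch (assuming a mono WAV export of the mixed recording under an assumed file name, and that each DAC's marker burst is well separated from the others) that detects the 10 kHz marker bursts by their envelope and prints the relative offsets.

```python
# Hypothetical sketch: locate the 10 kHz marker bursts in the recorded mix
# and report their relative offsets. Assumes a mono WAV export of the mixed
# recording and that the four bursts are separated by more than 1 msec.
# Requires: numpy, scipy, soundfile.
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfiltfilt

x, fs = sf.read("mixed_recording.wav")      # assumed file name
if x.ndim > 1:
    x = x[:, 0]                             # use the left channel

# Band-pass around 10 kHz to isolate the marker bursts from the music.
sos = butter(4, [9000, 11000], btype="bandpass", fs=fs, output="sos")
env = np.abs(sosfiltfilt(sos, x))

# Find envelope samples above a threshold and group them into bursts.
thresh = 0.25 * env.max()                   # illustrative threshold
idx = np.flatnonzero(env > thresh)
bursts = np.split(idx, np.flatnonzero(np.diff(idx) > int(0.0005 * fs)) + 1)
onsets = np.array([b[0] / fs for b in bursts])

print("burst onsets (s):", np.round(onsets, 6))
print("offsets vs. latest burst (ms):", np.round((onsets.max() - onsets) * 1e3, 3))
```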

Then an idea came to me;
"If the above five observations remain the same in this experimental setup, can I then compensate the relative time-domain discrepancies using EKIO's accurate group delay in the upstream digital domain, so that the outputs of the four (4) DACs would be fully synchronized to 10 uSec precision?"

I mean that I could test giving 7.08 msec group delay to EKIO's OKTO channels, 4.81 msec delay to EKIO's KORG channels, and 1.81 msec delay to the OPPO channels, so that all the analog output signals from the four (4) DAC units would be fully synchronized to 10 uSec precision.

As shown in Fig.01, since the test music track is 44.1 kHz with 22.7 uSec time resolution, compensation with 10 uSec precision should be sufficient. Furthermore, EKIO's group delay (in 96 kHz operation) has 0.01 msec (= 10 uSec) numerical input granularity.

Consequently, I tested the above idea as shown in Fig.10 below, and the result was just amazing; the outputs of all four (4) DAC units were fully synchronized to within 10 uSec.
WS00006175.JPG


Of course, the durability, stability and reproducibility of such a "compromising method" are highly important; during the 18-hour Day-1 experiment, with no change at all to the PC including the ASIO settings, I repeated the recording eight (8) times before going to bed, and the result of Fig.10 remained unchanged.


Day-2 Experiment: Fully XO-ed, group-delayed stereo 4-way (8-channel) fed into two (2) DAC units (total 16-channel simultaneous output)

The somewhat positive result of my Day-1 experiment encouraged me to proceed to the Day-2 experiment, where I tested two (2) sets of "fully XO-ed and group-delayed 8-channel" (16 channels in total) fed into two (2) DAC units.

I have repeatedly shared my stereo 4-way 8-channel XO configuration in EKIO, as shown in this Fig.11 (ref. here #774 for the latest total setup of my audio rig).
WS00006174.JPG


In the Day-2 experiment, I configured EKIO for exactly two (2) sets of this "4-way 8-channel", as shown in Fig.12 below, and the two "4-way 8-channel" sets were fed into OKTO and KORG simultaneously;
WS00006173.JPG

Here the two (2) DAC units were used as stereo 2-channel DACs, and therefore;
EKIO's CH1, CH3, CH5, CH7 go into OKTO's CH7,
EKIO's CH2, CH4, CH6, CH8 go into OKTO's CH8,
EKIO's CH9, CH11, CH13, CH15 go into KORG's L-CH,
EKIO's CH10, CH12, CH14, CH16 go into KORG's R-CH.

The analysis of the recorded "mixed" test track gave this result in Fig.13;
WS00006172.JPG


Now, as in the Day-1 experiment, we can clearly observe;

- The relative time-domain discrepancies did not change/drift throughout the whole 21 min 35 sec track,
- KORG was the slower-starting DAC in this setup,
- OKTO started 2.27 msec prior to KORG.

Consequently, I added +2.27 msec group delay to all the OKTO channels in EKIO so that the outputs from the two (2) DAC units would hopefully be synchronized, and the result is shown in this Fig.14.
WS00006171.JPG


We can observe that the outputs from the two (2) DAC units were fully synchronized to within 10 uSec.

Of course, I checked the durability, stability and reproducibility of this "compromising method"; during the 18-hour Day-2 experiment, with no change at all to the PC including the ASIO settings, I repeated the recording eight (8) times before going to bed, and the result of Fig.14 remained unchanged.


Day-3 Experiment: Fully XO-ed, group-delayed stereo 4-way (8-channel) fed into three (3) DAC units (total 24-channel simultaneous output)

The rather favorable result of my Day-2 experiment encouraged me to proceed to the Day-3 experiment, where I dared to test three (3) sets of "fully XO-ed and group-delayed 8-channel" (24 channels in total) fed into three (3) DAC units, as shown in this Fig.15; the "4-way 8-channel" sets were fed into OKTO, KORG and OPPO simultaneously;
WS00006170.JPG


The analysis of the recorded "mixed" test track gave the result shown in Fig.16 below;
WS00006169.JPG


Now, as in the Day-1 and Day-2 experiments, we can clearly observe;

- The relative time-domain discrepancies did not change/drift throughout the whole 21 min 35 sec track,
- OPPO was the slowest-starting DAC in this setup,
- OKTO started 5.27 msec prior to OPPO,
- KORG started 3.00 msec prior to OPPO.

Consequently, I added +5.27 msec group delay to all the OKTO channels and +3.00 msec group delay to all the KORG channels in EKIO so that the outputs from the three (3) DAC units would hopefully be synchronized, and the result is shown in this Fig.17.
WS00006168.JPG


We can observe that the outputs from the three (3) DAC units were fully synchronized to within 10 uSec.

Of course, I checked the durability, stability and reproducibility of this "compromising method"; during the 18-hour Day-3 experiment, with no change at all to the PC including the ASIO settings, I repeated the recording eight (8) times before going to bed, and the result of Fig.17 remained unchanged.


Discussion and conclusion

Throughout the careful 3-day experiments shared above, I know that I have been unexpectedly quite "lucky" in terms of the consistency and reproducibility of the "relative time-domain startup discrepancies" given by the four DAC units (connected to USB 3.0 ports) I tested, as summarized in this Fig.18;
WS00006167.JPG


All of us in the DSP-based multichannel audio league, including myself, know and rely on the "fact" that all of the XO-ed runners (channels) heading into the DAC(s) are started by the DSP software's single "starter pistol shot" with (sub-)microsecond precision; otherwise no DSP-based multichannel audio system could be achieved.

We should also remember that once a DAC unit "starts" its DA processing, its time-domain behavior has sub-microsecond accuracy and stability; I mean a sound track of 21 min 35.123456 sec should be played by any past or present (SOTA) DAC unit in exactly the same playback length of 21 min 35.123456 sec, and this is essentially true in the real world. (Of course, I know well that the time-domain resolution depends on the sampling rate of the track and processing; 44.1 kHz corresponds to 22.7 uSec resolution, 88.2 kHz to 11.3 uSec, 96 kHz to 10.4 uSec, 192 kHz to 5.21 uSec, and so on.)
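
As a quick numerical check of those resolution figures (a trivial sketch, nothing more), the time resolution is simply the reciprocal of the sampling rate;

```python
# Quick check: time resolution (one sample period) for common sampling rates.
for fs in (44100, 88200, 96000, 192000):
    print(f"{fs / 1000:6.1f} kHz -> {1e6 / fs:5.2f} uSec per sample")
```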

Consequently, the "relative time-domain startup discrepancies" between the multiple DAC units (in downstream) should be totally responsible for each of the ASIO drivers and physical DAC units, and also possibly could be affected by the timings/priorities of OS's (Windows 11 Pro, in my case) ASIO packet sending.

And therefore, as a conclusion at least for my experimental settings, my fortunate findings throughout the 3-day experiments told me that;

I can compensate such "relative time-domain startup discrepancies" of multiple DAC units by using the DSP software's (EKIO, in my case) group delay settings with 10 uSec precision, but only if the "relative startup discrepancies" remain unchanged throughout the specific experimental (and audio listening) session, so that the "outputs" of all the DAC units can be synchronized.

Of course, I would need to objectively measure and confirm the "relative time-domain startup discrepancies" to better than 10 uSec precision before and after my experimental (and audio listening) sessions whenever I dared to apply this "acceptably compromising method" to multiple DAC units. (As you may agree, I essentially have no intention of "routinely" applying this approach in my multichannel audio rig, though.)

Now, you may understand why I carefully wrote the title of this post as "Can I (we) temporarily synchronize outputs of multiple DAC units (each of them has own independent ASIO driver) in 10 micro second (0.01 msec) precision in DSP-based multichannel audio setup?"

I was not intending to (and am not capable of) physically synchronizing the multiple DAC units; rather, I tried to synchronize the outputs from the multiple DAC units by compensating the "relative time-domain startup discrepancies" (provided they remain unchanged during the session in question) using the upstream DSP's accurate and precise group delay settings.

To finish this rather long post, let me repeat the top message I wrote;
This post is not intended to suggest/recommend that you apply or utilize the same setup and procedures in your DSP-based audio system; I just would like to share my curiosity and experiments relating to the above-titled subject for your possible reference and interest.

I would highly appreciate hearing your thoughts and discussion.

Edit:
After reading this post, please also visit my Part-2 post here #804;
Can I (we) temporarily synchronize outputs of multiple DAC units (each of them has own independent ASIO driver) in 10 micro second (0.01 msec) precision in DSP-based multichannel audio setup? Part-2: Simplified experiments without using audio mixer
 
Last edited:
Hello again, Igor,

In comparison with my listening room acoustics/environment, I have always been quite envious of your wonderful listening room (or should I say your amazing "studio"?) environment... Yes, I essentially agree with you that the listening room greatly affects the quality of stereo listening!

You are absolutely right dualazmak.

It is the quality and size of the listening room which, even more than the equipment or the equalizations, determines the realism of the final rendering of a High Fidelity system.
In Japan the price per m2 of listening room is unfortunately very expensive...
In France, if you move away from Paris, the price per m2 of listening room is modest.

It is quite possible that my Hi-Fi installation will give, in a room of 300 m3, an even better result than in my current studio in Briare (100 m3).
 
I would highly appreciate hearing your thoughts and discussion.
All I can say is that I would never dive as deep into what seems to me such a small problem, but I'm glad someone did, and I'm even more glad it's you. :)

Could we buy 4 of the same DACs, disable the internal clock on the first 3 and hook them up to the clock of the fourth? I imagine it's some kind of crystal on the PCB that gives the timing, or maybe I'm wrong and the clock is inside the main chip in the DAC to save cost?

Or maybe, if you used 4 DACs of the same make and manufacturer, the problem isn't there anyway and they are synchronized from the start?

Excuse me if you had mentioned those ideas already.
 
All I can say is that I would never dive as deep into what seems to me such a small problem, but I'm glad someone did, and I'm even more glad it's you. :)
Thank you for your kind comments on my experiments.

Could we buy 4 of the same DACs, disable the internal clock on the first 3 and hook them up to the clock of the fourth? I imagine it's some kind of crystal on the PCB that gives the timing, or maybe I'm wrong and the clock is inside the main chip in the DAC to save cost?
Usually HiFi DAC units have one or more quartz oscillator chips for clocking/synchronization, but I myself am not enough of an expert to understand the PCB design.
Furthermore, I know very little about the design of cheaper DAC units, including small USB-DAC dongles.

Theoretically you could do it, but practically it is almost impossible, since you would need to know the very details of the physical and software/firmware design of the DAC unit, and you would need to be highly skilled in such DIY electronics modification work; you would also need the most advanced pro-grade measurement instruments, including a SOTA synchroscope. I believe only the designer(s)/engineer(s) who designed the DAC unit could do such a DIY-type modification.

Furthermore, even if you could fortunately achieve such a physical DIY modification, software (i.e. ASIO driver) recognition of multiple DAC channels would be another critical issue (please see my points below).

Or maybe, if you used 4 DACs of the same make and manufacturer, the problem isn't there anyway and they are synchronized from the start?
The main issue with your inquiry would be the design of the individual ASIO driver for each DAC unit.

Every home-use 2-channel stereo DAC unit has "dedicated" ASIO driver software which cannot recognize multiple "identical" DACs connected to your PC. I have confirmed this, since I actually own three units of the KORG DS-DAC-10, but KORG's USB-ASIO driver recognizes only one of them even when I connect all three to my PC. (I do not know the situation on macOS; I would therefore highly appreciate comments and discussion from Mac people.)

I also sent inquiries to several companies asking, "Can your USB ASIO driver properly recognize multiple units of your specific DAC?". All the answers were negative; they had never considered/tested such a "very odd" use case, and their common comments were;
"No, our USB ASIO driver can recognize only one specific DAC unit. If you connect two or more of the specific DAC units to your PC, they may possibly cause some 'fatal' software confusion in the ASIO routing, although we have never tested it. We need to say 'just don't do it'. Please understand that such odd usage is clearly outside our warranty on the USB-ASIO software and the DAC unit, and therefore we would not be responsible at all for any fatal software and/or operating system (OS) failure, or even hardware failure, that might be caused by such an odd use case."

In the professional audio(-visual) market, on the other hand, I have heard (so far I have never tested it myself) that some of the stereo and/or multichannel "DAC units" and "audio interfaces" (which can act as DACs) have "daisy-chain" capabilities with an inter-sync protocol/hardware; the first unit can behave as the master clock, the others can receive clocking signals from the first one to act as slaves, and furthermore their dedicated software and ASIO driver (a "single" driver) is designed to recognize all the multiple DAC channels; this means that, through one USB cable into the master unit, it would be possible to establish multiple ASIO routings into such "daisy-chained and synchronized" multiple DAC (or audio interface) units.
Edit: Just for example, the RME Fireface UFX series has this nice feature. I hope your web browser will properly translate this clear-cut FAQ page into English.
 
Last edited:
Hello friends,

I assume and hope the "experimental methodologies" in my above post #783 would be somewhat worthwhile and of your reference;
- preparation of actual music track with sharp-peak timing markers in the beginning, middle and end portions
- mixing all the analog outputs by an analog audio mixer, intentionally
- analysis of the recorded mixed sound by Adobe Audition (or Audacity), especially in time-domain in 1 uSec to 10 uSec resolution/precision
 
Last edited:
Would you mind running two very simple tests?

1) Run a frequency response sweep in REW, with one DAC as your main output and another DAC as your timing reference. Your capture device should be a two channel ADC. No filters applied, just flat frequency response. Run this a few times over a few minutes. Does the phase response stay the same or is it variable?

2) Similar setup to my test 1 but just play a constant 15 kHz tone and observe the output of each DAC at the same time using the scope in REW. Is there any shifting between the two or are they constant?

Michael
 
Would you mind running two very simple tests?

1) Run a frequency response sweep in REW, with one DAC as your main output and another DAC as your timing reference. Your capture device should be a two channel ADC. No filters applied, just flat frequency response. Run this a few times over a few minutes. Does the phase response stay the same or is it variable?

2) Similar setup to my test 1 but just play a constant 15 kHz tone and observe the output of each DAC at the same time using the scope in REW. Is there any shifting between the two or are they constant?

Michael

Sorry, but I do not fully understand what you are asking me to perform.

Regarding your inquiry 1) above,,,
I believe my "Day-1 experiment (no filter applied, just the through stereo signals)" and its results (Fig.08 and Fig.10) in post #783 already fit your request very well, don't you think so?

As I wrote there, at least with my Day-1 settings: "during the 18-hour Day-1 experiment, with no change at all to the PC including the ASIO settings, I repeated the recording eight (8) times before going to bed, and the result of Fig.10 remained unchanged."

Here, I am (we are) interested only in the "relative time-domain discrepancies" and not in an "absolute time-domain comparison" between the two outputs. I believe, therefore, that my method of "sound mixing" and analysis of the "relative time-domain discrepancies at 10 uSec or better resolution/precision" of the recorded mixed sound is most suitable.

(In the present experiments, I do not care at all about the sound quality, including the frequency responses and distortions, of the outputs from the DAC units!)

Furthermore, how can you (can we) analyze time-domain discrepancies at 10 uSec resolution/precision by using "REW's FFT frequency response sweep"?


Regarding your inquiry 2) above,,,
Again, I am only interested in the "relative time-domain discrepancy (discrepancies)". What do you mean by "a constant 15 kHz tone"?
If you could give me a simple schematic description, even "a hand-written diagram", it would greatly help me to understand what you mean and what you would like to achieve. ;)

Would you please look again carefully at the structure of my "test track", which I prepared and used throughout my experiments in post #783;
WS00006184.JPG


If needed, I can very easily replace the two full-orchestra music parts with complete silence while keeping the total length of the track unchanged.
 
Last edited:
Would you mind running two very simple tests?

1) Run a frequency response sweep in REW, with one DAC as your main output and another DAC as your timing reference. Your capture device should be a two channel ADC. No filters applied, just flat frequency response. Run this a few times over a few minutes. Does the phase response stay the same or is it variable?

Hello again,

Regarding your request 1) above, do you mean like this diagram?
WS00006199.JPG


I copy-pasted the strictly prepared, well-QC-ed "sine sweep track" from the "Sony Super Audio Check CD" (ref. here) into the L-channel of the test track.

Please note, my "simple" TASCAM US-1x2HR Audio Interface is capable of only one stereo (L&R) analog line input for digital recording.
 
Last edited:
Would you mind running two very simple tests?

1) Run a frequency response sweep in REW, with one DAC as your main output and another DAC as your timing reference. Your capture device should be a two channel ADC. No filters applied, just flat frequency response. Run this a few times over a few minutes. Does the phase response stay the same or is it variable?

2) Similar setup to my test 1 but just play a constant 15 kHz tone and observe the output of each DAC at the same time using the scope in REW. Is there any shifting between the two or are they constant?

Michael

Hello again,

Regarding your request 2) above, do you mean like this diagram?
WS00006196.JPG


I copy-pasted the strictly prepared, well-QC-ed "20 sec steady-gain 16 kHz tone" from the "Sony Super Audio Check CD" (ref. here) into the L-channel of the test track. (Sorry, the Audio Check CD does not have a 15 kHz steady tone.)
 
Would you mind running two very simple tests?

1) Run a frequency response sweep in REW, with one DAC as your main output and another DAC as your timing reference. Your capture device should be a two channel ADC. No filters applied, just flat frequency response. Run this a few times over a few minutes. Does the phase response stay the same or is it variable?

2) Similar setup to my test 1 but just play a constant 15 kHz tone and observe the output of each DAC at the same time using the scope in REW. Is there any shifting between the two or are they constant?

Michael

And, if you would also like to see the corresponding results, I can easily "cross" the inputs to the two DAC units for comparison/confirmation;
WS00006200.JPG


and,
WS00006198.JPG
 
Last edited:
I don't use Windows or EKIO but I think I can describe the concepts well enough such that you will be able to implement a similar measurement procedure.

I use a Mac, which has the ability to create an aggregate device from two USB devices, just like you are doing in EKIO. Here is an aggregate device combining a MOTU Ultralite Mk5 and a miniDSP 2X4HD. Output channels 1-18 belong to the MOTU and channels 19-20 belong to the miniDSP.

1694525995654.png


In REW I use the aggregate device as the output device. With your setup I assume you would set your output device to EKIO in some way. As you can see, I am also using an ADC as the input device. Output 1 of the aggregate device (output channel 1 of the MOTU) is physically routed to input channel 3 of the ADC, and output 20 of the aggregate device (output channel 2 of the miniDSP) is physically routed to input channel 4 of the ADC.

1694526101470.png


I would expect your setup to look something like this. There should be no need to use two computers. You would output to EKIO and use your TASCAM as an input in REW.

1694527441546.png


Here is my measurement setup in REW. The key item here is I am using a loopback as timing reference. You can see that my main measurement is the MOTU and the miniDSP is being used as a reference. With this setup REW will be able to calculate the relative phase response of the output of the two DACs.

1694526375214.png


I run an initial sweep to determine the delay between the two devices. In this case it was -1.9069 ms and I updated the timing offset field to reflect this. I then ran several measurement sweeps with the following results. You can see that the low-frequency phase response looks good but there are significant deviations above 1 kHz.

1694526569234.png


For my second test I simply use the Generator function to play a constant 15 kHz tone.

1694526696208.png


I then use the Scope function to view the input channels of the ADC. When using two USB DACs there is significant variation in the waveforms which matches the phase measurement. You can see this in the video below.

Screen Recording 2023-09-12 at 9.20.26 AM.gif


In comparison, here are the same DACs but with the TOSLINK output of the MOTU routed to the TOSLINK input of the miniDSP; this ensures that they are clock sync'd. This clearly shows in the phase response measurements.

1694527187166.png


And it also clearly shows in the 15 kHz scope measurement, where there is no movement between the two traces.

Screen Recording 2023-09-12 at 9.28.51 AM.gif


Michael
 
Last edited:
Just as a sanity check I also tried the 15 kHz test using a separate oscilloscope. I wanted to rule out any potential issue that may have resulted from using the MOTU Ultralite Mk5 as both a DAC and an ADC. My results were the same.

Top trace is MOTU Ultralite Mk5 (hot pin), middle trace is miniDSP 2x4HD and bottom trace is MOTU Ultralite Mk5 (cold pin).

Here is aggregate device (dual USB), clearly no sync.
IMG_8713.gif


Here is clocking the 2x4HD via TOSLINK from the MOTU. This has clock sync.
IMG_8715.gif


And finally a bit of a new measurement. Mac has a "drift correction" option which resamples in an attempt to sync the two separate devices. This is definitely better but still not sync'd.

IMG_8714-2.gif


Michael
 
Hello again @mdsimon2,

I highly appreciate your above two posts describing your intensive setup and measurements with your Mac and MOTU audio rig. You are very nicely responding to my reply post #786 to @TheBatsEar, where I was inviting comments from Mac people using professional multichannel DAC units.

Furthermore, I know very well that you are one of the most advanced audiophiles using a Mac, MOTU and other audio interfaces/DACs together with REW. I believe your above discussion and shared info should be a very nice reference for all the people kindly visiting this thread periodically.

I too have installed REW on my two Windows PCs for occasional room acoustic measurements (e.g. ref. my very early posts #017, #018, #020-#022), and even nowadays I periodically use REW to check room acoustics.

Please understand, however, that my objective in the rather long and intensive post #783 was to share what happens, and what kind of "acceptable compromise" I can reach (under strictly limited conditions), in a Windows-ASIO4ALL-USB_ASIO-DSP audio rig on rather outdated-spec PCs when I dare to connect multiple DAC units with independent ASIO drivers.

I assume you understand that I carefully avoid any USB-ASIO (or other route) loopback measurements/analyses within the single PC which "plays" the music through the digital music player (JRiver MC) and DSP software (EKIO), since the loopback process itself may well affect the time-domain sequence of the ASIO "playback" configurations, especially on rather outdated-spec PCs. This is the main reason why I use a second, independent PC for recording the "actual sound" given by the DAC units.

In my post #783, I deliberately avoided using REW or other advanced audio measurement and management software, and instead used only Adobe Audition 3.0.1 (free Audacity would also work for this kind of simple analysis) for a simple and straightforward "time sequence" analysis at better than 10 uSec precision/accuracy, since I know well that the people visiting this thread are not always very familiar with the highly advanced utilization of REW (or Equalizer APO, and so on) which you kindly shared. And very few people always have a SOTA oscilloscope or synchroscope at hand.

And please also understand that I have no "out-of-synchronization issue" at all in my present audio system setup (ref. here), where I use only the nice 8-channel multichannel DAC unit OKTO DAC8PRO. (I use the KORG DS-DAC-10 alongside it only for VU monitoring of the whole summed input signal, where 50 msec precision synchronization is more than enough.) As I wrote at the top of my post #783, the experimental setups and the results thereof are just for my curiosity, and I wanted to share them for the possible interest and reference of people visiting this thread.

Edit:
And I know very well that S/PDIF and AES/EBU digital signals carry sync pulses for full synchronization of the slave devices, even though this sync mechanism is now a little bit outdated in terms of possibly higher jitter compared to USB-ASIO, DANTE, RAVENNA, AES67; I hope your web browser will properly translate this web article into English or your language.


By the way, I am interested in your two suggestions in your post #788, for which I understand the possible setups shown in #790, #791 and #792, since in these setups I can eliminate the use of the analog audio mixer (EDIROL M-10E), even though it has absolutely no effect on the relative time-domain sequence/discrepancies. I really like an experimental setup that is as simple as possible to extract what I would like to measure, and your suggested setups fit this policy very well.

Over the next weekend, I hope I can find some time to perform these additional experiments, fully eliminating the EDIROL M-10E from the recording sequence, under completely identical "PC conditions" to the Day-1 experiment in #783, where I will connect two DAC units (I would like to use OKTO and KORG) to the USB 3.0 ports of the PC motherboard (no other device connected to a USB 3.0 port), and again I will use the second, independent PC for recording. I will get back to you when I finish these additional simplified experiments, even though I believe the results will remain unchanged.
 
Last edited:
I assume you understand that I carefully avoid any USB-ASIO (or other route) loopback measurements/analyses within the single PC which "plays" the music through the digital music player (JRiver MC) and DSP software (EKIO), since the loopback process itself may well affect the time-domain sequence of the ASIO "playback" configurations, especially on rather outdated-spec PCs. This is the main reason why I use a second, independent PC for recording the "actual sound" given by the DAC units.

I actually don't understand your concerns with a loopback. To clarify, I am talking about a physical loopback, not a virtual loopback. In this context a loopback refers to physically routing the output of one DAC output channel to one ADC input channel. This is used as a timing reference against the DAC output you are actually measuring. This is a very common procedure when measuring/analyzing phase response, and I don't see how the ASIO playback configuration needs to be any different.

Over the next weekend, I hope I can find some time to perform these additional experiments, fully eliminating the EDIROL M-10E from the recording sequence, under completely identical "PC conditions" to the Day-1 experiment in #783, where I will connect two DAC units (I would like to use OKTO and KORG) to the USB 3.0 ports of the PC motherboard (no other device connected to a USB 3.0 port), and again I will use the second, independent PC for recording. I will get back to you when I finish these additional simplified experiments, even though I believe the results will remain unchanged.

It is disappointing that you are unwilling to use a REW-based setup so that we can more directly compare our methods. As a compromise, can you record 30 seconds of a high-frequency tone (15+ kHz) using the setup without the mixer? I ran this test using an ADC running at 192 kHz connected to a separate computer and the results were as expected.

Without drift correction enabled it goes from fully in sync to fully out of sync in less than 1 second.

No Drift Correction - in sync - 0.261 s.png

No Drift Correction - out of sync - 0.902 s.png
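
For a rough sense of scale: a 15 kHz tone has a period of about 66.7 uSec, so drifting from in sync to fully out of sync (half a period) within 1 second only requires the two free-running clocks to differ by a few tens of ppm. Here is a trivial sketch of that arithmetic (the ppm value is just an example, not a measured figure).

```python
# Rough arithmetic: how fast a clock-rate mismatch shifts two 15 kHz tones apart.
f_tone = 15_000.0                 # test tone frequency in Hz
period_us = 1e6 / f_tone          # one period in microseconds (~66.7 uSec)

ppm_mismatch = 50                 # example clock-rate difference between the two DACs
drift_us_per_s = ppm_mismatch * 1e-6 * 1e6   # microseconds of offset accumulated per second

print(f"period of 15 kHz tone      : {period_us:.1f} uSec")
print(f"drift at {ppm_mismatch} ppm mismatch   : {drift_us_per_s:.1f} uSec per second")
print(f"time to drift half a period: {0.5 * period_us / drift_us_per_s:.2f} s")
```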


With drift correction enabled the results are pretty good. The rough timing is about correct, but sometimes the highest samples are aligned and sometimes they are one sample off. This seems to match my oscilloscope measurements with drift correction, where rough timing is maintained but there is some shifting of the waveforms relative to each other.

Drift Correction - ch 2 high sample comes first - 5.060 s.png


Drift Correction - high samples aligned - 8.494 s .png


Assuming you are achieving similar results to my drift correction measurements, my only explanation is that there is drift correction resampling occurring somewhere in your chain (ASIO4ALL?).

In addition to recording the 15 kHz tone, it would also be interesting to use REW's scope functionality to view the waveforms from each DAC; this could be done with the ADC connected to a separate computer running REW. I think this would be a good way to see whether drift correction resampling is occurring.

Michael
 
Last edited:
Also, I want to thank you for exploring this topic.

There is so much discussion of multi-device setups using ASIO4ALL and macOS on the internet but so little actual data available. It seems that most people think ASIO4ALL does not have drift correction for multiple devices, but I have also seen some speculation that it does. Your testing will definitely help confirm or deny that.

Michael
 
Hello @mdsimon2,

I fully agree with you that we are having really useful and important data exchanges and discussions which are worthwhile not only for the two of us but also for many people interested in DSP-based multichannel audio setups who would like to establish fully time-aligned, synchronized air sound using multiple SP drivers.

I do believe that neither EKIO nor ASIO4ALL has a "drift sync" correction/compensation mechanism, but ASIO4ALL (and each of the ASIO drivers) quite possibly creates the "startup timing discrepancy" between multiple DAC units, as I mentioned in the introduction of my post #783 and objectively found throughout the 3-day experiments there.

Please understand that I prepared the test "full orchestra" music track very carefully, inserting my "very carefully prepared" 10 kHz timing markers at the beginning, middle and end of the track for precision timing analysis (better than 10 uSec accuracy).

I believe that, at least in my setup of "Windows-JRiver --> DSP-EKIO --> ASIO4ALL-USB_ASIO --> multiple DAC units", my 3-day experiments in #783 clearly showed that I have no "internal drift of the discrepancy" at all throughout the test music track, but I do have a "relative startup timing discrepancy" between the DAC units, and such "startup timing lag(s)" can be compensated by the upstream group delay of the DSP EKIO, but only if the "relative startup discrepancy" remains unchanged during the sessions.

The enlarged view of the "carefully prepared" my 10 kHz timing marker is shown in this diagram;
WS00006212.JPG


And it was quite easy for me to observe/measure the "relative startup discrepancy" in my 3-day experiments in #783, since the timing markers in the mixed sound were well separated, more than 1.5 msec from each other.

I can identify/measure smaller "timing discrepancies", however, even when the relative difference is less than 0.2 msec (200 uSec), as shown in this diagram where I simulated a 158 uSec "timing discrepancy" between the two markers in the mixed sound;
WS00006219.JPG


Please note that the simulated 158 uSec discrepancy in the above diagram corresponds to only 7 units of the absolute time resolution (22.7 uSec) of the 44.1 kHz test track.
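
Just to double-check that arithmetic (a trivial sketch, nothing more);

```python
# Check: how many 44.1 kHz sample periods fit into a 158 uSec discrepancy?
fs = 44100
discrepancy_us = 158
samples = discrepancy_us * 1e-6 * fs
print(f"{discrepancy_us} uSec = {samples:.2f} samples (~7 sample periods of {1e6 / fs:.1f} uSec)")
```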

In any case, over the coming weekend I will perform the following experiments without the analog audio mixer EDIROL M-10E, assessing the relative time-domain discrepancy of the timing markers by reading the L-CH marker vs. R-CH marker of the recorded sound in these simplified experimental setups (a sketch of this L-vs-R readout follows after the diagrams below). Of course, I will again also try to compensate the "relative discrepancy" by using the "group delay" of the upstream EKIO for possible complete synchronization of the "outputs" from the two DAC units, but only if the "relative discrepancy" remains unchanged during the sessions.
WS00006236.JPG

and,
WS00006254.JPG
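
Just for reference, the L-vs-R marker offset in such a stereo recording could also be estimated numerically by cross-correlating a short window around one of the timing markers; the following is a hypothetical sketch (the file name and the analysis window position are assumptions), not the Adobe Audition cursor procedure I actually use.

```python
# Hypothetical sketch: estimate the relative L/R delay in a stereo recording
# of the simplified (mixer-less) setup by cross-correlating a short window
# around one of the 10 kHz timing markers. Requires: numpy, scipy, soundfile.
import numpy as np
import soundfile as sf
from scipy.signal import correlate

x, fs = sf.read("day4_recording.wav")          # assumed stereo recording file name
left, right = x[:, 0], x[:, 1]

# Analysis window around the first marker (assumed to sit near t = 5 s; adjust as needed).
i0, i1 = int(4.5 * fs), int(5.5 * fs)
l_seg, r_seg = left[i0:i1], right[i0:i1]

# Cross-correlate the two channels; the lag of the peak gives the relative offset.
corr = correlate(l_seg, r_seg, mode="full")
lag = int(np.argmax(corr)) - (len(r_seg) - 1)
delay_samples = -lag                           # positive: right channel arrives later than left
print(f"R relative to L: {delay_samples} samples = {delay_samples / fs * 1e6:.1f} uSec at fs = {fs} Hz")
```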


Please stay tuned, and let's continue the discussion after I get back to you with the results of the above additional simplified Day-4 experiments. ;)
 
Last edited:

The Toyota Century is using Yamaha Zylon drivers!

I know the NS-5000 was not as good as your setup, @dualazmak, but I wonder whether, if you converted the NS-5000 into a tri-amplified setup, you would get even better results! :)

Thank you, I really do hope so!! ;)
But if I were to apply such DIY to the NS-5000, the product warranty would be completely lost, I believe. :facepalm:

As you may also be aware, I recently intensively auditioned the NS-5000 at Yamaha's dedicated audio cottage near Hamamatsu City, where Yamaha's HQ is located.
 