
Delta-sigma vs “Multibit”: what’s the big deal?

mansr

Major Contributor
Joined
Oct 5, 2018
Messages
4,685
Likes
10,705
Location
Hampshire

Esotechnik

Member
Joined
Mar 9, 2019
Messages
72
Likes
5
Location
Russia
http://161.111.70.163/bitstream/10261/92625/1/Digital Tests.pdf
Digital Tests for D-S modulators - Gildas Léger (2006)
Page 149:
[attachment: scan of page 149 of the thesis (PDF page 165)]
 

cjm2077

Active Member
Joined
Jan 22, 2020
Messages
160
Likes
261
http://161.111.70.163/bitstream/10261/92625/1/Digital Tests.pdf
Digital Tests for D-S modulators - Gildas Léger (2006)
Page 149:
[attachment: the same page-149 scan]

I hate to break this to you, but all converters have non-linearities. It's right there in the data sheets. If a delta-sigma converter ends up with better performance in terms of integral and differential non-linearity, it is more linear than another type of converter. That it is not easily modeled is irrelevant. So pick your converter based on actual performance, not on something you read once that stuck in your head. If I can use a product like this to control the generation of plasma, I think you can live with one in your stereo system.
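For anyone who wants those two metrics pinned down: integral and differential non-linearity (INL/DNL) are simple functions of the measured per-code output levels. Here is a minimal numpy sketch of the textbook endpoint-fit definitions; the function name and the 4-bit example data are made up for illustration, not taken from any datasheet:

```python
import numpy as np

def dnl_inl(levels):
    """Endpoint-fit DNL and INL in LSB from measured DAC output levels,
    one voltage per input code (textbook definition, not a vendor procedure)."""
    levels = np.asarray(levels, dtype=float)
    lsb = (levels[-1] - levels[0]) / (len(levels) - 1)   # average step size
    dnl = np.diff(levels) / lsb - 1.0                    # per-code step-size error
    ideal = levels[0] + lsb * np.arange(len(levels))     # endpoint straight line
    inl = (levels - ideal) / lsb                         # deviation from that line
    return dnl, inl

# Hypothetical 4-bit DAC whose mid-scale step is 0.3 LSB too large,
# the kind of MSB mismatch a ladder DAC can suffer from:
levels = np.arange(16, dtype=float)
levels[8:] += 0.3
dnl, inl = dnl_inl(levels)
print(f"worst DNL: {np.abs(dnl).max():.2f} LSB, worst INL: {np.abs(inl).max():.2f} LSB")
```

These are the numbers a datasheet reports; whichever architecture scores better on them is the more linear converter, which is the point being made above.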
 

lightrush

Member
Joined
Jul 16, 2020
Messages
17
Likes
6
...

Sinewaves need to be 'perfect'.
Some manufacturers (and audiophools), however, keep insisting filterless NOS DACs are THE best way to reproduce a digitized recording.
Comparing the actual 'analog' waveforms that were recorded to the actual reproduced analog waveforms shows how wrong their thinking is.
Below is a picture taken from Computeraudiophile and a bit 'misused' by me to make my point.
[image: two waveform traces, a smooth continuous signal on top and a stepped 'staircase' version of it below]

Consider the upper trace the 'original' recorded signal and the lower one the output of a filterless NOS ladder DAC.
The upper signal is also what one gets when looking at the output of any (properly reconstruction-filtered) R2R (ladder) or DS DAC.

The lower trace is also what the 'digital value representation' of the signal actually sent to the DAC chip looks like.
The upper trace is the waveform represented by the lower trace, as it is supposed to come out of a properly constructed DAC.
...

Please bear with my not-so-versed brain here and entertain it for a moment.

I understand the reason why the bottom is what comes out of a filterless R2R, but given that Nyquist-Shannon says a 44.1 kHz sample rate captures a 22 kHz band-limited signal perfectly, doesn't it follow that the irregularities seen out of such a DAC at this scale don't actually matter? In other words, once we are outputting 44,100 voltage values per second, without filling in the blanks between each of those voltages, wouldn't that signal contain all the audio information from the original signal? And wouldn't the extra values generated in between by oversampling or lookahead/behind be just superfluous?
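To make the question concrete, here is a minimal numpy sketch (assumed values: a 10 kHz tone and a 2 ms window, chosen only for illustration) comparing ideal Whittaker-Shannon sinc reconstruction of 44.1 kHz samples against simply holding each voltage, which is what a filterless NOS DAC does:

```python
import numpy as np

# Sketch with assumed values (10 kHz tone, 2 ms window): compare ideal
# Whittaker-Shannon (sinc) reconstruction of 44.1 kHz samples against a
# zero-order hold, i.e. the "staircase" of a filterless NOS DAC.
fs = 44100.0
t = np.arange(0, 0.002, 1 / (64 * fs))        # dense grid standing in for "analog" time
f0 = 10000.0                                  # band-limited: 10 kHz < 22.05 kHz
original = np.sin(2 * np.pi * f0 * t)

n = np.arange(int(0.002 * fs))                # sample indices within the window
samples = np.sin(2 * np.pi * f0 * n / fs)

# Ideal reconstruction: each sample weights a sinc pulse centred on its instant
sinc_rec = (samples * np.sinc(fs * t[:, None] - n)).sum(axis=1)

# Zero-order hold: repeat each sample value until the next sampling instant
zoh_rec = samples[np.minimum((t * fs).astype(int), len(samples) - 1)]

mid = slice(len(t) // 4, 3 * len(t) // 4)     # ignore edges of the truncated sinc sum
print("sinc reconstruction error:", np.abs(sinc_rec - original)[mid].max())  # small
print("zero-order-hold error:    ", np.abs(zoh_rec - original)[mid].max())   # order 1
```

Away from the window edges the sinc sum lands back on the original to within truncation error, while the held staircase misses by a large fraction of full scale: the samples do contain all the information, but outputting the voltages without interpolation is not the reconstruction the theorem promises.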
 

Don Hills

Addicted to Fun and Learning
Joined
Mar 1, 2016
Messages
708
Likes
464
Location
Wellington, New Zealand
The sinc reconstruction filter at the output of the DAC is not optional. It is required to fulfil the Nyquist-Shannon theorem. Without it, the frequency response starts falling off well before 22 kHz and you don't get all of the original captured signal. You also get a lot of >22 kHz content (images of the baseband signal) which can be problematic for downstream equipment.
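To put a number on that ultrasonic content (a sketch with assumed values, not measured data from this thread): the staircase's first image lands at fs − f0 and is attenuated only by the hold's sinc droop, so for a 10 kHz tone it sits only about 11 dB below the wanted signal.

```python
import numpy as np

# Sketch with assumed values (10 kHz tone; a 64x dense grid stands in for
# the analog domain): spectrum of the unfiltered 44.1 kHz "staircase" output.
fs, f0, ratio = 44100, 10000, 64
samples = np.sin(2 * np.pi * f0 * np.arange(4096) / fs)
staircase = np.repeat(samples, ratio)              # hold each value for 1/fs seconds

spec = np.abs(np.fft.rfft(staircase * np.hanning(staircase.size)))
freqs = np.fft.rfftfreq(staircase.size, d=1 / (ratio * fs))

above = freqs > fs / 2                             # everything past 22.05 kHz
img_freq = freqs[above][np.argmax(spec[above])]    # expect the image at fs - f0
img_db = 20 * np.log10(spec[~above].max() / spec[above].max())
print(f"strongest ultrasonic image near {img_freq:.0f} Hz, "
      f"only {img_db:.1f} dB below the 10 kHz tone")
```

That image is exactly what the reconstruction filter is there to remove.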
 

lightrush

Member
Joined
Jul 16, 2020
Messages
17
Likes
6
Interesting. In that case, is the reconstruction filter represented / proven by some math? That is, is the reconstruction filter provably lossless beyond certain precision? Because the way I understand the sampling theorem - it proves that you can record all the information of a band-limited signal without any loss under the conditions it prescribes. It proves you don't need infinite precision (infinite sampling rate) to capture it. So in the other direction, if simply outputting the voltages with the prescribed sample rate isn't enough to reverse the process, can the reconstruction filter in conjunction with the requisite DAC provably produce the original information, assuming perfect analog path following downstream from it?

EDIT:
What makes me wonder about this is that the sampling theorem tells me I can record the information without loss under certain conditions, without needing infinite precision. However, in the other direction, it seems that some devices create more data points (voltage signals) than others, and I can see a potential need for infinite precision (generating infinitely many in-between points) to reproduce the original signal without loss; and since infinite precision is impossible, the reproduction couldn't be lossless. I would expect that if there's a proof that sampling a specific signal doesn't require infinite precision to be lossless, then either part of that proof or another proof should show that you can also reproduce the signal without the need for infinite precision. I don't know enough about it and it bothers me. It would be a shame if the sampling theorem didn't work in reverse, but knowing math in other fields, I know theorems don't have to hold in all directions. I obviously don't know the sampling theorem mathematically, just *some* of its consequences. Help me understand!
 
Last edited:

SIY

Grand Contributor
Technical Expert
Joined
Apr 6, 2018
Messages
10,511
Likes
25,348
Location
Alfred, NY
Interesting. In that case, is the reconstruction filter represented / proven by some math? That is, is the reconstruction filter provably lossless beyond certain precision? Because the way I understand the sampling theorem - it proves that you can record all the information of a band-limited signal without any loss under the conditions it prescribes. It proves you don't need infinite precision (infinite sampling rate) to capture it. So in the other direction, if simply outputting the voltages with the prescribed sample rate isn't enough to reverse the process, can the reconstruction filter in conjunction with the requisite DAC provably produce the original information, assuming perfect analog path following downstream from it?

EDIT:
What makes me wonder about this is that the sampling theorem tells me I can record the information without loss under certain conditions without needing infinite precision. However in the other direction, it seems that some devices create more data points (voltage signals) than others, and I can see a potential for needing infinite precision (generate infinitely many in-between points) to reproduce the original signal without loss and since infinite precision is impossible, the reproduction couldn't be lossless. I would expect that if there's a proof that sampling doesn't require infinite precision to be lossless then either part of that or there should be another proof to show that you can also reproduce the signal without the need for infinite precision. I don't know enough about it and it bothers me. Would be a shame if the sampling theorem doesn't work in reverse but knowing math in other fields I know theorems don't have to hold in all directions. Help me understand!
The answer to your first question is the Sampling Theorem, which requires image removal.
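For reference, the constructive half of the sampling theorem is the Whittaker-Shannon interpolation formula: for x(t) band-limited below fs/2, with T = 1/fs,

\[
x(t) = \sum_{n=-\infty}^{\infty} x[n]\,\operatorname{sinc}\!\left(\frac{t - nT}{T}\right),
\qquad \operatorname{sinc}(u) = \frac{\sin(\pi u)}{\pi u},
\]

and the equality is exact. So ideal reconstruction is provably lossless: each sinc term is itself band-limited, and summing them performs precisely the image removal mentioned above. A real reconstruction filter is a finite approximation of that sinc kernel.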
 

lightrush

Member
Joined
Jul 16, 2020
Messages
17
Likes
6
The answer to your first question is the Sampling Theorem, which requires image removal.

Does that mean that multibit and DS DACs, with filters, oversampling, etc. correctly implemented as required by the Sampling Theorem, are both covered under it and therefore equivalent in their ability to reproduce the original signal?
 
Last edited:

solderdude

Grand Contributor
Joined
Jul 21, 2018
Messages
16,051
Likes
36,427
Location
The Neitherlands
Yes, that's what it means. In addition, linearity is important; this is where DS is better in general. Then there are timing accuracy, deglitching, post-filtering, noise contribution, distortion (in all its forms) and PSRR, as well as the reconstruction filter implementation, in real-world applications.
 

Veri

Master Contributor
Joined
Feb 6, 2018
Messages
9,598
Likes
12,040
Does that mean that multibit and DS DACs with filters, oversampling, etc. correctly implemented as required by the Sampling Theorem...
Well yes, of course. It all started with multibit DACs (like the TDA1541) and their reconstruction filters (like the SAA7220), long before delta-sigma DACs.
 

barracuda156

New Member
Joined
Aug 15, 2021
Messages
3
Likes
0
There is a reason the industry moved on from R2R to other topologies that let us get around these serious issues.

The fact that an industry predominantly chose a particular technology says nothing about the technology as such; it only means that it turned out better economically. In tech specifically there are many examples where superior technologies were dropped, or pushed into a niche, in favor of inferior ones which were significantly cheaper (directly or indirectly).
The photo industry went with inferior CMOS sensors, while superior CCDs remained in extremely expensive top-end cameras (and, by the way, full frame was initially dropped by Pentax and Contax because it was too expensive). Apple dropped the superior Power architecture in favor of cheaper but inferior Intel (only now returning to RISC-like chips), FireWire was dropped for being too expensive for mass implementation, SGI went out of business, etc.
Also, you have to keep in mind what is called path dependence. Once a certain arrangement is in place, switching to an alternative, or returning to a previously dropped one, may be too costly.
 

gvl

Major Contributor
Joined
Mar 16, 2018
Messages
3,495
Likes
4,081
Location
SoCal
Well, objectively at least, DS technology is leaps ahead of what the best of the best multibit chips could achieve at the time, and those were already pushing that technology to its limit. So the "superior" designation is at least arguable in this context, as there is strong engineering evidence that it's actually not.
 

SIY

Grand Contributor
Technical Expert
Joined
Apr 6, 2018
Messages
10,511
Likes
25,348
Location
Alfred, NY
The fact that an industry predominantly chose a particular technology says nothing about the technology as such; it only means that it turned out better economically. In tech specifically there are many examples where superior technologies were dropped, or pushed into a niche, in favor of inferior ones which were significantly cheaper (directly or indirectly).
The photo industry went with inferior CMOS sensors, while superior CCDs remained in extremely expensive top-end cameras (and, by the way, full frame was initially dropped by Pentax and Contax because it was too expensive). Apple dropped the superior Power architecture in favor of cheaper but inferior Intel (only now returning to RISC-like chips), FireWire was dropped for being too expensive for mass implementation, SGI went out of business, etc.
Also, you have to keep in mind what is called path dependence. Once a certain arrangement is in place, switching to an alternative, or returning to a previously dropped one, may be too costly.
However, this is not the case here. As is often so with decades of smart engineering, the highest-performance way is also the economical one.
 

blueone

Major Contributor
Forum Donor
Joined
May 11, 2019
Messages
1,195
Likes
1,545
Location
USA
The fact that an industry predominantly chose a particular technology says nothing about the technology as such; it only means that it turned out better economically. In tech specifically there are many examples where superior technologies were dropped, or pushed into a niche, in favor of inferior ones which were significantly cheaper (directly or indirectly).
The photo industry went with inferior CMOS sensors, while superior CCDs remained in extremely expensive top-end cameras (and, by the way, full frame was initially dropped by Pentax and Contax because it was too expensive). Apple dropped the superior Power architecture in favor of cheaper but inferior Intel (only now returning to RISC-like chips), FireWire was dropped for being too expensive for mass implementation, SGI went out of business, etc.
Also, you have to keep in mind what is called path dependence. Once a certain arrangement is in place, switching to an alternative, or returning to a previously dropped one, may be too costly.

You're drawing conclusions from generalizations that aren't correct. Technologies usually don't become dominant because they turn out better economically; they dominate because of significantly greater R&D investment over time. You can pretty much always just follow the R&D money. To use your Intel example, Intel became the leading CPU provider because IBM chose their 8088 CPU for the IBM PC. Many thought the Motorola 68000 was a superior CPU design, including me, but it didn't matter: Motorola wasn't ready for large-scale production in time. The high volume of the emerging PC market gave Intel the opportunity to invest a superior level of R&D spending in chip fabrication and achieve a technological lead that lasted until very recently. Being a generation or two ahead in chip fabrication technology compensated for a lot of design weaknesses in the x86 architecture.

Was MS-DOS a technically superior OS design? Of course not, but IBM also chose it for the PC, providing a huge addressable market, and DOS revenues begat Windows R&D spending, and PC Windows begat Windows Server, and a few decades later Microsoft used their R&D spending superiority to achieve a $2.2T market capitalization. Following the money worked for the OS, and enabled related applications (e.g. Office, databases, games, etc.) too. Even Apple can't keep up. (Excel beats Numbers hands-down, IMO, for spreadsheet functionality.)

Another example is Ethernet. In computer networking, Ethernet R&D spending dominates that of all other networks, client or server or wide-area, by a huge margin. Ethernet is not necessarily a superior wired networking technology, but it was an industry standard design (as opposed to proprietary alternatives) that allowed many companies to invest, compete, and have their products interoperate to create workable solutions. Industry R&D spending on Ethernet ballooned, eventually exceeding that for all other message-passing networks combined. How many open switched networks were there in the 1990s? One, that I'm aware of. [This is incorrect. See my correction below.]

Also, brilliantly IMO, Ethernet used algorithmic logical topology generation, which meant you could create a random physical topology (plug anything into anything), and if a physical path was possible Ethernet would use the Spanning Tree Protocol to calculate a loop-free route. Ethernet, to this day, has some silliness in it from the days of its CSMA/CD bus origin, but huge R&D investment (and openness) made Ethernet the only LAN that matters, warts and all.

I was just reading that 800 Gbps-per-port IP (the circuit designs licensed by chip developers to integrate into their higher-level designs) is becoming available for producing data center switch and server interface chips. Ethernet is also evolving beyond its original specification through massive investment by cloud computing vendors and their suppliers, displacing the Spanning Tree Protocol strategy. They use proprietary programmable switches and centralized network management to program the switches (enabling more scalable networks for cloud data centers), making standard Ethernet more of a link specification to the cloud computing companies than a full-on OSI Layer 2 network specification, but Ethernet links and PHYs are still dominant due to the highest level of R&D spending, and IMO will likely continue to be.

I remember 20 years ago some people thought WIMAX should displace CDMA and GSM cellular technologies to make IEEE 802-based technology open and prevalent worldwide for data and voice communications, increasing innovation through openness and decreasing costs. Maybe it was superior, but it didn't matter; the world was investing massively in cellular technology, and now 5G and 6G are the future, and WIMAX is a "Huh? What's that?" part of history.

IMO, the same is true for audio and video. You can always follow the money, unless some disruptive innovation comes along. In computing, everyone thinks that disruptive technology is quantum computing, which probably has a long way to go to reach commercial practicality for general-purpose systems, but fear of being on the wrong side of that disruption is causing massive industry and research investment. I think streaming music is like this. Even Amazon and Apple are jockeying for position, for fear of being left out, and of the potential side effects on other businesses.

[Edited to correct grammar, and insert a note to see my post below.]
 
Last edited:

barracuda156

New Member
Joined
Aug 15, 2021
Messages
3
Likes
0
You're making conclusions based on generalizations that aren't correct. Technologies usually don't become dominant because they turn out better economically, they dominate because of significantly greater R&D investment over time.

First of all, I did not try to "prove" multibit technology is necessarily better; I just pointed out that the argument to the contrary that I quoted is fallacious. Market choices are economic choices, not technological ones. We live in a world of scarcity, so everything is necessarily a compromise. Even if rails made of titanium were better (I am not an engineer, it's a hypothetical example), the market won't choose them, because those resources have more preferred alternative uses.
And R&D decisions are also economic decisions.
 

blueone

Major Contributor
Forum Donor
Joined
May 11, 2019
Messages
1,195
Likes
1,545
Location
USA
Another example is Ethernet. In computer networking, Ethernet R&D spending dominates that of all other networks, client or server or wide-area, by a huge margin. Ethernet is not necessarily a superior wired networking technology, but it was an industry standard design (as opposed to proprietary alternatives) that allowed many companies to invest, compete, and have their products interoperate to create workable solutions. Industry R&D spending on Ethernet ballooned, eventually exceeding that for all other message-passing networks combined. How many open switched networks were there in the 1990s? One, that I'm aware of.

Right after I made that post, I went to work out. Apparently the workout got my blood flowing to my brain, offsetting the onset of senility. There were two other open-specification switched interconnects in the 1990s: Fibre Channel and ATM. ATM has been largely replaced by Ethernet, but Fibre Channel still lives on in storage area networks (SANs) in enterprise data centers (though even in SANs Ethernet is replacing it).
 

blueone

Major Contributor
Forum Donor
Joined
May 11, 2019
Messages
1,195
Likes
1,545
Location
USA
And R&D decisions are also economic decisions.

True, but they're mostly business decisions, which are different from economic decisions. Economic decisions have solutions rooted in optimization; business decisions are just about getting ahead of the competition. That's why so many technologies in so many markets that were not best of breed end up winning for, perhaps, less-than-optimal reasons. Then dominant R&D spending keeps them dominant in the market. Of course, that obviously doesn't explain the resurgence of vinyl...
 