
DMI/UMI and you! A guide to your motherboard's chipset bandwidth.

ThatM1key

Addicted to Fun and Learning
Forum Donor
Joined
Mar 27, 2020
Messages
906
Likes
705
Location
USA
So what is DMI and why is it important? DMI is the bandwidth link between your motherboard's chipset (PCH/southbridge) and your CPU. DMI is important because its bandwidth is shared between your USB ports, non-CPU-linked PCIe slots, SATA ports and more. Knowing this will help you understand why your USB DAC stutters when, for example, you run 2 PCIe 3.0 NVMe M.2s in RAID 0 through the DMI 3.0 bus.

First, we have to talk about PCIe lanes. PCIe lanes are the number of bandwidth lanes a chipset and a CPU can handle at once. For example, a 6th-generation Intel CPU (something like an i3-6100) has 16 PCIe lanes, which feed the top CPU-linked PCIe 3.0 x16 slot. A modern Intel i3-12100 has 20 PCIe lanes: 16 for the top CPU-linked PCIe 5.0/4.0 x16 slot, and the rest for the CPU-linked PCIe 4.0 NVMe slot. CPU PCIe lanes can be split with other CPU-linked slots if the board has them. The chipset has its own PCIe lanes, but some are already shared with USB ports and such. Although some motherboards appear to have multiple x16 slots, most of the time the extra slots are electrically only x4.
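As a rough rule of thumb, each PCIe generation doubles the per-lane bandwidth. Here's a minimal sketch of that math (the function name and round numbers are mine; real throughput is a bit lower due to link encoding overhead):

```python
def pcie_bandwidth_mb_s(gen: int, lanes: int) -> int:
    # Gen 1 is ~250 MB/s per lane and each generation roughly doubles it,
    # so gen 3 works out to ~1,000 MB/s per lane (4,000 MB/s for a x4 link).
    return 250 * (2 ** (gen - 1)) * lanes

print(pcie_bandwidth_mb_s(3, 16))  # x16 slot on a 6th-gen CPU: 16,000 MB/s
print(pcie_bandwidth_mb_s(5, 16))  # x16 slot on a 12th-gen CPU: 64,000 MB/s
```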

A chart of a modern Intel system (From "ExplainingComputers"):
Screenshot 2022-07-07 204341.png


Bandwidth in this thread means the total maximum speed if everything were running at once.

If you want a CPU-linked PCIe slot (bypassing DMI), your top PCIe slot supports that, but usually people use it for their GPU. Even though your CPU-linked and non-CPU-linked PCIe slots are not the same, they still draw from a pool of PCIe lanes (the amount depends on your CPU/chipset and how the lanes are divided/used).

For the longest time (2015–2021), Intel used DMI 3.0 on motherboards (4,000 megabytes/sec [PCIe 3.0 x4 speeds]), which is very slow. If you bought a very cheap motherboard in that era, you probably still got DMI 2.0 (2,000 megabytes/sec [PCIe 2.0 x4 speeds]). With DMI 4.0 (Intel 600 series), and depending on your motherboard, speeds are between 8,000 and 16,000 megabytes/sec.
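The DMI generations above can be summarized as a small lookup table, each version expressed as the PCIe link it's equivalent to (the dictionary name and labels are mine; the figures are the ones from the paragraph above):

```python
# Approximate DMI bandwidth by version, in megabytes/sec.
DMI_MB_S = {
    "DMI 2.0 (x4)": 2_000,   # PCIe 2.0 x4 equivalent - very cheap older boards
    "DMI 3.0 (x4)": 4_000,   # PCIe 3.0 x4 equivalent - most 2015-2021 boards
    "DMI 4.0 (x4)": 8_000,   # PCIe 4.0 x4 equivalent - lower-end 600 series
    "DMI 4.0 (x8)": 16_000,  # PCIe 4.0 x8 equivalent - e.g. Z690
}

for link, mb_s in DMI_MB_S.items():
    print(f"{link}: {mb_s:,} MB/s")
```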

Chart of Intel 600-series chipset differences (Picture from "TechPowerUp"):
GSgXWI3xpB4SCkp6.jpg



Chart of an Intel 600-series system using DMI 4.0 (picture from "ExplainingComputers"):
Screenshot 2022-07-07 202359.png


DMI 3.0 motherboards generally can have 1 PCIe 3.0 NVMe SSD (non-CPU-linked), which means a single drive can technically take up your whole DMI bandwidth, and you'll rarely see the max speed of your NVMe SSD. Also, in the DMI 3.0 era, if you wanted a CPU-linked PCIe 3.0 NVMe SSD alongside your GPU, your GPU's bandwidth would be cut in half from x16 to x8 (due to Intel's fixed CPU PCIe lane configurations). Luckily, most PCIe 3.0 GPUs don't lose meaningful performance from running at less than the full x16. When you add a SATA card to a non-CPU-linked slot, it's essentially like adding more SATA ports directly to the motherboard; the major downside is that the DMI bandwidth is divided further.

A picture of the "Expansion Options" of an i7-12700:
Screenshot 2022-07-07 204956.png




Picture of a Gigabyte MB that has a Non-CPU linked M.2 Slot:
Screenshot 2022-07-07 203759.png

*32 Gigabits/sec = 4,000 megabytes/sec, which is the DMI 3.0 Max Bandwidth limit.



If you have an Nvidia SLI board from the DMI 3.0 era, then you could have 2 or more slots talk directly to the CPU rather than just 1. If you used both slots, each would run at x8; with only 1 in use, it would run at x16. There are non-SLI modern motherboards that have 2 CPU-linked slots, but companies rarely advertise this feature, so you have to be knowledgeable about motherboards to spot it.

Picture of a MSI MPG Z690 that has two CPU-linked slots (Newegg):
13-144-484-V80 (1).jpg



These days, PCIe 4.0 motherboards (both those still on DMI 3.0 and those on DMI 4.0) can have 1 NVMe PCIe 4.0 SSD directly linked to the CPU, just like the top PCIe slot. Beware that there are "PCIe hybrid" motherboards, meaning your top PCIe slot and top NVMe slot are CPU-linked and PCIe 4.0 while your other non-CPU-linked slots are PCIe 3.0. There are also PCIe 5.0 hybrid motherboards (the top slot is PCIe 5.0 while other slots are 4.0 or even 3.0). Dual PCIe 5.0 slot motherboards can split bandwidth (x8/x8), which means you could run (if your board supports bifurcation) 2 PCIe 5.0 M.2 NVMe SSDs in RAID 0, for 32,000 megabytes/sec total.
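The 32,000 MB/s figure falls out of simple arithmetic, sketched below (variable names are mine; sequential bandwidth in RAID 0 roughly adds, real-world results will be somewhat lower):

```python
# One PCIe 5.0 lane is ~4,000 MB/s; each NVMe drive gets a x4 link,
# and a RAID 0 pair on a bifurcated x8 stripes the two drives together.
per_lane_gen5 = 250 * (2 ** 4)   # ~4,000 MB/s per PCIe 5.0 lane
per_drive = per_lane_gen5 * 4    # x4 link per drive: ~16,000 MB/s
raid0_total = 2 * per_drive      # two drives striped: ~32,000 MB/s
print(raid0_total)
```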

Currently there is no major real-world advantage in jumping from a SATA SSD to an NVMe SSD. In terms of value, though, buying a 2.5" SATA SSD over an NVMe SSD is a bad idea since the cost is about the same, so you might as well get the NVMe SSD. In the future, with Microsoft's DirectStorage API, games are expected to load far faster on current hardware, even to some extent on 2.5" SATA SSDs.

Theoretically, a USB PCIe card connected to an Intel CPU-linked slot could perform better than the on-board USB ports. The limiting factors I could imagine are PCIe lanes and the USB card's controller. This would be perfect for DACs, since they wouldn't have to share with DPC-latency-hungry ethernet controllers and other USB devices. I don't have much knowledge of AMD systems, but I have heard some of their USB ports actually do connect directly to the CPU.

A chart of a modern AMD system (by "ExplainingComputers"):
Screenshot 2022-07-07 204738.png




Edit (SATA M.2s): I didn't mention SATA M.2s because they're technically the worst value for consumers who want an SSD. A SATA M.2 takes up the same space as an NVMe M.2 but runs at SATA3 speeds while disabling 1–2 SATA ports. Since SATA M.2s cost about the same as 2.5" SATA SSDs and NVMe SSDs, it's not always dumb to buy a 2.5" SATA SSD: those can be used in old consoles and computers, while a SATA M.2 takes up a valuable slot on your motherboard that could've held a same-price NVMe SSD with 3x+ the speed. There is essentially no good use for SATA M.2s.

Edit (Chipset drivers): Every motherboard comes with chipset drivers, but hardly anyone notices. Windows 10 does a decent job of providing basic drivers for your chipset. It's similar to the "memory support list" for your motherboard: you should use the tested memory, but you don't have to. Even if you have chipset drivers installed via Windows Update, they're usually outdated, just like Windows-supplied video card drivers. What's the point of installing the proper drivers? Stability and potentially more performance.

Edit (CPU graphics/integrated graphics): These days it's a smart idea to install your integrated graphics drivers alongside your Nvidia/AMD video card drivers. There are a few benefits to having both installed (if you have integrated graphics and video outputs on your motherboard).
The benefits:
  • Resource dividing. A good example would be using your integrated graphics to record gameplay that's being rendered by your video card. Windows (or you) can choose which GPU an application uses. With proper balancing, you can gain FPS in your games.
  • Easy video passthrough. With modern Windows you can pass your video card's output through your motherboard's video outputs, or vice versa. This feature is great for people who have broken ports, or no ports at all, on their video card.
The catch with integrated graphics (when buying a new system) is that the iGPU-less version of a CPU is a lot cheaper, usually by about $50.

Edit (RAID of NVMe PCIe 3.0 M.2s): On some Nvidia SLI PCIe 3.0 motherboards (those with 2 or more CPU-linked slots), you can't RAID NVMe drives — not really because of age, but because of missing bifurcation support. Despite the 2nd slot on these boards offering x8, they can only detect 1 drive, even though they could technically run 2 PCIe 3.0 drives at full speed. Bifurcation lets the system split one slot (usually x16 into x4/x4/x4/x4, or x8 into x4/x4). If you have a PCIe 4.0 motherboard you probably have bifurcation support, but in PCIe 3.0 land it's rare. There is one product that can run 2 PCIe 3.0 NVMe drives at full speed without bifurcation, but it's pricey: the WD Black AN1500. It almost reaches single PCIe 4.0 NVMe speeds by running RAID 0 across 2 PCIe 3.0 NVMe drives on an actual x8 slot on your motherboard. The value of this product (1TB for $220) is very sad considering the Samsung 980 Pro is faster and a lot cheaper (1TB for $120), although the Samsung is PCIe 4.0. At the end of the day, at least currently, the benefit of a 1,000+ MB/sec drive over a SATA SSD is not that noticeable. Current games don't utilize those speeds, but future games that use Microsoft's DirectStorage API (similar to the PS5's) will benefit from these high-speed SSDs.
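The bifurcation splits mentioned above can be sketched as a tiny helper (the function is hypothetical, just illustrating the x16 → x4/x4/x4/x4 and x8 → x4/x4 arithmetic):

```python
# Bifurcation splits one physical slot into several narrower links so the
# system can enumerate one NVMe drive per x4 link. Boards without it only
# detect the first drive on a multi-drive riser card.
def bifurcate(slot_width: int, link_width: int = 4) -> list:
    if slot_width % link_width != 0:
        raise ValueError("slot width must divide evenly into links")
    return [link_width] * (slot_width // link_width)

print(bifurcate(16))  # [4, 4, 4, 4] - four NVMe drives in one x16 slot
print(bifurcate(8))   # [4, 4] - two drives in an x8 slot
```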


X570 and Z690: which is better?

In this section, we'll go over the bandwidth limitations, manufacturer limitations, etc., of each chipset. Heads-up: for X570 we'll be using the "latest" CPU, and we'll be using motherboards that include 2 CPU-linked PCIe slots.




Z690:
Screenshot 2022-08-27 051657.png

On any Z690 board, you'll get 1 CPU-linked (PCIe 4.0) M.2 slot, at least 2 DMI-linked (PCIe 4.0) M.2 slots and 1 PCIe 5.0 x16 slot. If a Z690 board features 3 or more DMI-linked PCIe 4.0 M.2 slots, the bandwidth is split between the PCIe 4.0 M.2s. If there are 4 DMI-linked PCIe 4.0 M.2 slots and you're using 4 PCIe 3.0 M.2s, your bandwidth is not divided. Also, on Z690 boards, the bottom x4 PCIe slot is usually tied to a DMI-linked PCIe 4.0 M.2 slot, so if you populate that M.2 slot, either the x4 slot is disabled or both the x4 slot and the M.2 run at half speed.

Z690 uses DMI 4.0 (equivalent to PCIe 4.0 x8, i.e. 16,000 megabytes/sec), while Z590 uses DMI 3.0 (equivalent to PCIe 3.0 x4, i.e. 4,000 megabytes/sec). In terms of value, Z590 is awful if you care about chipset bandwidth, M.2s and/or PCIe devices.

If you spend more on a Z690 board, you can get 2 PCIe 5.0 x16 slots, which can run in x16/x0 or x8/x8. Sadly you cannot divide the x8/x8 into x8/x4+x4, due to Intel rather than the motherboard makers. However, you are technically future-proofed for 1 PCIe 5.0 M.2 SSD [with an adapter] (or 2 if you're not using a GPU), and since it is CPU-linked, in theory, with the right future APIs, you could load games/programs faster than a PS5. It would've been cool to run 2 PCIe 5.0 M.2s (with a GPU installed) or 4 PCIe 5.0 M.2s, but you can thank Intel for this weird limitation that technically shouldn't exist.

X570:
 
Last edited:

Katji

Major Contributor
Joined
Sep 26, 2017
Messages
3,001
Likes
2,221
I wonder whether it relates to glitches/like 1-second pauses in audio stream when a web browser tab/page is opened, with that sudden big hit on processor/memory.
...Which @Doodski mentioned yesterday, but considering I have browser already using much of 8GB.......like right now I have 16 browser threads/instances...
 

Doodski

Grand Contributor
Forum Donor
Joined
Dec 9, 2019
Messages
13,681
Likes
13,471
Location
Canada
I wonder whether it relates to glitches/like 1-second pauses in audio stream when a web browser tab/page is opened, with that sudden big hit on processor/memory.
...Which @Doodski mentioned yesterday, but considering I have browser already using much of 8GB.......like right now I have 16 browser threads/instances...
I think the issue was detailed and explained in post #63 by @Propheticus where the resource management feature of Chrome and YouTube stop allocating resources to the TABs that are not selected. This explains the laggy behavior. I do remember reading something about this feature months or years ago and so to have it explained again instantly made sense to me.
 

Katji

Major Contributor
Joined
Sep 26, 2017
Messages
3,001
Likes
2,221
It probably [most likely] applies to Firefox as well. But it's so unusual that I get it these days...I would need to check all the parameters, like is it when the new tab [or embedded player is Youtube,is there a threshold point how many tabs are actually loaded, not just there from last previous sessions... iow, although there are 30-40 tabs, most are not loaded, but sometimes there are like 4 or 5 Soundcloud tabs loaded, maybe Youtube...

[PS/edit:] Ok, right. (I hadn't seen that reply.) I suspect Youtube is a major factor.....and sometimes (more recently) seen odd/new behaviour like that with Soundcloud, like partially unloaded when I go back to it.
 
Last edited:

Doodski

Grand Contributor
Forum Donor
Joined
Dec 9, 2019
Messages
13,681
Likes
13,471
Location
Canada
It probably [most likely] applies to Firefox as well. But it's so unusual that I get it these days...I would need to check all the parameters, like is it when the new tab [or embedded player is Youtube,is there a threshold point how many tabs are actually loaded, not just there from last previous sessions... iow, although there are 30-40 tabs, most are not loaded, but sometimes there are like 4 or 5 Soundcloud tabs loaded, maybe Youtube...
I always run ~5 tabs for essentials and then things go up to 30 or more. If I am doing a torture test I'll open or attempt to open maybe 100 tabs. With the ~5 tabs I experience the laggy behavior.
 
OP
ThatM1key

ThatM1key

Addicted to Fun and Learning
Forum Donor
Joined
Mar 27, 2020
Messages
906
Likes
705
Location
USA
I think the issue was detailed and explained in post #63 by @Propheticus where the resource management feature of Chrome and YouTube stop allocating resources to the TABs that are not selected. This explains the laggy behavior. I do remember reading something about this feature months or years ago and so to have it explained again instantly made sense to me.
Would be nice if there was a switch for that
 

Katji

Major Contributor
Joined
Sep 26, 2017
Messages
3,001
Likes
2,221
Firefox has a lot of switches in the config file - really a lot - but I doubt it includes anything like that.

Few years back I had an add-on that let you unload tabs, and configure its behaviour, but it...got cancelled when FF changed the add-on API thing to restrict add-ons' access.
 

Doodski

Grand Contributor
Forum Donor
Joined
Dec 9, 2019
Messages
13,681
Likes
13,471
Location
Canada
Firefox has a lot of switches in the config file - really a lot - but I doubt it includes anything like that.

Few years back I had an add-on that let you unload tabs, and configure its behaviour, but it...got cancelled when FF changed the add-on API thing to restrict add-ons' access.
I gave up on Firefox decades ago after trying it several times both in the Linux and Windows environments. It was too much for me...lol. I was into IRC and maintaining my boxes and did not need more on my plate.
 

Doodski

Grand Contributor
Forum Donor
Joined
Dec 9, 2019
Messages
13,681
Likes
13,471
Location
Canada
There is. See. But the setting is per tab and does not stick. Every time you open a tab you don't want Chrome to 'discard' you have to set this again.
Thanks for the heads up on this issue. Dang Google nanny minders! :D
 

AnalogSteph

Major Contributor
Joined
Nov 6, 2018
Messages
2,268
Likes
2,166
Location
.de
Firefox has a lot of switches in the config file - really a lot - but I doubt it includes anything like that.

Few years back I had an add-on that let you unload tabs, and configure its behaviour, but it...got cancelled when FF changed the add-on API thing to restrict add-ons' access.
Dormancy still works for me.
...Which @Doodski mentioned yesterday, but considering I have browser already using much of 8GB.......like right now I have 16 browser threads/instances...
If your RAM usage looks like that, you don't really have enough. I have 16 gigs and that's commonly more than half full.

Knowing this information will help you understand why your USB DAC stutters when you use, for example, your ethernet heavily.
Bad example. I doubt that DMI congestion would be the issue in such a case, I mean even DMI 2.0 has far more bandwidth than the usual consumer-level 1 or 2.5 Gb Ethernet could ever occupy.
For the longest time (2015 - 2021), Intel used DMI 3.0 on motherboards (4,000 megabytes/sec [PCIE 3.0 x4 speed's]), which is very slow. If you bought a very cheap motherboard between that era, you probably got DMI 2.0 still (2,000 megabytes/sec [PCIE 2.0 x4 speed's]. With DMI 4.0 (Intel 600 series) and depending on your motherboard, speeds are between 8,000 megabytes/sec to 16,000 megabytes/sec.
Backbone bandwidth is always a compromise. If you size it for the sum of everything attached, you'll never have a bottleneck but chances are you'll never use the full bandwidth either, while wasting power on a fancy link. (Even ASPM can only do so much.)

Let's break things down for some examples:
H310 chipset (9th gen)
This basic chipset sports 4 lanes of 5 GT/s or 16 Gbit/s (2 GByte/s) worth of DMI bandwidth (PCIe 2.0 x4 levels), while having to support:
6 PCIe 2.0 lanes (x1, x2, x4) - ASUS PRIME H310 boards were using two PCIe x1 slots plus one M.2 x2 (other lanes may have gone to onboard peripherals)
4 USB 3.1 gen 1 (5 Gbit/s) + (10-4=)6 USB 2 (0.48 Gbit/s)
4 SATA 6 Gbit/s

What is the fastest conceivable transfer in this scenario?
Probably SATA to M.2 x2 = 2x 6 Gbit/s = 12 Gbit/s, followed by either to USB = 10 Gbit/s.
You could saturate the bus if the board bothered to implement a x4 port but it seems they generally didn't. You could also run a software RAID 0 on two of the SATA ports (which would max out DMI at 16 Gbit/s) but that's getting increasingly exotic.
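That back-of-the-envelope contention check can be written down directly (the names `DMI_H310_GBIT` and `dmi_load_gbit` are mine; the sketch assumes a copy between two chipset-attached devices crosses DMI twice, once up toward the CPU/memory and once back down):

```python
# H310 DMI capacity and the worst-case transfer above, in Gbit/s.
DMI_H310_GBIT = 16  # PCIe 2.0 x4 equivalent

def dmi_load_gbit(read_gbit: float, write_gbit: float) -> float:
    # Both the read stream and the write stream traverse the DMI link.
    return read_gbit + write_gbit

sata_to_m2 = dmi_load_gbit(6, 6)  # SATA read (6) + M.2 x2 write (capped at 6)
print(sata_to_m2)                          # 12 Gbit/s
print(sata_to_m2 > DMI_H310_GBIT)          # False - DMI is not saturated
```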

H510 chipset (11th gen)
This basic chipset sports 4 lanes of 8 GT/s or 32 Gbit/s (4 GByte/s) worth of DMI bandwidth (PCIe 3.0 x4 levels), while having to support:
6 PCIe 3.0 lanes (x1, x2, x4) - 2 PCIe x1 slots + 1x M.2 x4 seems to be a common config
4 USB 3.1 gen 1 (5 Gbit/s) + 6 USB 2 (0.48 Gbit/s)
4 6 Gbit/s SATA

The 4 extra PCIe 4.0 lanes on 11th gen CPUs seem to be generally left unused.

Here you can saturate DMI with an M.2 SSD but only with transfers from or to memory, which is a bit pointless. Other scenarios are little more demanding than for the H310 above.

Now let's get fancy:
Z590 chipset (11th gen)
This chipset sports 8 lanes of 8 GT/s or 64 Gbit/s (8 GByte/s) worth of DMI bandwidth (PCIe 3.0 x8 levels), while having to support:
24 PCIe 3.0 lanes (x1, x2, x4), in addition to 4 PCIe 4.0 lanes built into the processor
3 USB 3.2 Gen 2x2 (20Gb/s) Ports, +
(10-3=) 7 USB 3.2 Gen 2x1 (10Gb/s) or Gen 1x1 (5Gb/s) Ports, +
(14-10=) 4 USB 2.0 Ports (0.48 Gbit/s)
6 6 Gbit/s SATA

Z590 boards have up to three M.2 x4 slots, one being fed from CPU lanes and two others via the chipset.

You can make DMI break a sweat when:
* copying data between the two chipset M.2 slots (64 Gbit/s)
* copying from CPU-attached PCIe 4.0 M.2 to a RAID 0 of chipset M.2s (same)
* copying between USB 3.2 gen 2x2 and chipset M.2 (40 Gbit/s)
It still isn't a severe bottleneck though.

Where it gets interesting in terms of DMI congestion is the
Z390 chipset (9th gen)
with:
4 lanes of 8 GT/s or 32 Gbit/s worth of DMI bandwidth (PCIe 3.0 x4 levels), while having to support:
24 PCIe 3.0 lanes (x1, x2, x4)
6 USB 3.2 gen 2 (10 Gbit/s) + 4 gen 1 (5 Gbit/s) ports
(Z490 is similar.)

If two M.2 slots are installed as usually the case, transfers between them are clearly bottlenecked by DMI bandwidth (32 Gbit/s < 2x32 Gbit/s).
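The Z390 bottleneck can be put in one line (function name is mine; the assumption, as above, is that a drive-to-drive copy pushes both its read and write streams through the same DMI link):

```python
# Each chipset M.2 x4 drive can move ~32 Gbit/s, but a copy between two of
# them shares one 32 Gbit/s DMI link, so it tops out at roughly half of DMI.
def effective_copy_gbit(drive_gbit: float, dmi_gbit: float) -> float:
    return min(drive_gbit, dmi_gbit / 2)

print(effective_copy_gbit(32, 32))  # Z390: half of what the drives could do
```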
 

Katji

Major Contributor
Joined
Sep 26, 2017
Messages
3,001
Likes
2,221
If your RAM usage looks like that, you don't really have enough. I have 16 gigs and that's commonly more than half full.
Maybe I exaggerated, or there was more happening / more loaded then.
Not too bad, considering:

1657632637995.png


1657632541288.png



:rolleyes:

1657632765716.png



I suppose I shouldn't think of getting an iPad.
 