Troubleshooting 10GbE throughput stability

Author:
First version: 2023-June. Last update: NA. Published: 2025-04-19.

TL;DR

Review my testing and verification checklist below. The solution to my problems involved disabling the Windows NIC/adapter power management settings and tuning the receive buffers, so these are the key things to check in your own setup.
🎵 If you prefer to listen, I had NotebookLM interpret the PDF version of this post. You can listen to the result here:
Troubleshooting 10GbE Throughput Stability in Windows.mp3
11.1 MB
📽 If you prefer to watch, I have recorded part 5 of the series, which concludes and reviews the content of this post.

The quest for 20 Gbps symmetrical full-duplex throughput

10 GbE adapters can handle up to 20 Gbps of symmetrical full-duplex throughput; 1 GbE adapters can handle up to 2 Gbps. Read more about full-duplex operation.
For example, from the Windows task manager:
Windows 10 - 20 Gbps symmetrical full-duplex throughput between a prosumer desktop and an enterprise Linux server
image.png
Windows 11 - 2 Gbps symmetrical full-duplex throughput between a laptop and an enterprise Linux server
image.png
Frustratingly, I was having issues achieving these results on Windows, which motivated me to write this post. As documented below, I believe I’ve managed to stabilise the 10 GbE results and setup for my hardware.
Under Windows, the test results for 1 GbE adapters aiming for 2 Gbps full-duplex are hit and miss, and I haven't been able to get the setup to a point of stability that I'm happy with. I tried two laptops with various connection setups. This is odd because I know that all the intermediate components can sustain up to 20 Gbps symmetrical full-duplex throughput, and TX and RX tests run individually (effectively half-duplex) can sustain 1 Gbps consistently.

What is my definition of a stable symmetrical full-duplex test?

I consider a setup stable when the following is true:
1. Hosts can perform multiple bi-directional iperf3 tests of 5 minutes each, and the NICs can sustain full-duplex line speed without significant performance degradation throughout. Note that the --bidir option often doesn't work as expected, so manual bi-directional testing is required, i.e. start an iperf3 server on both hosts and then point the test clients at one another (a sketch follows below).
2. A host can enter sleep/suspend mode, wake up, and then perform the same as in point 1.
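For reference, here is a minimal sketch of what I mean by manual bi-directional testing. The IP addresses are examples only; substitute your own hosts:
# On both hosts, start a server and leave it running:
iperf3 -s
# On host A (e.g. 192.168.169.10), send towards host B (e.g. 192.168.169.130):
iperf3 -c 192.168.169.130 -P 4 -t 300
# At the same time, on host B, send towards host A:
iperf3 -c 192.168.169.10 -P 4 -t 300
# With both clients running, each host is transmitting and receiving at full rate, i.e. full-duplex load.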

An example of a bad 1GbE test from a Windows 11 laptop

Here is an image, with some analysis, of a Windows 11 laptop connected at 1 GbE suffering degradation during testing:
image.png
Figure 1

Observations on Figure 1

CPU and Memory are within normal thresholds
At point (1) an RX test is running at full line speed ~1 Gbps
At point (2) the TX test starts
At point (3) RX throughput drops off a cliff as soon as the TX test starts
TX can sustain ~1 Gbps - RX cannot
If the TX test is stopped, RX recovers to ~1 Gbps
As per the checklist documented below, I’ve tried to rule out as many things as possible that could be causing this degradation, and I’ve also tried to tune out the degradation via the adapter settings. For my 10GbE setup this has been successful, but not for the 1GbE setups; I tried two laptops with various connection setups.
The remainder of the post covers my testing, tuning and verification of my setup. I hope you find it useful.

Topics covered in this post

Internet download bandwidth issue when the desktop link speed is > 1 GbE. Covered in the early videos (see the list below).
Local 10GbE performance stability on the desktop. Covered in videos Part 2 to 5 (see the list below).
Windows desktop wake-from-sleep 10GbE performance stability. As above 👆, covered mainly in the later videos.
My investigation began when I noticed, by chance, that my main desktop was not downloading from the internet at the expected speed. I have an asymmetric "cable internet" business line coming into the property, which serves our small business and home.
This performance problem was discovered at an off-peak time, when bandwidth would have been fully available at my location and congestion and contention would have been low.
So I started investigating and found some rather surprising results. Some of the screencasts I’ve captured and catalogued here were shared with the 10GbE switch vendor, MikroTik, who offered support and insight into what the problem might be. In fact, they were able to reproduce one of the problems in their lab.
Videos published on this topic
Published Date
Video
2023-Jun-26
2023-Jul-3
2023-Oct-20
2025-Apr-11
2025-Apr-12
TODO: Add video Part 5

Testing / verification checklist

To try to rule out any faults or issues along the test path, I have checked the following:
Verified that switches carrying traffic were quiescent before testing
The patch panel is CAT6 capable
The in-wall cabling is CAT7
The desktop 10GbE RJ45 cable is CAT7 and <= 3m in length
The patch panel cabling for the desktop is CAT7 and <= 50cm in length
The switch to hypervisor is a 10GbE DAC cable 1m in length
The switch to switch Internet uplink is a 10GbE DAC cable 2m in length (link speed 1GbE)
I’ve tested/swapped between different wall/patch ports
I’ve tested/swapped all possible Ethernet/DAC cabling along the path
I’ve tested/swapped relevant SFP+ modules
I’m running the latest available NIC drivers and firmware on the Windows 10 machine
I'm running the latest firmware on the 10GbE switch
I've tested various flow control configurations
Checked for any issues with electromagnetic interference (EMI)
Checked for any potential grounding/earthing issues
For 1GbE tests - tested D-Link ↔ MikroTik, inter D-Link and inter MikroTik
Checked and tuned the Windows NIC settings and adapter power management settings (see the PowerShell sketch below)
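If you'd rather inspect those NIC settings from the command line than click through Device Manager, PowerShell's NetAdapter cmdlets expose them. A minimal sketch, assuming the adapter is simply named "Ethernet" (adjust the name; the available properties vary by driver):
# List adapters with their negotiated link speeds
Get-NetAdapter | Format-Table Name, InterfaceDescription, LinkSpeed
# Show the driver's advanced properties (Receive Buffers, Jumbo Frame, flow control, etc.)
Get-NetAdapterAdvancedProperty -Name "Ethernet"
# Show the adapter's power management features
Get-NetAdapterPowerManagement -Name "Ethernet"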

Network hardware info

The Windows desktop has a v1 ASUS XG-C100C RJ45 10GbE adapter
Note: there was at least one hardware revision of this NIC to v2.
The Linux server is a Proxmox hypervisor with an ASUS MCB-10G-2S OCP network mezzanine card (2x SFP+ ports)
10GbE tests were conducted on a MikroTik CRS305-1G-4S+IN switch
1 GbE tests were conducted on a D-Link DGS-1210-24 switch, and between it and the MikroTik

Firmware info

Here are the ASUS packaged drivers for the XG-C100C I’ve tested with:
image.png
image.png
Here the firmware updater advises that the XG-C100C firmware is already up to date at 3.1.88; I must have updated it in the past.
image.png
Side note: interestingly, it is not possible to apply the firmware if the installed driver is “too new”; first you have to uninstall the newer 3.x.x driver and install a 2.x.x driver.
image.png
So the desktop card is running the latest published ASUS-branded firmware, 3.1.88. There does appear to be newer firmware available (3.1.121), but this seems to be locked to Marvell-branded cards; the firmware updater refused to start on the ASUS-branded card.
image.png

What happened in video parts 1 to 3?

TODO

What has changed since video part 3?

XG-C100C driver update

I've tested the following driver versions and I’m now using the Marvell 3.1.10.0 driver marked → below.
ASUS-10GbE-NIC-DR_XG_C100C_5022 | Version 5.0.2.2 | 2020-08-25 | looks like the original release (not compatible with hardware V2, V3)
ASUS-10GbE-NIC-DR_XG_C100C_5302 | Version 5.3.0.2 | 2024-10-17 | driver version 3.1.7.0 | driver date 2022-06-02
→ Marvell_AQtion_Win_v3.1.10_09.01.2025 | Version 3.1.10.0 | 2025-01-09 | driver date 2024-04-23 🔗
The latest Marvell driver that could be installed (and was):
image.png

Switch/Router firmware update

CRS305-1G-4S+ firmware updated from 7.12rc2 to 7.19beta8

Observations

Overall it would seem that improvements in the MikroTik firmware since 2023-Oct have helped to mitigate bandwidth issues between switch ports at different link speeds, as covered in early videos.
💡 In 2023 switch ports operating at different link speeds, e.g. 1 GbE switching traffic to a 10 GbE port, were experiencing performance degradation.
Back in 2023, this performance issue was reproduced by MikroTik support and they provided a development firmware for me to test. Unfortunately, at the time it didn’t resolve the issue, but since then one of the firmware improvements has resolved the performance problem between ports with differing link speeds.
Fast forward to 2025-Q1 - this means I can now run the desktop at 10 GbE and not have to worry about Internet download performance issues due to the Internet uplink on the MikroTik switch being at a different port speed to the desktop (or other home lab/home office hosts).
This is great! It means I can take advantage of the 10GbE speeds for the hosts that support it.
The following graph is a test from Windows 10 → Debian Linux OMV KVM, copying a 16 GiB VHD file.
The peaks and troughs in the graph are expected, and relate to memory and drive buffers filling and emptying. The write pipeline is roughly as follows: Windows → CIFS/SMB → mergerfs → KVM host page cache → KVM host IO subsystem → hypervisor ZFS ARC cache → ZFS pool dataset (sync=always) → Intel 900P SLOG → 5TB 2.5” hard drive
46GB transfer to omv.png
Given the underlying storage is a single 2.5” HDD the performance is pretty decent.

Testing 20 Gbps symmetrical full-duplex throughput

I was testing between two hosts with iperf3:
km-5900x - Windows 10 22H2, build 19045.4170
viper - Linux Proxmox 8.1.3, kernel 6.5.11-7-pve
Bidirectional testing was going well, for example:
image.png
Until, for some inexplicable reason, the receive bandwidth on km-5900x dropped from ~10 Gbps to ~5-7 Gbps.
💡 This issue ☝️ appears to have been down to the power management settings on the adapter, after sleeping/waking the desktop.
Here you can see an example of asymmetrical bandwidth:
image.png
Here is what the asymmetry looks like in the Windows 10 Task Manager:
image.png
After spending some hours ruling out nearly everything (see the checklist above), trying different cables, re-patching to test different wall ports, trying different ports on the 10GbE switch, changing SFP+ adapters, etc., I finally gave up and disabled and then re-enabled the NIC via the Windows Control Panel:
image.png
followed by
image.png
iperf3 -P 4 -c 192.168.169.130 -t 120 -b0 --bidir
image.png
ARE YOU F**KING KIDDING ME RIGHT NOW?!?!?!?
BUT WAIT... IT WAS A FLUKE... or at least only a temporary solve...
After this good test, Windows 10 still had some kind of issue with RX (receive) bandwidth, and the good result above was not easy to reproduce without the disable/enable workaround... 🤬
Time to break out Linux...
image.png
... to the rescue ... 😆
For this, I used a ZFS-flavoured fork of SystemRescue, a reliable tool to have in the box. For this test, any flavour of SystemRescue will do.
I created a bootable systemrescue-zfs USB drive (~1GiB) with Rufus and rebooted km-5900x. I booted into SystemRescue, which under the hood is Arch Linux.
Using iperf3 between the SystemRescue host* and viper, I was able to make consistent 10GbE bi-directional full-duplex test runs. (* the same machine that was previously the Windows 10 km-5900x host.)
Great success! OK then... the problem is in Windows somewhere...
I did a bit of web research on optimising Windows 10 for 10GbE+ networking and found a useful blog post from a gent named Carsten Mauel.
What caught my eye in his post was:
In the Advanced tab select Jumbo Frame and change to 9014 Bytes.
...
Change the value for Receive- and Send-Buffer-Size to be between 2 and 4GB.
While testing the advice to enable Jumbo packets/frames, I realised that this also increases the MTU, which I believe will have unwanted side effects for low-latency traffic, so I reverted that change.
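As an aside, if you want to verify what MTU an interface ends up with after toggling the Jumbo Frame setting, one way (a PowerShell sketch, assuming IPv4 interfaces) is:
# Show the current MTU of each interface
Get-NetIPInterface -AddressFamily IPv4 | Format-Table InterfaceAlias, NlMtu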
What did seem to help significantly was increasing the Receive Buffers to 4096:
image.png
The original value was 512; I kept nudging it up until I started getting consistent iperf3 results.
The Transmit Buffers were left at the default of 2048 (at least for this specific adapter). It is unclear which unit this integer is in; I suspect it might be MiB, as smaller orders of magnitude would probably not have a significant enough impact to resolve the receive bandwidth issues. Carsten Mauel also mentions “between 2 and 4GB”.
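For reference, the same Receive Buffers change can be made from PowerShell instead of the adapter properties dialog. A hedged sketch, assuming the adapter is named "Ethernet" and that the driver exposes the property under the display name "Receive Buffers" (the exact name and allowed range vary by driver):
# Check the current value
Get-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Receive Buffers"
# Raise it (the adapter briefly resets when the change is applied)
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Receive Buffers" -DisplayValue 4096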
A snapshot of the MikroTik switch Ethernet stats during a 5m test run
image.png
Same for the Windows 10 Task Manager
image.png
NetData view of the 5m iperf3 test.
image.png
With the tweaked Receive Buffers setting and the NIC power saving option disabled, I can now consistently reproduce this 👆 result.
Disabling the following setting resolves the wake-from-sleep RX (receive) throughput performance issues. 📽 I cover this in detail in the video series.
image.png
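If you prefer to script this rather than click through Device Manager, and assuming the setting in question is the adapter's "Allow the computer to turn off this device to save power" checkbox, that checkbox is exposed via the MSPower_DeviceEnable WMI class. A hedged sketch, assuming the adapter is named "Ethernet"; I haven't verified this exact snippet against every driver:
# Inspect the adapter's power management features
Get-NetAdapterPowerManagement -Name "Ethernet"
# Clear the "Allow the computer to turn off this device to save power" checkbox
$nic = Get-NetAdapter -Name "Ethernet"
$pm = Get-CimInstance -Namespace root/wmi -ClassName MSPower_DeviceEnable |
  Where-Object { $_.InstanceName -like "$($nic.PnpDeviceID)*" }
$pm.Enable = $false
Set-CimInstance -CimInstance $pm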

Ideas / TODO

In the desktop, I could swap the PCIe card XG-C100C (RJ45 10GbE) for the XG-C100F (SFP+ 10GbE) to see if it suffers the same issue. It's probably suboptimal because the wall port is RJ45 and I'd need to use an SFP+ Ethernet adapter. 👆 Not required for now because I think I’ve solved the issue.
In my setup, the CIFS share hosted on the OMV KVM supports 4-5 Gbps / 400-500 MiB/s write speeds (utilising page caching). What about local writes, for example when recursive-downloader is writing? I guess this relates to buffered vs. non-buffered IO and how the writing program sets up its IO calls to the kernel. For example, there is the program nocache, which disables page caching when launching a command (see the small usage sketch after this list).
Q: Is there a program that does the opposite of nocache and forces a command to use the page cache? Perhaps using the page cache is the default and I just need to do more testing? How does aria2c use (or not use) the page cache? By default it performs file pre-allocation... and then?
Boot systemrescue-zfs on one of the laptops and re-run the 1 GbE tests to see if 2 Gbps symmetrical full-duplex throughput is more stable under Linux than under Windows.
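For context, nocache is used as a command prefix; a quick usage sketch (the file and target path are just examples):
# Copy a large file while telling the kernel not to keep its data in the page cache
nocache cp large-file.vhd /mnt/target/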

Appendix

ASUS driver packing issue?

⚠ I think there is a problem with the ASUS driver packaging for the XG-C100C: shouldn't the 3.1.7.0 driver have a 2024 date rather than 2022?
image.png
⚠ There was a mislabelled version on the ASUS support site: compare the two most recent drivers in the screenshot below.
image.png

Testing approach checklist

Test which wall port sustains bidirectional 10GbE
Office right port pair, left port - patch panel port 12
Test the internet speed at 10GbE and at other link speeds
Looks like office left port pair, right port - patch panel port 3 is slower than the right port pair
10 GbE bi-directional full duplex
Office right port pair, left port - patch panel port 12
Office right port pair, right port - patch panel port 11
Upgrade the RouterOS and retest
At least on the latest firmware I cannot reproduce the issue
Test with the internet plugged directly into the Mikrotik Ethernet port
Test with the internet plugged directly into the Mikrotik via SFP+ Ethernet adapter
Test that client internet speeds via the D-Link are not degraded
Note: I skipped the last 3 tests because I feel that my changes have solved the issue.

Observations from testing UDP performance with iperf3

When I was testing with iperf3, which defaults to TCP mode, I was curious to see what the UDP throughput would be. It was less than expected; note that it is important to override iperf3's default UDP bandwidth limit when running UDP tests.
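For example, a UDP run with the rate cap removed looks something like this (-u selects UDP, -b 0 removes iperf3's default UDP bitrate limit; the address matches my earlier tests):
iperf3 -u -b 0 -P 4 -c 192.168.169.130 -t 60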