iperf3 (3.16) UDP test: heavy packet loss when the server sets CPU affinity
Context
OS: Ubuntu 24.04 (server and client)
iperf3 version: 3.16
Bug Report
Server with CPU affinity set (packet loss)
Test commands:

```
Server: iperf3 -s -p 10000 -V -d -A 0
Client: iperf3 -c $REMOTE_HOST -i 10 -l 16 -u -b 10M -p 10000 -t 60 -A 0
```
Test result (client):

```
[ ID] Interval           Transfer     Bitrate         Total Datagrams
[  5]   0.00-10.00  sec  11.9 MBytes  10.0 Mbits/sec  781205
[  5]  10.00-20.00  sec  11.9 MBytes  10.0 Mbits/sec  781250
[  5]  20.00-30.00  sec  11.9 MBytes  10.0 Mbits/sec  781250
[  5]  30.00-40.00  sec  11.9 MBytes  10.0 Mbits/sec  781250
[  5]  40.00-50.00  sec  11.9 MBytes  10.0 Mbits/sec  781250
[  5]  50.00-60.00  sec  11.9 MBytes  10.0 Mbits/sec  781250
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-60.00  sec  71.5 MBytes  10.0 Mbits/sec  0.000 ms  0/4687455 (0%)  sender
[  5]   0.00-60.00  sec  59.3 MBytes  8.29 Mbits/sec  0.003 ms  801112/4687455 (17%)  receiver

iperf Done.
```
Server without CPU affinity set (no packet loss)
Test commands:

```
Server: iperf3 -s -p 10000 -V -d
Client: iperf3 -c $REMOTE_HOST -i 10 -l 16 -u -b 10M -p 10000 -t 60 -A 0
```
Test result (client):

```
[ ID] Interval           Transfer     Bitrate         Total Datagrams
[  5]   0.00-10.00  sec  11.9 MBytes  10.0 Mbits/sec  781205
[  5]  10.00-20.00  sec  11.9 MBytes  10.0 Mbits/sec  781250
[  5]  20.00-30.00  sec  11.9 MBytes  10.0 Mbits/sec  781250
[  5]  30.00-40.00  sec  11.9 MBytes  10.0 Mbits/sec  781250
[  5]  40.00-50.00  sec  11.9 MBytes  10.0 Mbits/sec  781251
[  5]  50.00-60.00  sec  11.9 MBytes  10.0 Mbits/sec  781249
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-60.00  sec  71.5 MBytes  10.0 Mbits/sec  0.000 ms  0/4687455 (0%)  sender
[  5]   0.00-60.00  sec  71.5 MBytes  10.0 Mbits/sec  0.002 ms  0/4687455 (0%)  receiver

iperf Done.
```
It may be that core 0 (per the `-A 0` setting) is loaded while other cores are free. Did you try running the server on other cores? You can set the `-A` value to 1, 2, ... and compare the results.

Also, you can try running the server on the core that is allocated to it when `-A` is not set: while a test is running, find the core the system allocated to the server using `ps -aeF` or `ps -o psr <server PID>`; then run the server on that core using `-A <core>`.
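A minimal shell sketch of that procedure, assuming a Linux host and that the server is started in the background from the same shell (so its PID is available via `$!`); the restart step is only illustrative:

```sh
# Start the server unpinned so the scheduler places it freely.
iperf3 -s -p 10000 -V -d &
SERVER_PID=$!

# While a client test is running, see which core the server is on
# (PSR = the processor the process last ran on).
CORE=$(ps -o psr= -p "$SERVER_PID")
echo "iperf3 server is running on core $CORE"

# Then restart the server pinned to that core.
kill "$SERVER_PID"
iperf3 -s -p 10000 -V -d -A "$CORE"
```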
Hi @davidBar-On, thank you for your reply. After upgrading iperf3 to version 3.18, the issue disappeared. It seems to be fixed, possibly related to the following tickets:
https://github.com/esnet/iperf/discussions/1707
https://github.com/esnet/iperf/issues/1741
https://github.com/esnet/iperf/pull/1787
Regarding the earlier suggestions:

- I also tried binding the server to other CPU cores (not 0); the result was almost the same.
- When running without the `-A` parameter, packet loss improved greatly, dropping to the 0.0x% level.
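For anyone reproducing these experiments, a small sketch for double-checking that the pinning actually took effect and for watching per-core load during a run (assumes standard Linux tools; `mpstat` comes from the sysstat package and may need installing):

```sh
# Show the affinity mask the running server actually has;
# with -A <n> it should list only that core.
taskset -cp "$(pgrep -x iperf3 | head -n 1)"

# Per-core utilization once per second while the UDP test runs;
# a core saturated by interrupt/softirq work stands out here.
mpstat -P ALL 1
```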