bytes/blockcount with reverse mode
iperf 3.1.2 CentOS 6.4 x86_64
While testing with the iperf3 blockcount (-k) option (the bytes (-n) option behaves similarly), I observed the following:
- Normal mode
  Client: iperf3 -p 25000 -c <server_ip> -u -l 1370 -b 3m -k 2000
  Server: iperf3 -p 25000 -s -i 1
  The client sends ~2000 packets to the server irrespective of how many are lost in the network. Hence, even if the server receives only 1500 packets due to network losses, the client stops sending after transmitting 2000 packets.
- Reverse mode
  Client: iperf3 -p 25000 -c <server_ip> -u -l 1370 -b 3m -k 2000 -R
  Server: iperf3 -p 25000 -s -i 1
  The server keeps sending until the client has received ~2000 packets. Hence, on a lossy network where 20% of packets are dropped in transmission, the server effectively sends 2500 packets, the client receives 2000, and only then does the test end.
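(In case it helps to reproduce the reverse-mode scenario: the packet loss can be emulated with Linux netem. A minimal sketch, assuming root access and a placeholder interface eth0; the 20% loss rate matches the example above:)
# tc qdisc add dev eth0 root netem loss 20%
(run the normal-mode and reverse-mode tests above, then remove the emulated loss:)
# tc qdisc del dev eth0 root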
Please clarify whether this is the intended implementation.
The problem with this is similar to #594: the server knows when the test is done but doesn't have an easy way to tell the client that, and the client is the side that controls stopping the test (this holds regardless of whether --reverse is used). Needs some thought to see whether there's a straightforward solution to this problem or not.
Thanks for the clarifications
Any news on this issue? I have the exact same problem with -n (set number of bytes) and -R (reverse mode).
In fact, I set up a script that counts the number of bytes from the iperf output and keeps re-running the test until the limit is reached, but this is not a good solution :(
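For reference, a workaround along these lines might look like the following sketch (the server name and target size are placeholders; it assumes iperf3's --json output and jq, and that the UDP byte total appears in .end.sum.bytes, which should be verified for your iperf3 version):

#!/usr/bin/env bash
# Workaround sketch: instead of relying on -n together with -R, run short
# reverse-mode UDP chunks and accumulate the received bytes ourselves
# until a target total is reached.
set -euo pipefail
SERVER=example.com               # placeholder server
TARGET_BYTES=$((3100 * 1024))    # placeholder target, ~3.03 MB
total=0
while (( total < TARGET_BYTES )); do
    # One-second chunk; .end.sum.bytes is assumed to hold the UDP byte
    # total in the JSON summary (verify for your iperf3 version).
    bytes=$(iperf3 -c "$SERVER" -u -R -t 1 --json | jq '.end.sum.bytes')
    total=$(( total + bytes ))
    echo "received so far: ${total} / ${TARGET_BYTES} bytes"
done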
Hi, I observed slightly different behavior with iperf 3.9 on Fedora, but it is also related to the combination of -k/-n and -R. It seems that if I use -k or -n together with -R, the test runs indefinitely. This is how I used it:
# iperf3 --server --daemon --port 5201
# iperf3 -c 127.0.0.1 -p 5201 -i2 -VR --forceflush --debug -l 1M -N --parallel 3 --dscp 0 --fq-rate=0 [-n|-k] 10 -T'dope'
After this, no matter whether I use -k or -n, the test never ends. (Well, I don't want to say never, but it ran for an hour before I killed it.) Any idea why this happens? Is it related to the issue here?
Thanks for any input on this. Michal
On my local machine I see the same problem with iperf3 version 3.7, but it does not occur with 3.10.
I can confirm that I'm also seeing this issue fixed with the latest on main (post-release 3.17.1); specifically, both -k and -n with -R do stop as expected when performing a UDP test. It's the client version that matters (upgrading the server end doesn't do it). I believe this issue can be closed.
@tve, can you describe the test you did? I am asking because, as far as I know, this issue was not resolved, as it requires that the server notify the client that the test has ended (see this comment above).
Actually, since version 3.16 (multi-thread), the issue is worse: even after all packets are sent/received, with or without -R, the test continues until the current interval (-i) ends (see the discussion in issue #1768).
Sure:
# iperf3 -c example.com -p 9876 -u -b 200M -n 3100K -R -i 0.1
Connecting to host example.com, port 9876
Reverse mode, remote host example.com is sending
[ 5] local 192.168.x.x port 41675 connected to 52.x.x.x port 9876
[ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
[ 5] 0.00-0.10 sec 1.38 MBytes 115 Mbits/sec 0.099 ms 9/1006 (0.89%)
[ 5] 0.10-0.20 sec 1.57 MBytes 132 Mbits/sec 0.084 ms 0/1138 (0%)
[ 5] 0.20-0.30 sec 1.59 MBytes 133 Mbits/sec 0.079 ms 443/1593 (28%)
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
[ 5] 0.00-0.38 sec 9.15 MBytes 200 Mbits/sec 0.000 ms 0/6624 (0%) sender
[ 5] 0.00-0.30 sec 4.54 MBytes 127 Mbits/sec 0.079 ms 452/3737 (12%) receiver
and on the server I see:
Accepted connection from 70.x.x.x, port 52296
[ 5] local 10.200.x.x port 9876 connected to 70.x.x.x port 41675
[ ID] Interval Transfer Bitrate Total Datagrams
[ 5] 0.00-0.38 sec 9.15 MBytes 200 Mbits/sec 6624
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
[ 5] 0.00-0.38 sec 9.15 MBytes 200 Mbits/sec 0.000 ms 0/6624 (0%) sender
Server: iperf 3.17.1+ (cJSON 1.7.15)
Client: iperf 3.17.1 (cJSON 1.7.15)
I am not sure this test shows that the issue was fixed. If it were fixed, the server should have sent only 3.03MB (3100KB), but it sent 9.15MB.
Even the client does not stop the test right after receiving 3.03MB (it received 4.54MB), because of the other issue I mentioned above: it ends the test only at the end of the 0.3-second interval (by the end of the 0.2-second interval it had received only 2.95MB).
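Spelling out the arithmetic, with a quick check in bash (the per-interval numbers are taken from the client report above):

# requested amount: -n 3100K
echo $((3100 * 1024))   # 3174400 bytes, ~3.03 MB
# cumulative bytes received by the client per interval:
#   0.00-0.10 s: 1.38 MB                (< 3.03 MB, test keeps running)
#   0.10-0.20 s: 1.38 + 1.57 = 2.95 MB  (< 3.03 MB, test keeps running)
#   0.20-0.30 s: 2.95 + 1.59 = 4.54 MB  (>= 3.03 MB, test stops at the interval boundary)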
Yes, you're right, calling it "fixed" is perhaps not right. For me, the way it works, i.e., the receiver stops the transfer at the next interval after it gets the requested amount, makes it totally usable. It gets into the "bug or feature" grey zone, as opposed to previously being totally unusable (when it continued forever).
> ... the receiver stops the transfer at the next interval after it gets the requested amount, makes it totally usable. It gets into the "bug or feature" grey zone, as opposed to previously being totally unusable (when it continued forever).
OK, I understand. I have now submitted PR #1800, which suggests making the current behavior an official iperf3 feature by documenting it in the help message.