UDP GSO support
Implementation highlights
- Batching iovecs to utilize UDP-GSO in the `send_packets_out` callback
  * Packets of equal length are batched together. The last packet can be of a smaller length.
  * `setup_ctl_msg` updated to set the CMSG hdr of `UDP_SEGMENT` with `gso_size` passed as pktlen.
  * `send_gso` internally leverages `send_packets_one_by_one()` to send batched iovecs with the GSO CMSG.
- `-O BURST` command-line option added; BURST is the max number of packets to coalesce in a single call.
  * Even if `-O` is specified, a runtime check is performed for UDP-GSO support, since the binaries may be transferred to another system running a different kernel.
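The two kernel-facing pieces above, attaching the `UDP_SEGMENT` control message and probing at runtime whether the kernel supports UDP GSO, can be sketched roughly as follows. This is an illustrative sketch, not the PR's actual helpers; the function names here are made up for the example:

```c
/* Sketch (not lsquic code): attach a UDP_SEGMENT cmsg so one sendmsg()
 * carries several packets, and probe for kernel UDP-GSO support. */
#include <assert.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/udp.h>

#ifndef UDP_SEGMENT
#define UDP_SEGMENT 103         /* from linux/udp.h, kernel >= 4.18 */
#endif
#ifndef SOL_UDP
#define SOL_UDP IPPROTO_UDP     /* both are 17 on Linux */
#endif

/* Fill in the UDP_SEGMENT cmsg: the kernel splits the iovec payload
 * into datagrams of gso_size bytes (the last one may be shorter).
 * cbuf must be suitably aligned and at least
 * CMSG_SPACE(sizeof(uint16_t)) bytes. */
static void
set_gso_cmsg (struct msghdr *msg, void *cbuf, size_t cbuf_sz,
              uint16_t gso_size)
{
    struct cmsghdr *cmsg;

    msg->msg_control = cbuf;
    msg->msg_controllen = cbuf_sz;
    cmsg = CMSG_FIRSTHDR(msg);
    cmsg->cmsg_level = SOL_UDP;
    cmsg->cmsg_type  = UDP_SEGMENT;
    cmsg->cmsg_len   = CMSG_LEN(sizeof(gso_size));
    memcpy(CMSG_DATA(cmsg), &gso_size, sizeof(gso_size));
}

/* Runtime probe: setting UDP_SEGMENT on a throwaway socket succeeds
 * only on kernels that support UDP GSO, so -O can fall back
 * gracefully on older kernels. */
static int
udp_gso_supported (void)
{
    int fd, val = 1200, ok = 0;

    fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd >= 0)
    {
        ok = 0 == setsockopt(fd, SOL_UDP, UDP_SEGMENT, &val, sizeof(val));
        close(fd);
    }
    return ok;
}
```

A kernel without UDP GSO fails the `setsockopt()` probe, which is the reason for keeping the runtime check even when `-O` is given on the command line.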
Preliminary Data
Scenario: transfer a 1 GB file locally using the http_client/http_server apps, and use perf record to profile only the server that serves the file.
Note: a lot depends on how the sample application calls the stream_write()/stream_flush() APIs. However, without any changes to the sample HTTP client/server app, I was able to get an aggregation of roughly 6 packets. I tried GSO with burst sizes of 10 and 20 packets. One can check the aggregation efficiency by enabling debug logs and looking for LSQ_DEBUG("GSO with burst:%d", vcnt).
Following are the perf-record results and diffs for:
- send_packets_one_by_one()
- send_packets_using_sendmmsg()
- send_packets_using_gso()

I have my own application that simulates an RPC scenario, with user-space lsquic pacing disabled, involving long responses of 1.2KB to 100KB; there I easily get batching/aggregation of 10/20/30 packets and a much greater reduction in CPU usage.
Thank you -- this is interesting!
How can I add a new congestion algorithm in lsquic?
> How can I add a new congestion algorithm in lsquic?
I believe this question has no relevance to this PR; I suggest raising it as a separate general issue. That said, speaking from my experience with congestion-algorithm handling: lsquic nicely decouples these algorithms and provides clear entry points for congestion events. You can check the existing Cubic/BBR implementations, and it should be easy to figure out how to add a new algorithm.
Do I read the benchmarking results correctly that the "sendmmsg" approach reduces CPU usage more than GSO?
> with user-space lsquic pacing disabled
This is interesting. Did you do this to get better results?
> Do I read the benchmarking results correctly that the "sendmmsg" approach reduces CPU usage more than GSO?
Yes, in this context sendmmsg was actually more optimal than GSO. This may be due to the fact that I could not achieve optimized batching with the default HTTP file-transfer example. Batching improves a lot when the application developer does a stream flush after writing multiple data sets. I wrote a different app that does multiple stream writes and then a flush, and there the performance of GSO is much better than sendmmsg; I didn't showcase this data since I didn't have that sample app in public.
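The effect described here, that flushing after several writes coalesces better, can be illustrated with a toy model. This is not lsquic code, just a self-contained sketch of the behavior: each flush hands one contiguous batch to the send path, so flushing per write produces single-packet batches while flushing after N writes produces one N-packet batch that GSO can cover with a single syscall.

```c
/* Toy model (not lsquic code) of why flushing after several stream
 * writes helps GSO: buffered writes accumulate until flush, and each
 * flush emits one contiguous batch to the send path. */
#include <assert.h>
#include <string.h>

#define BATCH_CAP 65536

struct toy_stream {
    char   buf[BATCH_CAP];   /* pending, unflushed bytes      */
    size_t pending;          /* bytes accumulated so far      */
    int    batches_sent;     /* flush count == send calls     */
    size_t largest_batch;    /* best coalescing achieved      */
};

static void
toy_write (struct toy_stream *s, const void *data, size_t len)
{
    assert(s->pending + len <= sizeof(s->buf));
    memcpy(s->buf + s->pending, data, len);
    s->pending += len;
}

static void
toy_flush (struct toy_stream *s)
{
    if (s->pending == 0)
        return;
    /* In a real stack, this is where one sendmsg() carrying a
     * UDP_SEGMENT cmsg could cover the entire pending batch. */
    if (s->pending > s->largest_batch)
        s->largest_batch = s->pending;
    s->batches_sent++;
    s->pending = 0;
}
```

With 1200-byte packets, ten write-then-flush cycles produce ten one-packet batches, while ten writes followed by one flush produce a single 12000-byte batch, which is the pattern that lets GSO shine.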
> with user-space lsquic pacing disabled
> This is interesting. Did you do this to get better results?
There were two reasons. First, lsquic was not able to fully utilize the bandwidth with pacing enabled (with or without GSO); with pacing disabled, utilization was much improved. Second, I found that with pacing the iovec batches that came out of lsquic were also limited. I never got around to properly debugging these points and locating the root cause.
> I wrote a different app that does multiple stream write and then a flush and the performance of GSO is much better than sendmmsg
I wonder why this would be. I'll have to think about this.
> but I didn't showcase this data since I didn't have that sample app in public.
Does it mean you have at least one app in public (with source available)? I'd be curious to see how others use lsquic.
> lsquic was not able to fully utilize the bandwidth with pacing enabled
What were the path characteristics? Did you use BBR or Cubic?
> I wrote a different app that does multiple stream write and then a flush and the performance of GSO is much better than sendmmsg
> I wonder why this would be. I'll have to think about this.
> but I didn't showcase this data since I didn't have that sample app in public.
> Does it mean you have at least one app in public (with source available)? I'd be curious to see how others use lsquic.
The public app I used is the same HTTP client/server app that ships with lsquic.
> lsquic was not able to fully utilize the bandwidth with pacing enabled
> What were the path characteristics? Did you use BBR or Cubic?
I used both BBR and Cubic; performance was slightly worse under no-loss conditions, but with >=1% loss it was much worse with lsquic. I compared against Linux kernel TCP Cubic. I wish I could have sent those numbers back then, but I waited to analyze the root cause myself and never got there. I have since moved out of that organization and no longer have access to those results.