Server traffic shaping

This article covers implementing server traffic shaping on Linux with CAKE. The aim is to provide fair usage of bandwidth between clients and consistently low latency for dedicated and virtual servers provided by companies like OVH and others.

Traffic shaping is generally discussed in the context of a router shaping traffic for a local network with assorted clients connected. It also has a lot to offer on a server where you don't control the network. If you control your own infrastructure from the server to the ISP, you probably want to do this on the routers instead.

This article was motivated by the serious lack of up-to-date information on this topic elsewhere. It's very easy to implement on modern Linux kernels and the results are impressive from extremely simple test cases to heavily loaded servers.

Problem

A server will generally be provisioned with a specific amount of bandwidth enforced by a router in close proximity. This router acts as the bottleneck and ends up being in charge of most of the queuing and congestion decisions. Unless that's under your control, the best you can hope for is that the router is configured to use fq_codel as the queuing discipline (qdisc) to provide fair queuing between streams and low latency by preventing a substantial backlog of data.

Unfortunately, the Linux kernel still defaults to pfifo_fast instead of the much saner fq_codel algorithm. This is changed by a configuration file shipped with systemd, so most distributions using systemd as init end up with a sane default. Debian, which is widely used, removes that configuration and doesn't set a sane default itself. Many server providers like OVH do not appear to consistently use modern queue disciplines like fq_codel within their networks, particularly at artificial bottlenecks implementing rate limiting based on product tiers.
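
If your distribution doesn't ship that systemd default, you can set it yourself with a sysctl configuration file (for example /etc/sysctl.d/local.conf, loaded with sysctl --system) so that newly created interfaces default to fq_codel:

net.core.default_qdisc = fq_codel

This only changes the default for qdiscs created afterwards; it doesn't replace the qdisc on interfaces that are already up.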

If the bottleneck doesn't use fair queuing, division of bandwidth across streams is very arbitrary and latency suffers under congestion. These issues are often referred to as bufferbloat, and fq_codel is quite good at resolving them.

The fq_codel algorithm is far from perfect. It has issues with hash collisions and, more importantly, only does fair queuing between streams. Bufferbloat also isn't the only relevant issue: clients with multiple connections receive more bandwidth, and a client can open a large number of connections to maximize their bandwidth usage at the expense of others. Fair queuing matters beyond being a solution to bufferbloat, and there's more to fair queuing than doing it only per stream.

Traditionally, web browsers open a bunch of HTTP/1.1 connections to each server, which ends up giving them an unfair share of bandwidth. HTTP/2 is much friendlier since the browser uses a single connection to each server. Download managers take this to the extreme and intentionally use many connections to bypass server limits and game the division of resources between clients.

Solution

Linux 4.19 and later makes it easy to solve all of these problems. The CAKE queuing discipline provides sophisticated fair queuing based on destination and source addresses with finer-grained fairness for individual streams.

Unfortunately, simply enabling it as your queuing discipline isn't enough since it's highly unlikely that your server is the network bottleneck. You need to configure it with a bandwidth limit based on the provisioned bandwidth to move the bottleneck under your control where you can control how traffic is queued.

Results

We used a 100mbit OVH server as a test platform for a case where clients can easily max out the server bandwidth on their own. As a very simple example, consider 2 clients with more than 100mbit of bandwidth each downloading a large file. These are (rounded) real world results with CAKE:

CAKE with flows instead of the default triple-isolate to mimic fq_codel at a bottleneck:

The situation without traffic shaping is a mess. Latency takes a serious hit that's very noticeable via SSH. Bandwidth is consistently allocated very unevenly and ends up fluctuating substantially between test runs. The connections tend to settle near a rate, often significantly lower or higher than the fair 9mbit amount. It's generally something like this, but the range varies a lot:

CAKE continues working as expected with a far higher number of connections. It technically has a higher CPU cost than fq_codel, but that's much more of a concern for low end router hardware. It hardly matters on a server, even one that's under heavy CPU load. The improvement in user experience is substantial and it's very noticeable in web page load speeds when a server is under load.

Implementation

For a server with 2000mbit of bandwidth provisioned, you could start by trying it with 99.75% of the provisioned bandwidth:

tc qdisc replace dev eth0 root cake bandwidth 1995mbit besteffort

On a server, setting it to use 100% of the provisioned bandwidth may work fine in practice. Unlike a local network connected to a consumer ISP, you shouldn't need to sacrifice anywhere close to the typically recommended 5-10% of your bandwidth for traffic shaping.
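
If testing shows that the full rate works fine, the limit can be adjusted in place rather than replacing the qdisc, e.g. for the same eth0 interface and 2000mbit example:

tc qdisc change dev eth0 root cake bandwidth 2000mbit besteffort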

This also sets besteffort for the common case where the server doesn't have appropriate Quality of Service markings set up via Diffserv. Fair scheduling is already great at providing low latency by cycling through the hosts and streams without needing this kind of configuration. The defaults for Diffserv traffic classes like real-time video are set up to yield substantial bandwidth in exchange for lower latency. It's easy to set this up wrong and it usually won't make much sense on a server. You might want to set up marking for low priority traffic like system updates, but it will already get a tiny share of the overall traffic on a loaded server due to fair scheduling between hosts and streams.
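
If you do want CAKE to act on DSCP markings instead, you'd swap besteffort for one of its Diffserv presets, e.g. diffserv4 (a sketch using the same interface and bandwidth as above; diffserv3 is CAKE's default when no preset is given):

tc qdisc replace dev eth0 root cake bandwidth 1995mbit diffserv4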

You can use the tc -s qdisc command to monitor CAKE:

tc -s qdisc show dev eth0

If you want to keep an eye on how it changes over time:

watch -n 1 tc -s qdisc show dev eth0

This is very helpful for figuring out if you've successfully moved the bottleneck to the server. If the bandwidth is being fully used, it should consistently have a backlog of data where it's applying the queuing discipline. The backlog shouldn't be draining to near zero under full bandwidth usage, as that indicates the bottleneck is either the server application itself or elsewhere in the network.
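
If you only want to watch the queue depth, you can filter for the backlog counters included in the standard statistics output (a small convenience on top of the command above):

watch -n 1 'tc -s qdisc show dev eth0 | grep backlog'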

If you use systemd-networkd, you can add a CAKE configuration section to the network configuration file instead of manually running the tc command with a Type=oneshot service on boot:

[CAKE]
Bandwidth=1995M
PriorityQueueingPreset=besteffort
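
For context, the [CAKE] section belongs in the .network file matching the interface. A rough sketch, where the file name and the [Match] and [Network] contents are placeholders for whatever your existing configuration already uses (e.g. /etc/systemd/network/20-wired.network):

[Match]
Name=eth0

[Network]
DHCP=yes

[CAKE]
Bandwidth=1995M
PriorityQueueingPreset=besteffort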

Quicker backpressure propagation

The Linux kernel can be tuned to more quickly propagate TCP backpressure up to applications while still maximizing bandwidth usage. This is incredibly useful for interactive applications aiming to send the freshest possible copy of data and for protocols like HTTP/2 multiplexing streams/messages with different priorities over the same TCP connection. This can also substantially reduce memory usage for TCP by reducing buffer sizes closer to the optimal amount for maximizing bandwidth use without wasting memory. The downside to quicker backpressure propagation is increased CPU usage from additional system calls and context switches.

The Linux kernel automatically adjusts the size of the write queue to maximize bandwidth usage. The write queue is divided into unacknowledged bytes (the TCP window size) and unsent bytes. As acknowledgements of transmitted data are received, space frees up for the application to queue more data. The queue of unsent bytes provides the leeway needed to wake the application and obtain more data. Its size can be limited with the net.ipv4.tcp_notsent_lowat sysctl to change the default and the TCP_NOTSENT_LOWAT socket option to override it per socket.
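
As a minimal sketch of the per-socket override, assuming a Linux target where <netinet/tcp.h> defines TCP_NOTSENT_LOWAT (set_notsent_lowat is just a hypothetical helper name):

#include <netinet/in.h>    /* IPPROTO_TCP */
#include <netinet/tcp.h>   /* TCP_NOTSENT_LOWAT */
#include <sys/socket.h>    /* setsockopt */

/* Cap the unsent byte queue for one socket at `bytes`, overriding the
   net.ipv4.tcp_notsent_lowat default, e.g. set_notsent_lowat(fd, 131072).
   Returns 0 on success, -1 with errno set on failure. */
static int set_notsent_lowat(int fd, unsigned int bytes)
{
    return setsockopt(fd, IPPROTO_TCP, TCP_NOTSENT_LOWAT, &bytes, sizeof(bytes));
}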

A reasonable choice for internet-based workloads concerned about latency and particularly prioritization within TCP connections but unwilling to sacrifice throughput is 128kiB. To configure this, set the following in /etc/sysctl.d/local.conf or another sysctl configuration file and load it with sysctl --system:

net.ipv4.tcp_notsent_lowat = 131072

Using values as low as 16384 can make sense to further improve latency and prioritization. However, it's more likely to negatively impact throughput and will further increase CPU usage. Stick with at least 128kiB, or the default of not limiting the automatically sized unsent buffer, unless you're going to do substantial testing to make sure there's no negative impact for the workload.

If you decide to use tcp_notsent_lowat, be aware that newer Linux kernels (Linux 5.0+, with a further improvement in Linux 5.10+) are recommended since they substantially reduce system calls and context switches by not waking the application to provide more data until over half the unsent byte buffer is empty.

Future

Ideally, data centers would deploy CAKE throughout their networks with the default triple-isolate flow isolation. This may mean they need to use more powerful hardware for routing. If the natural bottlenecks used CAKE, setting up traffic shaping on the server wouldn't be necessary. This doesn't seem likely any time soon. Deploying fq_codel is much more realistic and tackles bufferbloat, but it doesn't address fairness between hosts rather than only streams.