Keeping FreeBSD TCP performance in the midst of a highly-buffered connection

I was perplexed recently when I began an rsync job to a Raspberry Pi server. I know exactly what limits the bandwidth of this connection: the CPU (or network) on the Raspberry Pi, which cannot accept data fast enough.

So, even though my server is on a 1 Gbit/s interface, and the Raspberry Pi is on a 100 Mbit/s interface, the transfer rate is ~ 10 Mbit/s. Fair enough.

But what really perplexed me is that the presence of this rsync connection severely limited other connections, notably Samba. The Simpsons episode playing in the living room had noticeably stuttering audio.

So, I began to investigate. The same low rate occurred with iperf. It seemed a little better from my basement computer than from the living room machine. Here is an iperf run from the basement to the FreeBSD server:


C:\Users\Poojan\Downloads\iperf-2.0.5-2-win32>iperf -c server
------------------------------------------------------------
Client connecting to server, TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.20 port 64155 connected with 192.168.1.8 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.1 sec  25.1 MBytes  21.0 Mbits/sec
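ated
For reference, the server side of these tests is just iperf 2 running in listener mode on the FreeBSD box (TCP port 5001 is iperf 2's default):

# on the FreeBSD server, start the iperf 2 listener
iperf -s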

Without rsync running, the same test would be around 670 Mbit/s.

I started playing around with buffers. Curiously, reducing sendbuf_max helped:


Poojan@server ~ >sudo sysctl net.inet.tcp.sendbuf_max
net.inet.tcp.sendbuf_max: 262144
Poojan@server ~ >sudo sysctl net.inet.tcp.sendbuf_max=65536
net.inet.tcp.sendbuf_max: 262144 -> 65536

Which yielded:

C:\Users\Poojan\Downloads\iperf-2.0.5-2-win32>iperf -c server
------------------------------------------------------------
Client connecting to server, TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.20 port 64171 connected with 192.168.1.8 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   788 MBytes   661 Mbits/sec

I posited that maybe there is some overall limit on buffer memory, that rsync was hogging all of it, and that making the buffers smaller left more available for iperf. I went hunting for this limit.
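
The buffer-related tunables can be surveyed in one go; something along these lines (the exact set of OIDs varies between FreeBSD versions, so treat this as a sketch):

# dump the TCP buffer tunables and the global socket-buffer cap
sysctl net.inet.tcp | grep buf
sysctl kern.ipc.maxsockbuf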

I tried doubling kern.ipc.maxsockbuf:


Poojan@server ~ >sudo sysctl -w kern.ipc.maxsockbuf=524288
kern.ipc.maxsockbuf: 262144 -> 524288

which yielded:

C:\Users\Poojan\Downloads\iperf-2.0.5-2-win32>iperf -c server
------------------------------------------------------------
Client connecting to server, TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.20 port 64216 connected with 192.168.1.8 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  25.0 MBytes  20.9 Mbits/sec

No luck. (Note: I realized that the above tests were run with jumbo frames enabled on both the server and the client. I disabled jumbo frames on the client at this point.)
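
For what it's worth, on the FreeBSD side jumbo frames simply show up as a large MTU on the interface, so the state is easy to check and change (em0 is just an example; substitute the actual interface name):

# check the current MTU; jumbo frames mean something like 9000 here
ifconfig em0 | grep mtu
# drop back to standard 1500-byte Ethernet frames if desired
sudo ifconfig em0 mtu 1500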

I then did a netstat -m, just in case:


1470/5175/6645 mbufs in use (current/cache/total)
271/2635/2906/10485760 mbuf clusters in use (current/cache/total/max)
271/2635 mbuf+clusters out of packet secondary zone in use (current/cache)
85/335/420/762208 4k (page size) jumbo clusters in use (current/cache/total/max)
1041/361/1402/225839 9k jumbo clusters in use (current/cache/total/max)
0/0/0/127034 16k jumbo clusters in use (current/cache/total/max)
10618K/11152K/21771K bytes allocated to network (current/cache/total)
1106/2171/531 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for mbufs delayed (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters delayed (4k/9k/16k)
361/1345/0 requests for jumbo clusters denied (4k/9k/16k)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile

This didn’t really show any indication that buffers were being over-subscribed, at least not during the tests.
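
The lines to watch in that output are the denied/delayed counters. One way to tell whether they are growing during a test, rather than being left over from earlier, is to snapshot them before and after a run (a simple sketch, run on the server):

# on the server, before the iperf run:
netstat -m | grep -E 'denied|delayed' > /tmp/before.txt
# ...run the iperf test from the client...
# then afterwards:
netstat -m | grep -E 'denied|delayed' > /tmp/after.txt
diff /tmp/before.txt /tmp/after.txt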

But now, with sendbuf_max back at 262144 and maxsockbuf at 524288, my iperf reading went down even further:


C:\Users\Poojan\Downloads\iperf-2.0.5-2-win32>iperf -c server
------------------------------------------------------------
Client connecting to server, TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.20 port 64438 connected with 192.168.1.8 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.7 sec  2.88 MBytes  2.25 Mbits/sec

From reading this summary of FreeBSD buffers, it seems that kern.ipc.maxsockbuf operates at a different level than the net.inet.tcp.sendbuf_* tunables. And, in fact, having both of them set large hurts performance. So maybe this is just plain bufferbloat.
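
As I understand it, kern.ipc.maxsockbuf is the global cap on how large any socket buffer may grow, while the net.inet.tcp.sendbuf_* sysctls drive TCP's send-buffer autotuning up to its own ceiling. Roughly (sysctl names may vary by release):

# TCP send-buffer autotuning knobs:
#   sendbuf_auto - enable automatic send-buffer growth
#   sendbuf_inc  - how much the buffer grows per step
#   sendbuf_max  - ceiling for the autotuned buffer
sysctl net.inet.tcp.sendbuf_auto net.inet.tcp.sendbuf_inc net.inet.tcp.sendbuf_max
# the separate, global cap that no socket buffer may exceed
sysctl kern.ipc.maxsockbuf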

But then I realized that my better results came when the send buffer was capped at 64 KiB. So, I disabled RFC 1323 (its window-scaling option is what allows TCP windows larger than 64 KiB; it also adds timestamps). And voila!


C:\Users\Poojan\Downloads\iperf-2.0.5-2-win32>iperf -c server
------------------------------------------------------------
Client connecting to server, TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.20 port 65203 connected with 192.168.1.8 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   798 MBytes   669 Mbits/sec
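
For reference, the RFC 1323 toggle is a single sysctl on the FreeBSD side, and it can be made permanent in /etc/sysctl.conf. Bear in mind that it disables window scaling and timestamps for every connection, which would hurt throughput on a genuinely high bandwidth-delay path:

# turn off RFC 1323 extensions (window scaling + timestamps) at runtime
sudo sysctl net.inet.tcp.rfc1323=0
# to keep the setting across reboots, add this line to /etc/sysctl.conf:
# net.inet.tcp.rfc1323=0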
