Message-ID: <471E0F3B.2020703@myri.com>
Date: Tue, 23 Oct 2007 11:11:55 -0400
From: Andrew Gallatin <gallatin@...i.com>
To: netdev <netdev@...r.kernel.org>
CC: ossthema@...ibm.com
Subject: [PATCH] LRO ack aggregation
Hi,

We recently did some performance comparisons between the new inet_lro
LRO support in the kernel and our Myri10GE in-driver LRO. For receive,
we found they were nearly identical. For transmit, however, Myri10GE's
in-driver LRO showed much lower CPU utilization. We traced the
difference to our driver LRO aggregating TCP acks, while the inet_lro
module does not.

I've attached a patch which adds support to inet_lro for aggregating
pure acks. Aggregating pure acks (segments with TCP_PAYLOAD_LENGTH ==
0) entails freeing the skb (or putting the page in the frags case).
The patch also handles trimming (typical for 54-byte pure-ack frames
which have been padded to the Ethernet minimum 60-byte frame size).
In the frags case, I tried to keep things simple by only doing the
trim when the entire frame fits in the first frag. To be safe, I
ensure that the padding is all 0 (or, more exactly, was some pattern
whose checksum is -0) so that it doesn't impact hardware checksums.
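
To make the trim arithmetic concrete, here is a minimal, self-contained
sketch in userspace C. It is not the attached patch; the helper name,
constant, and layout are mine, and the real code works from the skb (or
the first frag) rather than raw headers:

/*
 * Illustrative sketch only -- not ack_aggr.diff.  Given the IP and TCP
 * headers of a received frame, report whether it is a pure ack and how
 * many trailing pad bytes would need trimming.  A 54-byte pure ack
 * (14 Ethernet + 20 IP + 20 TCP) arrives padded to the 60-byte Ethernet
 * minimum, so 6 pad bytes follow the IP datagram.
 */
#include <arpa/inet.h>
#include <netinet/ip.h>
#include <netinet/tcp.h>
#include <stdbool.h>

#define ETH_HDR_LEN 14	/* untagged Ethernet header */

/* Returns true for a pure ack; *pad_len receives the trailing pad length. */
bool pure_ack_pad_len(const struct iphdr *iph, const struct tcphdr *tcph,
		      int frame_len, int *pad_len)
{
	int ip_tot_len = ntohs(iph->tot_len);	/* IP header + TCP segment */
	int tcp_payload = ip_tot_len - iph->ihl * 4 - tcph->doff * 4;

	*pad_len = frame_len - ETH_HDR_LEN - ip_tot_len; /* 6 for a 54-byte ack */
	return tcp_payload == 0;
}

In the patch itself, such a frame is aggregated into the existing LRO
descriptor and its skb freed (or its page put, in the frags case), and
the trim is only applied after verifying that the padding checksums to
(ones-complement) zero, so hardware checksums are not disturbed.
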
This patch also fixes a small bug in the skb LRO path dealing with
vlans that I found when doing my own testing. Specifically, in the
desc->active case, the existing code always fails the
lro_tcp_ip_check() for NICs without LRO_F_EXTRACT_VLAN_ID, because it
fails to subtract the vlan_hdr_len from the skb->len.
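
To make the failure mode concrete, here is a tiny standalone demo (a
sketch, not the kernel code; the names below are mine, but to my
reading lro_tcp_ip_check() starts by comparing the length it is given
against iph->tot_len, which is the check modeled here):

/*
 * Illustration of the vlan length bug described above (not inet_lro
 * itself).  If the NIC leaves the 802.1Q tag in place and the driver
 * does not set LRO_F_EXTRACT_VLAN_ID, the length handed to the tot_len
 * sanity check is 4 bytes too large unless vlan_hdr_len is subtracted,
 * so the check rejects every frame.
 */
#include <arpa/inet.h>
#include <netinet/ip.h>
#include <stdio.h>

#define MY_VLAN_HLEN 4	/* 802.1Q tag length */

/* Frame is eligible for aggregation only when len matches iph->tot_len. */
static int tot_len_check(const struct iphdr *iph, int len)
{
	return ntohs(iph->tot_len) == len ? 0 : -1;
}

int main(void)
{
	struct iphdr iph = { .tot_len = htons(1500) };
	int skb_len = 1500 + MY_VLAN_HLEN;	/* vlan tag still present */

	printf("skb->len as-is:          %d\n",
	       tot_len_check(&iph, skb_len));			/* -1: always fails */
	printf("skb->len - vlan_hdr_len: %d\n",
	       tot_len_check(&iph, skb_len - MY_VLAN_HLEN));	/*  0: passes */
	return 0;
}
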
Jan-Bernd Themann (ossthema@...ibm.com) has tested the patch using the
eHEA driver (skb codepath), and I have tested it using Myri10GE (both
the frags and skb codepaths).

Using a pair of identical low-end 2.0GHz Athlon 64 X2 3800+ machines
with Myri10GE 10GbE NICs, I ran 10 iterations of netperf TCP_SENDFILE
tests, taking the median run for comparison purposes. The receiver was
running Myri10GE + patched inet_lro:

TCP SENDFILE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to rome-my (192.168.1.16) port 0 AF_INET : cpu bind
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB

Myri10GE driver-specific LRO:
 87380  65536  65536    60.02      9442.65   16.24    69.31    0.282   1.203

Myri10GE + unpatched inet_lro:
 87380  65536  65536    60.02      9442.88   20.10    69.11    0.349   1.199

Myri10GE + patched inet_lro:
 87380  65536  65536    60.02      9443.30   16.95    68.97    0.294   1.197

The important bits here are the sender's CPU utilization and service
demand (cost per byte). As you can see, without aggregating acks, the
overhead on the sender is roughly 20% higher, even when sending to a
receiver which uses LRO. The differences are even more dramatic when
sending to a receiver which does not use LRO (and hence sends acks more
frequently).

Below is the same benchmark, run between a pair of 4-way 3.0GHz Xeon
5160 machines (Dell 2950) with Myri10GE NICs. The receiver is running
Solaris 10U4, which does not do LRO, and is acking at approximately
8:1 (or ~100K acks/sec):

Myri10GE driver-specific LRO:
 196712 65536  65536    60.01      9280.09    7.14    45.37    0.252   1.602

Myri10GE + unpatched inet_lro:
 196712 65536  65536    60.01      8530.80   10.51    44.60    0.404   1.713

Myri10GE + patched inet_lro:
 196712 65536  65536    60.00      9249.65    7.21    45.90    0.255   1.626

Signed-off-by: Andrew Gallatin <gallatin@...i.com>

Andrew Gallatin
Myricom Inc.

[Attachment: "ack_aggr.diff", text/plain, 3790 bytes]