Message-ID: <20110328115726.4cca214d@kryten>
Date: Mon, 28 Mar 2011 11:57:26 +1100
From: Anton Blanchard <anton@...ba.org>
To: davem@...emloft.net, eric.dumazet@...il.com,
herbert@...dor.apana.org.au
Cc: netdev@...r.kernel.org
Subject: [PATCH] net: Always allocate at least 16 skb frags regardless of
page size
When analysing performance of the cxgb3 on a ppc64 box I noticed that
we weren't doing much GRO merging. It turns out we are limited by the
number of SKB frags:
#define MAX_SKB_FRAGS (65536/PAGE_SIZE + 2)
With a 4kB page size we have 18 frags, but with a 64kB page size we
only have 3 frags.
I ran a single stream TCP bandwidth test to compare the performance of
different values of MAX_SKB_FRAGS on the receiver:

MAX_SKB_FRAGS   Mbps
            3   7080
            8   7931 (+12%)
           16   8335 (+17%)
           32   8349 (+17%)
Performance continues to increase up to 16 frags and then levels off, so
the patch below puts a lower bound of 16 on MAX_SKB_FRAGS.
Signed-off-by: Anton Blanchard <anton@...ba.org>
---
Index: powerpc.git/include/linux/skbuff.h
===================================================================
--- powerpc.git.orig/include/linux/skbuff.h 2011-03-28 09:41:25.392124844 +1100
+++ powerpc.git/include/linux/skbuff.h 2011-03-28 10:18:58.253050000 +1100
@@ -122,8 +122,14 @@ struct sk_buff_head {
 struct sk_buff;
-/* To allow 64K frame to be packed as single skb without frag_list */
+/* To allow 64K frame to be packed as single skb without frag_list. Since
+ * GRO uses frags we allocate at least 16 regardless of page size.
+ */
+#if (65536/PAGE_SIZE + 2) < 16
+#define MAX_SKB_FRAGS 16
+#else
#define MAX_SKB_FRAGS (65536/PAGE_SIZE + 2)
+#endif
 typedef struct skb_frag_struct skb_frag_t;
--