Message-Id: <20181016170542.323641535@linuxfoundation.org>
Date: Tue, 16 Oct 2018 19:09:56 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org, netdev@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Eric Dumazet <edumazet@...gle.com>,
"David S. Miller" <davem@...emloft.net>
Subject: [PATCH 4.9 59/71] inet: frags: get rid of ipfrag_skb_cb/FRAG_CB
4.9-stable review patch. If anyone has any objections, please let me know.
------------------
From: Eric Dumazet <edumazet@...gle.com>
ip_defrag uses skb->cb[] to store the fragment offset, and unfortunately
this integer is currently in a different cache line than skb->next,
meaning that we use two cache lines per skb when finding the insertion point.
By aliasing skb->ip_defrag_offset and skb->dev, we pack all the fields
in a single cache line and save precious memory bandwidth.
Note that after the fast path added by Changli Gao in commit
d6bebca92c66 ("fragment: add fast path for in-order fragments")
this change won't help the fast path, since we still need
to access prev->len (2nd cache line), but it will show great
benefits when the slow path is entered, since we perform
a linear scan of a potentially long list.
Also, note that this potentially long list is an attack vector;
we might also consider using an rb-tree there eventually.
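
As a rough, self-contained illustration of the access-pattern change
described above (a sketch only: the struct layouts below are simplified
stand-ins, not the real struct sk_buff or ipfrag_skb_cb; only the
FRAG_CB() macro name and the ip_defrag_offset/dev aliasing mirror what
the changelog describes):

#include <stdio.h>

/* Old scheme: the fragment offset lives in skb->cb[], reached through
 * a cast, in a different cache line than the list pointer. */
struct old_skb {
	struct old_skb *next;	/* list pointer, first cache line */
	char cb[48];		/* control buffer, later cache line */
};

struct frag_cb {
	int offset;
};
#define FRAG_CB(skb)	((struct frag_cb *)((skb)->cb))

/* New scheme: the offset aliases the device pointer, which is unused
 * while the skb sits on a defrag queue, so the field shares the cache
 * line already touched when walking the list. */
struct new_skb {
	struct new_skb *next;		/* list pointer */
	union {
		void *dev;		/* NULL while queued for reassembly */
		int ip_defrag_offset;	/* fragment offset reuses the slot */
	};
};

int main(void)
{
	struct old_skb o = { 0 };
	struct new_skb n = { 0 };

	FRAG_CB(&o)->offset = 1480;	/* old: indirect store through cb[] */
	n.ip_defrag_offset = 1480;	/* new: plain member access */

	printf("old=%d new=%d\n", FRAG_CB(&o)->offset, n.ip_defrag_offset);
	return 0;
}

The union costs nothing because dev and ip_defrag_offset are never
needed at the same time, so the offset takes no extra space and no
extra cache line.
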
Signed-off-by: Eric Dumazet <edumazet@...gle.com>
Signed-off-by: David S. Miller <davem@...emloft.net>
(cherry picked from commit bf66337140c64c27fa37222b7abca7e49d63fb57)
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
include/linux/skbuff.h | 5 +++++
1 file changed, 5 insertions(+)
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -645,6 +645,11 @@ struct sk_buff {
 		};
 		struct rb_node	rbnode; /* used in netem & tcp stack */
 	};
+
+	union {
+		int			ip_defrag_offset;
+	};
+
 	struct sock		*sk;
 	struct net_device	*dev;
 