Message-ID: <1534339268-111834-6-git-send-email-maowenan@huawei.com>
Date: Wed, 15 Aug 2018 21:21:04 +0800
From: Mao Wenan <maowenan@...wei.com>
To: <dwmw2@...radead.org>, <gregkh@...ux-foundation.org>,
<netdev@...r.kernel.org>, <eric.dumazet@...il.com>,
<edumazet@...gle.com>, <davem@...emloft.net>, <ycheng@...gle.com>,
<jdw@...zon.de>
Subject: [PATCH stable 4.4 5/9] tcp: free batches of packets in tcp_prune_ofo_queue()
From: Eric Dumazet <edumazet@...gle.com>

Juha-Matti Tilli reported that malicious peers could inject tiny
packets in out_of_order_queue, forcing very expensive calls
to tcp_collapse_ofo_queue() and tcp_prune_ofo_queue() for
every incoming packet. The out_of_order_queue rb-tree can contain
thousands of nodes, and iterating over all of them is not nice.

Before linux-4.9, we would have pruned all packets in ofo_queue
in one go, every XXXX packets. XXXX depends on sk_rcvbuf and skb
truesize, but is about 7000 packets with the tcp_rmem[2] default of 6 MB.
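
As a back-of-envelope check of that figure, a minimal userspace
sketch; the 900-byte average truesize of a tiny skb is an assumed
illustrative value here, not something specified by this patch:

#include <stdio.h>

/* The old code pruned the whole ofo queue once sk_rmem_alloc grew
 * past sk_rcvbuf, so the prune interval is roughly
 * sk_rcvbuf / (average skb truesize).
 */
int main(void)
{
	int rcvbuf = 6 << 20;	/* tcp_rmem[2] default of 6 MB */
	int truesize = 900;	/* assumed average truesize of a tiny skb */

	printf("prune interval ~= %d packets\n", rcvbuf / truesize);
	return 0;
}

This prints a prune interval of 6990 packets, consistent with the
"about 7000" above.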

Since we plan to increase tcp_rmem[2] in the future to cope with
modern BDP, we can not revert to the old behavior without great pain.

The strategy taken in this patch is to purge ~12.5 % of the queue
capacity per batch.
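
The batching itself can be sketched outside the kernel. In the
self-contained example below, the linked list and plain byte counters
are stand-ins for the kernel's rb-tree and sk_rmem_alloc accounting,
and every constant is illustrative:

#include <stdio.h>
#include <stdlib.h>

struct pkt {
	struct pkt *prev;
	int truesize;
};

int main(void)
{
	int rcvbuf = 6 << 20;		/* pretend tcp_rmem[2] = 6 MB */
	int rmem = 0, reclaims = 0;
	struct pkt *tail = NULL;

	/* Queue 10000 tiny packets, newest at the tail. */
	for (int i = 0; i < 10000; i++) {
		struct pkt *p = malloc(sizeof(*p));
		p->truesize = 900;	/* assumed tiny-skb truesize */
		p->prev = tail;
		tail = p;
		rmem += p->truesize;
	}

	/* Prune from the tail, doing the expensive reclaim/recheck
	 * step only once per rcvbuf >> 3 bytes freed (~12.5 % of the
	 * queue capacity) instead of once per freed packet.
	 */
	struct pkt *node = tail, *prev;
	int goal = rcvbuf >> 3;

	do {
		prev = node->prev;
		goal -= node->truesize;
		rmem -= node->truesize;
		free(node);
		if (!prev || goal <= 0) {
			reclaims++;		/* stands in for sk_mem_reclaim() */
			if (rmem <= rcvbuf)	/* stands in for the sk_rmem_alloc check */
				break;
			goal = rcvbuf >> 3;	/* start the next batch */
		}
		node = prev;
	} while (node);

	/* Remaining packets simply stay queued, as in the real code. */
	printf("reclaim/recheck steps: %d, bytes still queued: %d\n",
	       reclaims, rmem);
	return 0;
}

A per-packet reclaim/recheck over the same run would take roughly
three thousand steps to get rmem back under rcvbuf; the batched
version does it in four, while freeing at most one extra batch worth
of packets.
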
Fixes: 36a6503fedda ("tcp: refine tcp_prune_ofo_queue() to not drop all packets")
Signed-off-by: Eric Dumazet <edumazet@...gle.com>
Reported-by: Juha-Matti Tilli <juha-matti.tilli@....fi>
Acked-by: Yuchung Cheng <ycheng@...gle.com>
Acked-by: Soheil Hassas Yeganeh <soheil@...gle.com>
Signed-off-by: David S. Miller <davem@...emloft.net>
Signed-off-by: Mao Wenan <maowenan@...wei.com>
---
net/ipv4/tcp_input.c | 16 ++++++++++------
1 file changed, 10 insertions(+), 6 deletions(-)
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 12edc4f..32225dc 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -4893,22 +4893,26 @@ static bool tcp_prune_ofo_queue(struct sock *sk)
 {
 	struct tcp_sock *tp = tcp_sk(sk);
 	struct rb_node *node, *prev;
+	int goal;
 
 	if (RB_EMPTY_ROOT(&tp->out_of_order_queue))
 		return false;
 
 	NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_OFOPRUNED);
-
+	goal = sk->sk_rcvbuf >> 3;
 	node = &tp->ooo_last_skb->rbnode;
 	do {
 		prev = rb_prev(node);
 		rb_erase(node, &tp->out_of_order_queue);
+		goal -= rb_to_skb(node)->truesize;
 		__kfree_skb(rb_to_skb(node));
-		sk_mem_reclaim(sk);
-		if (atomic_read(&sk->sk_rmem_alloc) <= sk->sk_rcvbuf &&
-		    !tcp_under_memory_pressure(sk))
-			break;
-
+		if (!prev || goal <= 0) {
+			sk_mem_reclaim(sk);
+			if (atomic_read(&sk->sk_rmem_alloc) <= sk->sk_rcvbuf &&
+			    !tcp_under_memory_pressure(sk))
+				break;
+			goal = sk->sk_rcvbuf >> 3;
+		}
 		node = prev;
 	} while (node);
 	tp->ooo_last_skb = rb_entry(prev, struct sk_buff, rbnode);
--
1.8.3.1