Message-Id: <20180804082706.727601513@linuxfoundation.org>
Date: Sat, 4 Aug 2018 11:01:43 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Eric Dumazet <edumazet@...gle.com>,
Jann Horn <jannh@...gle.com>, Florian Westphal <fw@...len.de>,
Peter Oskolkov <posk@...gle.com>,
Paolo Abeni <pabeni@...hat.com>,
"David S. Miller" <davem@...emloft.net>
Subject: [PATCH 4.4 114/124] inet: frag: enforce memory limits earlier
4.4-stable review patch. If anyone has any objections, please let me know.
------------------
From: Eric Dumazet <edumazet@...gle.com>
[ Upstream commit 56e2c94f055d328f5f6b0a5c1721cca2f2d4e0a1 ]
We currently check current frags memory usage only when
a new frag queue is created. This allows attackers to first
consume the memory budget (default: 4 MB) by creating thousands
of frag queues, then send tiny skbs to exceed the high_thresh
limit by 2 to 3 orders of magnitude.
Note that before commit 648700f76b03 ("inet: frags: use rhashtables
for reassembly units"), the work queue could be starved under DoS,
getting no CPU cycles.
After commit 648700f76b03, only the per-frag-queue timer can eventually
remove an incomplete frag queue and its skbs.
Fixes: b13d3cbfb8e8 ("inet: frag: move eviction of queues to work queue")
Signed-off-by: Eric Dumazet <edumazet@...gle.com>
Reported-by: Jann Horn <jannh@...gle.com>
Cc: Florian Westphal <fw@...len.de>
Cc: Peter Oskolkov <posk@...gle.com>
Cc: Paolo Abeni <pabeni@...hat.com>
Acked-by: Florian Westphal <fw@...len.de>
Signed-off-by: David S. Miller <davem@...emloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
net/ipv4/inet_fragment.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
--- a/net/ipv4/inet_fragment.c
+++ b/net/ipv4/inet_fragment.c
@@ -364,11 +364,6 @@ static struct inet_frag_queue *inet_frag
 {
 	struct inet_frag_queue *q;
 
-	if (frag_mem_limit(nf) > nf->high_thresh) {
-		inet_frag_schedule_worker(f);
-		return NULL;
-	}
-
 	q = kmem_cache_zalloc(f->frags_cachep, GFP_ATOMIC);
 	if (!q)
 		return NULL;
@@ -405,6 +400,11 @@ struct inet_frag_queue *inet_frag_find(s
 	struct inet_frag_queue *q;
 	int depth = 0;
 
+	if (!nf->high_thresh || frag_mem_limit(nf) > nf->high_thresh) {
+		inet_frag_schedule_worker(f);
+		return NULL;
+	}
+
 	if (frag_mem_limit(nf) > nf->low_thresh)
 		inet_frag_schedule_worker(f);
 
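
To make the ordering change easier to see, here is a minimal userspace
sketch, not the kernel code itself: the frag_state/frag_find_* names
below are hypothetical stand-ins for the real inet_frag API. Before the
patch the high_thresh check ran only when a brand-new queue had to be
allocated, so fragments destined for already-existing queues were never
limited; after the patch the check runs at lookup time for every
fragment, and high_thresh == 0 refuses reassembly outright.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for the kernel's per-netns fragment accounting:
 * mem_used approximates frag_mem_limit(nf), high_thresh nf->high_thresh. */
struct frag_state {
	long mem_used;
	long high_thresh;
};

struct frag_queue {
	int refcnt;	/* placeholder member */
};

static struct frag_queue existing_queue;

/* Old behaviour: the memory limit was enforced only when a new queue
 * had to be allocated, so fragments matching an existing queue were
 * accepted no matter how much memory was already in use. */
static struct frag_queue *frag_find_old(struct frag_state *st, bool have_existing)
{
	if (have_existing)
		return &existing_queue;		/* no limit check at all */
	if (st->mem_used > st->high_thresh)
		return NULL;			/* only new queues refused */
	return &existing_queue;
}

/* New behaviour: the check runs before the lookup, covering existing and
 * to-be-created queues alike; high_thresh == 0 disables reassembly. */
static struct frag_queue *frag_find_new(struct frag_state *st, bool have_existing)
{
	(void)have_existing;
	if (!st->high_thresh || st->mem_used > st->high_thresh)
		return NULL;
	return &existing_queue;
}

int main(void)
{
	/* Budget (4 MB by default) already exhausted by earlier queues... */
	struct frag_state st = { .mem_used = 5L << 20, .high_thresh = 4L << 20 };

	/* ...yet the old path still accepts tiny skbs for existing queues,
	 * letting memory grow well past high_thresh. */
	printf("old path, existing queue: %s\n",
	       frag_find_old(&st, true) ? "accepted" : "refused");
	printf("new path, existing queue: %s\n",
	       frag_find_new(&st, true) ? "accepted" : "refused");
	return 0;
}

With the check hoisted into the lookup path, an attacker who has already
filled the budget can no longer grow it further by feeding fragments to
queues that already exist.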