Date:   Sun,  5 Feb 2017 20:21:02 +0100
From:   Willy Tarreau <w@....eu>
To:     linux-kernel@...r.kernel.org, stable@...r.kernel.org,
        linux@...ck-us.net
Cc:     Michal Kubecek <mkubecek@...e.cz>, Jiri Slaby <jslaby@...e.cz>,
        Willy Tarreau <w@....eu>
Subject: [PATCH 3.10 240/319] net: disable fragment reassembly if high_thresh is set to zero

From: Michal Kubecek <mkubecek@...e.cz>

commit 30759219f562cfaaebe7b9c1d1c0e6b5445c69b0 upstream.

Before commit 6d7b857d541e ("net: use lib/percpu_counter API for
fragmentation mem accounting"), setting the high threshold to 0
prevented fragment reassembly, as the first fragment would always be
evicted before the second could be added to the queue. While
inefficient, some users apparently relied on it.

Since the commit mentioned above, a percpu counter is used for
reassembly memory accounting, and its high batch size avoids taking
the slow path in most common scenarios. As a result, a whole
full-sized packet can be reassembled without the percpu counter's
main counter changing its value, so that even with high_thresh set
to 0, fragmented packets can still be reassembled and processed.
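
To illustrate why the batched counter no longer enforces the limit,
here is a minimal user-space C sketch (not kernel code: the type,
the function names and the batch value are made up for illustration,
loosely modelled on the batched percpu_counter used for fragment
accounting). Per-CPU additions below the batch size are never folded
into the main counter, so a limit check that reads only the main
counter still sees 0 even while fragments accumulate:

/*
 * Minimal user-space sketch of batched per-CPU memory accounting,
 * loosely modelled on the kernel's percpu_counter.  All names and
 * the batch value are illustrative, not the kernel's.
 */
#include <stdio.h>

#define NR_CPUS    4
#define BATCH      (128 * 1024)		/* large batch: per-CPU slack */

struct fake_percpu_counter {
	long long count;		/* main ("global") counter */
	long counters[NR_CPUS];		/* per-CPU deltas not yet folded */
};

/*
 * Add 'amount' on 'cpu'.  The main counter is only touched once the
 * per-CPU delta reaches the batch size, so a few full-sized packets
 * can be accounted without the main counter ever changing.
 */
static void counter_add(struct fake_percpu_counter *c, int cpu, long amount)
{
	long new = c->counters[cpu] + amount;

	if (new >= BATCH || new <= -BATCH) {
		c->count += new;
		c->counters[cpu] = 0;
	} else {
		c->counters[cpu] = new;
	}
}

/* Fast-path limit check: reads only the main counter. */
static int over_limit(const struct fake_percpu_counter *c, long long thresh)
{
	return c->count > thresh;
}

int main(void)
{
	struct fake_percpu_counter frag_mem = { 0 };
	long long high_thresh = 0;	/* admin tried to disable reassembly */
	int i;

	/* Account the ~44 fragments of one 64KB packet on CPU 0. */
	for (i = 0; i < 44; i++)
		counter_add(&frag_mem, 0, 1500);

	/* Prints "main count = 0, over limit = 0": the whole packet
	 * fits in the per-CPU delta, so high_thresh == 0 is never
	 * seen as exceeded and reassembly proceeds. */
	printf("main count = %lld, over limit = %d\n",
	       frag_mem.count, over_limit(&frag_mem, high_thresh));
	return 0;
}

The fix below therefore checks high_thresh explicitly instead of
relying on the memory accounting to enforce a zero limit.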

Add explicit checks preventing reassembly if the high threshold is zero.

[mk] backport to 3.12

Signed-off-by: Michal Kubecek <mkubecek@...e.cz>
Signed-off-by: Jiri Slaby <jslaby@...e.cz>
Signed-off-by: Willy Tarreau <w@....eu>
---
 net/ipv4/ip_fragment.c                  | 4 ++++
 net/ipv6/netfilter/nf_conntrack_reasm.c | 3 +++
 net/ipv6/reassembly.c                   | 4 ++++
 3 files changed, 11 insertions(+)

diff --git a/net/ipv4/ip_fragment.c b/net/ipv4/ip_fragment.c
index 4d98a6b..04c7e46 100644
--- a/net/ipv4/ip_fragment.c
+++ b/net/ipv4/ip_fragment.c
@@ -656,6 +656,9 @@ int ip_defrag(struct sk_buff *skb, u32 user)
 	net = skb->dev ? dev_net(skb->dev) : dev_net(skb_dst(skb)->dev);
 	IP_INC_STATS_BH(net, IPSTATS_MIB_REASMREQDS);
 
+	if (!net->ipv4.frags.high_thresh)
+		goto fail;
+
 	/* Start by cleaning up the memory. */
 	ip_evictor(net);
 
@@ -672,6 +675,7 @@ int ip_defrag(struct sk_buff *skb, u32 user)
 		return ret;
 	}
 
+fail:
 	IP_INC_STATS_BH(net, IPSTATS_MIB_REASMFAILS);
 	kfree_skb(skb);
 	return -ENOMEM;
diff --git a/net/ipv6/netfilter/nf_conntrack_reasm.c b/net/ipv6/netfilter/nf_conntrack_reasm.c
index 7cd6235..c11a40c 100644
--- a/net/ipv6/netfilter/nf_conntrack_reasm.c
+++ b/net/ipv6/netfilter/nf_conntrack_reasm.c
@@ -569,6 +569,9 @@ struct sk_buff *nf_ct_frag6_gather(struct sk_buff *skb, u32 user)
 	if (find_prev_fhdr(skb, &prevhdr, &nhoff, &fhoff) < 0)
 		return skb;
 
+	if (!net->nf_frag.frags.high_thresh)
+		return skb;
+
 	clone = skb_clone(skb, GFP_ATOMIC);
 	if (clone == NULL) {
 		pr_debug("Can't clone skb\n");
diff --git a/net/ipv6/reassembly.c b/net/ipv6/reassembly.c
index a1fb511..1a5318e 100644
--- a/net/ipv6/reassembly.c
+++ b/net/ipv6/reassembly.c
@@ -556,6 +556,9 @@ static int ipv6_frag_rcv(struct sk_buff *skb)
 		return 1;
 	}
 
+	if (!net->ipv6.frags.high_thresh)
+		goto fail_mem;
+
 	evicted = inet_frag_evictor(&net->ipv6.frags, &ip6_frags, false);
 	if (evicted)
 		IP6_ADD_STATS_BH(net, ip6_dst_idev(skb_dst(skb)),
@@ -575,6 +578,7 @@ static int ipv6_frag_rcv(struct sk_buff *skb)
 		return ret;
 	}
 
+fail_mem:
 	IP6_INC_STATS_BH(net, ip6_dst_idev(skb_dst(skb)), IPSTATS_MIB_REASMFAILS);
 	kfree_skb(skb);
 	return -1;
-- 
2.8.0.rc2.1.gbe9624a
