Message-ID: <lsq.1560868082.489417250@decadent.org.uk>
Date: Tue, 18 Jun 2019 15:28:02 +0100
From: Ben Hutchings <ben@...adent.org.uk>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
CC: akpm@...ux-foundation.org, Denis Kirjanov <kda@...ux-powerpc.org>,
"David S. Miller" <davem@...emloft.net>,
"Jonathan Lemon" <jonathan.lemon@...il.com>,
"Jonathan Looney" <jtl@...flix.com>,
"Yuchung Cheng" <ycheng@...gle.com>,
"Eric Dumazet" <edumazet@...gle.com>,
"Tyler Hicks" <tyhicks@...onical.com>,
"Neal Cardwell" <ncardwell@...gle.com>,
"Bruce Curtis" <brucec@...flix.com>
Subject: [PATCH 3.16 08/10] tcp: tcp_fragment() should apply sane memory limits

3.16.69-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Eric Dumazet <edumazet@...gle.com>
commit f070ef2ac66716357066b683fb0baf55f8191a2e upstream.
Jonathan Looney reported that a malicious peer can force a sender
to fragment its retransmit queue into tiny skbs, inflating memory
usage and/or overflowing 32-bit counters.

TCP allows an application to queue up to sk_sndbuf bytes, so we
need to give some allowance for non-malicious splitting of the
retransmit queue.

A new SNMP counter is added to monitor how many times TCP refused
to split an skb because this allowance was exceeded. Note that the
counter might also increase when applications use the SO_SNDBUF
socket option to lower sk_sndbuf.

CVE-2019-11478: tcp_fragment, prevent fragmenting a packet when the
socket is already using more than half the allowed space
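
As a purely illustrative aside, the following minimal user-space sketch
models the new limit; the sk_sndbuf and sk_wmem_queued values are made
up for the example and are not taken from the patch.

	#include <stdio.h>

	int main(void)
	{
		unsigned int sk_sndbuf = 1 << 20;      /* assumed 1 MiB send buffer */
		unsigned int sk_wmem_queued = 3 << 20; /* assumed 3 MiB already queued */

		/* Same test as the patch: refuse to fragment once queued write
		 * memory exceeds twice the configured send buffer. */
		if ((sk_wmem_queued >> 1) > sk_sndbuf)
			printf("tcp_fragment() would return -ENOMEM and bump TCPWqueueTooBig\n");

		return 0;
	}

With these example numbers the check fires: 3 MiB >> 1 is 1.5 MiB,
which exceeds the 1 MiB send buffer.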
Signed-off-by: Eric Dumazet <edumazet@...gle.com>
Reported-by: Jonathan Looney <jtl@...flix.com>
Acked-by: Neal Cardwell <ncardwell@...gle.com>
Acked-by: Yuchung Cheng <ycheng@...gle.com>
Reviewed-by: Tyler Hicks <tyhicks@...onical.com>
Cc: Bruce Curtis <brucec@...flix.com>
Cc: Jonathan Lemon <jonathan.lemon@...il.com>
Signed-off-by: David S. Miller <davem@...emloft.net>
[Salvatore Bonaccorso: Adjust context for backport to 4.9.168]
[bwh: Backported to 3.16: adjust context]
Signed-off-by: Ben Hutchings <ben@...adent.org.uk>
---
 include/uapi/linux/snmp.h | 1 +
 net/ipv4/proc.c           | 1 +
 net/ipv4/tcp_output.c     | 5 +++++
 3 files changed, 7 insertions(+)
--- a/include/uapi/linux/snmp.h
+++ b/include/uapi/linux/snmp.h
@@ -265,6 +265,7 @@ enum
 	LINUX_MIB_TCPWANTZEROWINDOWADV, /* TCPWantZeroWindowAdv */
 	LINUX_MIB_TCPSYNRETRANS, /* TCPSynRetrans */
 	LINUX_MIB_TCPORIGDATASENT, /* TCPOrigDataSent */
+	LINUX_MIB_TCPWQUEUETOOBIG, /* TCPWqueueTooBig */
 	__LINUX_MIB_MAX
 };
--- a/net/ipv4/proc.c
+++ b/net/ipv4/proc.c
@@ -286,6 +286,7 @@ static const struct snmp_mib snmp4_net_l
 	SNMP_MIB_ITEM("TCPWantZeroWindowAdv", LINUX_MIB_TCPWANTZEROWINDOWADV),
 	SNMP_MIB_ITEM("TCPSynRetrans", LINUX_MIB_TCPSYNRETRANS),
 	SNMP_MIB_ITEM("TCPOrigDataSent", LINUX_MIB_TCPORIGDATASENT),
+	SNMP_MIB_ITEM("TCPWqueueTooBig", LINUX_MIB_TCPWQUEUETOOBIG),
 	SNMP_MIB_SENTINEL
 };
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -1090,6 +1090,11 @@ int tcp_fragment(struct sock *sk, struct
 	if (nsize < 0)
 		nsize = 0;
 
+	if (unlikely((sk->sk_wmem_queued >> 1) > sk->sk_sndbuf)) {
+		NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPWQUEUETOOBIG);
+		return -ENOMEM;
+	}
+
 	if (skb_unclone(skb, gfp))
 		return -ENOMEM;
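
As a usage note (beyond the patch text itself): because the new counter
is registered in snmp4_net_list, it should appear as "TCPWqueueTooBig"
on the TcpExt lines of /proc/net/netstat on kernels carrying this
change, e.g. via "grep TcpExt /proc/net/netstat" or the iproute2 nstat
tool.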