Message-Id: <20201012132631.426074555@linuxfoundation.org>
Date: Mon, 12 Oct 2020 15:26:41 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Or Cohen <orcohen@...oaltonetworks.com>,
Eric Dumazet <edumazet@...gle.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Stefan Nuernberger <snu@...zon.com>,
David Woodhouse <dwmw@...zon.co.uk>,
Amit Shah <aams@...zon.com>
Subject: [PATCH 4.14 25/70] net/packet: fix overflow in tpacket_rcv

From: Or Cohen <orcohen@...oaltonetworks.com>

commit acf69c946233259ab4d64f8869d4037a198c7f06 upstream.

Using tp_reserve to calculate netoff can overflow, as
tp_reserve is an unsigned int while netoff is an unsigned short.

This may lead to macoff receiving a smaller value than
sizeof(struct virtio_net_hdr), and if po->has_vnet_hdr
is set, an out-of-bounds write will occur when
calling virtio_net_hdr_from_skb.

The bug is fixed by converting netoff to unsigned int
and checking whether it exceeds USHRT_MAX.

This addresses CVE-2020-14386.
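
To make the arithmetic concrete, below is a minimal userspace
sketch of the truncation; the header sizes and tp_reserve value
are illustrative assumptions, not values taken from a real socket:

    /* Sketch only, not kernel code. tp_reserve is user-controlled
     * via the PACKET_RESERVE sockopt; hdrlen/maclen are made up. */
    #include <stdio.h>

    int main(void)
    {
        unsigned int tp_reserve = 65508;
        unsigned int hdrlen = 32, maclen = 14;

        /* With the pre-fix type, 32 + 16 + 65508 = 65556 wraps
         * modulo 65536 down to 20: */
        unsigned short netoff = hdrlen + 16 + tp_reserve;
        unsigned short macoff = netoff - maclen;

        /* Prints netoff=20 macoff=6; 6 is smaller than the 10-byte
         * struct virtio_net_hdr, so the vnet header write would
         * land before the start of the ring frame. */
        printf("netoff=%hu macoff=%hu\n", netoff, macoff);
        return 0;
    }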
Fixes: 8913336a7e8d ("packet: add PACKET_RESERVE sockopt")
Signed-off-by: Or Cohen <orcohen@...oaltonetworks.com>
Signed-off-by: Eric Dumazet <edumazet@...gle.com>
Signed-off-by: Linus Torvalds <torvalds@...ux-foundation.org>
[ snu: backported to pre-5.3, changed tp_drops counting/locking ]
Signed-off-by: Stefan Nuernberger <snu@...zon.com>
CC: David Woodhouse <dwmw@...zon.co.uk>
CC: Amit Shah <aams@...zon.com>
CC: stable@...r.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
net/packet/af_packet.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)

--- a/net/packet/af_packet.c
+++ b/net/packet/af_packet.c
@@ -2201,7 +2201,8 @@ static int tpacket_rcv(struct sk_buff *s
int skb_len = skb->len;
unsigned int snaplen, res;
unsigned long status = TP_STATUS_USER;
- unsigned short macoff, netoff, hdrlen;
+ unsigned short macoff, hdrlen;
+ unsigned int netoff;
struct sk_buff *copy_skb = NULL;
struct timespec ts;
__u32 ts_status;
@@ -2264,6 +2265,12 @@ static int tpacket_rcv(struct sk_buff *s
}
macoff = netoff - maclen;
}
+ if (netoff > USHRT_MAX) {
+ spin_lock(&sk->sk_receive_queue.lock);
+ po->stats.stats1.tp_drops++;
+ spin_unlock(&sk->sk_receive_queue.lock);
+ goto drop_n_restore;
+ }
if (po->tp_version <= TPACKET_V2) {
if (macoff + snaplen > po->rx_ring.frame_size) {
if (po->copy_thresh &&
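
For reference, the upstream (5.3+) commit performs the same guard
via the atomic tp_drops counter instead of taking the receive queue
lock, which is the counting/locking change the backport note above
refers to. Roughly (sketched from the upstream commit, not from
this 4.14 tree):

	if (netoff > USHRT_MAX) {
		atomic_inc(&po->tp_drops);
		goto drop_n_restore;
	}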