Date:   Wed, 19 Jul 2017 12:07:45 +0200
From:   Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To:     linux-kernel@...r.kernel.org
Cc:     Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        stable@...r.kernel.org,
        Chris Cormier <chriscormier@...ulusnetworks.com>,
        Nikolay Aleksandrov <nikolay@...ulusnetworks.com>,
        David Ahern <dsahern@...il.com>,
        "David S. Miller" <davem@...emloft.net>
Subject: [PATCH 4.11 23/88] vrf: fix bug_on triggered by rx when destroying a vrf

4.11-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Nikolay Aleksandrov <nikolay@...ulusnetworks.com>


[ Upstream commit f630c38ef0d785101363a8992bbd4f302180f86f ]

When destroying a VRF device we clean up the slaves in its ndo_uninit()
function, but that still allows packets to be switched to the VRF
(skb->dev == vrf being destroyed) even though we're past the point where
the VRF should be receiving any packets while it is being dismantled.
This causes a BUG_ON to trigger if we have raw sockets (trace below).
The reason is that the inetdev of the VRF has already been destroyed but
we're still sending packets up the stack with it, so let's free the
slaves in the dellink callback as David Ahern suggested.

Note that this fix doesn't prevent packets from going up when the VRF
device is admin down.
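
For reference, the assertion that fires here is the inetdev sanity check near
the top of fib_compute_spec_dst(). Roughly (paraphrased from the 4.11-era
net/ipv4/fib_frontend.c, not a verbatim quote):

	__be32 fib_compute_spec_dst(struct sk_buff *skb)
	{
		struct net_device *dev = skb->dev;	/* here: the VRF being dismantled */
		struct in_device *in_dev;
		...
		in_dev = __in_dev_get_rcu(dev);
		BUG_ON(!in_dev);	/* fires: the VRF's inetdev is already gone */
		...
	}

By removing the slaves in dellink, before unregister_netdevice_queue(), the
slaves stop diverting received packets to the VRF before its inetdev is torn
down, so traffic should no longer reach this check with a NULL in_dev.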

[   35.631371] ------------[ cut here ]------------
[   35.631603] kernel BUG at net/ipv4/fib_frontend.c:285!
[   35.631854] invalid opcode: 0000 [#1] SMP
[   35.631977] Modules linked in:
[   35.632081] CPU: 2 PID: 22 Comm: ksoftirqd/2 Not tainted 4.12.0-rc7+ #45
[   35.632247] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.7.5-20140531_083030-gandalf 04/01/2014
[   35.632477] task: ffff88005ad68000 task.stack: ffff88005ad64000
[   35.632632] RIP: 0010:fib_compute_spec_dst+0xfc/0x1ee
[   35.632769] RSP: 0018:ffff88005ad67978 EFLAGS: 00010202
[   35.632910] RAX: 0000000000000001 RBX: ffff880059a7f200 RCX: 0000000000000000
[   35.633084] RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffffffff82274af0
[   35.633256] RBP: ffff88005ad679f8 R08: 000000000001ef70 R09: 0000000000000046
[   35.633430] R10: ffff88005ad679f8 R11: ffff880037731cb0 R12: 0000000000000001
[   35.633603] R13: ffff8800599e3000 R14: 0000000000000000 R15: ffff8800599cb852
[   35.634114] FS:  0000000000000000(0000) GS:ffff88005d900000(0000) knlGS:0000000000000000
[   35.634306] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[   35.634456] CR2: 00007f3563227095 CR3: 000000000201d000 CR4: 00000000000406e0
[   35.634632] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[   35.634865] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[   35.635055] Call Trace:
[   35.635271]  ? __lock_acquire+0xf0d/0x1117
[   35.635522]  ipv4_pktinfo_prepare+0x82/0x151
[   35.635831]  raw_rcv_skb+0x17/0x3c
[   35.636062]  raw_rcv+0xe5/0xf7
[   35.636287]  raw_local_deliver+0x169/0x1d9
[   35.636534]  ip_local_deliver_finish+0x87/0x1c4
[   35.636820]  ip_local_deliver+0x63/0x7f
[   35.637058]  ip_rcv_finish+0x340/0x3a1
[   35.637295]  ip_rcv+0x314/0x34a
[   35.637525]  __netif_receive_skb_core+0x49f/0x7c5
[   35.637780]  ? lock_acquire+0x13f/0x1d7
[   35.638018]  ? lock_acquire+0x15e/0x1d7
[   35.638259]  __netif_receive_skb+0x1e/0x94
[   35.638502]  ? __netif_receive_skb+0x1e/0x94
[   35.638748]  netif_receive_skb_internal+0x74/0x300
[   35.639002]  ? dev_gro_receive+0x2ed/0x411
[   35.639246]  ? lock_is_held_type+0xc4/0xd2
[   35.639491]  napi_gro_receive+0x105/0x1a0
[   35.639736]  receive_buf+0xc32/0xc74
[   35.639965]  ? detach_buf+0x67/0x153
[   35.640201]  ? virtqueue_get_buf_ctx+0x120/0x176
[   35.640453]  virtnet_poll+0x128/0x1c5
[   35.640690]  net_rx_action+0x103/0x343
[   35.640932]  __do_softirq+0x1c7/0x4b7
[   35.641171]  run_ksoftirqd+0x23/0x5c
[   35.641403]  smpboot_thread_fn+0x24f/0x26d
[   35.641646]  ? sort_range+0x22/0x22
[   35.641878]  kthread+0x129/0x131
[   35.642104]  ? __list_add+0x31/0x31
[   35.642335]  ? __list_add+0x31/0x31
[   35.642568]  ret_from_fork+0x2a/0x40
[   35.642804] Code: 05 bd 87 a3 00 01 e8 1f ef 98 ff 4d 85 f6 48 c7 c7 f0 4a 27 82 41 0f 94 c4 31 c9 31 d2 41 0f b6 f4 e8 04 71 a1 ff 45 84 e4 74 02 <0f> 0b 0f b7 93 c4 00 00 00 4d 8b a5 80 05 00 00 48 03 93 d0 00
[   35.644342] RIP: fib_compute_spec_dst+0xfc/0x1ee RSP: ffff88005ad67978

Fixes: 193125dbd8eb ("net: Introduce VRF device driver")
Reported-by: Chris Cormier <chriscormier@...ulusnetworks.com>
Signed-off-by: Nikolay Aleksandrov <nikolay@...ulusnetworks.com>
Acked-by: David Ahern <dsahern@...il.com>
Signed-off-by: David S. Miller <davem@...emloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
 drivers/net/vrf.c |   11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

--- a/drivers/net/vrf.c
+++ b/drivers/net/vrf.c
@@ -788,15 +788,10 @@ static int vrf_del_slave(struct net_devi
 static void vrf_dev_uninit(struct net_device *dev)
 {
 	struct net_vrf *vrf = netdev_priv(dev);
-	struct net_device *port_dev;
-	struct list_head *iter;
 
 	vrf_rtable_release(dev, vrf);
 	vrf_rt6_release(dev, vrf);
 
-	netdev_for_each_lower_dev(dev, port_dev, iter)
-		vrf_del_slave(dev, port_dev);
-
 	free_percpu(dev->dstats);
 	dev->dstats = NULL;
 }
@@ -1247,6 +1242,12 @@ static int vrf_validate(struct nlattr *t
 
 static void vrf_dellink(struct net_device *dev, struct list_head *head)
 {
+	struct net_device *port_dev;
+	struct list_head *iter;
+
+	netdev_for_each_lower_dev(dev, port_dev, iter)
+		vrf_del_slave(dev, port_dev);
+
 	unregister_netdevice_queue(dev, head);
 }
 

