Message-Id: <20171019134845.140047397@linuxfoundation.org>
Date: Thu, 19 Oct 2017 15:49:15 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org,
Ridge Kennedy <ridge.kennedy@...iedtelesis.co.nz>,
Guillaume Nault <g.nault@...halink.fr>,
"David S. Miller" <davem@...emloft.net>
Subject: [PATCH 3.18 09/27] l2tp: Avoid schedule while atomic in exit_net
3.18-stable review patch. If anyone has any objections, please let me know.
------------------
From: Ridge Kennedy <ridge.kennedy@...iedtelesis.co.nz>
[ Upstream commit 12d656af4e3d2781b9b9f52538593e1717e7c979 ]
While destroying a network namespace that contains an L2TP tunnel, a
"BUG: scheduling while atomic" can be observed.
Enabling lockdep shows that this is happening because l2tp_exit_net()
is calling l2tp_tunnel_closeall() (via l2tp_tunnel_delete()) from
within an RCU critical section.
l2tp_exit_net() takes rcu_read_lock_bh()
    << list_for_each_entry_rcu() >>
    l2tp_tunnel_delete()
        l2tp_tunnel_closeall()
            __l2tp_session_unhash()
                synchronize_rcu()  << Illegal inside RCU critical section >>
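Reduced to its essentials, the offending pattern is a sleeping call
reached from inside an RCU read-side critical section. The following is
an illustrative sketch of that pattern, not a verbatim quote of the tree
(pn stands for the per-namespace l2tp data):

	struct l2tp_tunnel *tunnel;

	rcu_read_lock_bh();			/* enters atomic context */
	list_for_each_entry_rcu(tunnel, &pn->l2tp_tunnel_list, list)
		l2tp_tunnel_delete(tunnel);	/* used to close sessions here */
	rcu_read_unlock_bh();

	/* ...meanwhile, deep in the call chain, __l2tp_session_unhash() does: */
	synchronize_rcu();			/* may sleep -> BUG */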
BUG: sleeping function called from invalid context
in_atomic(): 1, irqs_disabled(): 0, pid: 86, name: kworker/u16:2
INFO: lockdep is turned off.
CPU: 2 PID: 86 Comm: kworker/u16:2 Tainted: G W O 4.4.6-at1 #2
Hardware name: Xen HVM domU, BIOS 4.6.1-xs125300 05/09/2016
Workqueue: netns cleanup_net
0000000000000000 ffff880202417b90 ffffffff812b0013 ffff880202410ac0
ffffffff81870de8 ffff880202417bb8 ffffffff8107aee8 ffffffff81870de8
0000000000000c51 0000000000000000 ffff880202417be0 ffffffff8107b024
Call Trace:
[<ffffffff812b0013>] dump_stack+0x85/0xc2
[<ffffffff8107aee8>] ___might_sleep+0x148/0x240
[<ffffffff8107b024>] __might_sleep+0x44/0x80
[<ffffffff810b21bd>] synchronize_sched+0x2d/0xe0
[<ffffffff8109be6d>] ? trace_hardirqs_on+0xd/0x10
[<ffffffff8105c7bb>] ? __local_bh_enable_ip+0x6b/0xc0
[<ffffffff816a1b00>] ? _raw_spin_unlock_bh+0x30/0x40
[<ffffffff81667482>] __l2tp_session_unhash+0x172/0x220
[<ffffffff81667397>] ? __l2tp_session_unhash+0x87/0x220
[<ffffffff8166888b>] l2tp_tunnel_closeall+0x9b/0x140
[<ffffffff81668c74>] l2tp_tunnel_delete+0x14/0x60
[<ffffffff81668dd0>] l2tp_exit_net+0x110/0x270
[<ffffffff81668d5c>] ? l2tp_exit_net+0x9c/0x270
[<ffffffff815001c3>] ops_exit_list.isra.6+0x33/0x60
[<ffffffff81501166>] cleanup_net+0x1b6/0x280
...
This bug can easily be reproduced with a few steps:
$ sudo unshare -n bash # Create a shell in a new namespace
# ip link set lo up
# ip addr add 127.0.0.1 dev lo
# ip l2tp add tunnel remote 127.0.0.1 local 127.0.0.1 tunnel_id 1 \
      peer_tunnel_id 1 udp_sport 50000 udp_dport 50000
# ip l2tp add session name foo tunnel_id 1 session_id 1 \
      peer_session_id 1
# ip link set foo up
# exit # Exit the shell, in turn exiting the namespace
$ dmesg
...
[942121.089216] BUG: scheduling while atomic: kworker/u16:3/13872/0x00000200
...
To fix this, move the call to l2tp_tunnel_closeall() out of the RCU
critical section and instead call it from l2tp_tunnel_del_work(), which
runs from the l2tp_wq workqueue.
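The resulting split looks roughly like this (a sketch of the intended
flow with details abbreviated; see the diff below for the actual change):

	int l2tp_tunnel_delete(struct l2tp_tunnel *tunnel)
	{
		/* May be called under rcu_read_lock_bh(), so must not
		 * sleep. Taking a reference and queueing work are both
		 * non-blocking operations.
		 */
		l2tp_tunnel_inc_refcount(tunnel);
		if (false == queue_work(l2tp_wq, &tunnel->del_work)) {
			l2tp_tunnel_dec_refcount(tunnel);
			return 1;
		}
		return 0;
	}

	static void l2tp_tunnel_del_work(struct work_struct *work)
	{
		struct l2tp_tunnel *tunnel =
			container_of(work, struct l2tp_tunnel, del_work);

		/* Workqueue (process) context: sleeping is allowed, so
		 * the blocking teardown happens here instead.
		 */
		l2tp_tunnel_closeall(tunnel);
		/* ...socket lookup and release as before... */
	}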
Fixes: 2b551c6e7d5b ("l2tp: close sessions before initiating tunnel delete")
Signed-off-by: Ridge Kennedy <ridge.kennedy@...iedtelesis.co.nz>
Acked-by: Guillaume Nault <g.nault@...halink.fr>
Signed-off-by: David S. Miller <davem@...emloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
net/l2tp/l2tp_core.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
--- a/net/l2tp/l2tp_core.c
+++ b/net/l2tp/l2tp_core.c
@@ -1317,6 +1317,9 @@ static void l2tp_tunnel_del_work(struct
 	struct sock *sk = NULL;
 
 	tunnel = container_of(work, struct l2tp_tunnel, del_work);
+
+	l2tp_tunnel_closeall(tunnel);
+
 	sk = l2tp_tunnel_sock_lookup(tunnel);
 	if (!sk)
 		goto out;
@@ -1642,7 +1645,6 @@ EXPORT_SYMBOL_GPL(l2tp_tunnel_create);
 int l2tp_tunnel_delete(struct l2tp_tunnel *tunnel)
 {
 	l2tp_tunnel_inc_refcount(tunnel);
-	l2tp_tunnel_closeall(tunnel);
 	if (false == queue_work(l2tp_wq, &tunnel->del_work)) {
 		l2tp_tunnel_dec_refcount(tunnel);
 		return 1;