Message-Id: <20200116170509.12787-80-sashal@kernel.org>
Date: Thu, 16 Jan 2020 11:59:41 -0500
From: Sasha Levin <sashal@...nel.org>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
Cc: YueHaibing <yuehaibing@...wei.com>,
Guillaume Nault <gnault@...hat.com>,
"David S . Miller" <davem@...emloft.net>,
Sasha Levin <sashal@...nel.org>, netdev@...r.kernel.org
Subject: [PATCH AUTOSEL 4.19 343/671] l2tp: Fix possible NULL pointer dereference

From: YueHaibing <yuehaibing@...wei.com>
[ Upstream commit 638a3a1e349ddf5b82f222ff5cb3b4f266e7c278 ]

BUG: unable to handle kernel NULL pointer dereference at 0000000000000128
PGD 0 P4D 0
Oops: 0000 [#1]
CPU: 0 PID: 5697 Comm: modprobe Tainted: G W 5.1.0-rc7+ #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.9.3-0-ge2fc41e-prebuilt.qemu-project.org 04/01/2014
RIP: 0010:__lock_acquire+0x53/0x10b0
Code: 8b 1c 25 40 5e 01 00 4c 8b 6d 10 45 85 e4 0f 84 bd 06 00 00 44 8b 1d 7c d2 09 02 49 89 fe 41 89 d2 45 85 db 0f 84 47 02 00 00 <48> 81 3f a0 05 70 83 b8 00 00 00 00 44 0f 44 c0 83 fe 01 0f 86 3a
RSP: 0018:ffffc90001c07a28 EFLAGS: 00010002
RAX: 0000000000000000 RBX: ffff88822f038440 RCX: 0000000000000000
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000128
RBP: ffffc90001c07a88 R08: 0000000000000001 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000001 R12: 0000000000000001
R13: 0000000000000000 R14: 0000000000000128 R15: 0000000000000000
FS: 00007fead0811540(0000) GS:ffff888237a00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000128 CR3: 00000002310da000 CR4: 00000000000006f0
Call Trace:
? __lock_acquire+0x24e/0x10b0
lock_acquire+0xdf/0x230
? flush_workqueue+0x71/0x530
flush_workqueue+0x97/0x530
? flush_workqueue+0x71/0x530
l2tp_exit_net+0x170/0x2b0 [l2tp_core]
? l2tp_exit_net+0x93/0x2b0 [l2tp_core]
ops_exit_list.isra.6+0x36/0x60
unregister_pernet_operations+0xb8/0x110
unregister_pernet_device+0x25/0x40
l2tp_init+0x55/0x1000 [l2tp_core]
? 0xffffffffa018d000
do_one_initcall+0x6c/0x3cc
? do_init_module+0x22/0x1f1
? rcu_read_lock_sched_held+0x97/0xb0
? kmem_cache_alloc_trace+0x325/0x3b0
do_init_module+0x5b/0x1f1
load_module+0x1db1/0x2690
? m_show+0x1d0/0x1d0
__do_sys_finit_module+0xc5/0xd0
__x64_sys_finit_module+0x15/0x20
do_syscall_64+0x6b/0x1d0
entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x7fead031a839
Code: 00 f3 c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 1f f6 2c 00 f7 d8 64 89 01 48
RSP: 002b:00007ffe8d9acca8 EFLAGS: 00000246 ORIG_RAX: 0000000000000139
RAX: ffffffffffffffda RBX: 0000560078398b80 RCX: 00007fead031a839
RDX: 0000000000000000 RSI: 000056007659dc2e RDI: 0000000000000003
RBP: 000056007659dc2e R08: 0000000000000000 R09: 0000560078398b80
R10: 0000000000000003 R11: 0000000000000246 R12: 0000000000000000
R13: 00005600783a04a0 R14: 0000000000040000 R15: 0000560078398b80
Modules linked in: l2tp_core(+) e1000 ip_tables ipv6 [last unloaded: l2tp_core]
CR2: 0000000000000128
---[ end trace 8322b2b8bf83f8e1 ]---

If alloc_workqueue() fails in l2tp_init(), l2tp_net_ops is
unregistered on the failure path. l2tp_exit_net() is then called,
which tries to flush the still-NULL l2tp_wq workqueue. Add a NULL
check to fix it.
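
For context, a minimal sketch of the init-time failure path that
triggers this oops (approximated from the l2tp_core code of that era,
not the verbatim upstream function; error handling abbreviated):

static int __init l2tp_init(void)
{
	int rc;

	rc = register_pernet_device(&l2tp_net_ops);
	if (rc)
		goto out;

	l2tp_wq = alloc_workqueue("l2tp", WQ_UNBOUND, 0);
	if (!l2tp_wq) {
		pr_err("alloc_workqueue failed\n");
		/* Unregistering the pernet ops runs l2tp_exit_net()
		 * for each net namespace (see the call trace above),
		 * and before this patch l2tp_exit_net() flushed
		 * l2tp_wq unconditionally, even though l2tp_wq is
		 * still NULL at this point.
		 */
		unregister_pernet_device(&l2tp_net_ops);
		rc = -ENOMEM;
	}

out:
	return rc;
}

With the fix below, l2tp_exit_net() simply skips the flush when the
workqueue was never allocated.
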
Fixes: 67e04c29ec0d ("l2tp: unregister l2tp_net_ops on failure path")
Signed-off-by: YueHaibing <yuehaibing@...wei.com>
Acked-by: Guillaume Nault <gnault@...hat.com>
Signed-off-by: David S. Miller <davem@...emloft.net>
Signed-off-by: Sasha Levin <sashal@...nel.org>
---
net/l2tp/l2tp_core.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/net/l2tp/l2tp_core.c b/net/l2tp/l2tp_core.c
index 52b5a2797c0c..e4dec03a19fe 100644
--- a/net/l2tp/l2tp_core.c
+++ b/net/l2tp/l2tp_core.c
@@ -1735,7 +1735,8 @@ static __net_exit void l2tp_exit_net(struct net *net)
 	}
 	rcu_read_unlock_bh();
 
-	flush_workqueue(l2tp_wq);
+	if (l2tp_wq)
+		flush_workqueue(l2tp_wq);
 	rcu_barrier();
 
 	for (hash = 0; hash < L2TP_HASH_SIZE_2; hash++)
--
2.20.1