Date:   Wed,  6 Nov 2019 13:41:06 -0800
From:   Shannon Nelson <snelson@...sando.io>
To:     netdev@...r.kernel.org, davem@...emloft.net
Cc:     Shannon Nelson <snelson@...sando.io>
Subject: [PATCH net-next] net: disable preempt for processed counter

When a kernel is built with CONFIG_PREEMPT and CONFIG_DEBUG_PREEMPT,
and a network driver is processing inbound data, we see

 BUG: using __this_cpu_add() in preemptible [00000000] code: nmd/4170
 caller is __this_cpu_preempt_check+0x18/0x20
 CPU: 1 PID: 4170 Comm: nmd Tainted: G           O    4.14.18 #1
 Hardware name: (xxxxx)
 Call trace:
 [<ffff0000080895d0>] dump_backtrace+0x0/0x2a0
 [<ffff000008089894>] show_stack+0x24/0x30
 [<ffff0000085ed64c>] dump_stack+0xac/0xf0
 [<ffff0000083bd538>] check_preemption_disabled+0xf8/0x100
 [<ffff0000083bd588>] __this_cpu_preempt_check+0x18/0x20
 [<ffff000008510f64>] __netif_receive_skb_core+0xa4/0xa10
 [<ffff000008514258>] __netif_receive_skb+0x28/0x80
 [<ffff0000085183b0>] netif_receive_skb_internal+0x30/0x120

We found that this fires inside check_preemption_disabled(), which is
called by __netif_receive_skb_core() when it does a per-cpu statistic
increment with __this_cpu_inc(softnet_data.processed).  For that
increment to be safe, preemption needs to be disabled, but in this
path there are no calls to preempt_disable()/preempt_enable() around
it.  Note that a few lines later preempt_disable()/preempt_enable()
are used around the call into do_xdp_generic().
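
For reference, the two usual patterns for this kind of per-cpu counter
bump look roughly like the sketch below (not taken from this patch;
'example_counter' is a made-up per-cpu variable):

 /* Sketch only: two ways to safely bump a per-cpu counter from
  * preemptible context.  'example_counter' is hypothetical.
  */
 #include <linux/percpu.h>
 #include <linux/preempt.h>

 static DEFINE_PER_CPU(unsigned long, example_counter);

 static void bump_example_counter(void)
 {
 	/* Option 1: this_cpu_inc() is preemption-safe on its own. */
 	this_cpu_inc(example_counter);

 	/* Option 2: __this_cpu_inc() can be cheaper, but the caller
 	 * must keep preemption disabled around it, which is what this
 	 * patch does for softnet_data.processed.
 	 */
 	preempt_disable();
 	__this_cpu_inc(example_counter);
 	preempt_enable();
 }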

This patch adds preempt_disable()/preempt_enable() around the call
to __this_cpu_inc(), as has been done in a few other places in the
kernel.

Signed-off-by: Shannon Nelson <snelson@...sando.io>
---
 net/core/dev.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/net/core/dev.c b/net/core/dev.c
index bb15800c8cb5..ffda48def391 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -4976,7 +4976,9 @@ static int __netif_receive_skb_core(struct sk_buff *skb, bool pfmemalloc,
 another_round:
 	skb->skb_iif = skb->dev->ifindex;
 
+	preempt_disable();
 	__this_cpu_inc(softnet_data.processed);
+	preempt_enable();
 
 	if (static_branch_unlikely(&generic_xdp_needed_key)) {
 		int ret2;
-- 
2.17.1
