Date:   Wed, 08 Jan 2020 13:35:06 -0800
From:   John Fastabend <john.fastabend@...il.com>
To:     bjorn.topel@...il.com, bpf@...r.kernel.org, toke@...hat.com
Cc:     netdev@...r.kernel.org, john.fastabend@...il.com, ast@...nel.org,
        daniel@...earbox.net
Subject: [bpf PATCH 2/2] bpf: xdp, remove no longer required rcu_read_{un}lock()

Now that we depend on call_rcu() and synchronize_rcu() to also wait
for preempt-disabled regions to complete, the RCU read-side critical
section in __dev_map_flush() is no longer needed.
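
A minimal sketch of the rule this relies on (hypothetical helpers
use_object() and unpublish_object(); not kernel source): after the RCU
flavor consolidation, synchronize_rcu() also waits for preempt-disabled
regions, so such a region already acts as a read-side critical section
and an explicit rcu_read_lock() inside it adds nothing.

#include <linux/preempt.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

static void use_object(void *obj);       /* hypothetical helper */
static void unpublish_object(void *obj); /* hypothetical helper */

/* Reader side: runs with preemption off, as NAPI does. */
static void reader(void *obj)
{
	preempt_disable();
	use_object(obj);
	preempt_enable();
}

/* Updater side: synchronize_rcu() does not return until all
 * preempt-disabled readers (and classic RCU readers) are done,
 * so the free below cannot race with reader() above. */
static void updater(void *obj)
{
	unpublish_object(obj);
	synchronize_rcu();
	kfree(obj);
}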

These originally ensured the map reference was safe while a map was
being freed. But under the new rules, flush can only be called from a
preempt-disabled NAPI context. The synchronize_rcu() on the map free
path and the call_rcu() on the delete path ensure the reference here
is safe. So let's remove the rcu_read_lock()/rcu_read_unlock() pair
to avoid any confusion about how this is protected.

If the rcu_read_lock() were still required, it would mean the logic
above is flawed and the original patch would also be wrong.
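
For illustration, a sketch of the lifetime scheme described above
(hypothetical struct and function names, not the actual devmap code):
the delete path defers an entry's release with call_rcu(), and the map
free path issues synchronize_rcu() before teardown, so a flusher
running with preemption disabled can never observe a freed entry even
without rcu_read_lock().

#include <linux/kernel.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct entry {
	struct rcu_head rcu;
	/* ... per-entry bulk-queue state ... */
};

static void entry_free_rcu(struct rcu_head *head)
{
	kfree(container_of(head, struct entry, rcu));
}

/* Delete path: unlink the entry from the map, then defer the free
 * until every preempt-disabled flusher in flight has finished. */
static void entry_delete(struct entry *e)
{
	/* ... unlink e from the map here ... */
	call_rcu(&e->rcu, entry_free_rcu);
}

/* Map free path: wait out any flusher still running before tearing
 * down what remains of the map. */
static void map_free(void)
{
	synchronize_rcu();
	/* ... free remaining entries and map state ... */
}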

Fixes: 0536b85239b84 ("xdp: Simplify devmap cleanup")
Signed-off-by: John Fastabend <john.fastabend@...il.com>
---
 kernel/bpf/devmap.c |    2 --
 1 file changed, 2 deletions(-)

diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
index f0bf525..0129d4a 100644
--- a/kernel/bpf/devmap.c
+++ b/kernel/bpf/devmap.c
@@ -378,10 +378,8 @@ void __dev_map_flush(void)
 	struct list_head *flush_list = this_cpu_ptr(&dev_map_flush_list);
 	struct xdp_bulk_queue *bq, *tmp;
 
-	rcu_read_lock();
 	list_for_each_entry_safe(bq, tmp, flush_list, flush_node)
 		bq_xmit_all(bq, XDP_XMIT_FLUSH);
-	rcu_read_unlock();
 }
 
 /* rcu_read_lock (from syscall and BPF contexts) ensures that if a delete and/or
