Date: Wed, 23 Jun 2021 13:07:20 +0200
From: Toke Høiland-Jørgensen <toke@...hat.com>
To: bpf@...r.kernel.org, netdev@...r.kernel.org
Cc: Martin KaFai Lau <kafai@...com>, Hangbin Liu <liuhangbin@...il.com>,
	Jesper Dangaard Brouer <brouer@...hat.com>,
	Magnus Karlsson <magnus.karlsson@...il.com>,
	"Paul E . McKenney" <paulmck@...nel.org>,
	Jakub Kicinski <kuba@...nel.org>,
	Toke Høiland-Jørgensen <toke@...hat.com>,
	Thomas Petazzoni <thomas.petazzoni@...tlin.com>,
	Marcin Wojtas <mw@...ihalf.com>,
	Russell King <linux@...linux.org.uk>
Subject: [PATCH bpf-next v4 12/19] marvell: remove rcu_read_lock() around XDP program invocation

The mvneta and mvpp2 drivers have rcu_read_lock()/rcu_read_unlock() pairs
around XDP program invocations. However, the actual lifetime of the objects
referred to by the XDP program invocation is longer, all the way through to
the call to xdp_do_flush(), making the scope of the rcu_read_lock() too
small. This turns out to be harmless because it all happens in a single
NAPI poll cycle (and thus under local_bh_disable()), but it makes the
rcu_read_lock() misleading.

Rather than extend the scope of the rcu_read_lock(), just get rid of it
entirely. With the addition of RCU annotations to the XDP_REDIRECT map
types that take bh execution into account, lockdep even understands this
to be safe, so there's really no reason to keep it around.

Cc: Thomas Petazzoni <thomas.petazzoni@...tlin.com>
Cc: Marcin Wojtas <mw@...ihalf.com>
Cc: Russell King <linux@...linux.org.uk>
Signed-off-by: Toke Høiland-Jørgensen <toke@...hat.com>
---
 drivers/net/ethernet/marvell/mvneta.c           | 2 --
 drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c | 4 ----
 2 files changed, 6 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 7d5cd9bc6c99..b9e5875b20bc 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -2370,7 +2370,6 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 	/* Get number of received packets */
 	rx_todo = mvneta_rxq_busy_desc_num_get(pp, rxq);
 
-	rcu_read_lock();
 	xdp_prog = READ_ONCE(pp->xdp_prog);
 
 	/* Fairness NAPI loop */
@@ -2448,7 +2447,6 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 		xdp_buf.data_hard_start = NULL;
 		sinfo.nr_frags = 0;
 	}
-	rcu_read_unlock();
 
 	if (xdp_buf.data_hard_start)
 		mvneta_xdp_put_buff(pp, rxq, &xdp_buf, &sinfo, -1);
diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
index b2259bf1d299..521ed3c1cfe9 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
@@ -3852,8 +3852,6 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
 	int rx_done = 0;
 	u32 xdp_ret = 0;
 
-	rcu_read_lock();
-
 	xdp_prog = READ_ONCE(port->xdp_prog);
 
 	/* Get number of received packets and clamp the to-do */
@@ -3988,8 +3986,6 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
 			mvpp2_bm_pool_put(port, pool, dma_addr, phys_addr);
 	}
 
-	rcu_read_unlock();
-
 	if (xdp_ret & MVPP2_XDP_REDIR)
 		xdp_do_flush_map();
 
-- 
2.32.0
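
As a rough illustration of the pattern the commit message describes (a
sketch only, not code from either driver: the example_* identifiers, the
example_port structure, and the EXAMPLE_XDP_REDIR flag are hypothetical),
a NAPI poll handler keeps the XDP program lookup, the program invocations,
and the final flush inside one softirq section:

/* Hypothetical driver sketch. NAPI poll runs in softirq context, so BHs
 * are disabled for the whole function; with the bh-aware RCU annotations
 * on the XDP_REDIRECT map types, that alone keeps xdp_prog and the
 * redirect targets alive until xdp_do_flush().
 */
static int example_napi_poll(struct napi_struct *napi, int budget)
{
	struct example_port *port = container_of(napi, struct example_port,
						 napi);
	struct bpf_prog *xdp_prog;
	u32 xdp_ret = 0;
	int done = 0;

	/* No rcu_read_lock() needed: softirq context already counts as a
	 * read-side critical section for the relevant RCU flavors.
	 */
	xdp_prog = READ_ONCE(port->xdp_prog);

	while (done < budget) {
		struct xdp_buff xdp;

		if (!example_rx_one(port, &xdp))	/* hypothetical helper */
			break;

		if (xdp_prog)
			xdp_ret |= example_run_xdp(port, xdp_prog, &xdp);
		done++;
	}

	/* The flush must stay in the same poll cycle; re-enabling BHs
	 * before it would end the implicit critical section too early.
	 */
	if (xdp_ret & EXAMPLE_XDP_REDIR)
		xdp_do_flush();

	if (done < budget)
		napi_complete_done(napi, done);

	return done;
}

Because the whole function runs with BHs disabled, the program pointer and
any map entries it redirects to stay valid for the full poll cycle, which
is what makes the explicit lock pair redundant rather than merely
too-narrow.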