Message-Id: <20210624160609.292325-13-toke@redhat.com>
Date: Thu, 24 Jun 2021 18:06:02 +0200
From: Toke Høiland-Jørgensen <toke@...hat.com>
To: bpf@...r.kernel.org, netdev@...r.kernel.org
Cc: Martin KaFai Lau <kafai@...com>,
Hangbin Liu <liuhangbin@...il.com>,
Jesper Dangaard Brouer <brouer@...hat.com>,
Magnus Karlsson <magnus.karlsson@...il.com>,
"Paul E . McKenney" <paulmck@...nel.org>,
Jakub Kicinski <kuba@...nel.org>,
Toke Høiland-Jørgensen <toke@...hat.com>,
Thomas Petazzoni <thomas.petazzoni@...tlin.com>,
Marcin Wojtas <mw@...ihalf.com>,
Russell King <linux@...linux.org.uk>
Subject: [PATCH bpf-next v5 12/19] marvell: remove rcu_read_lock() around XDP program invocation
The mvneta and mvpp2 drivers have rcu_read_lock()/rcu_read_unlock() pairs
around XDP program invocations. However, the actual lifetime of the objects
referred to by the XDP program invocation is longer, all the way through to
the call to xdp_do_flush(), making the scope of the rcu_read_lock() too
small. This turns out to be harmless because it all happens in a single
NAPI poll cycle (and thus under local_bh_disable()), but it makes the
rcu_read_lock() misleading.
Rather than extend the scope of the rcu_read_lock(), just get rid of it
entirely. With the addition of RCU annotations to the XDP_REDIRECT map
types that take bh execution into account, lockdep even understands this to
be safe, so there's really no reason to keep it around.
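As a side note, the kind of lockdep-aware dereference the annotated map
types can rely on looks roughly like this (again only a sketch; the helper
and parameter names are illustrative, not the exact code added earlier in
the series):

  #include <linux/rcupdate.h>

  /* Look up an element of an RCU-managed array of pointers. In NAPI/bh
   * context rcu_read_lock_bh_held() is true, so lockdep accepts the
   * dereference without an explicit rcu_read_lock().
   */
  static void *my_map_lookup(void __rcu **slots, u32 idx)
  {
          return rcu_dereference_check(slots[idx], rcu_read_lock_bh_held());
  }
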
Cc: Thomas Petazzoni <thomas.petazzoni@...tlin.com>
Cc: Marcin Wojtas <mw@...ihalf.com>
Cc: Russell King <linux@...linux.org.uk>
Signed-off-by: Toke Høiland-Jørgensen <toke@...hat.com>
---
drivers/net/ethernet/marvell/mvneta.c | 2 --
drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c | 4 ----
2 files changed, 6 deletions(-)
diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index c15ce06427d0..ada4e26a5492 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -2373,7 +2373,6 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
/* Get number of received packets */
rx_todo = mvneta_rxq_busy_desc_num_get(pp, rxq);
- rcu_read_lock();
xdp_prog = READ_ONCE(pp->xdp_prog);
/* Fairness NAPI loop */
@@ -2451,7 +2450,6 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
xdp_buf.data_hard_start = NULL;
sinfo.nr_frags = 0;
}
- rcu_read_unlock();
if (xdp_buf.data_hard_start)
mvneta_xdp_put_buff(pp, rxq, &xdp_buf, &sinfo, -1);
diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
index 9bca8c8f9f8d..c31677527a02 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
@@ -3881,8 +3881,6 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
int rx_done = 0;
u32 xdp_ret = 0;
- rcu_read_lock();
-
xdp_prog = READ_ONCE(port->xdp_prog);
/* Get number of received packets and clamp the to-do */
@@ -4028,8 +4026,6 @@ static int mvpp2_rx(struct mvpp2_port *port, struct napi_struct *napi,
mvpp2_bm_pool_put(port, pool, dma_addr, phys_addr);
}
- rcu_read_unlock();
-
if (xdp_ret & MVPP2_XDP_REDIR)
xdp_do_flush_map();
--
2.32.0