Message-Id: <5e6c462a912bcd6ba9284cd87272c5bff18fd4cf.1574083275.git.lorenzo@kernel.org>
Date: Mon, 18 Nov 2019 15:33:44 +0200
From: Lorenzo Bianconi <lorenzo@...nel.org>
To: netdev@...r.kernel.org
Cc: davem@...emloft.net, ilias.apalodimas@...aro.org,
brouer@...hat.com, lorenzo.bianconi@...hat.com, mcroce@...hat.com,
jonathan.lemon@...il.com
Subject: [PATCH v4 net-next 1/3] net: mvneta: rely on page_pool_recycle_direct in mvneta_run_xdp
Rely on page_pool_recycle_direct() and not on xdp_return_buff() in
mvneta_run_xdp(). This is a preliminary patch to limit the DMA sync
length to the one strictly necessary.
Signed-off-by: Lorenzo Bianconi <lorenzo@...nel.org>
---
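[Note, not part of the commit message: page_pool_recycle_direct()
hands the page straight back to the pool's per-CPU cache, which is
safe here because mvneta_run_xdp() runs from NAPI (softirq) context,
whereas xdp_return_buff() goes through the generic XDP return path
and its memory-model lookup. A minimal sketch of the recycle-on-drop
pattern the two hunks below apply; the drop_xdp_buff() helper name is
hypothetical and relies on mvneta.c's existing includes:

	/* Sketch: recycle the page backing an XDP buffer directly
	 * into the rx queue's page_pool instead of calling
	 * xdp_return_buff(). virt_to_head_page() recovers the page
	 * from the buffer's data pointer.
	 */
	static void drop_xdp_buff(struct mvneta_rx_queue *rxq,
				  struct xdp_buff *xdp)
	{
		struct page *page = virt_to_head_page(xdp->data);

		page_pool_recycle_direct(rxq->page_pool, page);
	}
]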
drivers/net/ethernet/marvell/mvneta.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 12e03b15f0ab..f7713c2c68e1 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -2097,7 +2097,8 @@ mvneta_run_xdp(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
err = xdp_do_redirect(pp->dev, xdp, prog);
if (err) {
ret = MVNETA_XDP_DROPPED;
- xdp_return_buff(xdp);
+ page_pool_recycle_direct(rxq->page_pool,
+ virt_to_head_page(xdp->data));
} else {
ret = MVNETA_XDP_REDIR;
}
@@ -2106,7 +2107,8 @@ mvneta_run_xdp(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
case XDP_TX:
ret = mvneta_xdp_xmit_back(pp, xdp);
if (ret != MVNETA_XDP_TX)
- xdp_return_buff(xdp);
+ page_pool_recycle_direct(rxq->page_pool,
+ virt_to_head_page(xdp->data));
break;
default:
bpf_warn_invalid_xdp_action(act);
--
2.21.0