Message-ID: <f20c339f-5286-477c-9255-e2e1fbeba57c@intel.com>
Date: Mon, 13 Jan 2025 13:10:46 +0100
From: Alexander Lobakin <aleksander.lobakin@...el.com>
To: Furong Xu <0x1207@...il.com>
CC: <netdev@...r.kernel.org>, <linux-stm32@...md-mailman.stormreply.com>,
<linux-arm-kernel@...ts.infradead.org>, <linux-kernel@...r.kernel.org>,
Andrew Lunn <andrew+netdev@...n.ch>, "David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>,
"Paolo Abeni" <pabeni@...hat.com>, Maxime Coquelin <mcoquelin.stm32@...il.com>,
<xfr@...look.com>
Subject: Re: [PATCH net-next v1 3/3] net: stmmac: Optimize cache prefetch in
RX path
From: Furong Xu <0x1207@...il.com>
Date: Fri, 10 Jan 2025 17:53:59 +0800
> The current code prefetches cache lines for the received frame first and
> only then calls dma_sync_single_for_cpu() on that frame, which is wrong.
> The cache prefetch should be triggered after dma_sync_single_for_cpu().
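
Right, and it's worth spelling out why: on DMA-noncoherent platforms,
dma_sync_single_for_cpu() with DMA_FROM_DEVICE typically invalidates the
CPU cache lines covering the buffer, so anything prefetched before the
sync just gets thrown away and the prefetch is wasted work. The ordering
that actually helps is (generic sketch of the pattern, not the exact
stmmac code):

	/* Hand the buffer back to the CPU first... */
	dma_sync_single_for_cpu(priv->device, buf->addr, buf1_len, dma_dir);
	/* ...and only then warm up the cache for the upcoming reads */
	prefetch(page_address(buf->page) + buf->page_offset);
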
>
> This patch brings a ~2.8% driver performance improvement in a TCP RX
> throughput test with the iPerf tool on a single isolated Cortex-A65 CPU
> core: 2.84 Gbits/sec increased to 2.92 Gbits/sec.
>
> Signed-off-by: Furong Xu <0x1207@...il.com>
> ---
> drivers/net/ethernet/stmicro/stmmac/stmmac_main.c | 7 +++----
> 1 file changed, 3 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> index c1aeaec53b4c..1b4e8b035b1a 100644
> --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
> @@ -5497,10 +5497,6 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
>
> /* Buffer is good. Go on. */
>
> - prefetch(page_address(buf->page) + buf->page_offset);
> - if (buf->sec_page)
> - prefetch(page_address(buf->sec_page));
> -
> buf1_len = stmmac_rx_buf1_len(priv, p, status, len);
> len += buf1_len;
> buf2_len = stmmac_rx_buf2_len(priv, p, status, len);
> @@ -5522,6 +5518,7 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
>
> dma_sync_single_for_cpu(priv->device, buf->addr,
> buf1_len, dma_dir);
> + prefetch(page_address(buf->page) + buf->page_offset);
>
> xdp_init_buff(&ctx.xdp, buf_sz, &rx_q->xdp_rxq);
> xdp_prepare_buff(&ctx.xdp, page_address(buf->page),
> @@ -5596,6 +5593,7 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
> } else if (buf1_len) {
> dma_sync_single_for_cpu(priv->device, buf->addr,
> buf1_len, dma_dir);
> + prefetch(page_address(buf->page) + buf->page_offset);
> skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
> buf->page, buf->page_offset, buf1_len,
> priv->dma_conf.dma_buf_sz);
Are you sure you need to prefetch the frags as well? I'd say this is a
waste of cycles, as the core kernel stack barely looks at the payload...
Prefetching only the header buffers would probably be enough, see the
sketch below.
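
Completely untested sketch of what I mean: keep the prefetch only in the
header/XDP path above and let the frag-only paths do just the sync, e.g.
for this hunk:

	} else if (buf1_len) {
		dma_sync_single_for_cpu(priv->device, buf->addr,
					buf1_len, dma_dir);
		/* No prefetch here: the stack won't read this payload */
		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
				buf->page, buf->page_offset, buf1_len,
				priv->dma_conf.dma_buf_sz);

and the same for the sec_page branch below.
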
> @@ -5608,6 +5606,7 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
> if (buf2_len) {
> dma_sync_single_for_cpu(priv->device, buf->sec_addr,
> buf2_len, dma_dir);
> + prefetch(page_address(buf->sec_page));
> skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
> buf->sec_page, 0, buf2_len,
> priv->dma_conf.dma_buf_sz);
Thanks,
Olek