Message-ID: <20251022121132.GD1694476@ragnatech.se>
Date: Wed, 22 Oct 2025 14:11:32 +0200
From: Niklas Söderlund <niklas.soderlund@...natech.se>
To: Prabhakar <prabhakar.csengg@...il.com>
Cc: Paul Barker <paul@...rker.dev>, Andrew Lunn <andrew+netdev@...n.ch>,
"David S. Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
Geert Uytterhoeven <geert+renesas@...der.be>,
Mitsuhiro Kimura <mitsuhiro.kimura.kc@...esas.com>,
netdev@...r.kernel.org, linux-renesas-soc@...r.kernel.org,
linux-kernel@...r.kernel.org, Biju Das <biju.das.jz@...renesas.com>,
Fabrizio Castro <fabrizio.castro.jz@...esas.com>,
Lad Prabhakar <prabhakar.mahadev-lad.rj@...renesas.com>,
stable@...r.kernel.org
Subject: Re: [PATCH v2 3/4] net: ravb: Enforce descriptor type ordering

Hi Lad,

Thanks for reworking this and making it very clear what's going on.

On 2025-10-17 16:18:29 +0100, Prabhakar wrote:
> From: Lad Prabhakar <prabhakar.mahadev-lad.rj@...renesas.com>
>
> Ensure the TX descriptor type fields are published in a safe order so the
> DMA engine never begins processing a descriptor chain before all descriptor
> fields are fully initialised.
>
> For multi-descriptor transmits the driver writes DT_FEND into the last
> descriptor and DT_FSTART into the first. The DMA engine begins processing
> when it observes DT_FSTART. Move the dma_wmb() barrier so it executes
> immediately after DT_FEND and immediately before writing DT_FSTART
> (and before DT_FSINGLE in the single-descriptor case). This guarantees
> that all prior CPU writes to the descriptor memory are visible to the
> device before DT_FSTART is seen.
>
> This avoids a situation where compiler/CPU reordering could publish
> DT_FSTART ahead of DT_FEND or other descriptor fields, allowing the DMA to
> start on a partially initialised chain and causing corrupted transmissions
> or TX timeouts. Such a failure was observed on RZ/G2L with an RT kernel,
> manifesting as transmit queue timeouts and device resets.
>
> Fixes: 2f45d1902acf ("ravb: minimize TX data copying")
> Cc: stable@...r.kernel.org
> Co-developed-by: Fabrizio Castro <fabrizio.castro.jz@...esas.com>
> Signed-off-by: Fabrizio Castro <fabrizio.castro.jz@...esas.com>
> Signed-off-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@...renesas.com>

Reviewed-by: Niklas Söderlund <niklas.soderlund+renesas@...natech.se>

> ---
> v1->v2:
> - Reflowed the code and updated the comment to clarify the ordering
> requirements.
> - Updated commit message.
> - Split up adding memory barrier change before ringing doorbell
> into a separate patch.
> ---
> drivers/net/ethernet/renesas/ravb_main.c | 16 ++++++++++++++--
> 1 file changed, 14 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
> index a200e205825a..0e40001f64b4 100644
> --- a/drivers/net/ethernet/renesas/ravb_main.c
> +++ b/drivers/net/ethernet/renesas/ravb_main.c
> @@ -2211,13 +2211,25 @@ static netdev_tx_t ravb_start_xmit(struct sk_buff *skb, struct net_device *ndev)
>
> skb_tx_timestamp(skb);
> }
> - /* Descriptor type must be set after all the above writes */
> - dma_wmb();
> +
> if (num_tx_desc > 1) {
> desc->die_dt = DT_FEND;
> desc--;
> + /* When using multi-descriptors, DT_FEND needs to get written
> + * before DT_FSTART, but the compiler may reorder the memory
> + * writes in an attempt to optimize the code.
> + * Use a dma_wmb() barrier to make sure DT_FEND and DT_FSTART
> + * are written exactly in the order shown in the code.
> + * This is particularly important for cases where the DMA engine
> + * is already running when we are running this code. If the DMA
> + * sees DT_FSTART without the corresponding DT_FEND it will enter
> + * an error condition.
> + */
> + dma_wmb();
> desc->die_dt = DT_FSTART;
> } else {
> + /* Descriptor type must be set after all the above writes */
> + dma_wmb();
> desc->die_dt = DT_FSINGLE;
> }
> ravb_modify(ndev, TCCR, TCCR_TSRQ0 << q, TCCR_TSRQ0 << q);
> --
> 2.43.0
>
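As an aside for anyone reading along, here is a minimal sketch of the publish
ordering the patch enforces. The descriptor, field and type names (struct
ravb_tx_desc, die_dt, DT_FEND/DT_FSTART/DT_FSINGLE) are taken from the quoted
diff; the helper name ravb_publish_tx_desc() is made up for illustration, and
DMA mapping, timestamping and the TCCR doorbell write are omitted, so this is
not the full ravb_start_xmit() path:

/* Illustration only: publish order for the ravb TX descriptor types.
 * The caller is assumed to have filled in all other descriptor fields.
 */
static void ravb_publish_tx_desc(struct ravb_tx_desc *desc, int num_tx_desc)
{
	if (num_tx_desc > 1) {
		/* Write the last descriptor's type first: the hardware must
		 * never observe DT_FSTART without DT_FEND already visible.
		 */
		desc->die_dt = DT_FEND;
		desc--;
		/* Order DT_FEND (and all earlier payload writes) before
		 * DT_FSTART becomes visible to the device.
		 */
		dma_wmb();
		desc->die_dt = DT_FSTART;
	} else {
		/* Single descriptor: order the payload writes before the
		 * type write that makes the descriptor valid.
		 */
		dma_wmb();
		desc->die_dt = DT_FSINGLE;
	}
}
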
--
Kind Regards,
Niklas Söderlund