Message-ID: <999ff0a1-1f8c-4220-a9d9-6dc1e0bddda6@quicinc.com>
Date: Tue, 8 Oct 2024 17:36:06 +0530
From: Sarosh Hasan <quic_sarohasa@...cinc.com>
To: Jakub Kicinski <kuba@...nel.org>, Suraj Jaiswal <quic_jsuraj@...cinc.com>
CC: Alexandre Torgue <alexandre.torgue@...s.st.com>,
	Jose Abreu <joabreu@...opsys.com>,
	"David S. Miller" <davem@...emloft.net>,
	Eric Dumazet <edumazet@...gle.com>, Paolo Abeni <pabeni@...hat.com>,
	Maxime Coquelin <mcoquelin.stm32@...il.com>, <netdev@...r.kernel.org>,
	<linux-stm32@...md-mailman.stormreply.com>,
	<linux-arm-kernel@...ts.infradead.org>, <linux-kernel@...r.kernel.org>,
	Prasad Sodagudi <psodagud@...cinc.com>,
	Andrew Halaney <ahalaney@...hat.com>, Rob Herring <robh@...nel.org>,
	<kernel@...cinc.com>
Subject: Re: [PATCH v2] net: stmmac: allocate separate page for buffer
On 9/12/2024 5:21 AM, Jakub Kicinski wrote:
> On Tue, 10 Sep 2024 18:18:41 +0530 Suraj Jaiswal wrote:
>> Currently for TSO, the page is mapped with dma_map_single()
>> and the resulting DMA address is referenced (and offset)
>> by multiple descriptors until the whole region has been
>> programmed into the descriptors.
>> This makes it possible for stmmac_tx_clean() to dma_unmap()
>> the first of the already processed descriptors, while the
>> rest are still being processed by the DMA engine. This leads
>> to an iommu fault due to the DMA engine using unmapped memory
>> as seen below:
>>
>> arm-smmu 15000000.iommu: Unhandled context fault: fsr=0x402,
>> iova=0xfc401000, fsynr=0x60003, cbfrsynra=0x121, cb=38
>>
>> Descriptor content:
>> TDES0 TDES1 TDES2 TDES3
>> 317: 0xfc400800 0x0 0x36 0xa02c0b68
>> 318: 0xfc400836 0x0 0xb68 0x90000000
>>
>> As seen above, descriptor 317 holds the page address and
>> 318 holds the buffer address formed by adding an offset to
>> that page address. If descriptor 317 is cleaned as part of
>> tx_clean(), we get an SMMU fault when descriptor 318 is accessed.
>
> The device is completing earlier chunks of the payload before the entire
> payload is sent? That's very unusual, is there a manual you can quote
> on this?
Here, if the first descriptor is cleaned as part of tx clean before tx completion
of the next descriptor, we run into this issue.
For the non-TSO case, the xmit code already allocates a separate page for each
fragment, and we are applying the same logic to the TSO case.
>> To fix this, let's map each descriptor's memory reference individually.
>> This way there's no risk of unmapping a region that's still being
>> referenced by the DMA engine in a later descriptor.
>
> This adds overhead. Why not wait with unmapping until the full skb is
> done? Presumably you can't free half an skb, anyway.
>
> Please add a Fixes tag and use "PATCH net" as the subject tag/prefix.