Message-ID: <CAHS8izO5_=w4x8rhnHujCWQn7nhEDzaNGgJSrcZEwOQ+dN_o3w@mail.gmail.com>
Date: Thu, 6 Feb 2025 08:54:23 -0800
From: Mina Almasry <almasrymina@...gle.com>
To: Yunsheng Lin <yunshenglin0825@...il.com>
Cc: Christoph Hellwig <hch@...radead.org>, Yunsheng Lin <linyunsheng@...wei.com>, davem@...emloft.net,
kuba@...nel.org, pabeni@...hat.com, zhangkun09@...wei.com,
liuyonglong@...wei.com, fanghaiqing@...wei.com,
Robin Murphy <robin.murphy@....com>, Alexander Duyck <alexander.duyck@...il.com>,
IOMMU <iommu@...ts.linux.dev>, Andrew Morton <akpm@...ux-foundation.org>,
Eric Dumazet <edumazet@...gle.com>, Simon Horman <horms@...nel.org>,
Jesper Dangaard Brouer <hawk@...nel.org>, Ilias Apalodimas <ilias.apalodimas@...aro.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, netdev@...r.kernel.org
Subject: Re: [RFC v8 3/5] page_pool: fix IOMMU crash when driver has already unbound

On Tue, Feb 4, 2025 at 6:23 AM Yunsheng Lin <yunshenglin0825@...il.com> wrote:
>
> On 1/28/2025 2:12 PM, Christoph Hellwig wrote:
> > On Mon, Jan 27, 2025 at 10:57:32AM +0800, Yunsheng Lin wrote:
> >> Note, the devmem patchset seems to make the bug harder to fix,
> >> and may make backporting harder too. As there is no actual user
> >> for the devmem and the fixing for devmem is unclear for now,
> >> this patch does not consider fixing the case for devmem yet.
> >
> > Is there another outstanding patchset? Or do you mean the existing
> > devmem code already merged? If that isn't actually used it should
> > be removed, but otherwise you need to fix it.
>
> The last time I checked, only the networking stack code supporting
> devmem had been merged.
>
> The first driver supporting it seems to be bnxt, which seems to be under
> review:
> https://lore.kernel.org/all/20241022162359.2713094-1-ap420073@gmail.com/
>
> As I understand it, this should work for devmem too if devmem
> provides an ops to do the per-netmem dma unmapping.

From a quick look at this patch, it looks like you're handling
netmem/net_iovs in the implementation, so this implementation is
indeed considering netmem. I think the paragraph in the commit message
that Christoph is responding to should be deleted, because in recent
iterations you're handling netmem.
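
If a dedicated hook is the direction you're thinking of, below is a
minimal sketch of what I understand the suggestion to mean; the hook
name and signature are hypothetical, not something from the posted
series:

struct memory_provider_ops {
	/* ... existing provider ops (alloc, release, init, ...) ... */

	/* Hypothetical hook: let the provider unmap a single netmem
	 * it mapped itself, instead of page_pool doing the generic
	 * dma_unmap on it.
	 */
	void (*unmap_netmem)(struct page_pool *pool, netmem_ref netmem);
};
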
> It would be good that devmem people can have a look at it and see if
> this fix works for the specific page_pool mp provider.
>
We set pool->dma_map==false for memory providers that do not need
mapping/unmapping, which you are checking in
__page_pool_release_page_dma.
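
For reference, the check is roughly the following (paraphrased from
net/core/page_pool.c, so it may not match your tree exactly):

static void __page_pool_release_page_dma(struct page_pool *pool,
					 netmem_ref netmem)
{
	dma_addr_t dma;

	/* Providers that manage their own mappings (e.g. devmem) run
	 * with pool->dma_map == false, so the generic unmap below is
	 * skipped for their netmems.
	 */
	if (!pool->dma_map)
		return;

	dma = page_pool_get_dma_addr_netmem(netmem);

	/* Undo the mapping set up in page_pool_dma_map(). */
	dma_unmap_page_attrs(pool->p.dev, dma, PAGE_SIZE << pool->p.order,
			     pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
	page_pool_set_dma_addr_netmem(netmem, 0);
}
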
--
Thanks,
Mina