Message-ID: <6cb0a740-f597-4a13-8fe5-43f94d222c70@gmail.com>
Date: Sat, 5 Oct 2024 20:38:51 +0800
From: Yunsheng Lin <yunshenglin0825@...il.com>
To: Paolo Abeni <pabeni@...hat.com>, Yunsheng Lin <linyunsheng@...wei.com>,
Ilias Apalodimas <ilias.apalodimas@...aro.org>
Cc: liuyonglong@...wei.com, fanghaiqing@...wei.com, zhangkun09@...wei.com,
Robin Murphy <robin.murphy@....com>,
Alexander Duyck <alexander.duyck@...il.com>, IOMMU <iommu@...ts.linux.dev>,
Wei Fang <wei.fang@....com>, Shenwei Wang <shenwei.wang@....com>,
Clark Wang <xiaoning.wang@....com>, Eric Dumazet <edumazet@...gle.com>,
Tony Nguyen <anthony.l.nguyen@...el.com>,
Przemek Kitszel <przemyslaw.kitszel@...el.com>,
Alexander Lobakin <aleksander.lobakin@...el.com>,
Alexei Starovoitov <ast@...nel.org>, Daniel Borkmann <daniel@...earbox.net>,
Jesper Dangaard Brouer <hawk@...nel.org>,
John Fastabend <john.fastabend@...il.com>, Saeed Mahameed
<saeedm@...dia.com>, Leon Romanovsky <leon@...nel.org>,
Tariq Toukan <tariqt@...dia.com>, Felix Fietkau <nbd@....name>,
Lorenzo Bianconi <lorenzo@...nel.org>, Ryder Lee <ryder.lee@...iatek.com>,
Shayne Chen <shayne.chen@...iatek.com>, Sean Wang <sean.wang@...iatek.com>,
Kalle Valo <kvalo@...nel.org>, Matthias Brugger <matthias.bgg@...il.com>,
AngeloGioacchino Del Regno <angelogioacchino.delregno@...labora.com>,
Andrew Morton <akpm@...ux-foundation.org>, imx@...ts.linux.dev,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
intel-wired-lan@...ts.osuosl.org, bpf@...r.kernel.org,
linux-rdma@...r.kernel.org, linux-wireless@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org, linux-mediatek@...ts.infradead.org,
linux-mm@...ck.org, davem@...emloft.net, kuba@...nel.org
Subject: Re: [PATCH net v2 2/2] page_pool: fix IOMMU crash when driver has
already unbound
On 10/2/2024 3:37 PM, Paolo Abeni wrote:
> Hi,
>
> On 10/2/24 04:34, Yunsheng Lin wrote:
>> On 10/1/2024 9:32 PM, Paolo Abeni wrote:
>>> Is the problem only tied to VF drivers? It's a pity all the page_pool
>>> users will have to pay the bill for it...
>>
>> I am afraid it is not only tied to VF drivers, as:
>> attempting DMA unmaps after the driver has already unbound may leak
>> resources or at worst corrupt memory.
>>
>> Unloading a PF driver might cause the above problems too. I guess the
>> probability of crashing is low for a PF, as a PF cannot be disabled
>> unless it can be hot-unplugged, but the probability of leaking
>> resources behind the DMA mapping might be similar.
>
> Out of sheer ignorance, why/how does the refcount acquired by the page
> pool on the device not prevent unloading?
I am not sure about the reasoning behind that either, but judging from
the implementation of __device_release_driver(), driver unbinding does
not seem to check the refcount of the device.
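
To illustrate the lifetimes involved, a rough sketch (simplified, not
the actual page_pool code; get_device()/put_device() are the standard
driver-core helpers, but the flow shown is condensed):

	struct device *dev = pool->p.dev;

	/* page_pool pins the struct device so its memory stays valid... */
	get_device(dev);

	/*
	 * ...but unbinding still goes through device_release_driver(),
	 * which invokes the driver's ->remove() without looking at the
	 * refcount; only the final kfree() of the struct device waits
	 * for the last put_device().
	 */

	put_device(dev);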
>
> I fear the performance impact could be very high: AFAICS, if the item
> array becomes fragmented, insertion will take linear time, with the
> quite large item_count/pool size. If so, it looks like a no-go.
The last checked index is recorded in pool->item_idx, so insertion will
mostly not take linear time, unless pool->items is almost full and the
old item coming back to the page_pool has just been checked. The thought
is that if it comes to that point, the page_pool is likely not the
bottleneck anymore, and even an unbounded pool->items might not make any
difference.
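
Roughly the idea, as a minimal sketch (the field and helper names here
are hypothetical, not necessarily the ones used in the patchset):

	/* Resume scanning at the slot after the last one handed out,
	 * so insertion is amortized O(1) and only degrades toward O(n)
	 * when the items array is nearly full.
	 */
	static struct pool_item *pool_item_get(struct pool *pool)
	{
		unsigned int i, idx;

		for (i = 0; i < pool->item_cnt; i++) {
			idx = (pool->item_idx + i) % pool->item_cnt;
			if (!pool->items[idx].in_use) {
				pool->items[idx].in_use = true;
				pool->item_idx = idx + 1;
				return &pool->items[idx];
			}
		}

		return NULL;	/* full: caller falls back elsewhere */
	}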
If the insertion does turn out to be a bottleneck, a 'struct llist_head'
can be used to record the old items locklessly on the freeing side, and
llist_del_all() can be used to refill the old items for the allocating
side from the freeing side, which is similar to how pool->ring and
pool->alloc are used in page_pool currently (see the sketch below). As
this patchset is already complicated, doing that would make it even more
complicated, and I am not sure it is worth the effort right now as the
benefit does not seem obvious yet.
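
A minimal sketch of that alternative, assuming hypothetical item and
refill names (llist_add()/llist_del_all()/llist_for_each_entry() are
the existing <linux/llist.h> API):

	struct llist_head free_items;	/* shared: many producers */
	struct pool_item *item;
	struct llist_node *batch;

	/* freeing side, lockless from any context: */
	llist_add(&item->llnode, &free_items);

	/* allocating side, a single consumer grabs the whole list
	 * in one shot and refills its local cache from it:
	 */
	batch = llist_del_all(&free_items);
	llist_for_each_entry(item, batch, llnode)
		pool_refill_alloc_cache(pool, item);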
>
> I fear we should consider blocking the device removal until all the
> pages are returned/unmapped ?!? (I hope that could be easier/faster)
As Ilias pointed out, blocking the device removal until all the pages
are returned/unmapped might cause an indefinite delay in our testing:
https://lore.kernel.org/netdev/d50ac1a9-f1e2-49ee-b89b-05dac9bc6ee1@huawei.com/
>
> /P
>