Message-ID: <ac2eec69-8f44-4adb-8182-02c78625851d@huawei.com>
Date: Sun, 29 Sep 2024 10:44:53 +0800
From: Yunsheng Lin <linyunsheng@...wei.com>
To: Ilias Apalodimas <ilias.apalodimas@...aro.org>
CC: Mina Almasry <almasrymina@...gle.com>, <davem@...emloft.net>,
<kuba@...nel.org>, <pabeni@...hat.com>, <liuyonglong@...wei.com>,
<fanghaiqing@...wei.com>, <zhangkun09@...wei.com>, Robin Murphy
<robin.murphy@....com>, Alexander Duyck <alexander.duyck@...il.com>, IOMMU
<iommu@...ts.linux.dev>, Wei Fang <wei.fang@....com>, Shenwei Wang
<shenwei.wang@....com>, Clark Wang <xiaoning.wang@....com>, Eric Dumazet
<edumazet@...gle.com>, Tony Nguyen <anthony.l.nguyen@...el.com>, Przemek
Kitszel <przemyslaw.kitszel@...el.com>, Alexander Lobakin
<aleksander.lobakin@...el.com>, Alexei Starovoitov <ast@...nel.org>, Daniel
Borkmann <daniel@...earbox.net>, Jesper Dangaard Brouer <hawk@...nel.org>,
John Fastabend <john.fastabend@...il.com>, Saeed Mahameed
<saeedm@...dia.com>, Leon Romanovsky <leon@...nel.org>, Tariq Toukan
<tariqt@...dia.com>, Felix Fietkau <nbd@....name>, Lorenzo Bianconi
<lorenzo@...nel.org>, Ryder Lee <ryder.lee@...iatek.com>, Shayne Chen
<shayne.chen@...iatek.com>, Sean Wang <sean.wang@...iatek.com>, Kalle Valo
<kvalo@...nel.org>, Matthias Brugger <matthias.bgg@...il.com>,
AngeloGioacchino Del Regno <angelogioacchino.delregno@...labora.com>, Andrew
Morton <akpm@...ux-foundation.org>, <imx@...ts.linux.dev>,
<netdev@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<intel-wired-lan@...ts.osuosl.org>, <bpf@...r.kernel.org>,
<linux-rdma@...r.kernel.org>, <linux-wireless@...r.kernel.org>,
<linux-arm-kernel@...ts.infradead.org>, <linux-mediatek@...ts.infradead.org>,
<linux-mm@...ck.org>
Subject: Re: [PATCH net v2 2/2] page_pool: fix IOMMU crash when driver has
already unbound
On 2024/9/28 15:34, Ilias Apalodimas wrote:
...
>
> Yes, that wasn't very clear indeed, apologies for any confusion. I was
> trying to ask about a linked list that only lives in struct page_pool.
> But I now realize this was a bad idea since the lookup would be way
> slower.
>
>> If I understand the question correctly, a singly or doubly linked list
>> is more costly than an array for the page_pool case.
>>
>> A singly linked list doesn't allow deleting a specific entry; it only
>> supports deleting the first entry or all entries. It does support
>> lockless operation via llist, but with the limitation below:
>> https://elixir.bootlin.com/linux/v6.7-rc8/source/include/linux/llist.h#L13
>>
>> A doubly linked list needs two pointers to support deleting a specific
>> entry, and it does not support lockless operation.
>
> I didn't look at the patch too carefully at first. Looking a bit
> closer now, the array is indeed better, since the lookup is faster.
> You just need the stored index in struct page to find the page we need
> to unmap. Do you remember if we can reduce the atomic pp_ref_count to
> 32 bits? If so, we can reuse that space for the index. Looking at it
For a 64-bit system, yes, we can reuse that.
But for a 32-bit system, we may have only 16 bits for each of them, and
there seems to be no atomic operation for a variable smaller than 32
bits.
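To make that concrete, a rough sketch of the layout being discussed
(the struct and field names here are hypothetical, not the actual
netmem layout):

#include <linux/atomic.h>
#include <linux/types.h>

/* On 64-bit, the existing long-sized pp_ref_count slot could be
 * split into a 32-bit atomic refcount plus a 32-bit index into
 * pool->items[]. On 32-bit the slot is only 32 bits wide, so the
 * same split would leave 16 bits each, and there is no atomic
 * type smaller than 32 bits to put there.
 */
struct pp_meta {
	atomic_t	pp_ref_count;	/* was atomic_long_t */
	u32		pp_item_idx;	/* index into pool->items[] */
};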
> requires a bit more work in netmem, but that's mostly swapping all the
> atomic64 calls to atomic ones.
>
>>
>> For pool->items, the alloc side is protected by the NAPI context, and
>> the free side uses item->pp_idx to ensure there is only one producer
>> for each item. So each item in pool->items has exactly one consumer
>> and one producer, much like the case where the page is not recyclable
>> in __page_pool_put_page(): no lock protection is needed when calling
>> page_pool_return_page(), since the 'struct page' also has one consumer
>> and one producer, just as pool->items[item->pp_idx] does:
>> https://elixir.bootlin.com/linux/v6.7-rc8/source/net/core/page_pool.c#L645
>>
>> We only need lock protection when page_pool_destroy() is called to
>> check whether there are inflight pages to be unmapped as one consumer,
>> while __page_pool_put_page() may also be called to unmap an inflight
>> page as another consumer.
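Sketched out, the per-item single-producer/single-consumer hand-off
looks roughly like this (struct pool_item and the helpers are
hypothetical; smp_store_release()/smp_load_acquire() are the real
kernel primitives):

#include <linux/atomic.h>

struct pool_item {
	unsigned long	pp_netmem;	/* published page/netmem ref */
};

/* Producer: the free side, keyed by item->pp_idx, is the only
 * writer of this slot, so a release store is sufficient.
 */
static void item_publish(struct pool_item *item, unsigned long netmem)
{
	smp_store_release(&item->pp_netmem, netmem);
}

/* Consumer: the alloc side is serialized by the NAPI context, so
 * a single acquire load pairs with the release above and no lock
 * is needed on either path.
 */
static unsigned long item_consume(struct pool_item *item)
{
	return smp_load_acquire(&item->pp_netmem);
}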
>
> Thanks for the explanation. On the locking side, page_pool_destroy()
> is called once from the driver, and then it's either the workqueue for
> inflight packets or an SKB that got freed and tried to recycle, right?
> But do we still need to do all the unmapping etc. from the delayed
> work? Since the new function will unmap all packets in
> page_pool_destroy(), we can just skip unmapping when the delayed work
> runs.
Yes, pool->dma_map is cleared in page_pool_item_uninit() after it
unmaps all inflight pages under the protection of pool->destroy_lock,
so the unmapping is skipped in page_pool_return_page() when those
inflight pages are returned to the page_pool.
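i.e. something along these lines, as it would sit in
net/core/page_pool.c (a simplified sketch of that flow;
pool->destroy_lock and page_pool_item_uninit() follow the patch, the
rest is elided):

static void page_pool_item_uninit(struct page_pool *pool)
{
	spin_lock_bh(&pool->destroy_lock);
	/* ... unmap every inflight page while holding the lock ... */
	pool->dma_map = false;	/* later returns must skip unmapping */
	spin_unlock_bh(&pool->destroy_lock);
}

static void page_pool_return_page(struct page_pool *pool, netmem_ref netmem)
{
	/* pool->dma_map was cleared above, so inflight pages that
	 * come back after page_pool_destroy() skip the DMA unmap.
	 */
	if (pool->dma_map)
		__page_pool_release_page_dma(pool, netmem);
	/* ... return the page to the system ... */
}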