Message-ID: <87jz82n7j3.fsf@toke.dk>
Date: Wed, 02 Apr 2025 13:10:24 +0200
From: Toke Høiland-Jørgensen <toke@...hat.com>
To: Pavel Begunkov <asml.silence@...il.com>, "David S. Miller"
<davem@...emloft.net>, Jakub Kicinski <kuba@...nel.org>, Jesper Dangaard
Brouer <hawk@...nel.org>, Saeed Mahameed <saeedm@...dia.com>, Leon
Romanovsky <leon@...nel.org>, Tariq Toukan <tariqt@...dia.com>, Andrew
Lunn <andrew+netdev@...n.ch>, Eric Dumazet <edumazet@...gle.com>, Paolo
Abeni <pabeni@...hat.com>, Ilias Apalodimas <ilias.apalodimas@...aro.org>,
Simon Horman <horms@...nel.org>, Andrew Morton
<akpm@...ux-foundation.org>, Mina Almasry <almasrymina@...gle.com>,
Yonglong Liu <liuyonglong@...wei.com>, Yunsheng Lin
<linyunsheng@...wei.com>, Matthew Wilcox <willy@...radead.org>
Cc: netdev@...r.kernel.org, bpf@...r.kernel.org, linux-rdma@...r.kernel.org,
linux-mm@...ck.org, Qiuling Ren <qren@...hat.com>, Yuying Ma
<yuma@...hat.com>
Subject: Re: [PATCH net-next v6 2/2] page_pool: Track DMA-mapped pages and
unmap them when destroying the pool

Pavel Begunkov <asml.silence@...il.com> writes:

> On 4/1/25 10:27, Toke Høiland-Jørgensen wrote:
> ...
>> Reported-by: Yonglong Liu <liuyonglong@...wei.com>
>> Closes: https://lore.kernel.org/r/8743264a-9700-4227-a556-5f931c720211@huawei.com
>> Fixes: ff7d6b27f894 ("page_pool: refurbish version of page_pool code")
>> Suggested-by: Mina Almasry <almasrymina@...gle.com>
>> Reviewed-by: Mina Almasry <almasrymina@...gle.com>
>> Reviewed-by: Jesper Dangaard Brouer <hawk@...nel.org>
>> Tested-by: Jesper Dangaard Brouer <hawk@...nel.org>
>> Tested-by: Qiuling Ren <qren@...hat.com>
>> Tested-by: Yuying Ma <yuma@...hat.com>
>> Tested-by: Yonglong Liu <liuyonglong@...wei.com>
>> Acked-by: Jesper Dangaard Brouer <hawk@...nel.org>
>> Signed-off-by: Toke Høiland-Jørgensen <toke@...hat.com>
>
> I haven't looked into the bit carving, but the rest looks
> good to me. A few nits below,
>
> ...
>> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
>> index 7745ad924ae2d801580a6760eba9393e1cf67b01..52b5ddab7ecb405066fd55b8d61abfd4186b9dcf 100644
>> --- a/net/core/page_pool.c
>> +++ b/net/core/page_pool.c
>> @@ -227,6 +227,8 @@ static int page_pool_init(struct page_pool *pool,
>> return -EINVAL;
>>
>> pool->dma_map = true;
>> +
>> + xa_init_flags(&pool->dma_mapped, XA_FLAGS_ALLOC1);
>
> nit: might be better to init/destroy unconditionally, it doesn't
> allocate any memory.

Hmm, yeah, I suppose both could work; I do think initialising it
conditionally makes it clearer that it's tied to DMA mapping, but I
won't insist. Not sure it's worth respinning just for this, though (see
below).
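
For reference, the unconditional variant you're suggesting would be
something like this (sketch only, untested):

	/* in page_pool_init(), outside the dma_map branch: */
	xa_init_flags(&pool->dma_mapped, XA_FLAGS_ALLOC1);

	/* ...and in page_pool_uninit(), dropping the dma_map check: */
	xa_destroy(&pool->dma_mapped); /* no-op if the array is empty */
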
>> }
>>
>> if (pool->slow.flags & PP_FLAG_DMA_SYNC_DEV) {
>> @@ -276,9 +278,6 @@ static int page_pool_init(struct page_pool *pool,
>> /* Driver calling page_pool_create() also call page_pool_destroy() */
>> refcount_set(&pool->user_cnt, 1);
>>
>> - if (pool->dma_map)
>> - get_device(pool->p.dev);
>> -
>> if (pool->slow.flags & PP_FLAG_ALLOW_UNREADABLE_NETMEM) {
>> netdev_assert_locked(pool->slow.netdev);
>> rxq = __netif_get_rx_queue(pool->slow.netdev,
>> @@ -322,7 +321,7 @@ static void page_pool_uninit(struct page_pool *pool)
>> ptr_ring_cleanup(&pool->ring, NULL);
>>
>> if (pool->dma_map)
>> - put_device(pool->p.dev);
>> + xa_destroy(&pool->dma_mapped);
>>
>> #ifdef CONFIG_PAGE_POOL_STATS
>> if (!pool->system)
>> @@ -463,13 +462,21 @@ page_pool_dma_sync_for_device(const struct page_pool *pool,
>> netmem_ref netmem,
>> u32 dma_sync_size)
>> {
>> - if (pool->dma_sync && dma_dev_need_sync(pool->p.dev))
>> - __page_pool_dma_sync_for_device(pool, netmem, dma_sync_size);
>> + if (READ_ONCE(pool->dma_sync) && dma_dev_need_sync(pool->p.dev)) {
>> + rcu_read_lock();
>> + /* re-check under rcu_read_lock() to sync with page_pool_scrub() */
>> + if (READ_ONCE(pool->dma_sync))
>> + __page_pool_dma_sync_for_device(pool, netmem,
>> + dma_sync_size);
>> + rcu_read_unlock();
>> + }
>> }
>>
>> -static bool page_pool_dma_map(struct page_pool *pool, netmem_ref netmem)
>> +static bool page_pool_dma_map(struct page_pool *pool, netmem_ref netmem, gfp_t gfp)
>> {
>> dma_addr_t dma;
>> + int err;
>> + u32 id;
>>
>> /* Setup DMA mapping: use 'struct page' area for storing DMA-addr
>> * since dma_addr_t can be either 32 or 64 bits and does not always fit
>> @@ -483,15 +490,28 @@ static bool page_pool_dma_map(struct page_pool *pool, netmem_ref netmem)
>> if (dma_mapping_error(pool->p.dev, dma))
>> return false;
>>
>> - if (page_pool_set_dma_addr_netmem(netmem, dma))
>> + if (in_softirq())
>> + err = xa_alloc(&pool->dma_mapped, &id, netmem_to_page(netmem),
>> + PP_DMA_INDEX_LIMIT, gfp);
>> + else
>> + err = xa_alloc_bh(&pool->dma_mapped, &id, netmem_to_page(netmem),
>> + PP_DMA_INDEX_LIMIT, gfp);
>
> Is it an optimisation? bh disable should be reentrable and could
> just be xa_alloc_bh().

Yeah, it's an optimisation. We do the same thing in
page_pool_recycle_in_ring(), so I just kept the same pattern.
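
For reference, the existing pattern there is roughly (quoting from
memory):

	/* In softirq context BH is already disabled, so the plain
	 * variant saves a redundant BH disable/enable round-trip. */
	if (in_softirq())
		ret = ptr_ring_produce(&pool->ring, (__force void *)netmem);
	else
		ret = ptr_ring_produce_bh(&pool->ring, (__force void *)netmem);
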
> KERN_{NOTICE,INFO} Maybe?

Erm? Was this supposed to be part of the comment below?

>> + if (err) {
>> + WARN_ONCE(1, "couldn't track DMA mapping, please report to netdev@");
>
> That can happen with enough memory pressure, I don't think
> it should be a warning. Maybe some pr_info?

My reasoning here was that this code is only called in the alloc path,
so under memory pressure the page allocation itself should fail before
the xarray alloc does. And if it doesn't (i.e., if the use of the
xarray itself causes allocation failures), we really want to know about
it so we can change things. Hence the loud warning.
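
I.e., the ordering in the alloc path is roughly this (abbreviated
sketch of __page_pool_alloc_page_order()):

	/* The page allocation comes first, so under memory pressure it
	 * is the thing we expect to fail; the xarray node allocated
	 * inside page_pool_dma_map() is tiny by comparison. */
	page = alloc_pages_node(pool->p.nid, gfp, pool->p.order);
	if (unlikely(!page))
		return NULL;	/* the "normal" memory-pressure failure */

	if (pool->dma_map &&
	    unlikely(!page_pool_dma_map(pool, page_to_netmem(page), gfp))) {
		put_page(page);	/* WARN_ONCE already fired inside */
		return NULL;
	}
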
@maintainers, given the comments above I'm not going to respin for this
unless you tell me to, but let me know! :)

-Toke