Message-ID: <24321916-549d-4b76-8ca5-a268432f54e7@huawei.com>
Date: Sat, 22 Feb 2025 16:11:51 +0800
From: Yunsheng Lin <linyunsheng@...wei.com>
To: Jesper Dangaard Brouer <hawk@...nel.org>, <davem@...emloft.net>,
	<kuba@...nel.org>, <pabeni@...hat.com>
CC: <zhangkun09@...wei.com>, <liuyonglong@...wei.com>,
	<fanghaiqing@...wei.com>, Robin Murphy <robin.murphy@....com>, Alexander
 Duyck <alexander.duyck@...il.com>, IOMMU <iommu@...ts.linux.dev>, Ilias
 Apalodimas <ilias.apalodimas@...aro.org>, Eric Dumazet <edumazet@...gle.com>,
	Simon Horman <horms@...nel.org>, <netdev@...r.kernel.org>,
	<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH net-next v9 3/4] page_pool: support unlimited number of
 inflight pages

On 2025/2/21 18:12, Jesper Dangaard Brouer wrote:

...

>> @@ -513,10 +517,43 @@ static struct page_pool_item *page_pool_fast_item_alloc(struct page_pool *pool)
>>       return llist_entry(first, struct page_pool_item, lentry);
>>   }
>>   +#define PAGE_POOL_SLOW_ITEM_BLOCK_BIT            BIT(0)
>> +static struct page_pool_item *page_pool_slow_item_alloc(struct page_pool *pool)
>> +{
>> +    if (unlikely(!pool->slow_items.block ||
>> +             pool->slow_items.next_to_use >= ITEMS_PER_PAGE)) {
>> +        struct page_pool_item_block *block;
>> +        struct page *page;
>> +
>> +        page = alloc_pages_node(pool->p.nid, GFP_ATOMIC | __GFP_NOWARN |
>> +                    __GFP_ZERO, 0);
>> +        if (!page) {
>> +            alloc_stat_inc(pool, item_slow_failed);
>> +            return NULL;
>> +        }
> 
> We also need stats on how many pages we allocate for these item_blocks
> (and later free). This new scheme of keeping track of all pages
> allocated via page_pool, is obviously going to consume more memory.
> 
> I want to be able to find out how much memory a page_pool is consuming.
> (E.g. Kuba added a nice interface for querying inflight packets, even
> though this is kept as two different counters).

Are additional stats really needed? I was thinking list_for_each_entry()
over pool->item_blocks could be used to tell how much memory is used
for slow items, and how fragmented each item_block is by looking at
block->ref, with the protection of pool->item_lock if needed.
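
As a rough, untested sketch of that idea (the helper name and the
pr_debug reporting are mine; the fields item_blocks, item_lock,
block->ref, block->flags and ITEMS_PER_PAGE are from this series),
something like:

/* Hypothetical debug helper, not part of this patch: walk the block
 * list and report slow-item memory usage and per-block fragmentation.
 */
static void page_pool_slow_item_stats(struct page_pool *pool)
{
	struct page_pool_item_block *block;
	unsigned int blocks = 0;

	spin_lock_bh(&pool->item_lock);
	list_for_each_entry(block, &pool->item_blocks, list) {
		if (!(block->flags & PAGE_POOL_SLOW_ITEM_BLOCK_BIT))
			continue;

		blocks++;
		/* block->ref counts the items of this block still in
		 * use; a low ref on an old block means the block is
		 * mostly empty, i.e. fragmented.
		 */
		pr_debug("block %p: %u of %lu items in use\n",
			 block, refcount_read(&block->ref),
			 (unsigned long)ITEMS_PER_PAGE);
	}
	spin_unlock_bh(&pool->item_lock);

	/* each slow item_block is backed by one order-0 page */
	pr_debug("slow items: %u blocks, %lu bytes\n",
		 blocks, (unsigned long)blocks * PAGE_SIZE);
}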

> 
> What I worry about, is that fragmentation happens inside these
> item_blocks. (I hope you understand what I mean by fragmentation, else
> let me know).
> 
> Could you explain how code handles or avoids fragmentation?

Currently fragmentation is not handled or avoided. For inflight pages
that are using slow items, it seems there is hardly anything we can do
about it.

For pages that sit in the page_pool, it seems possible to migrate a
page from a slow item to a fast item when it is allocated from or
recycled back into the page_pool, if a fast item is available.
Alternatively, such pages could simply be disconnected from the
page_pool by calling page_pool_return_page() when
page_pool_put_unrefed_netmem() is called.
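
A very rough sketch of the migrate-on-recycle idea
(netmem_get_pp_item(), netmem_set_pp_item(), page_pool_item_to_block()
and page_pool_slow_item_free() are hypothetical names invented here;
page_pool_fast_item_alloc() and PAGE_POOL_SLOW_ITEM_BLOCK_BIT are from
this series):

/* Hypothetical, untested sketch: if a recycled netmem is tracked by a
 * slow item and a fast item is available, rebind it so the slow block
 * can drain and eventually be freed.
 */
static void page_pool_item_try_migrate(struct page_pool *pool,
				       netmem_ref netmem)
{
	struct page_pool_item *item = netmem_get_pp_item(netmem);
	struct page_pool_item_block *block = page_pool_item_to_block(item);
	struct page_pool_item *fast;

	/* only items from slow blocks are worth migrating */
	if (!(block->flags & PAGE_POOL_SLOW_ITEM_BLOCK_BIT))
		return;

	fast = page_pool_fast_item_alloc(pool);
	if (!fast)
		return;

	/* rebind the netmem to the fast item, then release the slow
	 * one so block->ref can drop and the block's page be freed
	 */
	netmem_set_pp_item(netmem, fast);
	page_pool_slow_item_free(pool, block);
}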

I am not sure how severe the fragmentation problem might become or
which way of handling it is better. Maybe add an interface to query
the fragmentation info as mentioned above first, and deal with
fragmentation when it does become a severe problem?

> 
> 
>> +
>> +        block = page_address(page);
>> +        block->pp = pool;
>> +        block->flags |= PAGE_POOL_SLOW_ITEM_BLOCK_BIT;
>> +        refcount_set(&block->ref, ITEMS_PER_PAGE);
>> +        pool->slow_items.block = block;
>> +        pool->slow_items.next_to_use = 0;
>> +
>> +        spin_lock_bh(&pool->item_lock);
>> +        list_add(&block->list, &pool->item_blocks);
>> +        spin_unlock_bh(&pool->item_lock);
>> +    }
>> +
>> +    return &pool->slow_items.block->items[pool->slow_items.next_to_use++];
>> +}
>> +
